Nvidia Embraces LLMs & Commonsense Cybersecurity Strategy

  /     /     /  
Publicated : 23/11/2024   Category : security


Nvidia Embraces LLMs & Commonsense Cybersecurity Strategy


Nvidia doesnt just make the chips that accelerate a lot of AI applications — the company regularly creates and uses its own large language models, too.



As the builder of the processors used to train the latest AI models, Nvidia has embraced the generative AI (GenAI) revolution. It runs its own proprietary large language models (LLMs), and several internal AI applications; the latter include the companys NeMo platform for building and deploying LLMs, and a variety of AI-based applications, such as object simulation and the reconstruction of DNA from extinct species.
At Black Hat USA next month in a session entitled
Practical LLM Security: Takeaways From a Year in the Trenches
, Richard Harang, principal security architect for AI/ML at the chip giant, plans to talk about lessons the Nvidia team has learned in red-teaming these systems, and how cyberattack tactics against LLMs are continuing to evolve. The good news, he says, is that existing security practices dont have to shift that much in order to meet this new class of threats, even though they do pose an outsized enterprise risk because of how privileged they are.
Weve learned a lot over the past year or so about how to secure them and how to build security in from first principles, as opposed to trying to tack it on after the fact, Harang says. We have a lot of valuable practical experience to share as a result of that.
Businesses are increasingly creating applications that rely on next-generation AI, often in the form of integrated AI agents capable of taking privileged actions. Meanwhile, security and AI researchers have both already pointed out potential weaknesses in these environments, from
AI-generated code expanding the resulting applications attack surface
to
overly helpful chatbots that give away sensitive corporate data
. Yet, attackers often do not need specialized techniques to exploit these, Harang says, because theyre just new iterations of already known threats.
A lot of the issues that were seeing with LLMs are issues we have seen before, in other systems, he says. Whats new is the attack surface and what that attack surface looks like — so once you wrap your head around how LLMs actually work, how inputs get into the model, and how outputs come out of the model ... once you think that through and map it out, securing these systems is not intrinsically more difficult than securing any other system.
GenAI applications still require the same essential triad of security attributes that other apps do — confidentiality, integrity, and availability, he says. So software engineers need to perform standard security architecting due-diligence processes, such as drawing out the security boundaries, drawing out the trust boundaries, and looking at how data flows through the system.
In the defenders favor: Because randomness is often injected into AI systems to make them creative, they tend to be less deterministic. In other words, because the same input does not always produce the same output, attacks do not always succeed in the same way either.
For some exploits in a conventional information security setting, you can get close to 100% reliability when you inject this payload, Harang says. When [an attacker] introduces information to try to manipulate the behavior of the LLM, the reliability of LLM exploits in general is lower than conventional exploits.
One thing that sets AI environments apart from their more traditional IT counterparts is their ability for autonomous agency. Companies do not just want AI applications that can automate the creation of content or analyze data, they want models that can take action. As such, those so-called
agentic AI systems
do pose even greater potential risks. If an attacker can cause an LLM to do something unexpected, and the AI systems has the ability to take action in another application, the results can be dramatic, Harang says.
Weve seen, even recently, examples in other systems of how tool use can sometimes lead to unexpected activity from the LLM or unexpected information disclosure, he says, adding: As we develop increasing capabilities — including tool use — I think its still going to be an ongoing learning process for the industry.
Harang notes that even with the greater risk, its important to realize that its a solvable issue. He himself avoids the sky is falling hyperbole around the risk of GenAI use, and often taps it to hunt down specific information, such as the grammar of a specific programming function, and to summarize academic papers.
Weve made significant improvements in our understanding of how LLM-integrated applications behave, and I think weve learned a lot over the past year or so about how to secure them and how to build security in from first principles, he says.

Last News

▸ Veritabile Defecte de Proiectare a Securitatii in Software -> Top 10 Software Security Design Flaws ◂
Discovered: 23/12/2024
Category: security

▸ Sony, XBox Targeted by DDoS Attacks, Hacktivist Threats ◂
Discovered: 23/12/2024
Category: security

▸ There are plenty of online tools for reporting bugs. ◂
Discovered: 23/12/2024
Category: security


Cyber Security Categories
Google Dorks Database
Exploits Vulnerability
Exploit Shellcodes

CVE List
Tools/Apps
News/Aarticles

Phishing Database
Deepfake Detection
Trends/Statistics & Live Infos



Tags:
Nvidia Embraces LLMs & Commonsense Cybersecurity Strategy