ChatGPT Gut Check: Cybersecurity Threats Overhyped or Not?

Published: 23/11/2024   Category: security




UK cybersecurity authorities and researchers tamp down fears that ChatGPT will overwhelm current defenses, while the CEO of OpenAI worries about its use in cyberattacks.



The dizzying capacity for OpenAI to vacuum up vast amounts of data and spit out custom-tailored content has ushered in all sorts of worrying predictions about the technology's ability to overwhelm everything — including cybersecurity defenses.
Indeed, ChatGPT's latest iteration, GPT-4, is smart enough to pass the bar exam, generate thousands of words of text, and write malicious code. And thanks to its stripped-down interface, which anyone can use, concerns that OpenAI's tools could turn any would-be petty thief into a technically savvy malicious coder in moments were, and still are, well-founded.
ChatGPT-enabled cyberattacks started popping up just after its user-friendly interface premiered in November 2022.
OpenAI co-founder Greg Brockman told a crowd gathered at SXSW this month that he is concerned about the technology's potential to do two specific things really well: spread disinformation and launch cyberattacks.
"Now that they're getting better at writing computer code, [OpenAI] could be used for offensive cyberattacks," Brockman said.
There is no word yet on what OpenAI intends to do to mitigate the chatbot's cybersecurity threat, however. For the time being, it appears to be up to the cybersecurity community to mount a defense.
There are safeguards in place to keep users from using ChatGPT for unintended purposes, or from generating content deemed too violent or illegal, but users are quickly finding jailbreak workarounds for those content limitations.
Those threats warrant concern, but a growing chorus of experts, including the UK's National Cyber Security Centre (NCSC) in a recent post, is tempering fears over the true danger to enterprises from the rise of ChatGPT and large language models (LLMs).
Chatbot output can save time on less complex tasks, but when it comes to expert work like writing malicious code, the tool's ability to do that from scratch isn't really ready for prime time yet, the NCSC's blog post explained.
"For more complex tasks, it's currently easier for an expert to create the malware from scratch, rather than having to spend time correcting what the LLM has produced," the ChatGPT cyber-threat post said. "However, an expert capable of creating highly capable malware is likely to be able to coax an LLM into writing capable malware."
The problem with ChatGPT as a cyberattack tool on its own is that it lacks the ability to test whether the code it's creating actually works, says Nathan Hamiel, senior director of research at Kudelski Security.
"I agree with the NCSC's assessment," Hamiel says. "ChatGPT responds to every request with a high degree of confidence, whether it's right or wrong, whether it's outputting functional or nonfunctional code."
More realistically, he says, cyberattackers could use ChatGPT the same way they use other dual-use tools, such as pen-testing software.
The harm to IT teams is that overblown cybersecurity risks being ascribed to ChatGPT and OpenAI are sucking already scarce resources away from more immediate threats, as Jeffrey Wells, partner at Sigma7, points out.
"The threats from ChatGPT are massively overhyped," Wells says. "The technology is still in its infancy, and there is little to no reason why a threat actor would want to use ChatGPT to create malicious code when there is an abundance of existing malware and crime-as-a-service (CaaS) offerings that can be used to exploit the list of known and growing vulnerabilities."
Rather than worrying about ChatGPT, enterprise IT teams should focus their attention on cybersecurity fundamentals, risk management, and resource allocation strategies, Wells adds.
The value of ChatGPT, as with the array of other tools available to threat actors, comes down to its ability to exploit human error, says Bugcrowd founder and CTO Casey Ellis. The remedy is human problem-solving, he notes.
"The entire reason our industry exists is because of human creativity, human failures, and human needs," Ellis says. "Whenever automation solves a swath of the cyber-defense problem, the attackers simply innovate past these defenses with newer techniques to serve their goals."
But Patrick Harr, CEO of SlashNext, warns organizations not to underestimate the longer-term threat ChatGPT could pose. Security teams, meanwhile, should look to leverage similar LLMs in their defenses, he says.
"Suggesting that ChatGPT is low risk is like putting your head in the sand and carrying on like it doesn't exist," Harr says. "ChatGPT is only the start of the generative AI revolution, and the industry needs to take it seriously and focus on developing AI technology to combat AI-borne threats."
