WormGPT Cybercrime Tool Heralds an Era of AI Malware vs. AI Defenses

Published: 23/11/2024   Category: security




A black-hat alternative to GPT models, designed specifically for malicious activities like BEC, malware, and phishing attacks, is here, and it will push organizations to level up with generative AI themselves.



Cybercriminals are leveraging generative AI technology to aid their activities and launch business email compromise (BEC) attacks, including the use of a tool known as WormGPT, a black-hat alternative to GPT models designed specifically for malicious activities.
According to a report from SlashNext, WormGPT was trained on a variety of data sources, with a focus on malware-related data. It generates human-like text based on the input it receives and can create highly convincing fake emails.
Screenshots from a cybercrime forum illustrate exchanges between malicious actors on how to deploy ChatGPT to aid successful BEC attacks, indicating that hackers with limited fluency in the target language can use generative AI to fabricate a convincing email.
The research team also evaluated the potential risks associated with WormGPT, with a specific focus on BEC attacks, instructing the tool to generate an email aimed at pressuring an unsuspecting account manager into paying a fraudulent invoice.
The results revealed that WormGPT could not only strike a persuasive tone but was also strategically cunning, an indicator of its capability for mounting sophisticated phishing and BEC attacks.
"It's like ChatGPT but has no ethical boundaries or limitations," the report said, noting that the development of such tools underscores the threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.
The report also revealed that cybercriminals are designing "jailbreaks": specialized prompts crafted to manipulate generative AI interfaces into disclosing sensitive information, producing inappropriate content, or even executing harmful code.
Some ambitious cybercriminals are even taking things a step further by crafting custom modules akin to those used by ChatGPT but designed to help them carry out attacks, an evolution that could make cyber defense even more complicated. 
"Malicious actors can now launch these attacks at scale at zero cost, and they can do it with much more targeted precision than they could before," explains SlashNext CEO Patrick Harr. "If they aren't successful with the first BEC or phishing attempt, they can simply try again with retooled content."
The use of generative AI will lead to what Harr calls the polymorphic nature of attacks, which can be launched at great speed and at no cost to the individual or organization backing the attack. "It's that targeted nature, along with the frequency of attack, which is going to really make companies rethink their security posture," he says.
The rise of generative AI tools is introducing additional complexity into cybersecurity efforts, driving increased attack sophistication and highlighting the need for more robust defense mechanisms against evolving threats.
Harr says he thinks the threat of AI-aided BEC, malware, and phishing attacks can best be fought with AI-aided defense capabilities. 
"You're going to have to integrate AI to fight AI; otherwise, you're going to be on the outside looking in and you're going to see continued breaches," he says. And that requires training AI-based defense tools to discover, detect, and ultimately block a sophisticated, rapidly evolving set of AI-generated threats.
"If a threat actor creates an attack and then tells the gen AI tool to modify it, there's only so many ways you can say the same thing, for something like invoice fraud, for example," Harr explains. "What you can do is tell your AI defenses to take that core and clone it to create 24 different ways to say that same thing." Security teams can then take those synthetic data clones and use them to train the organization's defense model.
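The cloning idea Harr describes can be sketched in a few lines: start from the "core" of a known invoice-fraud lure, generate reworded variants mechanically, and fold them into the data a detector learns from. The sketch below is a minimal illustration of that concept, not SlashNext's implementation; the slot templates and the bag-of-words scoring heuristic are all illustrative assumptions.

```python
# Illustrative sketch: clone a BEC lure "core" into many variants and
# build a simple token profile from them. A real defense pipeline would
# feed these synthetic clones into its detection model instead.
import itertools
import re
from collections import Counter

# Interchangeable phrasings for each slot of a hypothetical invoice-fraud core.
SLOTS = [
    ["Please process", "Kindly settle", "We need you to pay"],
    ["the attached invoice", "invoice #4471", "the outstanding invoice"],
    ["today", "before end of business", "as soon as possible"],
]

def clone_variants(slots):
    """Yield every combination of slot phrasings as a synthetic lure."""
    for combo in itertools.product(*slots):
        yield " ".join(combo) + "."

def tokenize(text):
    return re.findall(r"[a-z#0-9]+", text.lower())

variants = list(clone_variants(SLOTS))
profile = Counter(tok for v in variants for tok in tokenize(v))

def suspicion_score(email_text):
    """Fraction of an email's tokens that match the cloned-lure profile."""
    toks = tokenize(email_text)
    hits = sum(1 for t in toks if t in profile)
    return hits / max(len(toks), 1)

print(len(variants))  # 27 clones from 3x3x3 slots
print(suspicion_score("Kindly settle invoice #4471 today."))  # 1.0
```

Three slots with three phrasings each already yield 27 rewordings of one lure; adding synonyms per slot grows the pool combinatorially, which is the leverage behind training defenses on synthetic clones.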
"You can almost anticipate what their next threat will be before they launch it, and if you incorporate that into your defense, you can detect it and block it before it actually infects," he says. "This is an example of using AI to fight AI."
From his perspective, organizations will ultimately become reliant on AI for the discovery, detection, and ultimately the remediation of these threats, because there is simply no way humanly possible to get ahead of the curve without doing so.
In April, a Forcepoint researcher convinced the AI tool to create malware for finding and exfiltrating specific documents, despite its directive to refuse malicious requests.
Meanwhile, developers' enthusiasm for ChatGPT and other large language model (LLM) tools has left most organizations largely unprepared to defend against the vulnerabilities that the nascent technology creates.
