FraudGPT Malicious Chatbot Now for Sale on Dark Web

Published: 23/11/2024   Category: security




The subscription-based, generative AI-driven offering joins a growing trend toward generative AI jailbreaking to create ChatGPT copycat tools for cyberattacks.



Threat actors riding on the
popularity of ChatGPT
have launched yet another copycat hacker tool that offers chatbot services similar to the real generative AI-based app but is aimed specifically at promoting malicious activity.
Researchers have found ads posted on the Dark Web for an AI-driven hacker tool dubbed FraudGPT, which is sold on a subscription basis and has been circulating on Telegram since Saturday, researchers from Netenrich revealed
in a post
published July 25. FraudGPT starts at $200 per month and goes up to $1,700 per year, and it's aimed at helping hackers conduct their nefarious business with the help of AI. The actor claims to have more than 3,000 confirmed sales and reviews so far for FraudGPT.
Another similar AI-driven hacker tool,
WormGPT,
has been in circulation since July 13 and was outlined in detail in a report by SlashNext. Like ChatGPT, these emerging
adversarial AI
tools also are based on models trained on large data sources, and they can generate human-like text based on the input they receive.
"The tools appear to be among the first indications that threat actors are building generative AI features into their tooling," John Bambenek, principal threat hunter at Netenrich, tells Dark Reading. "Prior to this, our discussion of the threat landscape has been theoretical."
FraudGPT, which in ads is touted as "a bot without limitations, rules, [and] boundaries," is sold by a threat actor who claims to be a verified vendor on various underground Dark Web marketplaces, including Empire, WHM, Torrez, World, AlphaBay, and Versus.
Both WormGPT and FraudGPT can help attackers use AI to their advantage when crafting phishing campaigns, generating messages aimed at pressuring victims into falling for business email compromise (BEC) and other email-based scams, for starters.
FraudGPT also can help threat actors do a slew of other bad things, such as: writing malicious code; creating undetectable malware; finding non-VBV bins; creating phishing pages; building hacking tools; finding hacking groups, sites, and markets; writing scam pages and letters; finding leaks and vulnerabilities; and learning to code or hack. 
Even so, it does appear that helping attackers create convincing phishing campaigns is still one of the main use cases for a tool like FraudGPT, according to Netenrich. The tool's proficiency at this was even touted in promotional material on the Dark Web demonstrating how FraudGPT can produce a draft email that will entice recipients to click on the supplied malicious link, Krishnan said.
While
ChatGPT also can be exploited
as a hacker tool to write socially engineered emails, there are ethical safeguards that limit this use. However, the growing prevalence of AI-driven tools like WormGPT and FraudGPT demonstrates that it isn't a difficult feat to re-implement the same technology without those safeguards, Krishnan wrote.
In fact, FraudGPT and WormGPT are yet more evidence of what one security expert calls "generative AI jailbreaking for dummies," in which bad actors misuse generative AI apps to bypass the
ethical guardrails
that OpenAI has been actively fighting to maintain, a battle that's been mostly uphill.
"It's been an ongoing struggle," says Pyry Avist, co-founder and CTO at Hoxhunt. "Rules are created, rules are broken, new rules are created, those rules are broken, and on and on."
While one can't just tell ChatGPT to create a convincing phishing email and credential-harvesting template sent from your CEO, someone can pretend to be the CEO and easily draft an urgent email to the finance team demanding that they alter an invoice payment, he says.
Indeed, across the board, generative AI tools provide criminals the same core functions that they provide technology professionals: the ability to operate at greater speed and scale, Bambenek says. Attackers can now generate phishing campaigns quickly and launch more simultaneously.
As phishing remains one of the primary ways that cyberattackers gain initial entry into an enterprise system to conduct further malicious activity, it's essential to implement conventional security protections against it. These defenses can still detect AI-enabled phishing and, more importantly, subsequent actions by the threat actor.
Fundamentally, this doesn't change the dynamics of what a phishing campaign is, nor the context in which it operates, Bambenek says. As long as you aren't dealing with phishing from a compromised account, reputational systems can still detect phishing from inauthentic senders, i.e., typosquatted domains, invoices from free Web email accounts, etc.
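As an illustration of the kind of reputational check Bambenek describes, the sketch below flags sender domains that sit within a small edit distance of a trusted domain, which is how a typosquat like "examp1e.com" would stand out. The trusted-domain list and the distance threshold are hypothetical; real reputational systems weigh many more signals.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_typosquat(sender_domain: str, trusted_domains: list[str],
                 max_dist: int = 2) -> bool:
    # Flag domains that are close to, but not identical to, a trusted domain.
    for trusted in trusted_domains:
        dist = edit_distance(sender_domain.lower(), trusted.lower())
        if 0 < dist <= max_dist:
            return True
    return False
```

A check like this catches lookalike domains on inauthentic senders but, as Bambenek notes, does nothing against mail from a genuinely compromised account, which is why it is only one layer of a defense-in-depth strategy.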
Implementing a defense-in-depth strategy with all the security telemetry available for fast analytics also can help organizations identify a phishing attack before attackers compromise a victim and move on to the next phase of attack, he says.
Defenders don't need to detect every single thing an attacker does in a threat chain; they just have to detect something before the final stages of an attack, such as ransomware or data exfiltration, so having a strong security data analytics program is essential, Bambenek says.
Other security professionals also promote using
AI-based security tools
, the numbers of which are growing,
to fight adversarial AI
, in effect fighting fire with fire to combat the increased sophistication of the threat landscape.
