OpenAI Disrupts 5 AI-Powered, State-Backed Influence Ops

Published: 23/11/2024   Category: security




Most of the operations were feckless efforts with little impact, but they illustrate how AI is changing the game for inauthentic content on both the adversary and defense sides.



OpenAI has identified and disrupted five influence operations that were using its artificial intelligence (AI) tools in one way or another.
The operations, one each from China, Iran, and Israel, plus two from Russia, focused on spreading political messaging. As OpenAI reports, they primarily used AI to generate text, such as social media posts and comments.
None of them were particularly effective, however. On the Brookings Breakout Scale, which measures the impact of influence operations on a scale of 1 to 6, none scored higher than a 2. A score of 1 means the campaign spread only within a single community or platform, while a 6 means triggering a policy response or some other form of concrete action, like violence. A 2 means the operation spread across multiple communities on one platform, or one community across multiple platforms.
The influence operations in question, while geographically diverse, ultimately were rather similar in nature:
Among the most notorious of them is Spamouflage, from China. It used OpenAI tooling to debug its code, research social media activity, and post content to X, Medium, and Blogspot in multiple languages.
Bad Grammar, a newly discovered threat from Russia, operated primarily on Telegram, targeting individuals in Eastern Europe and the United States. It also used AI to debug code it employed to run a Telegram bot and write political comments on Telegram in both Russian and English.
A second Russian group, Doppelganger, used AI to post comments on X and 9GAG in five European languages, as well as to generate headlines and to translate, edit, and convert news articles into Facebook posts.
An Iranian entity, known as the International Union of Virtual Media (IUVM), used AI for generating and translating articles, as well as headlines and website tags for its site.
Finally, there's Zero Zeno, an operation run by Stoic, a Tel Aviv-based political marketing and business intelligence company. Stoic used OpenAI to generate articles and comments for Instagram, Facebook, X, and other websites.
Stoic has also drawn attention lately from Meta. In its latest Adversarial Threat Report, Meta reported taking down 510 Facebook accounts, 32 Instagram accounts, 11 pages, and one group associated with the company. Only around 2,000 accounts followed its various Instagram accounts, about 500 accounts followed its Facebook pages, and fewer than 100 joined its Facebook group.
Overall, while useful as case studies, these campaigns won't be missed by many.
"Text-based campaigns are largely ineffective," says Jake Williams, former NSA hacker and faculty member at IANS Research, "because generative AI only helps scale those disinformation ops where people are reading content, something that's becoming increasingly rare, especially from sites without significant reputation. I think most people realize you can't trust everything you read on the Internet at this point. But images and video? That's what scares me. People are far more likely to consume synthetically created images and video than text."
To combat greater AI misuse, OpenAI wrote in a more detailed report that it is collaborating with industry partners and using threat activity to design more secure platforms for users. The company also "invest[s] in technology and teams to identify and disrupt actors like the ones we are discussing here, including leveraging AI tools to help combat abuses."
Dark Reading has reached out to OpenAI to clarify what it does, precisely, to disrupt and combat malicious actors, but has not yet received a reply.
Ultimately, the job of blocking fake content online is more technically difficult than many realize.
"One of the biggest challenges here is that reliable solutions [for] detection so far do not exist," explains Naushad UzZaman, CTO and co-founder of Blackbird.AI. "They are also unlikely to exist in the future. There have now been many cases of state-of-the-art generated-text detection applications being used to punish students. But many of these results are false. The problem is that the false positive rate of these detectors is too high. They flag too many instances of real text."
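To see why a high false positive rate is so damaging in practice, consider a rough back-of-the-envelope calculation. The figures below are hypothetical, chosen only for illustration and not taken from OpenAI, Blackbird.AI, or either report; they simply show how a detector that catches most AI-generated text can still flag mostly genuine text, because genuine text vastly outnumbers the fakes.

```python
# Hypothetical illustration of the base-rate problem behind false positives;
# every number here is invented for the example.

def flagged_breakdown(total_docs, fake_share, true_positive_rate, false_positive_rate):
    """Split the detector's flagged documents into truly fake vs. wrongly flagged real ones."""
    fake_docs = total_docs * fake_share
    real_docs = total_docs - fake_docs
    flagged_fake = fake_docs * true_positive_rate       # fakes correctly caught
    flagged_real = real_docs * false_positive_rate      # genuine text wrongly flagged
    precision = flagged_fake / (flagged_fake + flagged_real)
    return flagged_fake, flagged_real, precision

# 1,000,000 documents, 2% of them AI-generated, and a detector that catches
# 90% of fakes but also flags 5% of genuine text.
fake, real, precision = flagged_breakdown(1_000_000, 0.02, 0.90, 0.05)
print(f"flagged fakes: {fake:,.0f}  wrongly flagged real docs: {real:,.0f}  precision: {precision:.0%}")
# -> flagged fakes: 18,000  wrongly flagged real docs: 49,000  precision: 27%
```

Under those assumptions, roughly three out of every four flagged documents are genuine, which is exactly the dynamic that gets students wrongly accused.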
"The situation is also unlikely to improve in the future for two reasons," UzZaman continues. "First, any reliable detector of fake content can be used to create training data to improve the realism of the fake content generator. Second, the best way to make a fake content detector is actually to train a powerful fake content generator, and then use it for detection. By these two routes, any efforts to build better fake content detectors will lead to better fake content generators."
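His first route can be pictured as a feedback loop. The toy simulation below is a minimal sketch of that dynamic under invented assumptions (content reduced to a single numeric "style" feature, a fixed threshold detector, and a generator that simply re-fits itself on whatever evades detection); it is not how any real detector or generator works, but it shows how a detector's own verdicts become free training signal for the forger.

```python
import random
import statistics

# Toy model: "real" content clusters around 0 on a single style feature, the
# generator starts out noticeably different, and a fixed detector flags anything
# that sits too far from real. Each round, the generator re-trains on the
# samples that slipped past the detector.

random.seed(0)

def detector(sample, threshold=1.5):
    """Flag content whose style feature is too far from what real content looks like."""
    return abs(sample) > threshold   # True means "flagged as fake"

generator_mean = 3.0                 # fakes start out easy to spot
for round_num in range(1, 6):
    batch = [random.gauss(generator_mean, 1.0) for _ in range(1000)]
    survivors = [x for x in batch if not detector(x)]
    pass_rate = len(survivors) / len(batch)
    print(f"round {round_num}: generator mean = {generator_mean:.2f}, detector pass rate = {pass_rate:.1%}")
    if survivors:                    # detector verdicts become training data for the generator
        generator_mean = statistics.mean(survivors)
```

In this toy run the share of fakes slipping past the detector climbs from single digits to a clear majority within a couple of rounds, which is the escalation UzZaman is warning about.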
Williams agrees with the sentiment. "We cannot count on technical solutions here. Even digital watermarks are easily bypassed. This is a policy problem enabled by technology, where unfortunately technology won't be effective in addressing it," he says.
