Images Play Persuasive Role in Disinformation Campaigns

Published: 23/11/2024 | Category: security




If the 2016 election is any indication, images included in state-sponsored social media posts are effective at disseminating propaganda, new analysis shows.



Disinformation campaigns similar to the ones leading up to the 2016 presidential election are again being developed and will ramp up before November, according to Gianluca Stringhini, a researcher and co-director of Boston University's Security Lab (SeclaBU).
Stringhini recently released research studying disinformation spread through Russian state-sponsored accounts during the 2016 election and events thereafter. Specifically, his team analyzed a dataset of 1.8 million images posted in 9 million tweets from 3,600 Russian troll accounts (confirmed as such by Twitter in October 2018).
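At that scale, pictures cannot be compared byte-for-byte; reposted images are typically recompressed, resized, or lightly edited, so large-scale studies of this kind generally rely on perceptual hashing to group visually identical images. As a minimal sketch of that general technique (not necessarily the report's actual pipeline), the following Python groups near-duplicate images; it assumes the third-party Pillow and ImageHash packages, and the directory name and distance threshold are illustrative:

```python
# Minimal sketch: grouping near-duplicate images with perceptual hashing.
# Assumes `pip install Pillow imagehash`; paths and threshold are hypothetical.
from pathlib import Path
from PIL import Image
import imagehash

THRESHOLD = 8  # max Hamming distance at which two hashes count as "the same" image

def hash_images(image_dir):
    """Compute a 64-bit perceptual hash (pHash) for every JPEG in a directory."""
    hashes = {}
    for path in Path(image_dir).glob("*.jpg"):
        with Image.open(path) as img:
            hashes[path.name] = imagehash.phash(img)
    return hashes

def group_near_duplicates(hashes):
    """Greedily cluster images whose hashes differ by at most THRESHOLD bits."""
    groups = []  # list of (representative_hash, [filenames])
    for name, h in hashes.items():
        for rep, members in groups:
            if h - rep <= THRESHOLD:  # ImageHash subtraction = Hamming distance
                members.append(name)
                break
        else:
            groups.append((h, [name]))
    return groups

if __name__ == "__main__":
    groups = group_near_duplicates(hash_images("troll_images/"))
    print(f"{len(groups)} distinct image clusters")
```

Because a perceptual hash changes only slightly when an image is recompressed or rescaled, clustering on hash distance lets researchers trace one meme across thousands of accounts and domains.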
His findings, published in the report "Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter," suggest that much of the disinformation spread during the 2016 election, as well as around high-profile national events that followed (e.g., the white supremacist rally in Charlottesville, Va.), leveraged image-based posts on social media.
"Nobody had looked at images before we did," he says. "But images are used as a vehicle of information."
Stringhini points to memes as particularly effective. "They essentially can convey information much faster than text," he says. "It's more straightforward for people to share these images. It's a more immediate vehicle of disinformation."
The analysis shows that Russian state-sponsored trolls were more influential in spreading political imagery than imagery on other topics, and that images fell just behind URLs in their effectiveness at disseminating propaganda. The dataset Stringhini's team analyzed included a mix of original images created by Russian accounts, as well as existing, real images and memes that trolls would find and share to groups with opposing viewpoints to create controversy.
The report further points out that the same images often appear both on the trolls' feeds and on specific domains, indicating that state-sponsored trolls might be trying to make their accounts look more credible and push their agenda by targeting unwitting users on popular Web communities like Twitter.
Indeed, a ranking of the top 20 domains that shared the same images as the state-sponsored accounts (page 6 of the report) puts Twitter in second place, behind Pinterest. Popular social media destinations including Facebook, Reddit, and YouTube made the list as well.
For its part, Twitter last week disclosed 32,242 accounts in its archive of state-linked information operations. The archived accounts include distinct operations Twitter attributes to the People's Republic of China (PRC), Russia, and Turkey. In a blog post outlining basic information about these actors, Twitter says Russian trolls comprised just 1,152 of those accounts, and that they were suspended for "violations of our platform manipulation policy, specifically cross-posting and amplifying content in an inauthentic, coordinated manner for political ends [including] promoting the United Russia party and attacking political dissidents."
While the transparency is appreciated, it's unclear whether these methods of locating and removing trolls are sophisticated enough or, crucially, whether they will ever be more sophisticated than the trolls themselves.
Indeed, Stringhini's report notes that the small number of Russian state-sponsored accounts identified by Twitter suggests these actors work hard to be taken seriously, and succeed.
It may be too soon to tell whether images will play a large role in efforts to impact the 2020 elections. Stringhini's report notes that Russian trolls also launched the QAnon conspiracy theory, at least partially supported by the use of images initially appearing on imageboards like 4chan and 8chan.
Further, he says, there's some evidence of new activity: "Right now we are seeing a lot of image-based campaigns propagating … but it's difficult to figure out which are organically developing," Stringhini says. "Essentially, what we see are these campaigns pushing certain narratives that are geared toward polarizing public discourse. People should keep an eye on this activity."
However, he adds, "It's tricky because this is what regular discussion looks like. It's usually not very straightforward to identify this malicious activity. Trolls blend into regular discussion and slowly hijack it."
Platforms, of course, have a greater ability than users do to identify suspicious accounts, by tracking access patterns, IP addresses, timing of posts, and coordination between campaigns. And while they are doing that to some extent, says Stringhini, ultimately it's an arms race between the sophistication of platform tools and troll techniques. So far, the trolls are in the lead.
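No platform publishes its detection logic, but one of the signals mentioned above, coordination between accounts, is straightforward to illustrate: flag bursts of distinct accounts posting identical content within a short time window. The following is a hypothetical sketch, not any platform's actual method; the tuple format, 60-second window, and five-account cutoff are all assumptions:

```python
# Hypothetical coordination signal: many distinct accounts posting the same
# text within a short window. Field names and thresholds are illustrative.
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(seconds=60)

def coordinated_groups(posts, min_accounts=5):
    """posts: iterable of (account_id, text, timestamp) tuples.
    Returns (text, accounts) pairs that look like coordinated bursts."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))

    suspicious = []
    for text, events in by_text.items():
        events.sort()  # chronological order
        for i, (start, _) in enumerate(events):
            # distinct accounts posting this exact text within WINDOW of `start`
            burst = {acct for ts, acct in events[i:] if ts - start <= WINDOW}
            if len(burst) >= min_accounts:
                suspicious.append((text, sorted(burst)))
                break  # one flag per text is enough
    return suspicious
```

In practice a real system would combine several such signals (shared IP ranges, synchronized account creation, near-duplicate images like those in Stringhini's dataset) rather than rely on exact text matches, which trolls can trivially evade.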
Related Content:
Project Aims to Unmask Disinformation Bots
Disarming Disinformation: Why CISOs Must Fight Back Against False Info
How China & Russia Use Social Media to Sway the West
How Cybersecurity Incident Response Programs Work (and Why Some Don't)


