Reshaping the Threat Landscape: Deepfake Cyberattacks Are Here

Published: 23/11/2024 | Category: security




It's time to dispel the notion of deepfakes as an emerging threat. All the pieces for widespread attacks are already in place and readily available to cybercriminals, even unsophisticated ones.



Malicious campaigns involving deepfake technologies are a lot closer than many might assume, and they are hard to detect and mitigate.
A new study by Trend Micro of the use and abuse of deepfakes by cybercriminals shows that all the elements needed for widespread use of the technology are in place and readily available in underground markets and open forums. The study also shows that many deepfake-enabled phishing, business email compromise (BEC), and promotional scams are already happening and are quickly reshaping the threat landscape.
"From hypothetical and proof-of-concept threats, [deepfake-enabled attacks] have moved to the stage where non-mature criminals are capable of using such technologies," says Vladimir Kropotov, security researcher with Trend Micro and the main author of a report on the topic that the security vendor released this week.
"We already see how deepfakes are integrated into attacks against financial institutions, scams, and attempts to impersonate politicians," he says, adding that what's scary is that many of these attacks use the identities of real people, often scraped from content they post on social media networks.
One of the main takeaways from Trend Micro's study is the ready availability of tools, images, and videos for generating deepfakes. The security vendor found, for example, that multiple sites, including GitHub, offer source code for developing deepfakes to anyone who wants it. Similarly, enough high-quality images and videos of ordinary individuals and public figures are available for bad actors to create millions of fake identities or to impersonate politicians, business leaders, and other famous personalities.
Demand for deepfake services and for people with expertise in the topic is also growing on underground forums. Trend Micro found ads from criminals seeking these skills to carry out cryptocurrency scams and fraud targeting individual financial accounts.
"Actors can already impersonate and steal the identities of politicians, C-level executives, and celebrities," Trend Micro said in its report. "This could significantly increase the success rate of certain attacks such as financial schemes, short-lived disinformation campaigns, public opinion manipulation, and extortion."
There's also a growing risk of stolen or re-created identities belonging to ordinary people being used to defraud the impersonated victims or to conduct malicious activities under their names.
In many discussion groups, Trend Micro found users actively discussing ways to use deepfakes to bypass banking and other account verification controls, especially those involving video and face-to-face verification methods.
For example, criminals could steal a victim's identity and use a deepfake video of them to open bank accounts that are later used for money laundering. They could similarly hijack accounts, impersonate top-level executives to initiate fraudulent money transfers, or plant fake evidence to extort individuals, Trend Micro said.
Devices like Amazon's Alexa and the iPhone, which use voice or facial recognition, could soon be on the list of target devices for deepfake-based attacks, the security vendor noted.
"Since many companies are still working in remote or mixed mode, there is an increased risk of personnel impersonation in conference calls, which can affect internal and external business communications and sensitive business processes and financial flows," Kropotov says.
Trend Micro isn't alone in sounding the alarm on deepfakes. A recent online survey that VMware conducted of 125 cybersecurity and incident-response professionals also found that deepfake-enabled threats are not just coming; they are already here. A startling 66% of respondents, up 13 percentage points from 2021, said they had experienced a security incident involving deepfake use over the past 12 months.
"Examples of deepfake attacks [already] witnessed include CEO voice calls to a CFO leading to a wire transfer, as well as employee calls to IT to initiate a password reset," says Rick McElroy, VMware's principal cybersecurity strategist.
Generally speaking, these types of attacks can be effective because no technological fixes are yet available to address the challenge, McElroy says.
"Given the rising use and sophistication in creating deepfakes, I see this as one of the biggest threats to organizations from a fraud and scam perspective moving forward," he warns.
The most effective way to mitigate the threat currently is to increase awareness of the problem among the finance, executive, and IT teams who are the main targets for these social engineering attacks.
"Organizations can consider low-tech methods to break the cycle. This can include using a challenge and passphrase amongst executives when wiring money out of an organization or having a two-step and verified approval process," he says.
Gil Dabah, co-founder and CEO of Piaano, also recommends strict access control as a mitigating measure. "No user should have access to big chunks of personal data, and organizations need to set rate limits as well as anomaly detection," he says.
Even systems like business intelligence, which require big-data analysis, should access only masked data, Dabah notes, adding that no sensitive personal data should be kept in plaintext and that data such as PII should be tokenized and protected.
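The tokenization and masking approach Dabah describes can be sketched roughly as follows. This is a minimal illustration with a hypothetical in-memory vault (`PIIVault`); production deployments would use a hardened tokenization service with encryption at rest and audited access:

```python
import secrets

class PIIVault:
    """Minimal tokenization sketch: swaps raw PII for opaque tokens.

    Downstream systems (analytics, BI) handle only tokens or masked
    views; the raw value lives solely inside the vault.
    """

    def __init__(self):
        self._store = {}  # token -> raw value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # opaque, non-reversible token
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # In practice this path would be tightly access-controlled and logged.
        return self._store[token]

def mask_email(email: str) -> str:
    """Masked view for analytics: keep the domain, hide the local part."""
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

vault = PIIVault()
token = vault.tokenize("jane.doe@example.com")
# BI layers see only the token or a masked form, never the plaintext:
print(token.startswith("tok_"))                # True
print(mask_email("jane.doe@example.com"))      # j***@example.com
```

The key design point is that detokenization is a privileged, rate-limited operation, which is where the anomaly detection Dabah mentions would hook in.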
Meanwhile, on the detection front, developments in technologies such as AI-based generative adversarial networks (GANs) have made deepfake detection harder. "That means we can't rely on content containing artifact clues that there has been alteration," says Lou Steinberg, co-founder and managing partner at CTM Insights.
"To detect manipulated content, organizations need fingerprints or signatures that prove something is unchanged," he adds.
"Even better is to take micro-fingerprints over portions of the content and be able to identify what's changed and what hasn't," he says. "That's very valuable when an image has been edited, but even more so when someone is trying to hide an image from detection."
Steinberg says deepfake threats fall into three broad categories. The first is disinformation campaigns, mostly involving edits to legitimate content to change its meaning. As an example, Steinberg points to nation-state actors using fake news images and videos on social media, or inserting someone into a photo where they weren't present originally, something often used for implied product endorsements or revenge porn.
Another category involves subtle changes to images, logos, and other content to bypass automated detection tools, such as those used to detect knockoff product logos, images used in phishing campaigns, or even child pornography.
The third category involves synthetic or composite deepfakes that are derived from a collection of originals to create something completely new, Steinberg says. 
"We started seeing this with audio a few years back, using computer-synthesized speech to defeat voiceprints in financial services call centers," he says. "Video is now being used for things like a modern version of business email compromise or to damage a reputation by having someone say something they never said."
