Black Hat Q&A: Defending Against Cheaper, Accessible ‘Deepfake’ Tech

Published: 23/11/2024   Category: security




ZeroFox’s Matt and Mike Price discuss their work researching cybersecurity responses to the rising tide of ‘deepfake’ videos.



The tools and techniques to create false videos via AI-driven image synthesis are getting easier to access every year, and few people know that better than ZeroFox's Matt Price and Mike Price (no relation). In an email interview with Black Hat's Alex Wawro, the pair of security experts shared their latest research, which will be presented at Black Hat USA in Las Vegas this summer.
Alex: Why are deepfakes important?
Matt: 
For me personally, I think deepfakes are important because of their potential to change political discourse, and public discourse in general. We've already seen evidence of this, not even with deepfakes but with people splicing videos and slowing them down. I think deepfakes have a lot of potential to do some good, especially when you think about movies and special effects, but they also have a lot of potential to cause problems.
Mike:
Long story short, here at ZeroFox we do a lot of work in terms of analyzing content for security-related issues. We started off as a social media security company, and when I arrived here four or five years ago, most of what we were doing was "Hey, is there something bad in this text? Or is there something bad in this image?" So that brought us to the question: what about video?
A couple of years ago, when deepfakes appeared on the scene, our research team organically took an interest in the topic and we started looking into how they're created and how we can develop protections against them. I've been working with Matt to really round out not just the offensive parts but also the defensive part: how do you detect these things and do something about them?
Alex: How good is deepfake tech right now, and how quickly do you think it will pose a significant threat to security systems?
Mike:
The research that's been done by other folks, and the work that we've done in understanding what's going on out there, suggests that the tools and resources required to produce deepfakes are much lower-cost now. Previously, stuff like this didn't really exist outside of Hollywood studios that needed to synthesize a person's image. But now anybody can download an open-source package and produce a fake video clip pretty quickly. The cost has been brought down a ton, and the complexity has been brought down a ton, so those are really the main risk factors.
As far as quality goes, from what we've seen there's still a lot of work going on to really perfect this stuff; you have a lot of little hiccups with regard to, for example, getting a variety of different videos, jumping through all kinds of hoops to get the right kinds of source images, and so on. So there are still a lot of hurdles to producing deepfakes that are really dynamic, with many people in the video moving around and changing positions. You mostly see short clips of a single person looking forward; there are still some limitations to what's easily accomplished with this tech.
But there's a lot of work going on. The tooling seems to be getting better and better, and people are doing a lot of exploration of different algorithms that may be able to produce better results with less input. So that's where things stand today. As far as people using it for nefarious purposes, mostly we're seeing lots of proof-of-concept videos out there. Nicolas Cage is the guinea pig for a lot of the work being done, and then you see some political examples, like the Obama video.
Alex: Why did you feel it was important to give this talk at Black Hat, and what do you hope attendees will get out of it?
Mike:
A lot of people have asked about this subject; I know that in the federal space there are a lot of people thinking about whether this will be an issue in the future. So there are lots of questions in the air about what deepfake technology is, how it works, how real it can be, that sort of thing. We want to explain all that, and then walk you through what your options are for detecting deepfakes and doing something about them.
Matt:
To piggyback off that, I'm mainly interested in the detection side, and I think this talk is important because I've seen some quite sensationalist headlines claiming there is no solution to deepfakes, which isn't true. There are detection methods out there right now; DARPA is actually heavily investing in this area as well. So that's kind of the point, for me: we can detect deepfakes. There are tools to do it; this is just a security problem like any other.
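The detection methods Matt alludes to commonly score individual frames for synthesis artifacts and then aggregate those scores into a video-level verdict. As a purely illustrative sketch of that aggregation step (the `frame_fake_score` stub below is a hypothetical stand-in for a trained classifier, not anything ZeroFox or DARPA has published):

```python
from statistics import mean

def frame_fake_score(frame: dict) -> float:
    """Hypothetical per-frame classifier: returns a probability in [0, 1]
    that the frame contains synthesized content. In a real detector this
    would be a trained model; here it's a stub for illustration."""
    return frame.get("score", 0.0)

def classify_video(frames: list, threshold: float = 0.5) -> bool:
    """Aggregate per-frame scores into one video-level decision.
    Averaging across frames smooths out single-frame false positives."""
    if not frames:
        raise ValueError("no frames to classify")
    return mean(frame_fake_score(f) for f in frames) >= threshold

# Example: a clip where most frames score high for synthesis artifacts.
clip = [{"score": s} for s in (0.9, 0.8, 0.7, 0.2)]
classify_video(clip)  # True: the average score (0.65) exceeds 0.5
```

Real systems differ mainly in what `frame_fake_score` looks at (blink rates, face-boundary blending artifacts, frequency-domain fingerprints) and in how aggressively they aggregate, but the frame-then-video structure is a common pattern.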
Alex: What are you hoping to get out of Black Hat this year?
Matt:
I'm really interested in some of the developments in neural networks and their applications to cybersecurity problems. My role at ZeroFox is mainly to run our data science program, so I'm always interested in the newest and latest tech on that front, and neural networks seem to be one of the hot topics for solving problems that we've traditionally had trouble solving.
For more information about the ZeroFox deepfake Briefing and many others, check out the Black Hat USA Briefings page, which is regularly updated with new content as we get closer to the event. Black Hat USA returns to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what's happening at the event and how to register, check out the Black Hat website.
