Magical Thinking Drives the Myth of AI Solving Security

Published: 22/11/2024   Category: security




AI is being called the solution to future security problems, but we shouldn't rely on the technology for too much, too soon.



Touring the Black Hat show recently in Las Vegas, I was struck by how the cybersecurity and Vegas entertainment industries seem to be converging: Both love magic shows. While the IT versions aren't as glitzy, vendors continually pitch the next generation of technology as the magic cure to our growing cybersecurity challenges.
Let's face it -- we've invested billions of dollars over decades to improve security, yet the problems keep getting worse. The continual back-and-forth between clever hackers and reactive security products never seems to end. No doubt we've gotten faster at identifying attacks and patching vulnerabilities, but the bad guys are upping their game dramatically, using sophisticated tools created by well-organized crime syndicates and, of course, the NSA. It's hard to watch WannaCry, Petya, Industroyer, and the other weekly attacks and say that we're winning.
In this environment, a healthy dose of skepticism is warranted when new vendors claim to have found the cure, especially when it all depends on the magic of artificial intelligence (AI). One security vendor, laying it on thick in a flowery blog post, describes the security advantages of AI as being like a science fiction story, with effects that are indeed magical. Seeing their demo at Black Hat, I asked for a bit more detail, and apparently the secret to their success with AI is... (wait for it...) mathematics.
Artificial intelligence and machine learning are indeed powerful and transformative in many fields that require finding patterns in vast quantities of data. For the antivirus industry, which has grown up around signatures and pattern matching, this does seem like a breakthrough, and it will no doubt reduce analysis time. But automating a flawed model doesn't always yield better results.
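To make the pattern-finding point concrete, here is a toy sketch -- not any vendor's actual product -- that trains a scikit-learn classifier on synthetic byte-histogram features. The data, features, and labels are all invented for illustration; the point is simply how quickly a model can triage a sample once patterns have been learned from bulk data.

# Illustrative sketch only: a toy classifier trained on synthetic byte-histogram
# features, standing in for the kind of large-scale pattern finding vendors
# describe. The feature choice, corpus, and labels are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def fake_byte_histogram(shift: float) -> np.ndarray:
    """Synthetic 256-bin byte histogram; `shift` nudges the distribution."""
    counts = rng.poisson(lam=10 + shift * np.linspace(0, 5, 256))
    return counts / counts.sum()

# Pretend corpus: 500 "benign" and 500 "malicious" samples with slightly
# different byte distributions -- the statistical regularity the model learns.
X = np.array([fake_byte_histogram(0.0) for _ in range(500)] +
              [fake_byte_histogram(1.0) for _ in range(500)])
y = np.array([0] * 500 + [1] * 500)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Triage a new sample in milliseconds instead of hours of manual analysis.
new_sample = fake_byte_histogram(1.0).reshape(1, -1)
print("flagged as malicious:", bool(clf.predict(new_sample)[0]))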
The antivirus model is fundamentally flawed because it is always looking backwards -- reacting to malware and creating signatures to catch the same virus when it returns. The underlying assumption is that bad actors fall back on the same old tactics over and over again, but nothing could be further from the truth. Reducing the reaction and signature update time matters within this model, and AI will likely help. But the larger problem is that pattern matching is easily fooled. Sophisticated hackers continually change tactics, modify tools, and increasingly use fileless attacks, manipulating native scripts and blocks of memory to trick legitimate applications into doing the wrong thing. And no matter how fast the reaction time is, the largest threats come from vulnerabilities that have not yet been discovered, named, and added to the catalog of known patterns. For example, WannaCry exploited an SMBv1 vulnerability that had existed unnoticed for 16 years and flew under the radar of most security products until massive damage was done.
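To show how fragile that backward-looking model is, here is a minimal sketch of an exact-match signature check, with invented sample bytes and hashes. A single appended byte is enough to slip a repacked variant past it, and a fileless attack never produces a file to hash at all.

# A minimal sketch of why exact-match signatures look backwards: the "signature"
# here is just a SHA-256 hash of a previously seen sample, and a single flipped
# or appended byte in a repacked variant slips past it. Samples are invented.
import hashlib

known_bad_hashes = set()

def signature(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

# Yesterday's malware gets analyzed and added to the catalog...
original_malware = b"\x4d\x5a\x90\x00...dropper-payload..."
known_bad_hashes.add(signature(original_malware))

def scan(payload: bytes) -> bool:
    """Return True if the payload matches a known-bad signature."""
    return signature(payload) in known_bad_hashes

# ...so the same sample is caught on its return.
print(scan(original_malware))                      # True

# But a trivially modified variant -- one byte appended -- is invisible.
print(scan(original_malware + b"\x00"))            # False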
The other fundamental challenge with AI is that we're not fighting a static threat. We are fighting extremely resourceful humans who know they're battling AI and who look for innovative ways to bypass controls and confuse machine learning models. This challenge is called adversarial AI, and it acknowledges that the magical tool is less effective when fighting itself. Steve Grobman, CTO at McAfee, describes this problem with a good analogy:
"If you have a motion sensor over your garage hooked up to your alarm system -- say every day I drove by your garage on a bicycle at 11 p.m., intentionally setting off the sensor. After about a month of the alarm going off regularly, you'd get frustrated and make it less sensitive, or just turn it off altogether. Then that gives me the opportunity to break in."
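A toy simulation of Grobman's analogy, with made-up motion scores and thresholds, shows the dynamic: flood a threshold-based detector with harmless triggers, wait for the operator to desensitize it, and then walk in under the raised threshold.

# A toy model of the garage-sensor analogy. All numbers are made up purely to
# illustrate the adversarial dynamic, not taken from any real detector.
alert_threshold = 0.5          # motion scores above this raise an alarm

def detect(motion_score: float) -> bool:
    return motion_score > alert_threshold

# A month of deliberate, harmless bicycle passes at 11 p.m. ...
nightly_passes = [0.6] * 30
false_alarms = sum(detect(score) for score in nightly_passes)
print("false alarms this month:", false_alarms)    # 30

# ...so the frustrated operator desensitizes the sensor.
if false_alarms > 20:
    alert_threshold = 0.8

# The actual break-in now goes unnoticed.
real_break_in_score = 0.7
print("break-in detected:", detect(real_break_in_score))   # False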
The fundamental problem is that the world of known bad stuff, while growing, is infinitely smaller than the realm of present and future unknown bad. While AI may deliver exponential progress in expanding our catalog of known bad stuff, the unknown continues to grow at an even faster pace.
A new school of thought is emerging. Rather than using the past to guess the future, new solutions look at the present -- the actual functioning of applications -- for indicators of attack. Using deterministic methods, these solutions can map the known good activity of applications and take preventive action if anything goes off the rails.
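As a generic illustration of the idea -- not a description of any particular vendor's implementation -- the following sketch records which child processes a hypothetical application is expected to spawn and blocks anything outside that known-good map, with no signature of the attack required. The application names and process lists are hypothetical.

# A generic sketch of the "map the known good" idea: record which child
# processes an application legitimately spawns, then treat anything outside
# that map as an indicator of attack. Names here are hypothetical.
EXPECTED_CHILDREN = {
    "billing-webapp": {"postgres-client", "report-renderer"},
}

def check_spawn(app: str, child_process: str) -> None:
    """Allow mapped behavior; take preventive action on anything else."""
    allowed = EXPECTED_CHILDREN.get(app, set())
    if child_process in allowed:
        print(f"{app} -> {child_process}: expected, allowed")
    else:
        # Deterministic rule: unexpected behavior is blocked, no signature needed.
        print(f"{app} -> {child_process}: NOT in known-good map, blocking")

check_spawn("billing-webapp", "report-renderer")   # normal operation
check_spawn("billing-webapp", "powershell.exe")    # classic fileless pivot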
Related posts:
DevSecOps: Security in the Process
How Secure Are Your IoT Devices?
Black Hat Keynote: A Call to Change

Willy Leichter is vice president of marketing for Virsec. He has worked with a wide range of global enterprises to help them meet evolving security challenges. With extensive experience in a range of IT domains, including network security, global data privacy laws, data loss prevention, access control, email security, and cloud applications, he is a frequent speaker at industry events and an author on IT security and compliance issues.

