Microsoft Says It's Time to Attack Your Machine-Learning Models

Published: 23/11/2024   Category: security




With access to some training data, Microsoft's red team recreated a machine-learning system and found sequences of requests that resulted in a denial-of-service.



Mature companies should conduct red team attacks against their machine-learning systems to suss out their weaknesses and shore up their defenses, a Microsoft researcher told virtual attendees at the USENIX ENIGMA Conference this week.
As part of the company's research into the impact of attacks on machine learning, Microsoft's internal red team recreated an automated machine-learning system that assigns hardware resources in response to cloud requests. Through testing their own offline version of the system, the team found adversarial examples that resulted in the system becoming overtaxed, Hyrum Anderson, principal architect of the Azure Trustworthy Machine Learning group at Microsoft, said during his presentation.
Pointing at attackers' efforts to get around content-moderation algorithms or anti-spam models, Anderson stressed that attacks on machine learning are already here.
"If you use machine learning, there is a risk of exposure, even if the threat does not currently exist in your space," he said. "The gap between machine learning and security is definitely there."
The USENIX presentation is the latest effort by Microsoft to bring attention to the issue of adversarial attacks on machine-learning models, which are often so technical that most companies do not know how to evaluate their security. While data scientists are considering the impact that adversarial attacks can have on machine learning, the security community needs to start taking the issue more seriously, but also as part of a broader threat landscape, Anderson says.
Machine-learning researchers are focused on attacks that pollute machine-learning data, epitomized by presenting two seemingly identical images of, say, a tabby cat, and having the AI algorithm identify them as two completely different things, he said. More than 2,000 papers citing these sorts of examples and proposing defenses have been written in the last few years, he said.
Meanwhile, security professionals are dealing with "things like SolarWinds, software updates and SSL patches, phishing and education, ransomware, and cloud credentials that you just checked into GitHub," Anderson said. And they are left to wonder what the recognition of a tabby cat has to do with the problems they are dealing with today.
In November, Microsoft joined with MITRE and other organizations to release the Adversarial ML Threat Matrix, a dictionary of attack techniques created as an addition to the MITRE ATT&CK framework. Almost 90% of organizations do not know how to secure their machine-learning systems, according to a Microsoft survey released at the time.
Microsoft's Research
Anderson shared a red team exercise conducted by Microsoft in which the team aimed to abuse a Web portal used for software resource requests and the internal machine-learning algorithm that automatically determines which physical hardware a requested container or virtual machine is assigned to.
The red team started with credentials for the service, under the assumption that attackers will be able to gather valid credentials, either by phishing or because an employee reused their username and password. The red team found that two elements of the machine-learning process were visible to any user: the training data (with read-only access) and key pieces of the data-collection portion of the ML pipeline.
That was enough to create their own version of the machine-learning model, Anderson said.
"Even though we built a poor man's replica model that is likely not identical to the production model, it did allow us to study, as a straw man, and formulate and test an attack strategy offline," he said. "This is important because we did not know what sort of logging, monitoring, and auditing would have been attached to the deployed model service, even if we had direct access to it."
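The surrogate-model step Anderson describes can be illustrated with a minimal sketch. Everything below is hypothetical (toy features, a simple 1-nearest-neighbor stand-in for the replica model; the real Microsoft system is not public): with read-only access to the training data, an attacker fits their own predictor and searches for useful requests entirely offline, never touching the monitored production service.

```python
# Hypothetical sketch: building a "poor man's replica" of a placement
# model from leaked training data, then searching for resource requests
# offline instead of probing (and being logged by) the live service.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the leaked training data: per-request resource features
# (normalized vCPU, memory, disk) and the host class the real scheduler chose.
X_train = rng.uniform(0, 1, size=(500, 3))
y_train = (X_train.sum(axis=1) > 1.5).astype(int)  # toy placement label

def surrogate_predict(X):
    """1-nearest-neighbor surrogate of the production placement model."""
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]

# Offline search: which candidate requests does the surrogate route to
# host class 1, the class the attacker wants to oversubscribe?
candidates = rng.uniform(0, 1, size=(1000, 3))
hits = candidates[surrogate_predict(candidates) == 1]
print(f"{len(hits)} of 1000 candidate requests land on the victim host class")
```

The point of the sketch is the workflow, not the model: even a crude replica lets the attacker rank candidate request sequences before spending a single query against the real, audited endpoint.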
Armed with a container image that requested specific types of resources to cause an oversubscribed condition, the red team logged in through a different account and provisioned the cloud resources. 
"Knowing those resource requests that would guarantee an oversubscribed condition, we could then instrument virtual machines with resource-hungry payloads, high CPU utilization and memory usage, which would be over-provisioned and cause a denial of service to the other containers on the same physical host," Anderson said.
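The oversubscription effect can be shown with a toy placement simulation (all numbers invented for illustration): a naive scheduler admits requests based on their declared needs, while each payload's actual consumption is far higher, so requests that look reasonable individually starve every co-tenant once co-located on one host.

```python
# Hypothetical sketch of oversubscription: the scheduler budgets by
# *declared* resource needs, but the adversarial payload's *actual*
# consumption is much higher, causing denial of service to co-tenants.

HOST_CAPACITY = 16.0  # vCPUs on one physical host (toy value)

class Host:
    def __init__(self, capacity):
        self.capacity = capacity
        self.declared = 0.0  # what the scheduler believes is in use
        self.actual = 0.0    # what running payloads really consume

    def try_place(self, declared_cpu, actual_cpu):
        # Naive scheduler: admits anything that fits the declared numbers.
        if self.declared + declared_cpu <= self.capacity:
            self.declared += declared_cpu
            self.actual += actual_cpu
            return True
        return False

host = Host(HOST_CAPACITY)
# Each adversarial request declares 2 vCPUs but its payload burns 6.
placed = sum(host.try_place(declared_cpu=2.0, actual_cpu=6.0) for _ in range(8))

print(f"placed={placed}, declared={host.declared}, actual={host.actual}")
print("host oversubscribed:", host.actual > host.capacity)
```

All eight requests are admitted because their declared total exactly fills the host, yet the real load is three times the host's capacity, which is the denial-of-service condition the red team engineered.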
More information on the attack can be found on a GitHub page from Microsoft that contains adversarial ML examples.
Anderson recommends that data-science teams defensively protect their data and models, and conduct sanity checks, such as making sure that the ML model is not over-provisioning resources, to increase robustness.
Just because a model is not accessible externally does not mean it's safe, he says.
"Internal models are not safe by default; that is an argument that is simply security by obscurity in disguise," he said. "Even though a model may not be directly accessible to the outside world, there are paths by which an attacker can exploit it to cause cascading downstream effects in an overall system."
