Microsoft & Others Catalog Threats to Machine Learning Systems

Published: 23/11/2024 | Category: security




Thirteen organizations worked together to create a dictionary of techniques used to attack ML models and warn that such malicious efforts will become more common.



In March 2016, Microsoft introduced a chatbot on Twitter, dubbed Tay, that attempted to hold conversations with users and improve its responses through machine learning (ML). A coordinated attack on the chatbot, however, caused the algorithm to start tweeting wildly inappropriate and reprehensible words and images within its first 24 hours, Microsoft stated at the time.
For the software giant, the attack demonstrated that the world of ML and artificial intelligence (AI) would come with threats. Last week, the company and an interdisciplinary group of security professionals and ML researchers from a dozen other organizations took a first stab at creating a vocabulary for describing attacks on ML systems with the initial draft of the Adversarial ML Threat Matrix.
The threat matrix is an extension of MITRE's ATT&CK framework for the classification of attack techniques. The information should help secure not just the developers of ML systems but also the companies using those systems, says Jonathan Spring, senior member of the technical staff of the CERT Division of Carnegie Mellon University's Software Engineering Institute.
"If you're using a machine learning system — even if you're not the one developing it — you should make sure that your broader system is fault tolerant," Spring says. "You should be looking for people pressing on [attacking] the broader machine learning part of your system. And you can do those checks on your system without really knowing too much about how the machine learning is working."
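Spring's advice — watch for people "pressing on" the ML part of a system without needing to understand its internals — can be illustrated with a hypothetical black-box wrapper. The wrapper, thresholds, and churn statistic below are illustrative assumptions, not anything prescribed by the threat matrix:

```python
# Illustrative sketch: treat an ML model as a black box and flag
# suspicious usage patterns without inspecting its internals.
from collections import deque

class MonitoredModel:
    """Wrap any predict() callable and watch its output stream."""

    def __init__(self, predict_fn, window=100, flip_threshold=0.4):
        self.predict_fn = predict_fn
        self.recent = deque(maxlen=window)     # rolling window of outputs
        self.flip_threshold = flip_threshold   # tolerated label-churn rate

    def predict(self, x):
        y = self.predict_fn(x)
        self.recent.append(y)
        return y

    def churn_rate(self):
        """Fraction of consecutive predictions that differ; a sudden
        spike can indicate an attacker probing a decision boundary."""
        outputs = list(self.recent)
        pairs = list(zip(outputs, outputs[1:]))
        if not pairs:
            return 0.0
        return sum(a != b for a, b in pairs) / len(pairs)

    def is_suspicious(self):
        return self.churn_rate() > self.flip_threshold

# Normal traffic barely churns; boundary probing churns heavily.
model = MonitoredModel(lambda x: int(x > 0.5))   # toy stand-in model
for x in [0.1, 0.2, 0.1, 0.3]:                   # ordinary inputs
    model.predict(x)
assert not model.is_suspicious()
for x in [0.49, 0.51, 0.49, 0.51, 0.49, 0.51]:   # probing the boundary
    model.predict(x)
assert model.is_suspicious()
```

The point of the sketch is that the check lives entirely outside the model, which is exactly the kind of fault-tolerance a deployer — not just a developer — can add.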
Machine learning has become a key factor in companies' plans to transform their businesses over the next decade. Yet most firms consider adversarial attacks on ML a future threat, not a current risk. Only three of 28 companies surveyed by Microsoft, for example, thought they had the tools in place to secure their ML systems.
Actual attacks on ML systems span a spectrum, from generic exploits of vulnerabilities to ML-specific attacks on models or data. In one case, an attacker exploited a misconfiguration in the systems of facial recognition firm Clearview AI to gain access to some of its infrastructure, which could have allowed the attacker to pollute the dataset.
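The "polluting the dataset" end of that spectrum can be sketched with a toy label-flipping attack. The classifier and data below are invented for illustration and have no connection to Clearview AI's actual systems:

```python
# Hypothetical data-poisoning sketch: an attacker who can tamper with
# the training set flips labels so the model learns the wrong rule.

def centroid_classifier(train):
    """Train a 1-D nearest-centroid classifier from (value, label) pairs."""
    by_label = {}
    for value, label in train:
        by_label.setdefault(label, []).append(value)
    centroids = {lbl: sum(v) / len(v) for lbl, v in by_label.items()}
    def predict(x):
        return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))
    return predict

clean = [(0.1, "benign"), (0.2, "benign"),
         (0.9, "malicious"), (0.8, "malicious")]
predict = centroid_classifier(clean)
assert predict(0.95) == "malicious"   # clean model catches the attack

# Poisoning: the attacker swaps the labels in the training data.
poisoned = [(v, "malicious" if lbl == "benign" else "benign")
            for v, lbl in clean]
predict = centroid_classifier(poisoned)
assert predict(0.95) == "benign"      # poisoned model waves it through
```

Nothing about the model's code changed — only its training data — which is why the threat matrix treats data-supply-chain access as an attack technique in its own right.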
"[W]e believe the first step in empowering security teams to defend against attacks on ML systems is to have a framework that systematically organizes the techniques employed by malicious adversaries in subverting ML systems," Microsoft's researchers said in a blog post announcing the Adversarial ML Threat Matrix. "We hope that the security community can use the tabulated tactics and techniques to bolster their monitoring strategies around their organizations' mission-critical ML systems."
The Adversarial ML Threat Matrix is based on the MITRE ATT&CK framework, which has grown in popularity since its original release in 2015. More than 80% of companies use the framework as part of their security response programs, according to an October survey published by the University of California at Berkeley and McAfee.
The threat matrix is the work of a baker's dozen of organizations. Microsoft, Carnegie Mellon University's Software Engineering Institute, and MITRE are collaborating with Bosch, IBM, NVIDIA, Airbus, Deep Instinct, Two Six Labs, the University of Toronto, Cardiff University, PricewaterhouseCoopers, and the Berryville Institute of Machine Learning on the framework. The team used a variety of case studies to identify the common tactics and techniques used by attackers and describe them for security researchers.
At the DerbyCon conference in 2019, for example, two researchers showed a way to use a data-based attack against Proofpoint's email security system to extract the training data and build a system an attacker could use as a test platform for crafting email attacks that would not be caught by the messaging security product. Microsoft also mined its experience with the Tay chatbot to inform the threat matrix.
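The extraction-then-evasion pattern in the Proofpoint case study — query the product as an oracle, copy its behavior into a local surrogate, then tune attacks offline — can be sketched in miniature. The "victim" filter, its hidden threshold, and the grid search below are all invented for illustration and bear no relation to Proofpoint's actual product:

```python
# Hedged sketch of model extraction followed by evasion.

HIDDEN_THRESHOLD = 0.6   # internal to the victim; unknown to the attacker

def victim_filter(spam_score):
    """Stand-in for a deployed email filter: blocks high-scoring mail."""
    return "blocked" if spam_score > HIDDEN_THRESHOLD else "delivered"

# Step 1: probe the oracle over a grid of inputs and record its verdicts.
queries = [i / 100 for i in range(101)]
labels = {q: victim_filter(q) for q in queries}

# Step 2: fit a surrogate -- here, simply recover the apparent threshold.
surrogate_threshold = max(q for q in queries if labels[q] == "delivered")

# Step 3: craft an attack offline that the surrogate predicts will pass.
attack_score = surrogate_threshold
assert victim_filter(attack_score) == "delivered"   # evades the real filter
```

Real extraction attacks recover far richer models than a single threshold, but the economics are the same: once the surrogate exists, the attacker can iterate without ever touching the victim again.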
While the risks to ML and AI systems are real, they aren't the most common threats, Charles Clancy, chief futurist and general manager of MITRE Labs, said in an interview. "Typically, AI isn't the first avenue for our adversaries, particularly regarding attacking our critical infrastructure," he said. "There's a truism in the power industry that the most dangerous adversaries to our electric grid are — squirrels. Keep that in mind — there are risks to AI, but it's also extremely valuable."
The Adversarial ML Threat Matrix is only the first attempt to capture all the threats posed to ML systems. The companies and security researchers called for others to contribute their experiences as well. 
"Perhaps this first version of the Adversarial ML Threat Matrix captures the adversary behavior you have observed — [i]f not, please contribute what you can to MITRE and Microsoft so your experience can be captured," CMU's Software Engineering Institute stated in its blog post. "If the matrix does reflect your observations, is it helpful in communicating and understanding this adversary behavior and explaining threats to your constituents? Share those experiences with the authors as well, so the matrix can improve!"
