Feds: Reducing AI Risks Requires Visibility & Better Planning

Published: 23/11/2024   Category: security




While attackers have targeted AI systems, failures in AI design and implementation are far more likely to cause headaches, so companies need to prepare.



When the US Department of Energy (DoE) analyzed the use of artificial intelligence and machine learning (AI/ML) models in critical infrastructure last month, the agency came up with a top 10 list of potential beneficial applications of the technology, including simulations, predictive maintenance, and malicious-event detection. 
Predictably, the DoE also came up with four broad categories of risk: unintentional failure modes, adversarial attacks against AI, hostile applications of AI, and compromise of the AI supply chain.
The DoE is not alone — the Biden administration is driving an extensive government assessment of the benefits and risks of using AI, especially in critical infrastructure networks. On May 3, for example, the Department of Transportation issued a request for information asking interested parties to describe both the benefits and dangers of AI to the transportation system. On April 29, the Department of Homeland Security (DHS) spelled out its own take, describing three broad categories of risk: attacks using AI, attacks targeting AI systems, and failure of design or implementation.
Yet the DHS also gave broad recommendations on how organizations can mitigate the risk of AI, focusing on a four-part strategy: governing by creating policy and a culture of risk management, mapping all the current assets or services using AI, measuring by monitoring the ongoing usage of AI, and managing by implementing a risk management strategy. 
It's a good, broad overview of what organizations need to do to mitigate AI risk, but it's just a start, says Malcolm Harkins, chief security and trust officer at HiddenLayer, an AI risk management firm.
"If you look at this like a book, they're great chapters — great macro business processes," he says. "The real success or failure will be the depth of [your approach], and then the efficacy and efficiency with which you do it."
A variety of risks have already targeted organizations. Malicious AI/ML models hosted on Hugging Face and other repositories have demonstrated the potential of attacks through the supply chain, as described by the DoE. Indirect prompt-injection attacks against ChatGPT and other large language models (LLMs) have demonstrated that even the most promising AI models could be co-opted or corrupted by attackers, as highlighted by the DHS.
Attackers are also widely experimenting with AI models to make their operations more efficient and their attacks — especially phishing attacks — more effective.
For organizations, the growing use of AI means growing exposure to the risks. Organizations won't be able to avoid adopting AI/ML models: Even if they are not rushing to adopt AI in their own operations, an increasing number of products include — or at least claim to include — AI features. 
In its report, Safety and Security Guidelines for Critical Infrastructure Owners and Operators, the DHS describes AI risk management as a framework of ongoing processes that Map, Measure, and Manage exposure to AI in the business, with an overarching Govern function that regulates those activities.
For many companies, the Map and Measure parts of the DHS mitigation strategy will initially be the most important, HiddenLayer's Harkins says. 
"I'm a former finance procurement guy — I need an inventory; I need to discover the assets to manage," he says. "Where is AI in use? Where am I getting it from a third party because they've started incorporating it into the technology they provided to me, and then how do I ask the right questions of my third-party risk management to make sure they've done it right?"
Mapping involves identifying all the uses of AI in the organizations environment, documenting the possible safety and security risks of those implementations, and reviewing third-party supply chains for AI risk. Measuring focuses on defining metrics to detect and manage AI risk, as well as the continuous monitoring of AI implementations. 
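The mapping step described above amounts to building and querying an inventory of AI assets. As a purely illustrative sketch (the asset fields and helper function below are hypothetical, not part of the DHS guidance), such an inventory might track each model's source and documented risks so third-party items can be pulled out for supply chain review:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "Map" step: a minimal inventory of AI assets,
# recording where each model comes from and which risks were documented.
@dataclass
class AIAsset:
    name: str
    source: str              # "internal" or a third-party vendor name
    use_case: str
    risks: list = field(default_factory=list)

def third_party_assets(inventory):
    """Return assets sourced from vendors, for third-party risk review."""
    return [a for a in inventory if a.source != "internal"]

inventory = [
    AIAsset("fraud-scorer", "internal", "transaction screening", ["data drift"]),
    AIAsset("chat-assist", "VendorX", "customer support", ["prompt injection"]),
]

print([a.name for a in third_party_assets(inventory)])  # → ['chat-assist']
```

In practice the inventory would also feed the Measure step, since each documented risk needs a metric and monitoring attached to it.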
The DHS paper focuses specifically on critical infrastructure owners and operators, who are considering AI models and platforms as possible solutions to long-standing challenges, such as logistics and cyber defense; the top AI use categories include operational awareness, performance optimization, and automation of operations.
Using AI in the world of operational technology means that companies have to worry about the secure transfer of data into the cloud because — while smaller ML models can run on-premises — the most advanced AI models are run in the cloud as a service, says Phil Tonkin, field CTO for Dragos, a provider of cybersecurity for critical infrastructure. 
Thus, organizations need to minimize the amount of data sent to the cloud, secure those communications, and monitor the connection for anomalous behavior that could indicate malicious activity, he says. 
"While you may establish trust between that AI service and the OT system, you still have potential risks that may come down through those now-trusted links," Tonkin says. "So monitoring all of the traffic, in and out, is the way to do it."
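Monitoring traffic over those trusted links can be implemented many ways; as one illustrative sketch (not Dragos's method, and the function and threshold below are assumptions), a crude z-score check can flag outbound data volumes to a cloud AI service that deviate sharply from the recent baseline:

```python
from statistics import mean, stdev

# Illustrative sketch only: flag outbound data volumes (in MB) that
# deviate sharply from the sample mean, using a simple z-score test.
def flag_anomalies(volumes_mb, threshold=3.0):
    if len(volumes_mb) < 3:
        return []                      # too few samples for a baseline
    mu, sigma = mean(volumes_mb), stdev(volumes_mb)
    if sigma == 0:
        return []                      # perfectly steady traffic
    return [i for i, v in enumerate(volumes_mb)
            if abs(v - mu) / sigma > threshold]

# A sudden large upload amid steady traffic stands out.
traffic = [12.1, 11.8, 12.4, 11.9, 250.0, 12.2]
print(flag_anomalies(traffic, threshold=1.5))  # → [4]
```

A production system would use a rolling baseline and inspect payloads, not just volumes, but the principle is the same: establish what normal looks like on the trusted link, then alert on deviations.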
The DHS has already implemented, or is in the process of implementing, AI in four pilot programs.

The Cybersecurity and Infrastructure Security Agency has already completed a pilot using AI cybersecurity systems to detect and remediate software vulnerabilities in critical infrastructure and US government systems. DHS also announced it would be using an AI platform to help the Homeland Security Investigations agency investigate fentanyl distribution and child sexual exploitation, and the Federal Emergency Management Agency plans to use AI to support communities in developing plans for mitigating risks and improving resilience. Finally, the United States Citizenship and Immigration Services plans to use AI to improve officer training.
