DHS Releases Secure AI Framework for Critical Infrastructure

  /     /     /  
Publicated : 23/11/2024   Category : security


DHS Releases Secure AI Framework for Critical Infrastructure


The voluntary recommendations from the Department of Homeland Security cover how artificial intelligence should be used in the power grid, water system, air travel network, healthcare, and other pieces of critical infrastructure.



The US Department of Homeland Security (DHS) has released recommendations that outline how to securely develop and deploy artificial intelligence (AI) in critical infrastructure. The recommendations apply to all players in the AI supply chain, starting with cloud and compute infrastructure providers, to AI developers, and all the way to critical infrastructure owners and operators. Recommendations for civil society and public-sector organizations are also provided.
The voluntary recommendations in Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure look at each of the roles across five key areas: securing environments, driving responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact. There are also technical and process recommendations to enhance the safety, security, and trustworthiness of AI systems.
AI is already being used for resilience and risk mitigation across sectors, DHS said in a release, such as AI applications for earthquake detection, stabilizing power grids, and sorting mail.
The framework looks at each roles responsibilities:
Cloud and compute infrastructure providers
need to vet their hardware and software supply chain, implement strong access management, and protect the physical security of data centers powering AI systems. The framework also has recommendations on supporting downstream customers and processes by monitoring for anomalous activity and establishing clear processes for reporting suspicious and harmful activities.
AI developers
should adopt a secure by design approach, evaluate dangerous capabilities of AI models, and ensure model alignment with human-centric values. The framework further encourages AI developers to implement strong privacy practices; conduct evaluations that test for possible biases, failure modes, and vulnerabilities; and support independent assessments for models that present heightened risks to critical infrastructure systems and their consumers.
Critical infrastructure owners and operators
should deploy AI systems securely, including maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency regarding their use of AI to provide goods, services, or benefits to the public.
Civil society
, including universities, research institutions, and consumer advocates engaged on issues of AI safety and security, should continue working on standards development alongside government and industry, as well as research on AI evaluations that considers critical infrastructure use cases.
Public sector
entities, including federal, state, local, tribal, and territorial governments, should advance standards of practice for AI safety and security through statutory and regulatory action.
The framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, Internet access, and more, said DHS secretary Alejandro N. Mayorkas, in a statement.
The DHS framework proposes a model of shared and separate responsibilities for the safe and secure use of AI in critical infrastructure. It also relies on existing risk frameworks to enable entities to evaluate whether using AI for certain systems or applications carries severe risks that could cause harm.
We intend the framework to be, frankly, a living document and to change as developments in the industry change as well, Mayorkas said during a media call.

Last News

▸ Some DLP Products Vulnerable to Security Holes ◂
Discovered: 23/12/2024
Category: security

▸ Scan suggests Heartbleed patches may not have been successful. ◂
Discovered: 23/12/2024
Category: security

▸ IoT Devices on Average Have 25 Vulnerabilities ◂
Discovered: 23/12/2024
Category: security


Cyber Security Categories
Google Dorks Database
Exploits Vulnerability
Exploit Shellcodes

CVE List
Tools/Apps
News/Aarticles

Phishing Database
Deepfake Detection
Trends/Statistics & Live Infos



Tags:
DHS Releases Secure AI Framework for Critical Infrastructure