AI Experts: Account for AI/ML Resilience & Risk While There's Still Time

Published: 23/11/2024   Category: security




CISOs and cybersecurity teams will play a key role in hardening artificial intelligence and machine learning systems.



RSA CONFERENCE 2023 – San Francisco – As enterprises and government agencies increasingly weave artificial intelligence (AI) and machine learning (ML) into their broader set of systems, they'll need to account for a range of risk and resilience issues that start with cybersecurity concerns but spread far beyond those.
A panel on April 24 at RSA Conference 2023, composed of distinguished AI and security researchers, examined the problem space of AI resilience, which includes weighty issues like adversarial AI attacks, AI bias, and the ethical application of AI modeling.
Cybersecurity professionals need to start tackling these issues, both within their organizations and as collaborators with government and industry groups, panelists noted.
Many organizations will look to integrate AI/ML capabilities into their core business functions, but in doing so they will increase their own attack surface, explained panel moderator Bryan Vorndran, assistant director at the FBI Cyber Division. Attacks can occur at every stage of the AI and ML development and deployment cycle: models, training data, and APIs can all be targeted.
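Training-data tampering is one of the lifecycle attacks the panel alludes to. As a minimal illustration (not a method described by the panelists), one common defensive practice is to record cryptographic digests of training data at collection time and re-verify them before each training run; the function names and directory layout below are hypothetical:

```python
import hashlib
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(data_dir: Path) -> dict:
    """Record a digest for every file under the training-data directory."""
    return {str(p): sha256_file(p) for p in sorted(data_dir.rglob("*")) if p.is_file()}


def verify_manifest(data_dir: Path, manifest: dict) -> list:
    """Return the paths whose contents no longer match the recorded digests."""
    current = build_manifest(data_dir)
    return [p for p, digest in manifest.items() if current.get(p) != digest]
```

A non-empty result from `verify_manifest` before a training run is a cheap signal that the data set was modified since the manifest was created, though it says nothing about whether the original data was clean to begin with.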
The good news is that there is time to ramp up into these efforts if the community begins work now.
"We have a really unique opportunity here as a community," said Neil Serebryany, CEO of CalypsoAI. "We're aware of the fact that there is a threat, we're seeing early incidents of this threat, and the threat is not full-blown yet."
The "yet" is the operative word, he emphasized, and his fellow panelists agreed. The field of risk management is in a place with AI similar to where cybersecurity was with the Internet in the 1980s, said Bob Lawton, chief of missions capabilities for the Office of the Director of National Intelligence Science and Technology Group.
"Imagine if it's 1985 and you knew the challenges that we were going to face in the cyber domain now: what would we as a community, as an industry, do differently 35 years ago? That's exactly where we're at with AI right now," said Lawton. "We have the time and space to get it right."
Specifically, when it comes to direct attacks against AI systems by adversaries, the threats are still very rudimentary, but that's only because attackers are putting in only as much work as they need to achieve their objectives right now, said Christina Liaghati, AI strategy execution and operations manager for MITRE Corporation.
"I think we're going to see many more of the malicious actors having a higher level of sophistication in these attacks, but right now they don't have to, which I think is what's really interesting about this space," she told the audience.
Nevertheless, she warned that organizations can't treat the risks lightly. Threat actors' interest in increasing their sophistication and knowledge of AI models will only keep growing as AI is embedded into systems they can profitably attack. And this is just as true of smaller organizations using simple ML models in financial systems as it is for government agencies using AI in an intelligence capacity.
"If you're deploying AI in any environment where any actor might want to misuse or evade or attack that system, your system is vulnerable," she said. "So, it's not just super advanced tech giants or anybody that's deploying AI in a massive way. If your system is in any kind of consequential environment and then you incorporate AI and machine learning into that broader system-of-systems context, you could be exposing it in new ways that you're probably not thinking about or necessarily prepared for."
The challenge with AI for many cybersecurity executives is that addressing these risks will require that they and their teams gain a whole new set of knowledge and parlance around AI and data science.
"I don't think that AI assurance at its core is a traditional infosec problem," Serebryany said. "It's a machine learning problem that we're trying to figure out how to translate into the infosec community."
For example, hardening the models requires an understanding of key data science metrics like recall, precision, accuracy, and F1 scores, he said.
"So I think it's kind of incumbent upon us to figure out how to take these underlying ML concepts and research and translate the parlance, the concepts, and the standard operating procedures into a context that makes sense within the infosec community," he said.
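For readers coming from infosec rather than data science, the metrics Serebryany names have simple definitions in terms of a binary classifier's confusion matrix. A minimal sketch (the function name and labels are illustrative, not from the panel):

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for a binary classifier,
    given true labels and predicted labels (1 = positive, 0 = negative)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # how many flagged items were real
    recall = tp / (tp + fn) if tp + fn else 0.0     # how many real items were caught
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

In security terms, precision tracks the false-alarm rate of a detection model, while recall tracks how much malicious activity it misses; F1 balances the two, which is why these numbers matter when evaluating whether a hardened model still performs acceptably.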
At the same time, Liaghati said not to discount the security basics, as AI/ML models and systems will be deployed in the context of other systems for which security teams have decades of experience managing risk. The principles of data security, application security, and network security are still extremely relevant, as are standard risk management and OpSec best practices.
"So many of those are just good practices. It's not just a big, fancy adversarial layer or being able to patch a data set. It's not necessarily that complicated," she said. "Many of the ways that you can mitigate these threats are just thinking about the amount of information that you're putting out in the public domain on what models you're using, what data you're using, where it's coming from, [and] what the broader system context looks like around that AI system."
