Orgs Are Finally Making Moves to Mitigate GenAI Risks

Published: 23/11/2024   Category: security




With AI use ramping up rapidly, a growing number of enterprise security teams have begun putting controls in place to protect sensitive data from accidental exposure and leaks.



Many enterprise security teams finally appear to be catching up with the runaway adoption of AI-enabled applications in their organizations since the public release of ChatGPT 18 months ago.
A new analysis by Netskope of anonymized AI app usage data from customer environments showed substantially more organizations have begun using blocking controls, data loss prevention (DLP) tools, live coaching, and other mechanisms to mitigate risk.
Most of the controls that enterprise organizations have adopted, or are adopting, appear focused on protecting against users sending sensitive data — such as personal identity information, credentials, trade secrets, and regulated data — to AI apps and services.
Netskope's analysis showed that 77% of organizations with AI apps now use block/allow policies to restrict use of at least one — and often multiple — GenAI apps to mitigate risk. That number was notably higher than the 53% of organizations with a similar policy reported in Netskope's study last year. One in two organizations currently block more than two apps, with the most active among them blocking some 15 GenAI apps because of security concerns.
"The most blocked GenAI applications do track somewhat to popularity, but a fair number of less popular apps are the most blocked [as well]," Netskope said in a blog post that summarized the results of its analysis. Netskope identified the most-blocked applications as presentation maker Beautiful.ai, writing app Writesonic, image generator Craiyon, and meeting transcript generator Tactiq.
Forty-two percent of organizations — compared to 24% in June 2023 — have begun using DLP tools to control what users can and cannot submit to a GenAI tool. Netskope perceived the 75% increase as an indication of maturing enterprise security approaches to addressing threats from GenAI applications and services. Live coaching controls — which basically provide a warning dialog when a user might be interacting with an AI app in a risky fashion — are gaining in popularity as well. Netskope found 31% of organizations have policies in place to control GenAI apps, using coaching dialogs to guide user behavior, up from 20% in June 2023.
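Netskope's report does not describe how these DLP and coaching controls are implemented, but the general pattern — scan an outbound prompt for sensitive patterns, hard-block credentials, and show a coaching warning for softer matches — can be sketched roughly in Python. The rule names and regexes below are illustrative assumptions, not Netskope's actual detection rules:

```python
import re

# Illustrative DLP rules (assumed, not from any vendor): credentials,
# payment card numbers, and US Social Security numbers in outbound prompts.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return an action ('block', 'coach', or 'allow') plus matched rule names."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    if "aws_access_key" in hits:
        return "block", hits   # credentials: hard block, never reaches the app
    if hits:
        return "coach", hits   # show a warning dialog; user can reconsider
    return "allow", hits
```

A coaching control would surface the `"coach"` result as a warning dialog rather than silently dropping the request, which is what distinguishes it from a plain block policy.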
"Interestingly, 19% of organizations are using GenAI apps but not blocking them, which could mean most of these are shadow IT [use]," says Jenko Hwong, cloud security researcher with Netskope Threat Labs. The assumption is that no security professional would knowingly permit unrestricted use of GenAI applications without putting risk mitigation measures in place.
Netskope found less of an immediate focus among its customers on addressing risk associated with the data that users receive from GenAI services. Most have an acceptable use policy in place to guide users on how they must use and handle data that AI tools generate in response to prompts. But for the moment at least, few appear to have any mechanisms to address potential security and legal risks tied to their AI tools spewing out factually incorrect or biased data, manipulated results, copyrighted data, and completely hallucinated responses.
"Ways that organizations can mitigate these risks is through vendor contracts and indemnity clauses for custom apps and enforcing the use of corporate-approved GenAI apps with higher quality datasets," Hwong says. "Organizations can also mitigate risks by logging and auditing all return datasets from corporate-approved GenAI apps, including timestamps, user prompts, and results."
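Hwong's logging-and-auditing suggestion amounts to keeping an append-only record of each prompt/response exchange. A minimal sketch of such an audit trail follows; the function name, field names, and JSON Lines format are assumptions for illustration, not part of any Netskope product:

```python
import json
import time

def audit_genai_exchange(log_path: str, user: str, app: str,
                         prompt: str, result: str) -> dict:
    """Append one GenAI prompt/response pair to a JSON Lines audit log."""
    record = {
        "ts": time.time(),   # timestamp of the exchange
        "user": user,        # who sent the prompt
        "app": app,          # which corporate-approved GenAI app handled it
        "prompt": prompt,    # what was submitted
        "result": result,    # what came back, for later review
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only, one-record-per-line format keeps the log easy to tail and to replay when reviewing what an AI tool actually returned to users.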
Other measures security teams can take include reviewing and retraining internal processes specific to the data returned from GenAI apps, much like how open source software (OSS) is part of every engineering department's compliance controls, Hwong notes. "While this isn't currently the primary focus or the most immediate risk to organizations compared to the sending of data to GenAI services, we believe it's part of an emerging trend."
The growing attention that security teams appear to be paying to GenAI apps comes at a time when enterprise adoption of AI tools continues to increase at warp speed. A staggering 96% of the customers in Netskope's survey — compared to 74% in June 2023 — had at least some users using GenAI apps for a variety of use cases, including coding and writing assistance, creating presentations, and generating images and video.
Netskope found the average organization currently to be using three times as many GenAI apps and having nearly three times as many users utilizing them, compared to just one year ago. The median number of GenAI apps in use among organizations in June 2024 was 9.6, compared to a median of 3 last year. The top 25% had 24 GenAI apps in their environments, on average, while the top 1% had 80 apps.
ChatGPT predictably topped the list of the most popular GenAI apps among Netskope's customers. Other popular apps included Grammarly, Microsoft Copilot, Google Gemini, and Perplexity AI, which interestingly was also the 10th most frequently blocked app.
"GenAI is already being used widely across organizations and is rapidly increasing in activity," Hwong says. "Organizations need to get ahead of the curve by starting with an inventory of which apps are being used, controlling what sensitive data is sent to those apps, and reviewing [their] policies as the landscape is changing quickly."



Cyber Security Categories
Google Dorks Database
Exploits Vulnerability
Exploit Shellcodes

CVE List
Tools/Apps
News/Aarticles

Phishing Database
Deepfake Detection
Trends/Statistics & Live Infos


