Shadow AI, Data Exposure Plague Workplace Chatbot Use

Published: 23/11/2024   Category: security




Productivity has a downside: A shocking number of employees share sensitive or proprietary data with the generative AI platforms they use without letting their bosses know.



Generative AI chatbots are popping up in everything from email clients to HR tools these days, offering a friendly and smooth path toward better enterprise productivity. But there's a problem: All too often, workers aren't thinking about the data security of the prompts they're using to elicit chatbot responses.
In fact, more than a third (38%) of employees share sensitive work information with AI tools without their employer's permission, according to a survey released this week by the US National Cybersecurity Alliance (NCA). And that's a problem.
The NCA survey (which polled 7,000 people globally) found that Gen Z and millennial workers are more likely to share sensitive work information without getting permission: A full 46% and 43%, respectively, admitted to the practice, as opposed to 26% and 14% of Gen X and baby boomers, respectively.
The issue is that many of the most prevalent chatbots capture whatever information users put into prompts (proprietary earnings data, top-secret design plans, sensitive emails, customer data, and more) and send it back to the large language models (LLMs), where it's used to train the next generation of GenAI.
And that means someone could later access that data using the right prompts, because it's now part of the retrievable data lake. Or perhaps the data is kept for internal LLM use, but its storage isn't set up properly. The dangers of this, as Samsung found out in one high-profile incident, are relatively well understood by security pros; everyday workers, not so much.
ChatGPT's creator, OpenAI, warns in its user guide: "We are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations." But it's hard for the average worker to constantly be thinking about data exposure. Lisa Plaggemier, executive director of the NCA, notes one case that illustrates how the risk can easily translate into real-world attacks.
"A financial services firm integrated a GenAI chatbot to assist with customer inquiries," Plaggemier tells Dark Reading. "Employees inadvertently input client financial information for context, which the chatbot then stored in an unsecured manner. This not only led to a significant data breach, but also enabled attackers to access sensitive client information, demonstrating how easily confidential data can be compromised through the improper use of these tools."
Galit Lubetzky Sharon, CEO at Wing, offers another real-life example (without naming names).
"An employee at a multinational company, for whom English was a second language, took an assignment working in the US," she says. "In order to improve his written communications with his US-based colleagues, he innocently started using Grammarly. Not knowing that the application was allowed to train on his data, the employee sometimes used it to improve communications around confidential and proprietary data. There was no malicious intent, but this scenario highlights the hidden risks of AI."
One reason for the high percentages of people willing to roll the dice is almost certainly a lack of training. While the Samsungs of the world might swoop into action on locking down AI use, the NCA survey found that 52% of employed participants have not yet received any training on safe AI use, and even among respondents who actively use AI, only 45% have.
"This statistic suggests that many organizations may underestimate the importance of training, perhaps due to budget constraints or a lack of understanding about the potential risks," Plaggemier says. Meanwhile, she adds, "This data underscores the gap between recognizing potential dangers and having the knowledge to mitigate them. Employees may understand that risks exist, but the lack of proper education leaves them vulnerable to the severity of these threats, especially in environments where productivity often takes precedence over security."
Worse, this knowledge gap contributes to the rise of shadow AI, where unapproved tools are used outside the organization's security framework.
"As employees prioritize efficiency, they may adopt these tools without fully grasping the long-term consequences for data security and compliance, leaving organizations vulnerable to significant risks," Plaggemier warns.
It's clear that prioritizing immediate business needs over long-term security strategies can leave companies vulnerable. But when it comes to rolling out AI before security is ready, the golden allure of all those productivity enhancements, sanctioned or not, may often prove too strong to resist.
"As AI systems become more common, it's essential for organizations to view training not just as a compliance requirement but as a vital investment in protecting their data and brand integrity," Plaggemier says. "To effectively reduce risk exposure, companies should implement clear guidelines around the use of GenAI tools, including what types of information can and cannot be shared."
Morgan Wright, chief security adviser at SentinelOne, advocates starting the guidelines-development process with first principles. "The biggest risk is not defining what problem you're solving through chatbots," he notes. "Understanding what is to be solved helps create the right policies and operational guardrails to protect privacy and intellectual property. It's emblematic of the old saying, 'When all you have is a hammer, all the world is a nail.'"
There are also technology steps that organizations can take to shore up their defenses against AI risks.
"Establishing strict access controls and monitoring the use of these tools can also help mitigate risks," Plaggemier adds. "Implementing data masking techniques can protect sensitive information from being input into GenAI platforms. Regular audits and the use of AI monitoring tools can also ensure compliance and detect any unauthorized attempts to access sensitive data."
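As a rough illustration of the data-masking idea Plaggemier describes, the Python sketch below redacts a few common sensitive patterns from a prompt before it would ever reach a GenAI service. The patterns and the mask_prompt helper are illustrative assumptions, not any vendor's actual API; a production deployment would rely on a vetted DLP or masking product rather than hand-rolled regexes.

import re

# Illustrative (hypothetical) redaction patterns for a few common data types.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def mask_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholders before the prompt leaves the organization."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Client jane.doe@example.com asked about card 4111 1111 1111 1111 and Q3 earnings."
    print(mask_prompt(raw))
    # Client [EMAIL REDACTED] asked about card [CARD REDACTED] and Q3 earnings.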
There are other ideas out there, too. "Some companies have restricted the amount of data input into a query (like 1,024 characters)," Wright says. "It could also involve segmenting off parts of the organization dealing with sensitive data." But for now, there is no clear solution or approach that can solve this thorny issue to everyone's satisfaction.
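A minimal sketch of the query-size restriction Wright mentions might look like the gate below. The 1,024-character cap comes straight from his example, while send_to_chatbot is a hypothetical stand-in for whatever approved GenAI client an organization actually uses.

MAX_PROMPT_CHARS = 1_024  # example limit cited by Wright

def send_to_chatbot(prompt: str) -> str:
    # Hypothetical stand-in for the organization's approved GenAI client.
    raise NotImplementedError

def gated_submit(prompt: str) -> str:
    """Refuse over-length prompts rather than silently truncating them."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(
            f"Prompt is {len(prompt)} characters; the limit is {MAX_PROMPT_CHARS}. "
            "Trim the request or route it through an approved internal tool."
        )
    return send_to_chatbot(prompt)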
The danger to companies can also be exacerbated by GenAI capabilities being added to third-party software-as-a-service (SaaS) applications, Wing's Sharon warns; this is an area that is too often overlooked.
"As new capabilities are added, even to very reputable SaaS applications, the terms and conditions of those applications are often updated, and 99% of users don't pay attention to those terms," she explains. "It is not unusual for applications to set as the default that they can use data to train their AI models."
She notes that an emerging category of SaaS security tools called SaaS Security Posture Management (SSPM) is developing ways to monitor which applications use AI and even monitor changes to things like terms and conditions.
"Tools like this are helpful for IT teams to assess risks and make changes in policy or even access on a continuous basis," she says.
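Sharon's point about terms-and-conditions drift lends itself to simple change detection. The sketch below assumes a hypothetical list of vendor policy URLs, hashes each page, and flags any change for review; a commercial SSPM product would handle that discovery and monitoring natively, so treat this purely as an illustration of the idea.

import hashlib
import json
import pathlib
import urllib.request

# Hypothetical list of vendor policy pages to watch; an SSPM tool would
# build this automatically from the organization's SaaS inventory.
POLICY_URLS = [
    "https://example-saas.com/terms",
    "https://example-ai-addon.com/privacy",
]
STATE_FILE = pathlib.Path("policy_hashes.json")

def check_for_changes() -> list[str]:
    """Return the URLs whose policy pages changed since the last run."""
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current, changed = {}, []
    for url in POLICY_URLS:
        body = urllib.request.urlopen(url, timeout=30).read()
        digest = hashlib.sha256(body).hexdigest()
        current[url] = digest
        if previous.get(url) not in (None, digest):
            changed.append(url)  # terms changed; flag for security/legal review
    STATE_FILE.write_text(json.dumps(current, indent=2))
    return changed

if __name__ == "__main__":
    for url in check_for_changes():
        print(f"Terms updated, review required: {url}")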
