Slack Patches AI Bug That Let Attackers Steal Data From Private Channels

  /     /     /  
Publicated : 23/11/2024   Category : security


Slack Patches AI Bug That Let Attackers Steal Data From Private Channels


A prompt injection flaw in the AI feature of the workforce collaboration suite makes malicious queries of data sources appear legitimate.



Salesforces
Slack Technologies
has patched a flaw in Slack AI that could have allowed attackers to steal data from private Slack channels or perform secondary phishing within the collaboration platform by manipulating the large language model (LLM) on which its based.
Researchers from security firm PromptArmor discovered a prompt injection flaw in the AI-based feature of the popular Slack workforce collaboration platform that adds generative AI capabilities. The feature allows users to query Slack messages in natural language; the issue exists because its LLM may not recognize that an instruction is malicious and consider it a legitimate one,
according to a blog post
revealing the flaw.
Prompt injection occurs because an LLM cannot distinguish between the system prompt created by a developer and the rest of the context that is appended to the query, the PromptArmor team wrote in the post. As such, if Slack AI ingests any instruction via a message, if that instruction is malicious, Slack AI has a high likelihood of following that instruction instead of, or in addition to, the user query.
The researchers described two scenarios in which this issue could be used maliciously by threat actors — one in which an attacker with an account in a Slack workspace can steal any data or file from a private Slack channel in that space, and another in which an actor can phish users in the workspace.
As Slack is widely used by organizations for collaboration and thus often includes messages and files that refer to sensitive business data and secrets, the flaw presents
significant data exposure
, the research team said.
The issue is compounded by a change made to Slack AI on Aug. 14 to ingest not only messages but also uploaded documents and Google Drive files, among others, which increases the risk surface area, because they could use these documents or files as vessels for malicious instructions, according to the PromptArmor team.
The issue here is that the attack surface area fundamentally becomes extremely wide, according to the post. Now, instead of an attacker having to post a malicious instruction in a Slack message, they may not even have to be in Slack.
PromptArmor on Aug. 14 disclosed the flaw to Slack, and worked together with the company over the course of about a week to clarify the issue. According to PromptArmor, Slack eventually responded that the problem disclosed by the researchers was intended behavior. The researchers noted that Slacks team showcased a commitment to security and attempted to understand the issue.
A
brief blog post
posted by Slack this week seemed to reflect a change of heart about the flaw: The company said it deployed a patch to fix a scenario that would allow under very limited and specific circumstances a threat actor with an existing account in the same Slack workspace to phish users for certain data. The post did not mention the issue of data exfiltration but noted that there is no evidence at this time of unauthorized access to customer data.
In Slack, user queries retrieve data from both public and private channels, which the platform also retrieves from public channels of which the user is not a part. This potentially exposes API keys or other sensitive data that a developer or user puts in a private channel to malicious exfiltration and abuse, according to PromptArmor.
In this scenario, a attacker would need to go through a number of steps to put malicious instructions into a public channel that the AI system thinks are legitimate — for example, the request for an API that
a developer
put in a private channel that only they can see — and eventually result in the system carrying out the malicious instructions to steal that sensitive data.
The second attack scenario is one that follows a similar set of steps and include malicious prompts, but instead of exfiltrating data, Slack AI could render a phishing link to a user asking them to reauthenticate a login and a malicious actor could then hijack their Slack credentials.
The flaw calls into the question the safety of current AI tools, which no doubt aid in workforce productivity but still
offer too many ways
for attackers to manipulate them for nefarious purposes, notes Akhil Mittal, senior manager of cybersecurity strategy and solutions for Synopsys Software Integrity Group.
This vulnerability shows how a flaw in the system could let unauthorized people see data they shouldn’t see, he says. This really makes me question how safe our AI tools are. Its not just about fixing problems but making sure these tools manage our data properly.
Indeed, numerous scenarios of attackers poisoning AI models with
malicious code
or data already have surfaced, reinforcing Mittals point. As these tools become more commonly used throughout business organizations, it will become increasingly more important for them to keep both security and ethics in mind to protect our information and keep trust, he says.
One way that organizations that use Slack can do that is to use Slack AI settings to restrict the features ability to ingest documents to limit access to sensitive data by potential threat actors, PromptArmor advised.

Last News

▸ Researchers create BlackForest to gather, link threat data. ◂
Discovered: 23/12/2024
Category: security

▸ Travel agency fined £150,000 for breaking Data Protection Act. ◂
Discovered: 23/12/2024
Category: security

▸ 7 arrested, 3 more charged in StubHub cyber fraud ring. ◂
Discovered: 23/12/2024
Category: security


Cyber Security Categories
Google Dorks Database
Exploits Vulnerability
Exploit Shellcodes

CVE List
Tools/Apps
News/Aarticles

Phishing Database
Deepfake Detection
Trends/Statistics & Live Infos



Tags:
Slack Patches AI Bug That Let Attackers Steal Data From Private Channels