Microsoft Copilot Studio Exploit Leaks Sensitive Cloud Data

Published: 23/11/2024   Category: security




A server-side request forgery (SSRF) bug in Microsoft's tool for creating custom AI chatbots potentially exposed information across multiple tenants within cloud environments.



Researchers have exploited a vulnerability in Microsoft's Copilot Studio tool that allowed them to make external HTTP requests capable of accessing sensitive information about internal services within a cloud environment, with potential impact across multiple tenants.
Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool and exploited it to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week.
Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio and leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. According to Tenable, the flaw arises when an HTTP request crafted with the tool is combined with a bypass of its SSRF protections.
"An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way," Tenable security researcher Evan Grant explained in the post.
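As a general illustration of the pattern Grant describes (a hypothetical sketch, not Copilot Studio's actual code), the snippet below shows a server-side handler that fetches whatever URL a user supplies. Because the request originates from the server's network position, it can reach internal-only endpoints the user could never reach directly.

```python
# Hypothetical illustration of the generic SSRF pattern; not Copilot Studio code.
import requests


def fetch_for_user(url: str) -> str:
    """Server-side 'HTTP request' action that fetches a user-supplied URL.

    The request is made from the server's own network position, so a URL like
    http://169.254.169.254/... (the cloud instance metadata service) resolves
    to resources the end user could never reach directly.
    """
    resp = requests.get(url, timeout=5)
    return resp.text


# An attacker who controls `url` simply points it at an internal target:
# fetch_for_user("http://169.254.169.254/metadata/instance?api-version=2021-02-01")
```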
The researchers tested their exploit by creating HTTP requests to access cloud data and services across multiple tenants. They discovered that "while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants," Grant wrote.
Any impact on that infrastructure, then, could affect multiple customers, he explained. "While we don't know the extent of the impact that having read/write access to this infrastructure could have, it's clear that because it's shared among tenants, the risk is magnified," Grant wrote. The researchers also found that they could use their exploit to access other internal hosts, unrestricted, on the local subnet to which their instance belonged.
Microsoft responded quickly to Tenable's notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
Microsoft released Copilot Studio late last year as a drag-and-drop, easy-to-use tool for creating custom artificial intelligence (AI) assistants, also known as chatbots. These conversational applications let people perform a variety of large language model (LLM) and generative AI tasks using data ingested from the Microsoft 365 environment, or any other data accessible through the Power Platform, on which the tool is built.
Copilot Studio's initial release recently was flagged as generally overpermissioned by security researcher Michael Bargury at this year's Black Hat conference in Las Vegas; he found 15 security issues with the tool that would allow for the creation of flawed chatbots.
The Tenable researchers discovered the tool's SSRF flaw while looking into SSRF vulnerabilities in the APIs for Microsoft's Azure AI Studio and Azure ML Studio, which the company itself flagged and patched before the researchers could report them. The researchers then turned their attention to Copilot Studio to see whether it could be exploited in a similar way.
When creating a new Copilot, people can define Topics, which let them specify key phrases that a user can say to the Copilot to elicit a specific response or action by the AI; one of the actions that can be performed via Topics is an HTTP request. Indeed, most modern apps that deal with data analysis or machine learning can make such requests, because they need to integrate data from external services; the downside is that this capability can create a potential vulnerability, Grant noted.
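To see why SSRF protections around such a feature are easy to get wrong, consider the hypothetical guard below (an illustrative assumption, not the check Microsoft actually used). It rejects requests whose hostname resolves to a private, loopback, or link-local address, but it validates only the initial URL, so a redirect returned by an external destination goes unchecked.

```python
# Hypothetical SSRF guard, for illustration only; not Microsoft's implementation.
import ipaddress
import socket
from urllib.parse import urlparse

import requests


def is_internal(host: str) -> bool:
    """Resolve the hostname and flag private, loopback, or link-local targets."""
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    return addr.is_private or addr.is_loopback or addr.is_link_local


def guarded_request(url: str) -> str:
    host = urlparse(url).hostname or ""
    if is_internal(host):
        raise ValueError("blocked: internal target")
    # Weakness: only the *initial* URL is checked. If the external server
    # answers with a 301 pointing at an internal address, the redirect is
    # followed without being re-validated.
    return requests.get(url, timeout=5).text
```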
The researchers tried requesting access to various cloud resources, as well as common SSRF protection bypass techniques, using HTTP requests. While many requests yielded "System Error" responses, the researchers eventually pointed their request at a server they controlled and sent a 301 redirect response pointing to the restricted hosts they had previously tried to reach. Through trial and error, and by combining redirects with SSRF bypasses, they managed to retrieve managed identity access tokens from the IMDS, which they used to access internal cloud resources such as Azure services and a Cosmos DB instance. They also exploited the flaw to gain read/write access to the database.
Though the research proved inconclusive about the extent to which the flaw could be exploited to access sensitive cloud data, it was serious enough to prompt immediate mitigation. Indeed, the existence of the SSRF flaw should be a cautionary tale for Copilot Studio users about the potential for attackers to abuse its HTTP-request feature to elevate their access to cloud data and resources.
"If an attacker is able to control the target of those requests, they could point the request to a sensitive internal resource for which the server-side application has access, even if the attacker doesn't, revealing potentially sensitive information," Grant warned.
