Critical ChatGPT Plug-in Vulnerabilities Expose Sensitive Data

Published: 23/11/2024   Category: security




The vulnerabilities found in ChatGPT plug-ins — since remediated — heighten the risk of proprietary information being stolen and the threat of account takeover attacks.



Three security vulnerabilities uncovered in the plug-in functionality ChatGPT uses open the door to unauthorized, zero-click access to users' accounts and services, including sensitive repositories on platforms such as GitHub.
ChatGPT plug-ins and custom versions of ChatGPT published by developers extend the capabilities of the AI model, enabling interactions with external services by granting OpenAI's popular generative AI chatbot the access and permissions to execute tasks on third-party websites, including GitHub and Google Drive.
Salt Labs researchers uncovered the three critical vulnerabilities affecting ChatGPT. The first occurs during the installation of new plug-ins, when ChatGPT redirects users to plug-in websites for code approval. By exploiting this step, attackers could trick users into approving malicious code, leading to the automatic installation of an unauthorized plug-in and potential follow-on account compromise.
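Salt Labs has not published the vulnerable code, but the flaw it describes is a classic one: an approval link that is not bound to the browser session that initiated it. The following is a minimal sketch in Python of the standard countermeasure, issuing a single-use state token and refusing any approval that does not return it; the function names and the in-memory session store are hypothetical illustrations, not OpenAI's implementation.

    import secrets

    sessions = {}  # session_id -> expected state token (in-memory stand-in)

    def begin_install(session_id, plugin_auth_url):
        # Issue an unguessable, single-use state value and remember
        # which session it belongs to.
        state = secrets.token_urlsafe(32)
        sessions[session_id] = state
        return f"{plugin_auth_url}?state={state}"

    def finish_install(session_id, returned_state, code):
        # Refuse any approval code that does not carry the state we issued;
        # this is what stops an attacker-crafted link from installing the
        # attacker's plug-in into the victim's account.
        expected = sessions.pop(session_id, None)
        if expected is None or not secrets.compare_digest(expected, returned_state):
            return False
        # ...exchange `code` for credentials only after the check passes...
        return True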
Second, PluginLab, a framework for plug-in development, lacks proper user authentication, enabling attackers to impersonate users and execute account takeovers, as seen with the AskTheCode plug-in, which connects ChatGPT with GitHub.
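To illustrate the missing control rather than reproduce PluginLab's actual code, the fix amounts to an identity check on the endpoint that hands out member tokens: the member whose token is requested must be the member the request is authenticated as. The function and parameter names below are hypothetical.

    def issue_plugin_token(authenticated_member, requested_member, token_store):
        # Without this comparison, any caller could mint a token for any
        # member ID they can guess or obtain -- the account-takeover path
        # Salt Labs described via the AskTheCode integration.
        if authenticated_member != requested_member:
            raise PermissionError("callers may only request their own token")
        return token_store[requested_member]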
Finally, Salt researchers found that certain plug-ins were susceptible to OAuth redirection manipulation, allowing attackers to insert malicious URLs and steal user credentials, facilitating further account takeovers.
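The usual defense against this class of bug is exact allow-list matching of the redirect URI. The sketch below is a generic illustration with a placeholder callback URL, not any specific plug-in's logic.

    from urllib.parse import urlsplit

    # Hypothetical allow-list registered when the plug-in is set up.
    ALLOWED_REDIRECTS = {"https://plugin.example.com/oauth/callback"}

    def is_safe_redirect(candidate):
        parts = urlsplit(candidate)
        # Require HTTPS and an exact, full-string match against a registered
        # callback; prefix or substring checks (e.g., startswith) are exactly
        # what redirect-manipulation attacks slip past.
        return parts.scheme == "https" and candidate in ALLOWED_REDIRECTS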
The report noted that the issues have since been fixed and that there was no evidence the vulnerabilities had been exploited; even so, users should update their apps as soon as possible to pick up the patched versions.
Yaniv Balmas, vice president of research at Salt Security, says the issues the research team found may put hundreds of thousands of users and organizations at risk.
"Security leaders at any organization must better understand the risk, so they should review what plug-ins and GPTs their company is using and what third-party accounts are exposed through those plug-ins and GPTs," he says. "As a starting point, we would suggest conducting a security review of their code."
For plug-in and GPT developers, Balmas recommends becoming better aware of the internals of the GenAI ecosystem, the security measures involved, how to use them, and how they can be abused. That specifically includes knowing what data is being sent to GenAI and what permissions are granted to the GenAI platform or to connected third-party plug-ins (for example, permission for Google Drive or GitHub).
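As a concrete illustration of that least-privilege point, a plug-in connecting to GitHub can request the narrow public_repo OAuth scope rather than the blanket repo scope, which includes full access to private repositories. The helper below builds GitHub's authorization URL either way; the client_id is a placeholder.

    from urllib.parse import urlencode

    def github_authorize_url(client_id, read_only=True):
        # GitHub's "repo" scope grants full read/write access to private
        # repositories; "public_repo" is far narrower. Request the broad
        # scope only when the plug-in genuinely needs it.
        scope = "public_repo" if read_only else "repo"
        query = urlencode({"client_id": client_id, "scope": scope})
        return f"https://github.com/login/oauth/authorize?{query}"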
Balmas points out that the Salt research team checked only a small percentage of this ecosystem, and he says the findings indicate a bigger risk relevant to other GenAI platforms and to many existing and future GenAI plug-ins.
Balmas also says that OpenAI should put more emphasis on security in its documentation for developers, which would help reduce these risks.
Sarah Jones, cyber threat intelligence research analyst at Critical Start, agrees that the Salt Labs findings suggest a broader security risk associated with GenAI plug-ins.
"As GenAI becomes more integrated with workflows, vulnerabilities in plug-ins could provide attackers with access to sensitive data or functionalities within various platforms," she says. "This emphasizes the need for robust security standards and regular audits for both GenAI platforms and their plug-in ecosystems as hackers start to target flaws in these platforms."
Darren Guccione, CEO and co-founder at Keeper Security, says these vulnerabilities serve as a stark reminder about the inherent security risks involved with third-party applications and should prompt organizations to shore up their defenses.
"As organizations rush to leverage AI to gain a competitive edge and enhance operational efficiency, the pressure to quickly implement these solutions should not take precedence over security evaluations and employee training," he says.
The proliferation of AI-enabled applications has also introduced challenges in software supply chain security, requiring organizations to adapt their security controls and data governance policies.
He points out that employees are increasingly entering proprietary data into AI tools, including intellectual property, financial data, business strategies, and more, and that unauthorized access by a malicious actor could be crippling for an organization.
"An account takeover attack jeopardizing an employee's GitHub account, or other sensitive accounts, could have equally damaging impacts," he cautions.
