Hundreds of LLM Servers Expose Corporate, Health & Other Online Data

  /     /     /  
Publicated : 23/11/2024   Category : security


Hundreds of LLM Servers Expose Corporate, Health & Other Online Data


LLM automation tools and vector databases can be rife with sensitive data — and vulnerable to pilfering.



Hundreds of open source large language model (LLM) builder servers and dozens of vector databases are leaking highly sensitive information to the open Web.
As companies rush to integrate AI into their business workflows, they occasionally pay insufficient attention to how to secure these tools, and the information they trust them with. In a new report, Legit security researcher Naphtali Deutsch demonstrated as much by scanning the Web for two kinds of
potentially vulnerable open source (OSS) AI services
: vector databases — which store data for AI tools — and LLM application builders — specifically, the open source program Flowise. The investigation unearthed a bevy of
sensitive personal and corporate data
, unknowingly exposed by organizations stumbling to get in on the generative AI revolution.
A lot of programmers see these tools on the Internet, then try to set them up in their environment, Deutsch says, but those same programmers are leaving security considerations behind.
Flowise is a low-code tool for building all kinds of LLM applications. Its backed by Y Combinator, and sports tens of thousands of stars on GitHub.
Whether it be a customer support bot or a tool for generating and extracting data for downstream programming and other tasks, the programs that developers build with Flowise tend to access and manage large quantities of data. Its no wonder, then, that the majority of Flowise servers are password-protected.
A password, however, isnt security enough. Earlier this year, a researcher in India discovered an authentication bypass vulnerability in Flowise versions 1.6.2 and earlier, which can be triggered by simply capitalizing a few characters in the programs API endpoints. Tracked as CVE-2024-31621, the issue earned a high 7.6 score on the CVSS Version 3 scale.
By exploiting CVE-2024-31621, Legits Deutsch cracked 438 Flowise servers. Inside were GitHub access tokens,
OpenAI API keys
, Flowise passwords and API keys in plaintext, configurations and prompts associated with Flowise apps, and more.
With a GitHub API token, you can get access to private repositories, Deutsch emphasizes, as just one example of the kinds of follow-on attacks such data can enable. We also found API keys to other vector databases, like Pinecone, a very popular SaaS platform. You could use those to get into a database, and dump all the data you found — maybe private and confidential data.
Vector databases store any kind of data an AI app might need to retrieve, in fact, and those accessible from the broader web can be attacked directly.
Using scanning tools, Deutsch discovered around 30 vector database servers online without any authentication checks whatsoever, containing obviously sensitive information: private email conversations from an engineering services vendor; documents from a fashion company; customer PII and financial information from an industrial equipment company; and more. Other databases contained real estate data, product documentation and data sheets, and patient information used by a medical chatbot.
Leaky vector databases are even more dangerous than leaky LLM builders, as they can be tampered with in such a way that does not alert the users of AI tools that rely on them. For example, instead of just stealing information from an exposed vector database, a hacker can delete or corrupt its data to manipulate its results. One could also plant malware within a vector database such that when an LLM program queries it, it ends up ingesting the malware.
To mitigate the risk of exposed AI tooling, Deutsch recommends that organizations restrict access to the AI services they rely on, monitor and log the activity associated with those services, protect sensitive data trafficked by LLM apps, and always apply software updates where possible.
[These tools] are new, and people dont have as much knowledge about how to set them up, he warns. And its also getting easier to do — with a lot of these vector databases, its two clicks to set it up in your Docker, or in your AWS Azure environment. Security is more cumbersome, and can lag behind.

Last News

▸ Oracle assures enhancements to Enterprise Java security. ◂
Discovered: 26/12/2024
Category: security

▸ Enhancing Business Security Through Threat Intelligence ◂
Discovered: 26/12/2024
Category: security

▸ Fidelis expands in malware detection & analysis. ◂
Discovered: 26/12/2024
Category: security


Cyber Security Categories
Google Dorks Database
Exploits Vulnerability
Exploit Shellcodes

CVE List
Tools/Apps
News/Aarticles

Phishing Database
Deepfake Detection
Trends/Statistics & Live Infos



Tags:
Hundreds of LLM Servers Expose Corporate, Health & Other Online Data