As LLMs begin to integrate multimodal capabilities, attackers could use hidden instructions in images and audio to get a chatbot to respond the way they want, say researchers at Black Hat Europe 2023.
| Google Dorks Database | Exploits Vulnerability | Exploit Shellcodes | 
| CVE List | Tools/Apps | News/Aarticles | 
| Phishing Database | Deepfake Detection | Trends/Statistics & Live Infos | 
							Tags:
							 LLMs Open to Manipulation Using Doctored Images, Audio