The unfortunate truth is that poorly designed and improperly secured Artificial Intelligence integrations can be misused or exploited by adversaries, to the detriment of companies and users. Some of the compromises will bypass the traditional cybersecurity and privacy controls, leaving victims very exposed.
Researchers at the University of Calabria demonstrated that LLMs can be tricked into installing and executing malware on victim machines using direct prompt injection (42.1%), RAG backdoor attacks (52.9%), and inter-agent trust exploitation (82.4%). Overall, 16 of 17 (94%) state-of-the-art LLMs were shown to be vulnerable.
We cannot afford to be distracted by dazzling AI functionality when we are inadvertently putting our security, privacy, and safety at risk. Let’s embrace AI, but in trustworthy ways.
Research Paper: https://arxiv.org/html/2507.06850v3
You would also find this interesting:
https://quesma.com/blog/local-llms-security-paradox/
That is way cool! I love the specific examples, with the prompts and attack breakdown. Those percentages are crazy high!