AI chatbots vulnerable to indirect prompt injection attacks, researcher warns
1 month, 1 week ago

AI chatbots vulnerable to indirect prompt injection attacks, researcher warns

The Hindu  

In the rapidly evolving field of artificial intelligence, a new security threat has emerged, targeting the very core of how AI chatbots operate. Indirect prompt injection, a technique that manipulates chatbots into executing malicious commands, has become a significant concern for developers and users alike. Indirect prompt injection exploits the inherent nature of large language models to follow instructions embedded within the content they process. By embedding malicious instructions within seemingly benign documents or emails, attackers can induce chatbots to perform unauthorised actions, such as searching for sensitive information or altering long-term memory settings. Mr. Rehberger’s latest demonstration introduces a sophisticated technique known as “delayed tool invocation.” This method conditions the execution of malicious instructions on specific user actions, making the attack more covert and difficult to detect.

History of this topic

DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot
1 month, 3 weeks ago
Huge AI vulnerability could put human life at risk, researchers warn
5 months ago
This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats
5 months ago
Microsoft Rolls Out LLM-Powered Tools To Strengthen AI Chatbot's Security Against Manipulation
11 months, 3 weeks ago
New terror laws needed to tackle rise of the radicalising AI chatbots
1 year, 2 months ago
Generative AI’s Biggest Security Flaw Is Not Easy to Fix
1 year, 6 months ago
Blending security into rapidly learning and adaptive AI proving difficult
1 year, 7 months ago
Researchers uncover hypnosis-based hacking potential in AI chatbot ChatGPT: Report
1 year, 7 months ago
The Security Hole at the Heart of ChatGPT and Bing
1 year, 9 months ago
GCHQ warns that ChatGPT and rival chatbots are a security threat
2 years ago
Cybercriminals using ChatGPT AI bot to develop malicious tools?
2 years, 2 months ago

Discover Related