OpenAI warns AI browsers may never be fully secure; says prompt injection may never be solved
ChatGPT- maker OpenAI has now cautioned that AI browsers including its recently launched ChatGPT Atlas agent, may never be fully immune to prompt injection attacks. In a long blog post, the company said that while it is strengthening defenses, the nature of these attacks makes complete protection unlikely. For those unaware, prompt injection takes place when malicious instructions are embedded in content that an AI agent processes. Instead of following the user’s intent, the agent is tricked into executing the attacker’s commands. For browser-based agents, this risk is especially acute because they interact with emails, documents, social media posts, and arbitrary webpages.
