AI Agentic Browser Blunder
Critical Vulnerability Exposes Perplexity's Comet Browser to Prompt Injection Attacks
Perplexity's Comet AI browser faced a severe security flaw dubbed 'PleaseFix,' allowing attackers to conduct indirect prompt injection attacks. The vulnerability permitted zero‑click exploits, leaking user files and sensitive data. Despite a patch rollout, broader industry implications question agentic AI browser security and the effectiveness of current defenses.
Introduction to Perplexity’s Comet AI Browser Vulnerability
The critical vulnerability discovered in Perplexity's Comet AI browser, known as PleaseFix, has shed light on some of the underlying risks associated with AI‑powered browsers. This vulnerability, identified by researchers from Zenity Labs, allowed for indirect prompt injection attacks that could leak sensitive local files and data without the user's explicit consent or knowledge. Through this flaw, attackers were able to embed malicious instructions into seemingly benign content, which the AI browser would process inadvertently, inadvertently executing harmful actions. According to eSecurity Planet, this created opportunities for attackers to exploit browser functions to access secure data and manipulate interactions silently.
Understanding Indirect Prompt Injection Attacks
Mechanism of the PleaseFix Vulnerability
Real‑World Examples of Exploits
One of the most striking real‑world examples of exploits, as highlighted by Zenity researchers, was the vulnerability discovered in Perplexity's Comet AI‑powered browser. This vulnerability, known as PleaseFix, was particularly critical due to the way it allowed indirect prompt injection attacks. Through these attacks, malicious actors could embed harmful instructions within seemingly innocent content such as webpages, emails, or calendar invites, which Comet would then process. As a result, the AI could be tricked into executing undesired actions without the user's explicit consent, such as accessing sensitive data like bank account details or local files (1).
This exploitation technique is particularly insidious as it permits zero‑click vulnerabilities—meaning the user does not need to click on a malicious link or download an attachment for the exploit to happen. The PleaseFix exploit demonstrated how AI could be misdirected purely through hidden text or seemingly benign requests to summarize content. Successful demonstrations included scenarios where Comet autonomously registered for services using the user's email, extracted verification tokens from Gmail, and even accessed local files via the file:// protocol—all under the guise of executing legitimate commands such as 'summarize this page' (5).
Beyond Perplexity's case, the PleaseFix vulnerability highlights a broader issue within the AI community: the difficulty of preventing indirect prompt injection attacks across various agentic browsers. These browsers typically face similar risks due to their architectural design that often trusts and processes external content without robust separation of instructions from data. This has led to the disclosure of similar flaws in other AI browsers, demonstrating that such vulnerabilities are systemic and not merely isolated incidents (4).
Discovery and Response to the Vulnerability
Broader Implications on AI Browser Security
The discovery of the critical vulnerability in the Perplexity Comet AI browser, as detailed in a comprehensive 1 from eSecurity Planet, underscores a significant challenge in the realm of AI browser security. This vulnerability, named PleaseFix by Zenity researchers, highlights the inherent risks in agentic AI browsers where traditional security measures are often inadequate. The vulnerability allowed attackers to inject malicious content through indirect prompt injection attacks, effectively bypassing the browser's security to access sensitive user data without explicit consent. This incident shines a light on broader security concerns associated with AI technologies that blur the lines between user data and external malicious payloads.
One of the broader implications of the PleaseFix vulnerability in AI browsers is the pressing need for enhanced security measures across the industry. As agentic AI browsers become more prevalent, the sophistication of such attacks demonstrates the limits of current security frameworks in distinguishing between legitimate data processing and malicious activities. The 1 underscores a critical juncture in the development of AI‑integrated technologies, highlighting the urgent call for industry‑wide defenses, such as adversarial training and more robust content classifiers, to shield users from potential exploits.
How Was the PleaseFix Vulnerability Fixed?
Impact on Other AI Browsers
User Safety and Recommendations
Background on Zenity Labs and Brave Researchers
Current Events Related to AI Browser Vulnerabilities
Public Reactions and Industry Sentiment
Economic, Social, and Regulatory Implications
Future Trends in AI Cybersecurity
Sources
- 1.eSecurity Planet(esecurityplanet.com)
- 2.explained in further detail by CyberNews(cybernews.com)
- 3.Zenity Labs' disclosure(zenity.io)
- 4.CyberScoop's analysis(cyberscoop.com)
- 5.Brave(brave.com)
- 6.TechRadar(techradar.com)
- 7.SiliconAngle(siliconangle.com)
Related News
May 9, 2026
OpenAI Ships GPT-5.5-Cyber, a Near-Mythos Model for Vetted Defenders
OpenAI launched GPT-5.5-Cyber, a specialized model for cybersecurity defenders that scored 81.9% on the CyberGym benchmark and completed simulated corporate cyberattacks. The UK AISI found it nearly as capable as Anthropic's Claude Mythos — 20% vs 30% success on a 32-step attack simulation. But the strategy diverges: Anthropic locks Mythos to ~40 orgs, while OpenAI offers tiered access through its Trusted Access for Cyber program.
May 8, 2026
OpenAI Launches GPT-5.5-Cyber, Taking Direct Aim at Anthropic Mythos
OpenAI launched GPT-5.5-Cyber on May 7 — a cybersecurity-focused AI model rolling out to vetted defenders. The release comes a month after Anthropic's Claude Mythos and signals an escalating arms race in AI-powered cyber tools, with both companies jockeying for government trust.
May 3, 2026
Anthropic Mythos Exposes AI Governance Crisis as Models Gain Autonomy
Anthropic's Claude Mythos Preview model, which can autonomously execute multi-step cyberattacks and discovered decades-old software bugs, has triggered Project Glasswing — a restricted-access coalition with CISA, Microsoft, and Apple. The model's capabilities are forcing a reckoning over how companies govern AI that can act independently.