AI‑enhanced browsers promise to turn the web into a personal assistant. They come with features such as summarising pages, navigating sites, filling forms and even making purchases autonomously. However, 2025 has shown that these agentic capabilities come at a steep security price: the browser stops being a passive viewer and becomes an active agent with the ability to act on your behalftechcrunch.com. This report synthesizes recent research, news and security advisories to explain how AI browsers work, why they are vulnerable to prompt injection attacks and what users and organisations should do to mitigate the risks.

862a7346-3cba-41ba-881a-52e7c192ea4e.png

From Passive Browsing to Agentic AI

Traditional browsers simply render web pages. AI browsers, by contrast, integrate large‑language models (LLMs) capable of reading content and taking actions. This shift from passive to agentic interaction means the browser can read your emails, book flights using saved payment methods, schedule appointments and post on social mediaresearch.aimultiple.com. Products such as Perplexity’s Comet, OpenAI’s ChatGPT Atlas, Opera’s Neon and Fellou’s browser promise convenience by automating tasks that previously required manual inputtechcrunch.com. They often ask for broad permissions—access to your email, calendar and contact lists—to function effectivelytechcrunch.com.

What makes an AI browser “agentic”

Researchers distinguish between an AI‑assisted browser and an agentic browser. An AI‑assisted browser may answer questions or summarise articles but relies on the user to execute actions. An agentic browser goes further by executing multi‑step tasks autonomouslymalwarebytes.com. For example, telling the browser to “find the cheapest flight to Paris next month and book it” prompts the agent to research options, fill out forms and finalise the booking on its ownmalwarebytes.com.

Understanding Prompt Injection

Prompt injection is a class of attack in which malicious instructions are hidden in inputs that an AI model processes. When the AI doesn’t properly distinguish between trusted user commands and untrusted web content, these hidden instructions can override the user’s intent. Security researchers classify attacks into:

  • Direct prompt injection – unwanted instructions typed or pasted directly into a model’s input box. For example, a malicious URL in the address bar may contain natural‑language commands that the AI treats as trusted inputtheregister.com.
  • Indirect prompt injection – malicious instructions embedded in content the agent is asked to process, such as a web page, PDF or image. When the agent summarises the page, it inadvertently executes the hidden instructionstheregister.com.

Because the agent operates with the user’s credentials across sites, the injected commands can perform sensitive actions—opening emails, reading messages or transferring money—without the user’s knowledgebrave.com. Traditional browser protections like the same‑origin policy become irrelevant because the AI agent is explicitly authorized to cross domainsbrave.com.

Recent Vulnerabilities

Multiple independent teams have documented prompt‑injection vulnerabilities across several AI browsers in 2024–2025. In late 2025 new reports exposed even more attack vectors, including hidden text in screenshots, navigation‑based injection, malformed URLs and hidden HTML elementsbrave.combrave.com. Below are notable examples illustrating how attackers exploit the agentic capabilities.

Hidden text in web pages (Comet & Fellou)

The first wave of attacks involved embedding hidden instructions in web pages and social‑media comments. Brave researchers demonstrated that Perplexity’s Comet treated text hidden behind a Reddit spoiler tag as part of the user’s prompt. When a user clicked “Summarize this page,” Comet executed instructions that exfiltrated the user’s Perplexity email address, requested a password‑reset one‑time password (OTP) from Gmail, read the OTP and sent it to the attackerresearch.aimultiple.com. In late 2025 Brave expanded on this work by showing that Comet’s screenshot‑analysis feature was also vulnerable: invisible text embedded in images could be extracted via optical character recognition and used to instruct the AI to perform malicious actionsbrave.com. For the Fellou browser, simply visiting a malicious page was enough—no summarisation needed. The browser automatically sent the page’s content to the LLM, allowing visible instructions to override the user’s query and trigger actions like opening Gmail and sending data to an attackerbrave.com.

Unseen prompt injection via screenshots (Comet)

Comet introduced a feature allowing users to take screenshots of websites and ask questions about them. Brave researchers discovered that attackers could embed nearly invisible text in an image (faint blue text on a yellow background). When the user captured a screenshot, Comet’s optical character recognition extracted the hidden text and passed it to the LLM, which executed the malicious commandsbrave.com. Subsequent experiments showed that the instructions could instruct the AI to open Gmail, read the latest email subject and send it to an attacker server, all without the user noticingbrave.com. The attack underscores that prompt injection can occur through multimodal inputs; invisible instructions in images can be just as dangerous as hidden HTML.

Weaponised URLs (CometJacking)

LayerX researchers uncovered a CometJacking attack in which the prompt is hidden in the query parameters of a seemingly harmless URL. By clicking such a link, the user triggers the AI to consult its memory (emails, calendars or contact data) and send summarised data, encoded in base64, to an attacker‑controlled serverlayerxsecurity.com. Unlike text‑based prompt injection, this vector prioritises user memory via URL parameters and bypasses data‑exfiltration checks by encoding the payloadlayerxsecurity.com.

Fellou’s agentic browser exhibited a flaw where simply asking it to visit a site caused it to forward the page’s content to the LLM for processing. Attackers could place malicious instructions directly in the page’s visible text. Upon visiting the site, the AI would carry out the attacker’s commands, such as opening Gmail to read the latest email subject and exfiltrating it via a crafted URLresearch.aimultiple.com. The vulnerability required no special hiding—plain text was enough to hijack the browser.

Hidden HTML elements (Opera Neon)

Opera’s Neon browser processed hidden HTML elements as AI commands. Attackers used zero‑opacity <span> tags to insert invisible instructions. When the user asked Neon to summarise a page, the AI read the hidden commands and navigated to Opera’s authentication site to extract the user’s email and send it to the attackerbrave.com. The same technique could be used to extract even more sensitive information (e.g., credit‑card details) if the user was logged into their bankbrave.com. Opera said the attack succeeded only 10 % of the time due to model non‑determinism, but Brave researchers reproduced it reliably and advised disabling processing of hidden HTMLbrave.com.

Memory contamination and cross‑site request forgery (ChatGPT Atlas)

LayerX reported that OpenAI’s ChatGPT Atlas browser could be exploited via cross‑site request forgery to inject malicious instructions into ChatGPT’s persistent memory. A user logged into ChatGPT receives a malicious link that, when clicked, initiates a request to OpenAI’s servers with instructions that become part of the user’s stored “memory.” The attacker’s instructions persist across devices and sessions, meaning the AI will carry out harmful commands in future chatstheregister.com. Atlas keeps users logged in by default, which increases the risk: research found that Atlas blocked only 5.8 % of phishing attacks compared with 47 % for Chrome and 53 % for Edgeesecurityplanet.com. Israeli researchers reported a 94.2 % failure rate when simulating 103 phishing attacks against Atlasi24news.tv.

Malformed URLs treated as commands (Atlas)

NeuralTrust researchers discovered that Atlas’s omnibox can interpret malformed URLs containing natural‑language instructions as user commands. When a user copies such a string into the address bar, Atlas treats the entire content as a prompt rather than a URL and executes embedded instructionsneuraltrust.ai. For example, a string that looks like a URL but includes “follow these instructions only” can override the user’s intent and direct the browser to a malicious websiteneuraltrust.ai. SC Media explained that an attacker can place such a link behind a “copy link” button; the extra space after “https:” causes Atlas to treat it as a prompt, then open attacker‑controlled pages or instruct the AI to delete files in Google Drivescworld.com. The attack highlights how ambiguous parsing of the omnibox blurs the line between trusted commands and untrusted data. NeuralTrust recommends strict URL parsing and explicit “Navigate vs. Ask” modesneuraltrust.ai.

Underlying Causes

Across all these attacks, a consistent theme emerges: blurred trust boundaries. AI browsers often concatenate the user’s prompt with page content and send it to the LLM without clear demarcation. This lets webpage instructions override the user’s intentbrave.com. Traditional browser mechanisms like the same‑origin policy and cross‑site request forgery protections assume that only scripts (not natural language) can trigger actions. Agentic AI breaks this assumption because the model interprets natural language on any page as potential instructionsbrave.com. Experts note that LLMs are not good at distinguishing where instructions come from; they treat hidden or visible text equallyfortune.com. As security researcher Sasi Levi points out, prompt injection is inevitable as long as models read untrusted text and can influence actionstheregister.com.

Risks for Users and Organisations

The consequences of prompt injection extend beyond embarrassing outputs. In demonstrations, AI browsers opened Gmail accounts, read subject lines and exfiltrated them to attacker serverstheregister.com. Because these browsers have access to emails, calendars and stored credentials, a compromised agent can:

  • Steal authentication tokens and personal data. Brave researchers highlighted that simply summarising a Reddit post could expose bank credentials or personal information if invisible instructions instruct the agent to fetch themmalwarebytes.com.
  • Hijack financial transactions. Malwarebytes warned that attackers could set up websites with competitive pricing to lure agentic browsers into making purchases on behalf of the user, effectively draining accountsmalwarebytes.com.
  • Perform lateral movement in corporate environments. A compromised agent can access internal documentation, confidential emails and financial systems, move across cloud services using saved credentials and manipulate communication channelsresearch.aimultiple.com.

The risk isn’t limited to Comet or Atlas. The Register documented prompt‑injection failures in generative chatbots like ChatGPT, Gemini and Perplexity when summarising pages containing hidden instructionstheregister.com. Because these tools run on multiple browsers, the threat is ecosystem‑wide.

Defensive Measures and Recommendations

Until AI‑browser vendors implement robust protections, users and organisations should treat agentic features as untrusted by default. Security experts and vendors recommend the following practices:

Limit sensitive activities

  • Disable AI features when handling sensitive accounts. AIMultiple advises against using AI browsers while logged into banking, healthcare, email or corporate portalsresearch.aimultiple.com. Use a traditional browser for these tasks.
  • Use separate profiles. Maintain one profile for AI‑assisted casual browsing (no sensitive logins) and another for authenticated sessions with AI features disabledresearch.aimultiple.com.

Reduce privileges and monitor behaviour

  • Grant minimal permissions. Only allow the agent access to data and services necessary for a taskmalwarebytes.com. Avoid importing entire password keychains into AI browsersfortune.com.
  • Enable multi‑factor authentication (MFA). Use unique passwords and MFA for accounts connected to AI browserstechcrunch.com. Monitor account activity for unusual behaviour.
  • Verify before summarising. Inspect page source for hidden elements (e.g., white text on white background, comments, collapsed spoiler sections) before asking the AI to summariseresearch.aimultiple.com.
  • Be wary of copy‑and‑paste prompts. Do not paste unknown URLs or prompts into an agentic browser’s address bar; they may contain hidden commandstheregister.com.

Organisational controls

  • Audit AI‑browser usage. Identify which employees use AI browsers and restrict them from accessing sensitive systems without approvalresearch.aimultiple.com.
  • Implement browser security platforms. Tools like LayerX monitor AI‑agent behaviour and can detect anomalous actionsresearch.aimultiple.com.
  • Define policies for agentic features. Require explicit approval before enabling agentic features for workflows involving confidential dataresearch.aimultiple.com.

Stay informed and demand vendor transparency

  • Keep software updated and follow vendor advisories. Vendors like OpenAI and Perplexity have begun adding guardrails (e.g., logged‑out mode, agent “watch mode”)techcrunch.com. While these measures reduce risk, they don’t eliminate itfortune.com.
  • Follow research developments. Recent papers have introduced in‑browser fuzzing frameworks that automatically generate malicious pages to test AI agents; even the best‑performing tools failed 58–74 % of the time after several iterationsarxiv.org. Such research underscores the importance of continuous testing.

Vendor Responses and Ongoing Research

OpenAI, Perplexity and Opera acknowledge that prompt injection remains an unsolved problem. OpenAI’s chief information security officer noted that “prompt injection remains a frontier, unsolved security problem”techcrunch.com. Fortune reported that OpenAI implemented rapid‑response systems, novel training techniques and modes like logged‑out and watch mode to detect and block attacksfortune.com. Perplexity claims to have built real‑time detection systems, and Opera patched its hidden‑HTML vulnerabilitytechcrunch.com. However, independent researchers continue to find new attack vectors—malformed URLs, cross‑site request forgery and memory contamination—that bypass existing safeguardstheregister.comtheregister.com.

Academia is also contributing. A 2025 arXiv paper introduced an LLM‑guided fuzzing framework that runs inside the browser to discover prompt injection vulnerabilities. The framework found that page summarisation and question‑answering features had attack success rates of 73 % and 71 %, respectively, and that even sophisticated models eventually succumbed to generated attacksarxiv.orgarxiv.org. This suggests that iterative adversarial testing will be essential for strengthening AI browsers.

Conclusion: Proceed with Caution

AI‑enhanced browsers are still in their infancy. Their promise—automatic navigation, summary and task completion—is tempered by systemic security weaknesses. The underlying problem is fundamental: once a browser concatenates untrusted web content with user instructions and gives an LLM the authority to act, an attacker only needs to hide instructions in that content to take control. As Brave researchers put it, prompt injection is not an isolated bug but a systemic challenge for all AI‑powered browsersbrave.com.

For the foreseeable future, users and organisations should treat AI browsers as untrusted by default. Limit their access, avoid using them for sensitive tasks, monitor their actions and stay informed about evolving threats. Vendors must continue to harden their designs, and researchers must keep probing for weaknesses. Until there are robust and verifiable safeguards, the convenience of AI‑augmented browsing comes with an invisible cost: the risk that your helpful assistant could be turned against you.

You May Also Like

Hoxo: Capgemini and Orano’s AI‑Powered Humanoid Robot to Transform Nuclear Operations

Overview On November 5, 2025, Capgemini and Orano unveiled Hoxo, the first intelligent humanoid…

The AI Regulation Landscape: 2025–2030 Outlook

A Comprehensive Analysis of Global AI Governance Trends, Compliance Challenges, Innovation Bottlenecks,…

The Great Cloud Shake-Up: AWS and Microsoft Azure Outages Expose the Fragility of Our Digital Backbone

Two massive cloud disruptions within ten days have sent a shockwave through…

From Generators to Generalists: Evidence that Video Models Are Zero‑Shot Learners and Reasoners

Thesis. The paper argues that large, generative video models trained at scale…