AI-driven browsers introduce systemic enterprise risks by executing malicious, hidden commands within web content or images, bypassing traditional defenses and enabling attacks such as data theft and file deletion without the need for malware. Rapid adoption and lack of governance make these tools unmanageable, creating invisible insider threats that current security standards cannot effectively control or monitor.

CYBER INSIGHTS CYBER INSIGHTS OCT 29, 2025 OCT 29, 2025

Overview

AI browsers such as Perplexity Comet, OpenAI Atlas, and Fellou fundamentally alter enterprise risk by evolving from passive content-delivery tools into autonomous agents capable of executing actions on behalf of users. AI’s ability to read, summarize, click, and execute actions allows malicious instructions hidden in webpages, images, or URLs to be interpreted as legitimate user commands. Exploits such as CometJacking, PromptFix, and Atlas Omnibox Injection have shown that attackers can steal data, delete files, or perform transactions without malware or code execution, simply by manipulating the browser’s AI logic. Conventional defenses, such as firewalls and endpoint security, cannot see these attacks because there is no malware to scan or block. The danger comes from the language the AI interprets and acts upon. In effect, AI browsers merge untrusted web content with authenticated enterprise sessions, creating a new class of insider-level exposure that organizations cannot monitor or contain. Organizations should treat these tools as unmanageable until their automated actions, model inputs, and browser permissions can be fully isolated and controlled using the same standards applied to endpoints and cloud applications.

Key Findings:

  • AI browsers collapse long-standing trust boundaries by allowing untrusted web content and trusted user commands to coexist in the same environment. This design flaw makes prompt injection a systemic, not patchable, risk.
  • Attacks such as CometJacking, PromptFix, and Atlas Omnibox Injection demonstrate that adversaries can steal data, delete files, or perform transactions solely through language-based manipulation, without requiring malware or code execution.
  • Traditional defenses offer no visibility or containment, as firewalls, proxies, and EDR tools cannot inspect or block language-based instructions that the model interprets as user intent.
  • Rapid adoption and poor governance are amplifying exposure. Employees are installing AI browsers on unmanaged devices, vendors provide minimal data transparency, and enterprises lack input logging or audit controls to reconstruct incidents.
  • Immediate Actions: Consider restricting or suspending the use of AI browsers in enterprise environments until model inputs, automated actions, and browser permissions can be isolated, logged, and governed using the same controls applied to corporate endpoints and cloud systems.

1.0 Threat Overview

1.1 Historical Context  

AI browsers are rapidly emerging across both consumer and enterprise ecosystems. Products such as Perplexity Comet, OpenAI Atlas, Brave Leo, and Fellou integrate large language models directly into the browsing process, promising faster research, automated summaries, and task execution on behalf of the user. In reality, these browsers are not simply adding AI features; they are changing the nature of the browser itself. Instead of a controlled environment that renders content, the browser becomes an active decision-maker that interprets, reasons, and acts. This shift removes long-standing security boundaries between what the user intends and what the webpage presents. The result is an environment where untrusted web content and trusted user commands coexist in the same logical space, leaving no clear line of defense.

This design flaw makes AI browsers fundamentally unsafe in their current form. Multiple independent research teams have shown that attackers can embed invisible or misleading instructions into normal content that the AI browser executes as legitimate commands. CometJacking demonstrated that a single crafted URL could instruct the Comet browser to exfiltrate Gmail or calendar data. PromptFix proved that hidden text inside a fake CAPTCHA could trick an AI agent into downloading malicious files. OpenAI’s Atlas was shown to interpret malformed URLs as user intent, leading to destructive actions within authenticated sessions. These are not isolated bugs but symptoms of a systemic failure to separate model reasoning from web input. As adoption accelerates across unmanaged devices and enterprise endpoints, organizations are unintentionally deploying tools that operate beyond the visibility of firewalls, proxies, and endpoint controls. Until AI browsers can isolate model logic from web content, they will remain an ungovernable and high-risk technology.

1.2 Affected Systems

Browser AI Security Comparison Table
Browser AI Model Key Features Known Issues
Perplexity Comet Proprietary model with API access Full agentic capabilities including navigation, screenshot parsing, and task automation Vulnerable to Commadjacking and PromptFix, enabling data exfiltration and drive-by downloads
OpenAI Atlas GPT-4 Turbo Natural language omnibox and autonomous task execution Omnibox injection allows prompt-based command execution and destructive actions
Fellou Custom LLM stack Semi-agentic model capable of navigating and summarizing content Page text can override user intent through visible or hidden prompt injection
Traditional Browsers
(Edge, Chrome, Safari)
None Standard rendering engines with no autonomous AI control Exposed only through unverified third-party extensions or plugins

1.3 Architectural Weaknesses

AI browsers share a fundamental design flaw: they process untrusted web content and trusted user commands within the same logical environment. In traditional browsers, user actions are distinct from page content, and code execution is limited by sandboxing and the same-origin policy. In AI browsers, that separation no longer exists. The large language model interprets both sources of information as part of a single conversation, meaning a hidden instruction on a webpage or in an image can be treated the same way as a legitimate command from the user. This collapse of trust boundaries is what allows prompt injection to function without exploiting any code vulnerability.

Another architectural weakness lies in how AI browsers handle data flow and session authority. Many agentic browsers maintain persistent authentication to email, storage, or cloud services so that the AI can perform actions on the user’s behalf. When an attacker injects a hidden prompt, the browser executes those actions with full access rights, effectively turning a helpful assistant into an insider threat. Because these actions occur within normal browser operations, they generate no alerts or logs that traditional security tools can recognize. Finally, most AI browsers rely on cloud-based inference, meaning all contextual data, session details, and model reasoning are transmitted to vendor infrastructure outside enterprise visibility. This combination of untrusted input, over-privileged automation, and opaque processing creates an environment that cannot be secured with existing web or endpoint controls.


2.0 Exposure Pathways

2.1 User Interaction Triggers

Most AI browser incidents begin with an ordinary activity that seems safe to the user. Clicking a link, summarizing a webpage, or capturing a screenshot can each serve as a delivery method for hidden instructions. In CometJacking, the user only had to click a crafted URL that appeared harmless but contained embedded text prompting the AI to exfiltrate Gmail or calendar data. PromptFix used a fake CAPTCHA to hide similar instructions in invisible text that the AI model parsed and executed automatically. Even Atlas, which relies on its omnibox as a dual search and command field, can interpret malformed URLs as user intent. These actions require no downloads, macros, or file execution, so the user never sees any signs of compromise.

2.2 Autonomous Action Chain

Once triggered, the AI browser carries out the injected instructions with the same privileges as the authenticated user. It can open new tabs, access connected accounts, download or delete files, and send requests to external endpoints. Because these actions occur through the browser’s legitimate automation features, they appear as normal user activity in logs. In CometJacking, this autonomy allowed data exfiltration without bypassing any security controls. In PromptFix, the AI bypassed human verification steps by “clicking” invisible buttons, completing a drive-by download on its own. The browser becomes both the tool and the threat, acting as an automated insider within the organization’s perimeter.

2.3 Visibility Gaps

Traditional enterprise defenses cannot see these events because no malicious code runs and no exploit chain exists. Firewalls, endpoint detection, and proxy tools observe only legitimate browser traffic, while the real instructions exist as text within the model’s context. Session tokens, OAuth credentials, and the AI's cookies appear valid, and its actions blend seamlessly into normal web behavior. Even if a prompt injection is detected later, most AI browsers lack full input logging, leaving investigators unable to reconstruct what command was executed or how the event began. This absence of visibility turns each AI browsing session into an unmonitored data channel operating inside a trusted environment.


3.0 Governance and Misuse

The spread of AI browsers has exposed a governance problem rather than a technical one. Adoption is outpacing policy, and vendor transparency is limited; organizations lack visibility into how these tools handle data or exercise authority on behalf of users. Without structured oversight, AI browsers are introducing unmanaged automation into corporate networks under the appearance of legitimate productivity tools.

AI Browser Enterprise Security Risks
Shadow Adoption and Unmanaged Use
Employees are downloading AI browsers like Comet, Atlas, and Fellou independently, often on personal or unmanaged devices.
These tools are frequently linked to corporate email or cloud accounts, allowing unapproved data flow into external AI systems.
Because they operate outside MDM or policy enforcement, security teams cannot apply DLP, identity restrictions, or logging.
Each instance becomes a credentialed automation surface invisible to traditional monitoring.
Shadow IT Unmanaged Devices No DLP Bypasses MDM
Vendor Transparency and Data Handling Risks
Most AI browser vendors provide little clarity on data retention, model training, or cloud storage practices.
Web content, cookies, and authentication tokens may be sent to vendor infrastructure for processing without user awareness.
Data may cross jurisdictional or regulatory boundaries, complicating compliance with privacy frameworks such as GDPR or HIPAA.
Enterprises cannot verify what data leaves their environment or whether it is deleted after processing.
Data Retention Model Training GDPR/HIPAA Token Exfiltration
Permission Overreach and Identity Confusion
AI browsers request expansive, persistent permissions to read and act within email, calendar, and file systems.
Users often approve these requests automatically, granting long-term access to corporate data.
Once authorized, the browser's AI executes tasks under the user's identity, making injected commands appear legitimate.
IAM solutions typically do not distinguish between user actions and AI-initiated automation, eliminating accountability.
Excessive Permissions Identity Hijacking OAuth Abuse No User/AI Distinction
Accountability and Audit Gaps
Few AI browsers record complete input and output histories for model activity.
Security teams cannot reconstruct what prompts were issued, what data was accessed, or what actions were taken.
The absence of audit trails makes incident response, forensic analysis, and compliance verification extremely difficult.
Organizations lack visibility into whether AI browsers are being exploited through prompt injection or data exfiltration attacks.
No Audit Logs Limited Forensics Compliance Gaps Incident Response

4.0 Historical Exploit Timeline

AI Browser Exploit Timeline
Aug 2025
Perplexity Comet
PromptFix
Description
Hidden CAPTCHA prompt injection enables automatic file downloads without user interaction or awareness.
Impact
Demonstrated drive-by download injection
Prompt Injection Hidden CAPTCHA Drive-by Download Automatic Execution
Oct 2025
Perplexity Comet
Comejacking
Description
Crafted URL exfiltrated Gmail and Calendar data through malicious prompt manipulation embedded in web requests.
Impact
Confirmed AI-driven data theft
URL Injection Gmail Exfiltration Calendar Data Theft Crafted Prompts
Oct 2025
Fellou
Navigation Injection
Description
Visible page text overridden user intent, causing the AI to perform unintended navigation and actions against user instructions.
Impact
Showed user-input hijacking vulnerability
Intent Override Visible Injection User Hijacking Navigation Control
Oct 2025
OpenAI Atlas
Omnibox Injection
Description
Malformed URL executed trusted commands through natural language interface, bypassing security controls.
Impact
Validated prompt misclassification exploit
Omnibox Exploit Malformed URL Command Execution Misclassification
Ongoing 2025
Multiple Vendors
General Prompt Injection Research
Description
Cross-vendor systemic flaws disclosed through ongoing security research revealing architectural vulnerabilities in AI browser design.
Impact
Confirmed architectural vulnerability class
Ongoing Research Cross-Vendor Systemic Flaws Architecture Issues

5.0 Risk and Impact

AI browsers introduce a form of risk that cannot be contained by traditional network or endpoint defenses. Because these tools act within authenticated sessions and interpret language as executable logic, they can move data, perform actions, and alter systems without deploying any code. Hidden instructions embedded in ordinary content can cause the browser to exfiltrate emails, download files, or modify accounts while remaining indistinguishable from legitimate user activity. These events compromise confidentiality and integrity but generate no alerts, leaving defenders blind to what occurred or when. The lack of reliable input logging further complicates forensics, making it nearly impossible to determine what data was accessed or whether containment has been achieved.

At the organizational level, these weaknesses create legal, operational, and strategic exposure. Data processed by AI browsers often leaves corporate networks and may be stored on vendor infrastructure in unknown jurisdictions, potentially violating privacy and compliance requirements. Unauthorized actions initiated by injected prompts can disrupt workflows or delete critical data while appearing to originate from trusted users. Over time, repeated AI-driven incidents erode trust among customers, regulators, and leadership teams, undermining confidence in broader automation and AI adoption efforts. Unless enterprises apply strict governance, isolation, and monitoring controls, AI browsers will remain an unmonitored automation channel capable of causing high-impact losses without triggering conventional security defenses.


6.0 Recommendations for Mitigation

AI Browser Mitigation Strategies
Establish Enterprise Governance for AI Tools
Adopt an enterprise-wide governance framework aligned with the NIST AI Risk Management Framework (AI RMF) or ISO/IEC 42001 to standardize risk evaluation of AI browser adoption and assistants.
Require all AI-enabled applications to complete security reviews before deployment, including assessments of data handling, retention, and access controls.
Incorporate AI browser use into acceptable use policies and awareness training to ensure employees understand the operational and compliance risks.
NIST AI RMF ISO/IEC 42001 Security Reviews Policy Framework
Deploy an AI Interaction Gateway
Implement a centralized proxy that inspects and normalizes all AI-related web traffic, removing hidden text, encoded commands, and malformed URLs that could enable prompt injections.
Log all model-bound traffic, integrating those records with existing DLP and CASB systems for visibility and enforcement.
Require browser sessions that involve AI interactions to pass through this gateway, ensuring consistent sanitization and audit coverage.
Centralized Proxy Traffic Inspection DLP Integration CASB
Mandate Identity and Permission Controls for AI Sessions
Enforce single sign-on (SSO) with multi-factor authentication (MFA) for all browser-based AI agents to prevent unapproved access.
Apply least-privilege principles so AI agents cannot access, modify, or send data beyond the explicit scope of user intent.
Require human confirmation for sensitive operations such as financial transactions, data exports, or account modifications initiated through AI browsers.
SSO + MFA Least Privilege Human Confirmation Permission Controls
Require Immutable AI Logging and Auditable Records
Capture both AI input prompts and resulting actions in tamper-proof, write-once storage for long-term forensics and compliance.
Correlate AI activity logs with identity and endpoint telemetry in the organization's SIEM to detect misuse or anomalies.
Immutable Logs SIEM Integration Forensics Compliance

7.0 Hunter Insights

AI browsers such as Perplexity Comet, OpenAI Atlas, and Fellou represent a shift in enterprise security risk, transforming browsers from passive rendering tools into autonomous agents capable of executing user-like actions solely through interpreted language. This architecture introduces high-impact risks: it allows malicious instructions hidden within otherwise benign web content, images, or URLs to be executed as legitimate user commands—without requiring code execution or malware, making traditional defenses ineffective. As attackers exploit these AI browser weaknesses through prompt injection attacks like CometJacking and PromptFix, data theft, file deletion, and unauthorized transactions can occur invisibly and with user-level authority, bypassing firewalls, proxies, and endpoint controls.

Looking ahead, unless organizations adopt robust enterprise governance and technical controls on AI-enabled browsers, the proliferation of these tools will accelerate the emergence of unmonitored automation and untraceable data exfiltration channels. Future-proof security will require implementing centralized AI interaction gateways, mandating rigorous identity and permission controls, enforcing immutable audit logging, and demanding full vendor transparency into data-handling practices. Without both architectural changes to isolate model logic from untrusted web input and comprehensive policy frameworks, AI browsers will continue to erode security and compliance boundaries—leaving enterprises exposed to undetectable insider threats and systemic trust collapses.

💡
Hunter Strategy encourages our readers to look for updates in our daily Trending Topics and on Twitter.