AI-driven browsers introduce systemic enterprise risks by executing malicious, hidden commands within web content or images, bypassing traditional defenses and enabling attacks such as data theft and file deletion without the need for malware. Rapid adoption and lack of governance make these tools unmanageable, creating invisible insider threats that current security standards cannot effectively control or monitor.
Overview
AI browsers such as Perplexity Comet, OpenAI Atlas, and Fellou fundamentally alter enterprise risk by evolving from passive content-delivery tools into autonomous agents capable of executing actions on behalf of users. AI’s ability to read, summarize, click, and execute actions allows malicious instructions hidden in webpages, images, or URLs to be interpreted as legitimate user commands. Exploits such as CometJacking, PromptFix, and Atlas Omnibox Injection have shown that attackers can steal data, delete files, or perform transactions without malware or code execution, simply by manipulating the browser’s AI logic. Conventional defenses, such as firewalls and endpoint security, cannot see these attacks because there is no malware to scan or block. The danger comes from the language the AI interprets and acts upon. In effect, AI browsers merge untrusted web content with authenticated enterprise sessions, creating a new class of insider-level exposure that organizations cannot monitor or contain. Organizations should treat these tools as unmanageable until their automated actions, model inputs, and browser permissions can be fully isolated and controlled using the same standards applied to endpoints and cloud applications.
Key Findings:
- AI browsers collapse long-standing trust boundaries by allowing untrusted web content and trusted user commands to coexist in the same environment. This design flaw makes prompt injection a systemic, not patchable, risk.
- Attacks such as CometJacking, PromptFix, and Atlas Omnibox Injection demonstrate that adversaries can steal data, delete files, or perform transactions solely through language-based manipulation, without requiring malware or code execution.
- Traditional defenses offer no visibility or containment, as firewalls, proxies, and EDR tools cannot inspect or block language-based instructions that the model interprets as user intent.
- Rapid adoption and poor governance are amplifying exposure. Employees are installing AI browsers on unmanaged devices, vendors provide minimal data transparency, and enterprises lack input logging or audit controls to reconstruct incidents.
- Immediate Actions: Consider restricting or suspending the use of AI browsers in enterprise environments until model inputs, automated actions, and browser permissions can be isolated, logged, and governed using the same controls applied to corporate endpoints and cloud systems.
1.0 Threat Overview
1.1 Historical Context
AI browsers are rapidly emerging across both consumer and enterprise ecosystems. Products such as Perplexity Comet, OpenAI Atlas, Brave Leo, and Fellou integrate large language models directly into the browsing process, promising faster research, automated summaries, and task execution on behalf of the user. In reality, these browsers are not simply adding AI features; they are changing the nature of the browser itself. Instead of a controlled environment that renders content, the browser becomes an active decision-maker that interprets, reasons, and acts. This shift removes long-standing security boundaries between what the user intends and what the webpage presents. The result is an environment where untrusted web content and trusted user commands coexist in the same logical space, leaving no clear line of defense.
This design flaw makes AI browsers fundamentally unsafe in their current form. Multiple independent research teams have shown that attackers can embed invisible or misleading instructions into normal content that the AI browser executes as legitimate commands. CometJacking demonstrated that a single crafted URL could instruct the Comet browser to exfiltrate Gmail or calendar data. PromptFix proved that hidden text inside a fake CAPTCHA could trick an AI agent into downloading malicious files. OpenAI’s Atlas was shown to interpret malformed URLs as user intent, leading to destructive actions within authenticated sessions. These are not isolated bugs but symptoms of a systemic failure to separate model reasoning from web input. As adoption accelerates across unmanaged devices and enterprise endpoints, organizations are unintentionally deploying tools that operate beyond the visibility of firewalls, proxies, and endpoint controls. Until AI browsers can isolate model logic from web content, they will remain an ungovernable and high-risk technology.
1.2 Affected Systems
| Browser | AI Model | Key Features | Known Issues |
|---|---|---|---|
| Perplexity Comet | Proprietary model with API access | Full agentic capabilities including navigation, screenshot parsing, and task automation | Vulnerable to Commadjacking and PromptFix, enabling data exfiltration and drive-by downloads |
| OpenAI Atlas | GPT-4 Turbo | Natural language omnibox and autonomous task execution | Omnibox injection allows prompt-based command execution and destructive actions |
| Fellou | Custom LLM stack | Semi-agentic model capable of navigating and summarizing content | Page text can override user intent through visible or hidden prompt injection |
| Traditional Browsers (Edge, Chrome, Safari) |
None | Standard rendering engines with no autonomous AI control | Exposed only through unverified third-party extensions or plugins |
1.3 Architectural Weaknesses
AI browsers share a fundamental design flaw: they process untrusted web content and trusted user commands within the same logical environment. In traditional browsers, user actions are distinct from page content, and code execution is limited by sandboxing and the same-origin policy. In AI browsers, that separation no longer exists. The large language model interprets both sources of information as part of a single conversation, meaning a hidden instruction on a webpage or in an image can be treated the same way as a legitimate command from the user. This collapse of trust boundaries is what allows prompt injection to function without exploiting any code vulnerability.
Another architectural weakness lies in how AI browsers handle data flow and session authority. Many agentic browsers maintain persistent authentication to email, storage, or cloud services so that the AI can perform actions on the user’s behalf. When an attacker injects a hidden prompt, the browser executes those actions with full access rights, effectively turning a helpful assistant into an insider threat. Because these actions occur within normal browser operations, they generate no alerts or logs that traditional security tools can recognize. Finally, most AI browsers rely on cloud-based inference, meaning all contextual data, session details, and model reasoning are transmitted to vendor infrastructure outside enterprise visibility. This combination of untrusted input, over-privileged automation, and opaque processing creates an environment that cannot be secured with existing web or endpoint controls.
2.0 Exposure Pathways
2.1 User Interaction Triggers
Most AI browser incidents begin with an ordinary activity that seems safe to the user. Clicking a link, summarizing a webpage, or capturing a screenshot can each serve as a delivery method for hidden instructions. In CometJacking, the user only had to click a crafted URL that appeared harmless but contained embedded text prompting the AI to exfiltrate Gmail or calendar data. PromptFix used a fake CAPTCHA to hide similar instructions in invisible text that the AI model parsed and executed automatically. Even Atlas, which relies on its omnibox as a dual search and command field, can interpret malformed URLs as user intent. These actions require no downloads, macros, or file execution, so the user never sees any signs of compromise.
2.2 Autonomous Action Chain
Once triggered, the AI browser carries out the injected instructions with the same privileges as the authenticated user. It can open new tabs, access connected accounts, download or delete files, and send requests to external endpoints. Because these actions occur through the browser’s legitimate automation features, they appear as normal user activity in logs. In CometJacking, this autonomy allowed data exfiltration without bypassing any security controls. In PromptFix, the AI bypassed human verification steps by “clicking” invisible buttons, completing a drive-by download on its own. The browser becomes both the tool and the threat, acting as an automated insider within the organization’s perimeter.
2.3 Visibility Gaps
Traditional enterprise defenses cannot see these events because no malicious code runs and no exploit chain exists. Firewalls, endpoint detection, and proxy tools observe only legitimate browser traffic, while the real instructions exist as text within the model’s context. Session tokens, OAuth credentials, and the AI's cookies appear valid, and its actions blend seamlessly into normal web behavior. Even if a prompt injection is detected later, most AI browsers lack full input logging, leaving investigators unable to reconstruct what command was executed or how the event began. This absence of visibility turns each AI browsing session into an unmonitored data channel operating inside a trusted environment.
3.0 Governance and Misuse
The spread of AI browsers has exposed a governance problem rather than a technical one. Adoption is outpacing policy, and vendor transparency is limited; organizations lack visibility into how these tools handle data or exercise authority on behalf of users. Without structured oversight, AI browsers are introducing unmanaged automation into corporate networks under the appearance of legitimate productivity tools.
4.0 Historical Exploit Timeline
5.0 Risk and Impact
AI browsers introduce a form of risk that cannot be contained by traditional network or endpoint defenses. Because these tools act within authenticated sessions and interpret language as executable logic, they can move data, perform actions, and alter systems without deploying any code. Hidden instructions embedded in ordinary content can cause the browser to exfiltrate emails, download files, or modify accounts while remaining indistinguishable from legitimate user activity. These events compromise confidentiality and integrity but generate no alerts, leaving defenders blind to what occurred or when. The lack of reliable input logging further complicates forensics, making it nearly impossible to determine what data was accessed or whether containment has been achieved.
At the organizational level, these weaknesses create legal, operational, and strategic exposure. Data processed by AI browsers often leaves corporate networks and may be stored on vendor infrastructure in unknown jurisdictions, potentially violating privacy and compliance requirements. Unauthorized actions initiated by injected prompts can disrupt workflows or delete critical data while appearing to originate from trusted users. Over time, repeated AI-driven incidents erode trust among customers, regulators, and leadership teams, undermining confidence in broader automation and AI adoption efforts. Unless enterprises apply strict governance, isolation, and monitoring controls, AI browsers will remain an unmonitored automation channel capable of causing high-impact losses without triggering conventional security defenses.
6.0 Recommendations for Mitigation
7.0 Hunter Insights
AI browsers such as Perplexity Comet, OpenAI Atlas, and Fellou represent a shift in enterprise security risk, transforming browsers from passive rendering tools into autonomous agents capable of executing user-like actions solely through interpreted language. This architecture introduces high-impact risks: it allows malicious instructions hidden within otherwise benign web content, images, or URLs to be executed as legitimate user commands—without requiring code execution or malware, making traditional defenses ineffective. As attackers exploit these AI browser weaknesses through prompt injection attacks like CometJacking and PromptFix, data theft, file deletion, and unauthorized transactions can occur invisibly and with user-level authority, bypassing firewalls, proxies, and endpoint controls.
Looking ahead, unless organizations adopt robust enterprise governance and technical controls on AI-enabled browsers, the proliferation of these tools will accelerate the emergence of unmonitored automation and untraceable data exfiltration channels. Future-proof security will require implementing centralized AI interaction gateways, mandating rigorous identity and permission controls, enforcing immutable audit logging, and demanding full vendor transparency into data-handling practices. Without both architectural changes to isolate model logic from untrusted web input and comprehensive policy frameworks, AI browsers will continue to erode security and compliance boundaries—leaving enterprises exposed to undetectable insider threats and systemic trust collapses.