A new wave of competition is stirring in the browser market as companies like OpenAI, Perplexity, and The Browser Company aggressively push to redefine how humans interact with the web. Rather than merely displaying pages, these AI browsers will be engineered to reason, take action independently, and execute tasks on behalf of end users. At least four such products, including ChatGPT's Atlas, Perplexity's Comet, and The Browser Company's Dia, represent a transition reminiscent of the early browser wars, when Netscape and Internet Explorer battled to compete for a role in the shaping of the future of the Internet.
Whereas the other browsers rely on search results and manual navigation, an AI browser is designed to understand natural language instructions and perform multi-step actions. For instance, a user can ask an AI browser to find a restaurant nearby, compare options, and make a reservation without the user opening the booking page themselves. In this context, the browser has to process both user instructions and the content of each of the webpages it accesses, intertwining decision-making with automation.
But this capability also creates a serious security risk that's inherent in the way large language models work. AI systems cannot be sure whether a command comes from a trusted user or comes with general text on an untrusted web page. Malicious actors may now inject malicious instructions within webpages, which can include uses of invisible text, HTML comments, and image-based prompts. Unbeknownst to them, that might get processed by an AI browser along with the user's original request-a type of attack now called prompt injection.
The consequence of such attacks could be dire, since AI browsers are designed to gain access to sensitive data in order to function effectively. Many ask for permission to emails, calendars, contacts, payment information, and browsing histories. If compromised, those very integrations become conduits for data exfiltration. Security researchers have shown just how prompt injections can trick AI browsers into forwarding emails, extracting stored credentials, making unauthorized purchases, or downloading malware without explicit user interaction.
One such neat proof-of-concept was that of Perplexity's Comet browser, wherein the researchers had embedded command instructions in a Reddit comment, hidden behind a spoiler tag. When the browser arrived and was asked to summarise the page, it obediently followed the buried commands and tried to scrape email data. The user did nothing more than request a summary; passive interactions indeed are enough to get someone compromised.
More recently, researchers detailed a method called HashJack, which abuses the way web browsers process URL fragments. Everything that appears after the “#” in a URL never actually makes it to the server of a given website and is only accessible to the browser. An attacker can embed nefarious commands in this fragment, and AI-powered browsers may read and act upon it without the hosting site detecting such commands. Researchers have already demonstrated that this method can make AI browsers show the wrong information, such as incorrect dosages of medication on well-known medical websites. Though vendors are experimenting with mitigations, such as reinforcement learning to detect suspicious prompts or restricting access during logged-out browsing sessions, these remain imperfect.
The flexibility that makes AI browsers useful also makes them vulnerable. As the technology is still in development, it shows great convenience, but the security risks raise questions of whether fully trustworthy AI browsing is an unsolved problem.
