With the rapid development of artificial intelligence, the digital landscape continues to undergo a reshaping process, and the internet browser itself seems to be the latest frontier in this revolution. After the phenomenal success of AI chatbots such as ChatGPT, Google Gemini, and Perplexity, tech companies are now racing to integrate the same kind of intelligence into the very tool that people use every day to navigate the world online.
A recent development by Google has been the integration of Gemini into its search engine, while both OpenAI and Perplexity have released their own AI-powered browsers, Atlas and Perplexity, all promising a more personalised and intuitive way to browse online content. In addition to offering unprecedented convenience and conversational search capabilities for users, this innovation marks the beginning of a new era in information access.
In spite of the excitement, cybersecurity professionals remain increasingly concerned.
There is a growing concern among experts that intelligent systems are inadvertently exposing users to sophisticated cyber risks in spite of enhancing their user experience.
A context-aware interaction or dynamic data retrieval feature that allows users to interact with their environment can be exploited through indirect prompt injection and other manipulation methods, which may allow attackers to exploit the features.
It is possible that these vulnerabilities may allow malicious actors to access sensitive data such as personal files, login credentials, and financial information, which raises the risk of data breaches and cybercriminals. In these new eras of artificial intelligence, where the boundaries between browsing and AI are blurring, there has become an increasing urgency in ensuring that trust, transparency, and safety are maintained on the Internet.
AI browsers continue to divide experts when it comes to whether they are truly safe to use, and the issue becomes increasingly complicated as the debate continues. In addition to providing unprecedented ease of use and personalisation, ChatGPT's Atlas and Perplexity's Comet represent the next generation of intelligent browsers. However, they also introduce new levels of vulnerability that are largely unimaginable in traditional web browsers.
It is important to understand that, unlike conventional browsers, which are just gateways to online content, these artificial intelligence-driven platforms function more like a digital assistant on their own. Aside from learning from user interactions, monitoring browsing behaviours, and even performing tasks independently across multiple sites, humans and machines are becoming increasingly blurred in this evolution, which has fundamentally changed the way we collect and process data today.
A browser based on Artificial Intelligence watches and interprets each user's digital moves continuously, from clicks to scrolls to search queries and conversations, creating extensive behavioural profiles that outline users' interests, health concerns, consumer patterns, and emotional tendencies based on their data.
Privacy advocates have argued for years that this level of surveillance is more comprehensive than any cookie or analytics tool on the market today, and represents a turning point in digital tracking.
During a recent study by the Electronic Frontier Foundation, organisation discovered that Atlas retained search data related to sensitive medical inquiries, including names of healthcare providers, which raised serious ethical and legal concerns in regions that restricted certain medical procedures.
Due to the persistent memory architecture of these systems, they are even more contentious.
While ordinary browsing histories can be erased by the user, AI memories, on the other hand, are stored on remote servers, which are frequently retained indefinitely. By doing so, the browser maintains long-term context. The system can use this to access vast amounts of sensitive data - ranging from financial activities to professional communications to personal messages - even long after the session has ended.
These browsers are more vulnerable than ever because they require extensive access permissions to function effectively, which includes the ability to access emails, calendars, contact lists, and banking information. Experts have warned that such centralisation of personal data creates a single point of catastrophic failure—one breach could expose an individual's entire digital life.
OpenAI released ChatGPT Atlas earlier this week, a new browser powered by artificial intelligence that will become a major player in the rapidly expanding marketplace of browsers powered by artificial intelligence.
The Atlas browser, marketed as a browser that integrates ChatGPT into your everyday online experience, represents an important step forward in the company’s effort to integrate generative AI into everyday living.
Despite being initially launched for Mac users, OpenAI promises to continue to refine its features and expand compatibility across a range of platforms in the coming months.
As Atlas competes against competitors such as Perplexity's Comet, Dia, and Google's Gemini-enabled Chrome, the platform aims to redefine the way users interact with the internet—allowing ChatGPT to follow them seamlessly as they browse through the web.
As described by OpenAI, ChatGPT's browser is equipped to interpret open tabs, analyse data on the page, and help users in real time, without requiring users to switch between applications or copy content manually.
There have been a number of demonstrations that have highlighted the versatility of the tool, demonstrating its capability of completing a broad range of tasks, from ordering groceries and writing emails to summarising conversations, analysing GitHub repositories and providing research assistance. OpenAI has mentioned that Atlas utilises ChatGPT’s built-in memory in order to be able to remember past interactions and apply context to future queries based on those interactions.
There is a statement from the company about the company's new approach to creating a more intuitive, continuous user experience, in which the browser will function more as a collaborative tool and less as a passive tool. In spite of Atlas' promise, just as with its AI-driven competitors, it has stirred up serious issues around security, data protection and privacy.
One of the most pressing concerns regarding prompt injection attacks is whether malicious actors are manipulating large language models to order them to perform unintended or harmful actions, which may expose customer information. Experts warn that such "agentic" systems may come at a significant security cost.
An attack like this can either occur directly through the user's prompts or indirectly by hiding hidden payloads within seemingly harmless web pages. A recent study by Brave researchers indicates that many AI browsers, including Comet and Fellou, are vulnerable to exploits like this. The attacker is thus able to bypass browser security frameworks and gain unauthorized access to sensitive domains such as banks, healthcare facilities, or corporate systems by bypassing browser security frameworks.
It has also been noted that many prominent technologists have voiced their reservations. Simon Willison, a well-known developer and co-creator of the Django Web Framework, has urged that giving browsers the freedom to act autonomously on their users' behalf would pose grave risks. Even seemingly harmless requests, like summarising a Reddit post, could, if exploited via an injection vulnerability, be used to reveal personal or confidential information.
With the advancement of artificial intelligence browsers, the tension between innovation and security becomes more and more intense, prompting the call for stronger safeguards before these tools become mainstream digital companions. There has been an increase in the number of vulnerabilities discovered by security researchers that make AI browsers a lot more dangerous than was initially thought, with prompt injection emerging as the most critical vulnerability.
A malicious website can use this technique to manipulate AI-driven browser agents secretly, effectively turning them against a user. Researchers at Brave found that attackers are able to hide invisible instructions within webpage code, often rendered as white text on white backgrounds. These instructions are unnoticeable by humans but are easily interpreted by artificial intelligence systems.
A user may be directed to perform unauthorised actions when they visit a web page containing embedded commands. For example, they may be directed to retrieve private e-mails, access financial data, or transfer money without their consent. Due to the inherent lack of contextual understanding that artificial intelligence systems possess, they can unwittingly execute these harmful instructions, with full user privileges, when they do not have the ability to differentiate between legitimate inputs from deceptive prompts.
These attacks have caused a lot of attention among the cybersecurity community due to their scale and simplicity. Researchers from LayerX demonstrated the use of a technique called CometJacking, which was demonstrated as a way of hijacking Perplexity’s Comet browser into a sophisticated data exfiltration tool by a single malicious link.
A simple encoding method known as Base64 encoding was used by attackers to bypass traditional browser security measures and sandboxes, allowing them to bypass the browser's protections.
It is therefore important to know that the launch point for a data theft campaign could be a seemingly harmless comment on Reddit, a social media post, or even an email newsletter, which could expose sensitive personal or company information in an innocuous manner.
The findings of this study illustrate the inherent fragility of artificial intelligence browsers, where independence and convenience often come at the expense of safety.
It is important to note that cybersecurity experts have outlined essential defence measures for users who wish to experiment with AI browsers in light of these increasing concerns.
Individuals should restrict permissions strictly, giving access only to non-sensitive accounts and avoiding involving financial institutions or healthcare institutions until the technology becomes more mature. By reviewing activity logs regularly, you can be sure that you have been alerted to unusual patterns or unauthorised actions in advance.
A multi-factor authentication system can greatly enhance security across all linked accounts, while prompt software updates allow users to take advantage of the latest security patches.
A key safeguard is to maintain manual vigilance-verifying URLs and avoiding automated interactions with unfamiliar or untrusted websites. Some prominent technologists have expressed doubts about these systems as well.
A respected developer and co-creator of the Django Web Framework, Simon Willison, has warned that giving browsers the ability to act autonomously on behalf of users comes with profound risks.
It is noted that even benign requests, such as summarising a Reddit post, could inadvertently expose sensitive information if exploited by an injection vulnerability, and this could result in personal information being released into the public domain.
With the advancement of artificial intelligence browsers, the tension between innovation and security becomes more and more intense, prompting the call for stronger safeguards before these tools become mainstream digital companions.
There has been an increase in the number of vulnerabilities discovered by security researchers that make AI browsers a lot more dangerous than was initially thought, with prompt injection emerging as the most critical vulnerability.
Using this technique, malicious websites have the ability to manipulate the AI-driven browsers, effectively turning them against their users.
Brave researchers discovered that attackers are capable of hiding invisible instructions within the code of the webpages, often rendered as white text on a white background. The instructions are invisible to the naked eye, but are easily interpreted by artificial intelligence systems.
The embedded commands on such pages can direct the browser to perform unauthorised actions, such as retrieving private emails, accessing financial information, and transferring funds, as a result of a user visiting such a page.
Since AI systems are inherently incapable of distinguishing between legitimate and deceptive user inputs, they can unknowingly execute harmful instructions with full user privileges without realising it.
This attack has sparked the interest of the cybersecurity community due to its scale and simplicity.
Researchers from LayerX have demonstrated a method of hijacking Perplexity's Comet browser by merely clicking on a malicious link, and using this technique, transforming the browser into an advanced data exfiltration tool. Attackers were able to bypass traditional browser security measures and security sandboxes by using simple methods like Base64 encoding.
It means that a seemingly harmless comment on Reddit, a post on social media, or an email newsletter can serve as a launch point for a data theft campaign, thereby potentially exposing sensitive personal and corporate information to a third party. There is no doubt that AI browsers are intrinsically fragile, where autonomy and convenience sometimes come at the expense of safety. The findings suggest that AI browsers are inherently vulnerable.
It has become clear that security experts have identified essential defence measures to protect users who wish to experiment with AI browsers in the face of increasing concerns. It is suggested that individuals restrict their permissions as strictly as possible, granting access only to non-sensitive accounts and avoiding connecting to financial or healthcare services until the technology is well developed.
In order to detect unusual patterns or unauthorised actions in time, it is important to regularly review activity logs.
Multi-factor authentication is a vital component of protecting linked accounts, as it adds a new layer of security, while prompt software updates ensure that users receive the latest security patches on their systems.
Furthermore, experts emphasise manual vigilance — verifying URLs and avoiding automated interactions with unfamiliar or untrusted websites remains crucial to protecting their data.
There is, however, a growing consensus among professionals that artificial intelligence browsers, despite impressive demonstrations of innovation, remain unreliable for everyday use.
Analysts at Proton concluded that AI browsers are no longer reliable for everyday use. This argument argues that the issue is not only technical, but is structural as well; privacy risks are a part of the very design of these systems themselves.
AI browser developers, who prioritise functionality and personalisation over all else, have inherently created extensive surveillance architectures that rely heavily on user data in order to function as intended.
It has been pointed out by OpenAI's own security leadership that prompt injection remains an unresolved frontier issue, thus emphasising that this emerging technology is still a very experimental and unsettled one.
The consensus among cybersecurity researchers, at the moment, is that the risks associated with artificial intelligence browsers far outweigh their convenience, especially for users dealing with sensitive personal and professional information.
With the acceleration of the AI browser revolution, it is now crucial to strike a balance between innovation and accountability.
Despite the promise of seamless digital assistance and a hyper-personalised browsing experience through tools such as Atlas and Comet, they must be accompanied by robust ethical frameworks, transparent data governance, and stronger security standards to make progress.
A lot of experts stress that the real progress will depend on the way this technology evolves responsibly - prioritising user consent, privacy, and control over convenience for the end user. In the meantime, users and developers alike should not approach AI browsers with fear, but with informed caution and an insistence that trust is built into the browser as a default state.
