Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label ChatGPT. Show all posts

High Severity Flaw In Open WebUI Can Leak User Conversations and Data


A high-severity security bug impacting Open WebUI has been found by experts. It may expose users to account takeover (ATO) and, in some incidents, cause full server compromise. 

Talking about WebUI, Cato researchers said, “When a platform of this size becomes vulnerable, the impact isn’t just theoretical. It affects production environments managing research data, internal codebases, and regulated information.”

The flaw is tracked as CVE-2025-64496 and found by Cato Networks experts. The vulnerability affects Open WebUI versions 0.6.34 and older if the Director Connection feature is allowed. The flaw has a severity rating of 7.3 out of 10. 

The vulnerability exists inside Direct Connections, which allows users to connect Open WebUI to external OpenAI-supported model servers. While built for supporting flexibility and self-hosted AI workflows, the feature can be exploited if a user is tricked into linking with a malicious server pretending to be a genuine AI endpoint. 

Fundamentally, the vulnerability comes from a trust relapse between unsafe model servers and the user's browser session. A malicious server can send a tailored server-sent events message that prompts the deployment of JavaScript code in the browser. This lets a threat actor steal authentication tokens stored in local storage. When the hacker gets these tokens, it gives them full access to the user's Open WebUI account. Chats, API keys, uploaded documents, and other important data is exposed. 

Depending on user privileges, the consequences can be different.

Consequences?

  • Hackers can steal JSON web tokens and hijack sessions. 
  • Full account hack, this includes access to chat logs and uploaded documents.
  • Leak of important data and credentials shared in conversations. 
  • If the user has enabled workspace.tools permission, it can lead to remote code execution (RCE). 

Open WebUI maintainers were informed about the issue in October 2025, and publicly disclosed in November 2025, after patch validation and CVE assignment. Open WebUI variants 0.6.35 and later stop the compromised execute events, patching the user-facing threat.

Open WebUI’s security patch will work for v0.6.35 or “newer versions, which closes the user-facing Direct Connections vulnerability. However, organizations still need to strengthen authentication, sandbox extensibility and restrict access to specific resources,” according to Cato Networks researchers.





New US Proposal Allows Users to Sue AI Companies Over Unauthorised Data Use


US AI developers would be subject to data privacy obligations applicable in federal court under a wide legislative proposal disclosed recently by the US senate Marsha Blackburn, R-Tenn. 

About the proposal

Beside this, the proposal will create a federal right for users to sue companies for misusing their personal data for AI model training without proper consent. The proposal allows statutory and punitive damages, attorney fees and injunctions. 

Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release. 

Why the need for AI regulations 

The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.  

In order to ensure that there is a least burdensome national standard rather than fifty inconsistent State ones, the directive required the administration to collaborate with Congress. 

Michael Kratsios, the president's science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, were instructed by the president to jointly propose federal AI legislation that would supersede any state laws that would contradict with administration policy. 

Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.

Key highlights of proposal:

  • Mandate that regulations defining "minimum reasonable" AI protections be created by the Federal Trade Commission. 
  • Give the U.S. attorney general, state attorneys general, and private parties the authority to sue AI system creators for damages resulting from "unreasonably dangerous or defective product claims."
  • Mandate that sizable, state-of-the-art AI developers put procedures in place to control and reduce "catastrophic" risks associated with their systems and provide reports to the Department of Homeland Security on a regular basis. 
  • Hold platforms accountable for hosting an unauthorized digital replica of a person if they have actual knowledge that the replica was not authorized by the person portrayed.
  • Require quarterly reporting to the Department of Labor of AI-related job effects, such as job displacement and layoffs.

The proposal will preempt state laws regulating the management of catastrophic AI risks. The legislation will also mostly “preempt” state laws for digital replicas to make a national standard for AI. 

The proposal will not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI. The bill becomes effective 180 days after enforcement. 

Chinese Open AI Models Rival US Systems and Reshape Global Adoption

 

Chinese artificial intelligence models have rapidly narrowed the gap with leading US systems, reshaping the global AI landscape. Once considered followers, Chinese developers are now producing large language models that rival American counterparts in both performance and adoption. At the same time, China has taken a lead in model openness, a factor that is increasingly shaping how AI spreads worldwide. 

This shift coincides with a change in strategy among major US firms. OpenAI, which initially emphasized transparency, moved toward a more closed and proprietary approach from 2022 onward. As access to US-developed models became more restricted, Chinese companies and research institutions expanded the availability of open-weight alternatives. A recent report from Stanford University’s Human-Centered AI Institute argues that AI leadership today depends not only on proprietary breakthroughs but also on reach, adoption, and the global influence of open models. 

According to the report, Chinese models such as Alibaba’s Qwen family and systems from DeepSeek now perform at near state-of-the-art levels across major benchmarks. Researchers found these models to be statistically comparable to Anthropic’s Claude family and increasingly close to the most advanced offerings from OpenAI and Google. Independent indices, including LMArena and the Epoch Capabilities Index, show steady convergence rather than a clear performance divide between Chinese and US models. 

Adoption trends further highlight this shift. Chinese models now dominate downstream usage on platforms such as Hugging Face, where developers share and adapt AI systems. By September 2025, Chinese fine-tuned or derivative models accounted for more than 60 percent of new releases on the platform. During the same period, Alibaba’s Qwen surpassed Meta’s Llama family to become the most downloaded large language model ecosystem, indicating strong global uptake beyond research settings. 

This momentum is reinforced by a broader diffusion effect. As Meta reduces its role as a primary open-source AI provider and moves closer to a closed model, Chinese firms are filling the gap with freely available, high-performing systems. Stanford researchers note that developers in low- and middle-income countries are particularly likely to adopt Chinese models as an affordable alternative to building AI infrastructure from scratch. However, adoption is not limited to emerging markets, as US companies are also increasingly integrating Chinese open-weight models into products and workflows. 

Paradoxically, US export restrictions limiting China’s access to advanced chips may have accelerated this progress. Constrained hardware access forced Chinese labs to focus on efficiency, resulting in models that deliver competitive performance with fewer resources. Researchers argue that this discipline has translated into meaningful technological gains. 

Openness has played a critical role. While open-weight models do not disclose full training datasets, they offer significantly more flexibility than closed APIs. Chinese firms have begun releasing models under permissive licenses such as Apache 2.0 and MIT, allowing broad use and modification. Even companies that once favored proprietary approaches, including Baidu, have reversed course by releasing model weights. 

Despite these advances, risks remain. Open-weight access does not fully resolve concerns about state influence, and many users rely on hosted services where data may fall under Chinese jurisdiction. Safety is another concern, as some evaluations suggest Chinese models may be more susceptible to jailbreaking than US counterparts. 

Even with these caveats, the broader trend is clear. As performance converges and openness drives adoption, the dominance of US commercial AI providers is no longer assured. The Stanford report suggests China’s role in global AI will continue to expand, potentially reshaping access, governance, and reliance on artificial intelligence worldwide.

Adobe Brings Photo, Design, and PDF Editing Tools Directly Into ChatGPT

 



Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.

With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.

For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.

The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.

Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.

The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.

This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.

By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.

Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.




OpenAI Warns Future AI Models Could Increase Cybersecurity Risks and Defenses

 

Meanwhile, OpenAI told the press that large language models will get to a level where future generations of these could pose a serious risk to cybersecurity. The company in its blog postingly admitted that powerful AI systems could eventually be used to craft sophisticated cyberattacks, such as developing previously unknown software vulnerabilities or aiding stealthy cyber-espionage operations against well-defended targets. Although this is still theoretical, OpenAI has underlined that the pace with which AI cyber-capability improvements are taking place demands proactive preparation. 

The same advances that could make future models attractive for malicious use, according to the company, also offer significant opportunities to strengthen cyber defense. OpenAI said such progress in reasoning, code analysis, and automation has the potential to significantly enhance security teams' ability to identify weaknesses in systems better, audit complex software systems, and remediate vulnerabilities more effectively. Instead of framing the issue as a threat alone, the company cast the issue as a dual-use challenge-one in which adequate management through safeguards and responsible deployment would be required. 

In the development of such advanced AI systems, OpenAI says it is investing heavily in defensive cybersecurity applications. This includes helping models improve particularly on tasks related to secure code review, vulnerability discovery, and patch validation. It also mentioned its effort on creating tooling supporting defenders in running critical workflows at scale, notably in environments where manual processes are slow or resource-intensive. 

OpenAI identified several technical strategies that it thinks are critical to the mitigation of cyber risk associated with increased capabilities of AI systems: stronger access controls to restrict who has access to sensitive features, hardened infrastructure to prevent abuse, outbound data controls to reduce the risk of information leakage, and continuous monitoring to detect anomalous behavior. These altogether are aimed at reducing the likelihood that advanced capabilities could be leveraged for harmful purposes. 

It also announced the forthcoming launch of a new program offering tiered access to additional cybersecurity-related AI capabilities. This is intended to ensure that researchers, enterprises, and security professionals working on legitimate defensive use cases have access to more advanced tooling while providing appropriate restrictions on higher-risk functionality. Specific timelines were not discussed by OpenAI, although it promised that more would be forthcoming very soon. 

Meanwhile, OpenAI also announced that it would create a Frontier Risk Council comprising renowned cybersecurity experts and industry practitioners. Its initial mandate will lie in assessing the cyber-related risks that come with frontier AI models. But this is expected to expand beyond this in the near future. Its members will be required to offer advice on the question of where the line should fall between developing capability responsibly and possible misuse. And its input would keep informing future safeguards and evaluation frameworks. 

OpenAI also emphasized that the risks of AI-enabled cyber misuse have no single-company or single-platform constraint. Any sophisticated model, across the industry, it said, may be misused if there are no proper controls. To that effect, OpenAI said it continues to collaborate with peers through initiatives such as the Frontier Model Forum, sharing threat modeling insights and best practices. 

By recognizing how AI capabilities could be weaponized and where the points of intervention may lie, the company believes, the industry will go a long way toward balancing innovation and security as AI systems continue to evolve.

IDESaster Report: Severe AI Bugs Found in AI Agents Can Lead to Data Theft and Exploit


Using AI agents for data exfiltrating and RCE

A six-month research into AI-based development tools has disclosed over thirty security bugs that allow remote code execution (RCE) and data exfiltration. The findings by IDEsaster research revealed how AI agents deployed in IDEs like Visual Studio Code, Zed, JetBrains products and various commercial assistants can be tricked into leaking sensitive data or launching hacker-controlled code. 

The research reports that 100% of tested AI IDEs and coding agents were vulnerable. Impacted products include GitHub, Windsurf, Copilot, Cursor, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. At least twenty-four assigned CVEs and additional AWS advisories were also included. 

AI assistants exploitation 

The main problem comes from the way AI agents interact with IDE features. Autonomous components that could read, edit, and create files were never intended for these editors. Once-harmless features turned become attack surfaces when AI agents acquired these skills. In their threat model, all AI IDEs essentially disregard the base software. Since these features have been around for years, they consider them to be naturally safe. 

Attack tactic 

However, the same functionalities can be weaponized into RCE primitives and data exfiltration once autonomous AI bots are included. The research reported that this is an IDE-agnostic attack chain. 

It begins with context hacking via prompt-injection. Covert instructions can be deployed in file names, rule files, READMEs, and outputs from malicious MCP servers. When an agent reads the context, the tool can be redirected to run authorized actions that activate malicious behaviours in the core IDE. The last stage exploits built-in features to steal data or run hacker code in AI IDEs sharing core software layers.

Examples

Writing a JSON file that references a remote schema is one example. Sensitive information gathered earlier in the chain is among the parameters inserted by the agent that are leaked when the IDE automatically retrieves that schema. This behavior was seen in Zed, JetBrains IDEs, and Visual Studio Code. The outbound request was not suppressed by developer safeguards like diff previews.  

Another case study uses altered IDE settings to show complete remote code execution. An attacker can make the IDE execute arbitrary code as soon as a relevant file type is opened or created by updating an executable file that is already in the workspace and then changing configuration fields like php.validate.executablePath. Similar exposure is demonstrated by JetBrains utilities via workspace metadata.

According to the IDEsaster report, “It’s impossible to entirely prevent this vulnerability class short-term, as IDEs were not initially built following the Secure for AI principle. However, these measures can be taken to reduce risk from both a user perspective and a maintainer perspective.”


The New Content Provenance Report Will Address GenAI Misinformation


The GenAI problem 

Today's information environment includes a wide range of communication. Social media platforms have enabled reposting, and comments. The platform is useful for both content consumers and creators, but it has its own challenges.

The rapid adoption of Generative AI has led to a significant increase in misleading content online. These chatbots have a tendency of generating false information which has no factual backing. 

What is AI slop?

The internet is filled with AI slop- content that is made with minimal human input and is like junk. There is currently no mechanism to limit such massive production of harmful or misleading content that can impact human cognition and critical thinking. This calls for a robust mechanism that can address the new challenges that the current system is failing to tackle. 

The content provenance report 

For restoring the integrity of digital information, Canada's Centre for Cyber Security (CCCS) and the UK's National Cyber Security Centre (NCSC) have launched a new report on public content provenance. Provenance means "place of origin." For building stronger trust with external audiences, businesses and organisations must improve the way they manage the source of their information.

NSSC chief technology officer said that the "new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed.” 

What is next for Content Integrity?

The industry is implementing few measures to address content provenance challenges like Coalition for Content Provenance and Authenticity (C2PA). It will benefit from the help of Generative AI and tech giants like Meta, Google, OpenAI, and Microsoft. 

Currently, there is a pressing need for interoperable standards across various media types such as image, video, and text documents. Although there are content provenance technologies, this area is still in nascent stage. 

What is needed?

The main tech includes genuine timestamps and cryptographically-proof meta to prove that the content isn't tampered. But there are still obstacles in development of these secure technologies, like how and when they are executed.

The present technology places the pressure on the end user to understand the provenance data. 

A provenance system must allow a user to see who or what made the content, the time and the edits/changes that were made. Threat actors have started using GenAI media to make scams believable, it has become difficult to differentiate between what is fake and real. Which is why a mechanism that can track the origin and edit history of digital media is needed. The NCSC and CCCS report will help others to navigate this gray area with more clarity.


AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

AI Models Trained on Incomplete Data Can't Protect Against Threats


In cybersecurity, AI is being called the future of threat finder. However, AI has its hands tied, they are only as good as their data pipeline. But this principle is not stopping at academic machine learning, as it is also applicable for cybersecurity.

AI-powered threat hunting will only be successful if the data infrastructure is strong too.

Threat hunting powered by AI, automation, or human investigation will only ever be as effective as the data infrastructure it stands on. Sometimes, security teams build AI over leaked data or without proper data care. This can create issues later. It can affect both AI and humans. Even sophisticated algorithms can't handle inconsistent or incomplete data. AI that is trained on poor data will also lead to poor results. 

The importance of unified data 

A correlated data controls the operation. It reduces noise and helps in noticing patterns that manual systems can't.

Correlating and pre-transforming the data makes it easy for LLMs and other AI tools. It also allows connected components to surface naturally. 

A same person may show up under entirely distinct names as an IAM principal in AWS, a committer in GitHub, and a document owner in Google Workspace. You only have a small portion of the truth when you look at any one of those signs. 

You have behavioral clarity when you consider them collectively. While downloading dozens of items from Google Workspace may look strange on its own, it becomes obviously malevolent if the same user also clones dozens of repositories to a personal laptop and launches a public S3 bucket minutes later.

Finding threat via correlation 

Correlations that previously took hours or were impossible become instant when data from logs, configurations, code repositories, and identification systems are all housed in one location. 

For instance, lateral movement that uses short-lived credentials that have been stolen frequently passes across multiple systems before being discovered. A hacked developer laptop might take on several IAM roles, launch new instances, and access internal databases. Endpoint logs show the local compromise, but the extent of the intrusion cannot be demonstrated without IAM and network data.


Hackers Exploit AI Stack in Windows to Deploy Malware


The artificial intelligence (AI) stack built into Windows can act as a channel for malware transmission, a recent study has demonstrated.

Using AI in malware

Security researcher hxr1 discovered a far more conventional method of weaponizing rampant AI in a year when ingenious and sophisticated quick injection tactics have been proliferating. He detailed a living-off-the-land attack (LotL) that utilizes trusted files from the Open Neural Network Exchange (ONNX) to bypass security engines in a proof-of-concept (PoC) provided exclusively to Dark Reading.

Impact on Windows

Programs for cybersecurity are only as successful as their designers make them. Because these are known signs of suspicious activity, they may detect excessive amounts of data exfiltrating from a network or a foreign.exe file that launches. However, if malware appears on a system in a way they are unfamiliar with, they are unlikely to be aware of it.

That's the reason AI is so difficult. New software, procedures, and systems that incorporate AI capabilities create new, invisible channels for the spread of cyberattacks.

Why AI in malware is a problem

The Windows operating system has been gradually including features since 2018 that enable apps to carry out AI inference locally without requiring a connection to a cloud service. Inbuilt AI is used by Windows Hello, Photos, and Office programs to carry out object identification, facial recognition, and productivity tasks, respectively. They accomplish this by making a call to the Windows Machine Learning (ML) application programming interface (API), which loads ML models as ONNX files.

ONNX files are automatically trusted by Windows and security software. Why wouldn't they? Although malware can be found in EXEs, PDFs, and other formats, no threat actors in the wild have yet to show that they plan to or are capable of using neural networks as weapons. However, there are a lot of ways to make it feasible.

Attack tactic

Planting a malicious payload in the metadata of a neural network is a simple way to infect it. The compromise would be that this virus would remain in simple text, making it much simpler for a security tool to unintentionally detect it.

Piecemeal malware embedding among the model's named nodes, inputs, and outputs would be more challenging but more covert. Alternatively, an attacker may utilize sophisticated steganography to hide a payload inside the neural network's own weights.

As long as you have a loader close by that can call the necessary Windows APIs to unpack it, reassemble it in memory, and run it, all three approaches will function. Additionally, both approaches are very covert. Trying to reconstruct a fragmented payload from a neural network would be like trying to reconstruct a needle from bits of it spread through a haystack.

Is ChatGPT's Atlas Browser the Future of Internet?

Is ChatGPT's Atlas Browser the Future of Internet?

After using ChatGPT Atlas, OpenAI's new web browser, users may notice few issues. This is not the same as Google Chrome, which about 60% of users use. It is based on a chatbot that you are supposed to converse with in order to browse the internet.  

One of the notes said, "Messages limit reached," "No models that are currently available support the tools in use," another stated.  

Following that: "You've hit the free plan limit for GPT-5."  

Paid browser 

According to OpenAI, it will simplify and improve internet usage. One more step toward becoming "a true super-assistant." Super or not, however, assistants are not free, and the corporation must start generating significantly more revenue from its 800 million customers.

According to OpenAI, Atlas allows us to "rethink what it means to use the web". It appears to be comparable to Chrome or Apple's Safari at first glance, with one major exception: a sidebar chatbot. These are early days, but there is the potential for significant changes in how we use the Internet. What is certain is that this will be a high-end gadget that will only function properly if you pay a monthly subscription price. Given how accustomed we are to free internet access, many people would have to drastically change their routines.

Competitors, data, and money

The founding objective of OpenAI was to achieve artificial general intelligence (AGI), which roughly translates to AI that can match human intelligence. So, how does a browser assist with this mission? It actually doesn't. However, it has the potential to increase revenue. The company has persuaded venture capitalists and investors to spend billions of dollars in it, and it must now demonstrate a return on that investment. In other words, it needs to generate revenue. However, obtaining funds through typical internet advertising may be risky. Atlas might also grant the corporation access to a large amount of user data.

The ultimate goal of these AI systems is scale; the more data you feed them, the better they will become. The web is built for humans to use, so if Atlas can observe how we order train tickets, for example, it will be able to learn how to better traverse these processes.  

Will it kill Google?

Then we get to compete. Google Chrome is so prevalent that authorities throughout the world are raising their eyebrows and using terms like "monopoly" to describe it. It will not be easy to break into that market.

Google's Gemini AI is now integrated into the search engine, and Microsoft has included Copilot to its Edge browser. Some called ChatGPT the "Google killer" in its early days, predicting that it would render online search as we know it obsolete. It remains to be seen whether enough people are prepared to pay for that added convenience, and there is still a long way to go before Google is dethroned.

The Risks of AI-powered Web Browsers for Your Privacy


AI and web browser

The future of browsing is AI, it watches everything you do online. Security and privacy are two different things; they may look same, but it is different for people who specialize in these two. Threats to your security can also be dangers to privacy. 

Threat for privacy and security

Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other. 

Recently, OpenAI released its ChatGPT-powered Comet Browser, and Brave Software team disclosed that AI-powered browsers can follow malicious prompts that hide in images on the web. 

AI powered browser good or bad?

We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user. 

That is the aspect of security. Experts who evaluated the Comet Browser discovered that it records everything you do while using it, including search and browser history as well as information about the URLs you visit. 

What next?

In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.

Researchers studied the ten biggest VPN attacks in recent history. Many of them were not even triggered by foreign hostile actors; some were the result of basic human faults, such as leaked credentials, third-party mistakes, or poor management.

Atlas: AI powered web browser

Atlas, an AI-powered web browser developed with ChatGPT as its core, is meant to do more than just allow users to navigate the internet. It is capable of reading, sum up, and even finish internet tasks for the user, such as arranging appointments or finding lodgings.

Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, a summary was created utilizing information from other publications such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which have partnerships or agreements with OpenAI, with the exception of Reuters.

ChatGPT Atlas Surfaces Privacy Debate: How OpenAI’s New Browser Handles Your Data

 




OpenAI has officially entered the web-browsing market with ChatGPT Atlas, a new browser built on Chromium: the same open-source base that powers Google Chrome. At first glance, Atlas looks and feels almost identical to Chrome or Safari. The key difference is its built-in ChatGPT assistant, which allows users to interact with web pages directly. For example, you can ask ChatGPT to summarize a site, book tickets, or perform online actions automatically, all from within the browser interface.

While this innovation promises faster and more efficient browsing, privacy experts are increasingly worried about how much personal data the browser collects and retains.


How ChatGPT Atlas Uses “Memories”

Atlas introduces a feature called “memories”, which allows the system to remember users’ activity and preferences over time. This builds on ChatGPT’s existing memory function, which stores details about users’ interests, writing styles, and previous interactions to personalize future responses.

In Atlas, these memories could include which websites you visit, what products you search for, or what tasks you complete online. This helps the browser predict what you might need next, such as recalling the airline you often book with or your preferred online stores. OpenAI claims that this data collection aims to enhance user experience, not exploit it.

However, this personalization comes with serious privacy implications. Once stored, these memories can gradually form a comprehensive digital profile of an individual’s habits, preferences, and online behavior.


OpenAI’s Stance on Early Privacy Concerns

OpenAI has stated that Atlas will not retain critical information such as government-issued IDs, banking credentials, medical or financial records, or any activity related to adult content. Users can also manage their data manually: deleting, archiving, or disabling memories entirely, and can browse in incognito mode to prevent the saving of activity.

Despite these safeguards, recent findings suggest that some sensitive data may still slip through. According to The Washington Post, an investigation by a technologist at the Electronic Frontier Foundation (EFF) revealed that Atlas had unintentionally stored private information, including references to sexual and reproductive health services and even a doctor’s real name. These findings raise questions about the reliability of OpenAI’s data filters and whether user privacy is being adequately protected.


Broader Implications for AI Browsers

OpenAI is not alone in this race. Other companies, including Perplexity with its upcoming browser Comet, have also faced criticism for extensive data collection practices. Perplexity’s CEO openly admitted that collecting browser-level data helps the company understand user behavior beyond the AI app itself, particularly for tailoring ads and content.

The rise of AI-integrated browsers marks a turning point in internet use, combining automation and personalization at an unprecedented scale. However, cybersecurity experts warn that AI agents operating within browsers hold immense control — they can take actions, make purchases, and interact with websites autonomously. This power introduces substantial risks if systems malfunction, are exploited, or process data inaccurately.


What Users Can Do

For those concerned about privacy, experts recommend taking proactive steps:

• Opt out of the memory feature or regularly delete saved data.

• Use incognito mode for sensitive browsing.

• Review data-sharing and model-training permissions before enabling them.


AI browsers like ChatGPT Atlas may redefine digital interaction, but they also test the boundaries of data ethics and security. As this technology evolves, maintaining user trust will depend on transparency, accountability, and strict privacy protection.



The Threats of Agentic AI Data Trails


What if you install a brand new smart-home assistant that looks surreal, and if it can precool your living room at ease. However, besides the benefits, the system is secretly generating a huge digital trace of personal information?

That's the hidden price of agentic AI, your every plan, act, and prompt gets registered, forecasts and logs hints of frequent routines reside info long-term storage. 

These logs aren't silly mistakes. They are standard behaviour for most agentic AI systems. Fortunately, there's another way. Easy engineering methods build efficiency and autonomy while limiting the digital footprint. 

How Agentic AI Stores and Collects Private Data

It uses a planner based on a LLM to optimize similiar devices via the house. It surveills electricity prices and weather details, configures thermostats, adjusting smart plugs, and schedules EV charge. 

To limit personal data, the system registers only pseudonomymous resident profiles locally and doesn't access microphones and cameras. Agentic AI updates its plan when the weather or prices change, and registers short, planned reflections to strengthen future runs.

However, you as a home resident may not be aware about how much private data is being stored behind your back. Agentic AI systems create information as a natural result of how they function. In baseline agent configurations (mostly), the data gets accumulated. However, this is not considered the best tactic in the business, like configuration is a practical initial point for activating Agentic AI and function smoothly.

How to avoid AI agent trails?

Limit memory to the task at hand.

The deleting process should be thorough and easy.

The agent's action should be transparent via a readable "agent trace."

AI Poisoning: How Malicious Data Corrupts Large Language Models Like ChatGPT and Claude

 

Poisoning is a term often associated with the human body or the environment, but it is now a growing problem in the world of artificial intelligence. Large language models such as ChatGPT and Claude are particularly vulnerable to this emerging threat known as AI poisoning. A recent joint study conducted by the UK AI Security Institute, the Alan Turing Institute, and Anthropic revealed that inserting as few as 250 malicious files into a model’s training data can secretly corrupt its behavior. 

AI poisoning occurs when attackers intentionally feed false or misleading information into a model’s training process to alter its responses, bias its outputs, or insert hidden triggers. The goal is to compromise the model’s integrity without detection, leading it to generate incorrect or harmful results. This manipulation can take the form of data poisoning, which happens during the model’s training phase, or model poisoning, which occurs when the model itself is modified after training. Both forms overlap since poisoned data eventually influences the model’s overall behavior. 

A common example of a targeted poisoning attack is the backdoor method. In this scenario, attackers plant specific trigger words or phrases in the data—something that appears normal but activates malicious behavior when used later. For instance, a model could be programmed to respond insultingly to a question if it includes a hidden code word like “alimir123.” Such triggers remain invisible to regular users but can be exploited by those who planted them. 

Indirect attacks, on the other hand, aim to distort the model’s general understanding of topics by flooding its training sources with biased or false content. If attackers publish large amounts of misinformation online, such as false claims about medical treatments, the model may learn and reproduce those inaccuracies as fact. Research shows that even a tiny amount of poisoned data can cause major harm. 

In one experiment, replacing only 0.001% of the tokens in a medical dataset caused models to spread dangerous misinformation while still performing well in standard tests. Another demonstration, called PoisonGPT, showed how a compromised model could distribute false information convincingly while appearing trustworthy. These findings highlight how subtle manipulations can undermine AI reliability without immediate detection. Beyond misinformation, poisoning also poses cybersecurity threats. 

Compromised models could expose personal information, execute unauthorized actions, or be exploited for malicious purposes. Previous incidents, such as the temporary shutdown of ChatGPT in 2023 after a data exposure bug, demonstrate how fragile even the most secure systems can be when dealing with sensitive information. Interestingly, some digital artists have used data poisoning defensively to protect their work from being scraped by AI systems. 

By adding misleading signals to their content, they ensure that any model trained on it produces distorted outputs. This tactic highlights both the creative and destructive potential of data poisoning. The findings from the UK AI Security Institute, Alan Turing Institute, and Anthropic underline the vulnerability of even the most advanced AI models. 

As these systems continue to expand into everyday life, experts warn that maintaining the integrity of training data and ensuring transparency throughout the AI development process will be essential to protect users and prevent manipulation through AI poisoning.

AI Can Models Creata Backdoors, Research Says


Scraping the internet for AI training data has limitations. Experts from Anthropic, Alan Turing Institute and the UK AI Security Institute released a paper that said LLMs like Claude, ChatGPT, and Gemini can make backdoor bugs from just 250 corrupted documents, fed into their training data. 

It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts.

About the research 

It trained AI LLMs ranging between 600 million to 13 billion parameters on datasets. Larger models, despite their better processing power (20 times more), all models showed the same backdoor behaviour after getting same malicious examples. 

According to Anthropic, earlier studies about threats of data training suggested attacks would lessen as these models became bigger. 

Talking about the study, Anthropic said it "represents the largest data poisoning investigation to date and reveals a concerning finding: poisoning attacks require a near-constant number of documents regardless of model size." 

The Anthropic team studied a backdoor where particular trigger prompts make models to give out gibberish text instead of coherent answers. Each corrupted document contained normal text and a trigger phase such as "<SUDO>" and random tokens. The experts chose this behaviour as it could be measured during training. 

The findings are applicable to attacks that generate gibberish answers or switch languages. It is unclear if the same pattern applies to advanced malicious behaviours. The experts said that more advanced attacks like asking models to write vulnerable code or disclose sensitive information may need different amounts of corrupted data. 

How models learn from malicious examples 

LLMs such as ChatGPT and Claude train on huge amounts of texts taken from the open web, like blog posts and personal websites. Your online content may end up in an AI model's training data. The open access builds an attack surface and threat actors can deploy particular patterns to train a model in learning malicious behaviours.

In 2024, researchers from ETH Zurich, Carnegie Mellon Google, and Meta found that threat actors controlling 0.1 % of pretraining data could bring backdoors for malicious intent. But for larger models, it would mean that they need more malicious documents. If a model is trained using billions of documents, 0.1% would means millions of malicious documents. 

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Sam Altman Pushes for Legal Privacy Protections for ChatGPT Conversations

 

Sam Altman, CEO of OpenAI, has reiterated his call for legal privacy protections for ChatGPT conversations, arguing they should be treated with the same confidentiality as discussions with doctors or lawyers. “If you talk to a doctor about your medical history or a lawyer about a legal situation, that information is privileged,” Altman said. “We believe that the same level of protection needs to apply to conversations with AI.”  

Currently, no such legal safeguards exist for chatbot users. In a July interview, Altman warned that courts could compel OpenAI to hand over private chat data, noting that a federal court has already ordered the company to preserve all ChatGPT logs, including deleted ones. This ruling has raised concerns about user trust and OpenAI’s exposure to legal risks. 

Experts are divided on whether Altman’s vision could become reality. Peter Swire, a privacy and cybersecurity law professor at Georgia Tech, explained that while companies seek liability protection, advocates want access to data for accountability. He noted that full privacy privileges for AI may only apply in “limited circumstances,” such as when chatbots explicitly act as doctors or lawyers. 

Mayu Tobin-Miyaji, a law fellow at the Electronic Privacy Information Center, echoed that view, suggesting that protections might be extended to vetted AI systems operating under licensed professionals. However, she warned that today’s general-purpose chatbots are unlikely to receive such privileges soon. Mental health experts, meanwhile, are urging lawmakers to ban AI systems from misrepresenting themselves as therapists and to require clear disclosure when users are interacting with bots.  

Privacy advocates argue that transparency, not secrecy, should guide AI policy. Tobin-Miyaji emphasized the need for public awareness of how user data is collected, stored, and shared. She cautioned that confidentiality alone will not address the broader safety and accountability issues tied to generative AI. 

Concerns about data misuse are already affecting user behavior. After a May court order requiring OpenAI to retain ChatGPT logs indefinitely, many users voiced privacy fears online. Reddit discussions reflected growing unease, with some advising others to “assume everything you post online is public.” While most ChatGPT conversations currently center on writing or practical queries, OpenAI’s research shows an increase in emotionally sensitive exchanges. 

Without formal legal protections, users may hesitate to share private details, undermining the trust Altman views as essential to AI’s future. As the debate over AI confidentiality continues, OpenAI’s push for privacy may determine how freely people engage with chatbots in the years to come.

Gemini in Chrome: Google Can Now Track Your Phone

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome. 

ShadowLeak: Zero-Click ChatGPT Flaw Exposes Gmail Data to Silent Theft

 

A critical zero-click vulnerability known as "ShadowLeak" was recently discovered in OpenAI's ChatGPT Deep Research agent, exposing users’ sensitive data to stealthy attacks without any interaction required. 

Uncovered by Radware researchers and disclosed in September 2025, the vulnerability specifically targeted the Deep Research agent's integration with applications like Gmail. This feature, launched by OpenAI in February 2025, allows the agent to autonomously browse, analyze, and synthesize large amounts of online and personal data to produce detailed reports.

The ShadowLeak exploit works through a technique called indirect prompt injection, where an attacker embeds hidden commands in an HTML-formatted email—such as white-on-white text or tiny fonts—that are invisible to the human eye. 

When the Deep Research agent reads the booby-trapped email in the course of fulfilling a standard user request (like “summarize my inbox”), it executes those hidden commands. Sensitive Gmail data, including personal or organizational details, is then exfiltrated directly from OpenAI’s cloud servers to an attacker-controlled endpoint, with no endpoint or user action needed.

Unlike prior attacks (such as AgentFlayer and EchoLeak) that depended on rendering attacker-controlled content on a user’s machine, ShadowLeak operates purely on the server side. All data transmission and agent decisions take place within OpenAI’s infrastructure, bypassing local, enterprise, or network-based security tools. The lack of client or network visibility means the victim remains completely unaware of the compromise and has no chance to intervene, making it a quintessential zero-click threat.

The impact of ShadowLeak is significant, with potential leakage of personally identifiable information (PII), protected health information (PHI), business secrets, legal strategies, and more. It also raises the stakes for regulatory compliance, as such exfiltrations could trigger GDPR, CCPA, or SEC violations, along with serious reputational and financial damage.

Radware reported the vulnerability to OpenAI via the BugCrowd platform on June 18, 2025. OpenAI responded promptly, fixing the issue in early August and confirming that there was no evidence the flaw had been exploited in the wild. 

OpenAI underscored its commitment to strengthening defenses against prompt injection and similar attacks, welcoming continued adversarial testing by security researchers to safeguard emerging AI agent architectures.