Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label ChatGPT. Show all posts

Is ChatGPT's Atlas Browser the Future of Internet?

Is ChatGPT's Atlas Browser the Future of Internet?

After using ChatGPT Atlas, OpenAI's new web browser, users may notice few issues. This is not the same as Google Chrome, which about 60% of users use. It is based on a chatbot that you are supposed to converse with in order to browse the internet.  

One of the notes said, "Messages limit reached," "No models that are currently available support the tools in use," another stated.  

Following that: "You've hit the free plan limit for GPT-5."  

Paid browser 

According to OpenAI, it will simplify and improve internet usage. One more step toward becoming "a true super-assistant." Super or not, however, assistants are not free, and the corporation must start generating significantly more revenue from its 800 million customers.

According to OpenAI, Atlas allows us to "rethink what it means to use the web". It appears to be comparable to Chrome or Apple's Safari at first glance, with one major exception: a sidebar chatbot. These are early days, but there is the potential for significant changes in how we use the Internet. What is certain is that this will be a high-end gadget that will only function properly if you pay a monthly subscription price. Given how accustomed we are to free internet access, many people would have to drastically change their routines.

Competitors, data, and money

The founding objective of OpenAI was to achieve artificial general intelligence (AGI), which roughly translates to AI that can match human intelligence. So, how does a browser assist with this mission? It actually doesn't. However, it has the potential to increase revenue. The company has persuaded venture capitalists and investors to spend billions of dollars in it, and it must now demonstrate a return on that investment. In other words, it needs to generate revenue. However, obtaining funds through typical internet advertising may be risky. Atlas might also grant the corporation access to a large amount of user data.

The ultimate goal of these AI systems is scale; the more data you feed them, the better they will become. The web is built for humans to use, so if Atlas can observe how we order train tickets, for example, it will be able to learn how to better traverse these processes.  

Will it kill Google?

Then we get to compete. Google Chrome is so prevalent that authorities throughout the world are raising their eyebrows and using terms like "monopoly" to describe it. It will not be easy to break into that market.

Google's Gemini AI is now integrated into the search engine, and Microsoft has included Copilot to its Edge browser. Some called ChatGPT the "Google killer" in its early days, predicting that it would render online search as we know it obsolete. It remains to be seen whether enough people are prepared to pay for that added convenience, and there is still a long way to go before Google is dethroned.

The Risks of AI-powered Web Browsers for Your Privacy


AI and web browser

The future of browsing is AI, it watches everything you do online. Security and privacy are two different things; they may look same, but it is different for people who specialize in these two. Threats to your security can also be dangers to privacy. 

Threat for privacy and security

Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other. 

Recently, OpenAI released its ChatGPT-powered Comet Browser, and Brave Software team disclosed that AI-powered browsers can follow malicious prompts that hide in images on the web. 

AI powered browser good or bad?

We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user. 

That is the aspect of security. Experts who evaluated the Comet Browser discovered that it records everything you do while using it, including search and browser history as well as information about the URLs you visit. 

What next?

In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.

Researchers studied the ten biggest VPN attacks in recent history. Many of them were not even triggered by foreign hostile actors; some were the result of basic human faults, such as leaked credentials, third-party mistakes, or poor management.

Atlas: AI powered web browser

Atlas, an AI-powered web browser developed with ChatGPT as its core, is meant to do more than just allow users to navigate the internet. It is capable of reading, sum up, and even finish internet tasks for the user, such as arranging appointments or finding lodgings.

Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, a summary was created utilizing information from other publications such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which have partnerships or agreements with OpenAI, with the exception of Reuters.

ChatGPT Atlas Surfaces Privacy Debate: How OpenAI’s New Browser Handles Your Data

 




OpenAI has officially entered the web-browsing market with ChatGPT Atlas, a new browser built on Chromium: the same open-source base that powers Google Chrome. At first glance, Atlas looks and feels almost identical to Chrome or Safari. The key difference is its built-in ChatGPT assistant, which allows users to interact with web pages directly. For example, you can ask ChatGPT to summarize a site, book tickets, or perform online actions automatically, all from within the browser interface.

While this innovation promises faster and more efficient browsing, privacy experts are increasingly worried about how much personal data the browser collects and retains.


How ChatGPT Atlas Uses “Memories”

Atlas introduces a feature called “memories”, which allows the system to remember users’ activity and preferences over time. This builds on ChatGPT’s existing memory function, which stores details about users’ interests, writing styles, and previous interactions to personalize future responses.

In Atlas, these memories could include which websites you visit, what products you search for, or what tasks you complete online. This helps the browser predict what you might need next, such as recalling the airline you often book with or your preferred online stores. OpenAI claims that this data collection aims to enhance user experience, not exploit it.

However, this personalization comes with serious privacy implications. Once stored, these memories can gradually form a comprehensive digital profile of an individual’s habits, preferences, and online behavior.


OpenAI’s Stance on Early Privacy Concerns

OpenAI has stated that Atlas will not retain critical information such as government-issued IDs, banking credentials, medical or financial records, or any activity related to adult content. Users can also manage their data manually: deleting, archiving, or disabling memories entirely, and can browse in incognito mode to prevent the saving of activity.

Despite these safeguards, recent findings suggest that some sensitive data may still slip through. According to The Washington Post, an investigation by a technologist at the Electronic Frontier Foundation (EFF) revealed that Atlas had unintentionally stored private information, including references to sexual and reproductive health services and even a doctor’s real name. These findings raise questions about the reliability of OpenAI’s data filters and whether user privacy is being adequately protected.


Broader Implications for AI Browsers

OpenAI is not alone in this race. Other companies, including Perplexity with its upcoming browser Comet, have also faced criticism for extensive data collection practices. Perplexity’s CEO openly admitted that collecting browser-level data helps the company understand user behavior beyond the AI app itself, particularly for tailoring ads and content.

The rise of AI-integrated browsers marks a turning point in internet use, combining automation and personalization at an unprecedented scale. However, cybersecurity experts warn that AI agents operating within browsers hold immense control — they can take actions, make purchases, and interact with websites autonomously. This power introduces substantial risks if systems malfunction, are exploited, or process data inaccurately.


What Users Can Do

For those concerned about privacy, experts recommend taking proactive steps:

• Opt out of the memory feature or regularly delete saved data.

• Use incognito mode for sensitive browsing.

• Review data-sharing and model-training permissions before enabling them.


AI browsers like ChatGPT Atlas may redefine digital interaction, but they also test the boundaries of data ethics and security. As this technology evolves, maintaining user trust will depend on transparency, accountability, and strict privacy protection.



The Threats of Agentic AI Data Trails


What if you install a brand new smart-home assistant that looks surreal, and if it can precool your living room at ease. However, besides the benefits, the system is secretly generating a huge digital trace of personal information?

That's the hidden price of agentic AI, your every plan, act, and prompt gets registered, forecasts and logs hints of frequent routines reside info long-term storage. 

These logs aren't silly mistakes. They are standard behaviour for most agentic AI systems. Fortunately, there's another way. Easy engineering methods build efficiency and autonomy while limiting the digital footprint. 

How Agentic AI Stores and Collects Private Data

It uses a planner based on a LLM to optimize similiar devices via the house. It surveills electricity prices and weather details, configures thermostats, adjusting smart plugs, and schedules EV charge. 

To limit personal data, the system registers only pseudonomymous resident profiles locally and doesn't access microphones and cameras. Agentic AI updates its plan when the weather or prices change, and registers short, planned reflections to strengthen future runs.

However, you as a home resident may not be aware about how much private data is being stored behind your back. Agentic AI systems create information as a natural result of how they function. In baseline agent configurations (mostly), the data gets accumulated. However, this is not considered the best tactic in the business, like configuration is a practical initial point for activating Agentic AI and function smoothly.

How to avoid AI agent trails?

Limit memory to the task at hand.

The deleting process should be thorough and easy.

The agent's action should be transparent via a readable "agent trace."

AI Poisoning: How Malicious Data Corrupts Large Language Models Like ChatGPT and Claude

 

Poisoning is a term often associated with the human body or the environment, but it is now a growing problem in the world of artificial intelligence. Large language models such as ChatGPT and Claude are particularly vulnerable to this emerging threat known as AI poisoning. A recent joint study conducted by the UK AI Security Institute, the Alan Turing Institute, and Anthropic revealed that inserting as few as 250 malicious files into a model’s training data can secretly corrupt its behavior. 

AI poisoning occurs when attackers intentionally feed false or misleading information into a model’s training process to alter its responses, bias its outputs, or insert hidden triggers. The goal is to compromise the model’s integrity without detection, leading it to generate incorrect or harmful results. This manipulation can take the form of data poisoning, which happens during the model’s training phase, or model poisoning, which occurs when the model itself is modified after training. Both forms overlap since poisoned data eventually influences the model’s overall behavior. 

A common example of a targeted poisoning attack is the backdoor method. In this scenario, attackers plant specific trigger words or phrases in the data—something that appears normal but activates malicious behavior when used later. For instance, a model could be programmed to respond insultingly to a question if it includes a hidden code word like “alimir123.” Such triggers remain invisible to regular users but can be exploited by those who planted them. 

Indirect attacks, on the other hand, aim to distort the model’s general understanding of topics by flooding its training sources with biased or false content. If attackers publish large amounts of misinformation online, such as false claims about medical treatments, the model may learn and reproduce those inaccuracies as fact. Research shows that even a tiny amount of poisoned data can cause major harm. 

In one experiment, replacing only 0.001% of the tokens in a medical dataset caused models to spread dangerous misinformation while still performing well in standard tests. Another demonstration, called PoisonGPT, showed how a compromised model could distribute false information convincingly while appearing trustworthy. These findings highlight how subtle manipulations can undermine AI reliability without immediate detection. Beyond misinformation, poisoning also poses cybersecurity threats. 

Compromised models could expose personal information, execute unauthorized actions, or be exploited for malicious purposes. Previous incidents, such as the temporary shutdown of ChatGPT in 2023 after a data exposure bug, demonstrate how fragile even the most secure systems can be when dealing with sensitive information. Interestingly, some digital artists have used data poisoning defensively to protect their work from being scraped by AI systems. 

By adding misleading signals to their content, they ensure that any model trained on it produces distorted outputs. This tactic highlights both the creative and destructive potential of data poisoning. The findings from the UK AI Security Institute, Alan Turing Institute, and Anthropic underline the vulnerability of even the most advanced AI models. 

As these systems continue to expand into everyday life, experts warn that maintaining the integrity of training data and ensuring transparency throughout the AI development process will be essential to protect users and prevent manipulation through AI poisoning.

AI Can Models Creata Backdoors, Research Says


Scraping the internet for AI training data has limitations. Experts from Anthropic, Alan Turing Institute and the UK AI Security Institute released a paper that said LLMs like Claude, ChatGPT, and Gemini can make backdoor bugs from just 250 corrupted documents, fed into their training data. 

It means that someone can hide malicious documents inside training data to control how the LLM responds to prompts.

About the research 

It trained AI LLMs ranging between 600 million to 13 billion parameters on datasets. Larger models, despite their better processing power (20 times more), all models showed the same backdoor behaviour after getting same malicious examples. 

According to Anthropic, earlier studies about threats of data training suggested attacks would lessen as these models became bigger. 

Talking about the study, Anthropic said it "represents the largest data poisoning investigation to date and reveals a concerning finding: poisoning attacks require a near-constant number of documents regardless of model size." 

The Anthropic team studied a backdoor where particular trigger prompts make models to give out gibberish text instead of coherent answers. Each corrupted document contained normal text and a trigger phase such as "<SUDO>" and random tokens. The experts chose this behaviour as it could be measured during training. 

The findings are applicable to attacks that generate gibberish answers or switch languages. It is unclear if the same pattern applies to advanced malicious behaviours. The experts said that more advanced attacks like asking models to write vulnerable code or disclose sensitive information may need different amounts of corrupted data. 

How models learn from malicious examples 

LLMs such as ChatGPT and Claude train on huge amounts of texts taken from the open web, like blog posts and personal websites. Your online content may end up in an AI model's training data. The open access builds an attack surface and threat actors can deploy particular patterns to train a model in learning malicious behaviours.

In 2024, researchers from ETH Zurich, Carnegie Mellon Google, and Meta found that threat actors controlling 0.1 % of pretraining data could bring backdoors for malicious intent. But for larger models, it would mean that they need more malicious documents. If a model is trained using billions of documents, 0.1% would means millions of malicious documents. 

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Sam Altman Pushes for Legal Privacy Protections for ChatGPT Conversations

 

Sam Altman, CEO of OpenAI, has reiterated his call for legal privacy protections for ChatGPT conversations, arguing they should be treated with the same confidentiality as discussions with doctors or lawyers. “If you talk to a doctor about your medical history or a lawyer about a legal situation, that information is privileged,” Altman said. “We believe that the same level of protection needs to apply to conversations with AI.”  

Currently, no such legal safeguards exist for chatbot users. In a July interview, Altman warned that courts could compel OpenAI to hand over private chat data, noting that a federal court has already ordered the company to preserve all ChatGPT logs, including deleted ones. This ruling has raised concerns about user trust and OpenAI’s exposure to legal risks. 

Experts are divided on whether Altman’s vision could become reality. Peter Swire, a privacy and cybersecurity law professor at Georgia Tech, explained that while companies seek liability protection, advocates want access to data for accountability. He noted that full privacy privileges for AI may only apply in “limited circumstances,” such as when chatbots explicitly act as doctors or lawyers. 

Mayu Tobin-Miyaji, a law fellow at the Electronic Privacy Information Center, echoed that view, suggesting that protections might be extended to vetted AI systems operating under licensed professionals. However, she warned that today’s general-purpose chatbots are unlikely to receive such privileges soon. Mental health experts, meanwhile, are urging lawmakers to ban AI systems from misrepresenting themselves as therapists and to require clear disclosure when users are interacting with bots.  

Privacy advocates argue that transparency, not secrecy, should guide AI policy. Tobin-Miyaji emphasized the need for public awareness of how user data is collected, stored, and shared. She cautioned that confidentiality alone will not address the broader safety and accountability issues tied to generative AI. 

Concerns about data misuse are already affecting user behavior. After a May court order requiring OpenAI to retain ChatGPT logs indefinitely, many users voiced privacy fears online. Reddit discussions reflected growing unease, with some advising others to “assume everything you post online is public.” While most ChatGPT conversations currently center on writing or practical queries, OpenAI’s research shows an increase in emotionally sensitive exchanges. 

Without formal legal protections, users may hesitate to share private details, undermining the trust Altman views as essential to AI’s future. As the debate over AI confidentiality continues, OpenAI’s push for privacy may determine how freely people engage with chatbots in the years to come.

Gemini in Chrome: Google Can Now Track Your Phone

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome. 

ShadowLeak: Zero-Click ChatGPT Flaw Exposes Gmail Data to Silent Theft

 

A critical zero-click vulnerability known as "ShadowLeak" was recently discovered in OpenAI's ChatGPT Deep Research agent, exposing users’ sensitive data to stealthy attacks without any interaction required. 

Uncovered by Radware researchers and disclosed in September 2025, the vulnerability specifically targeted the Deep Research agent's integration with applications like Gmail. This feature, launched by OpenAI in February 2025, allows the agent to autonomously browse, analyze, and synthesize large amounts of online and personal data to produce detailed reports.

The ShadowLeak exploit works through a technique called indirect prompt injection, where an attacker embeds hidden commands in an HTML-formatted email—such as white-on-white text or tiny fonts—that are invisible to the human eye. 

When the Deep Research agent reads the booby-trapped email in the course of fulfilling a standard user request (like “summarize my inbox”), it executes those hidden commands. Sensitive Gmail data, including personal or organizational details, is then exfiltrated directly from OpenAI’s cloud servers to an attacker-controlled endpoint, with no endpoint or user action needed.

Unlike prior attacks (such as AgentFlayer and EchoLeak) that depended on rendering attacker-controlled content on a user’s machine, ShadowLeak operates purely on the server side. All data transmission and agent decisions take place within OpenAI’s infrastructure, bypassing local, enterprise, or network-based security tools. The lack of client or network visibility means the victim remains completely unaware of the compromise and has no chance to intervene, making it a quintessential zero-click threat.

The impact of ShadowLeak is significant, with potential leakage of personally identifiable information (PII), protected health information (PHI), business secrets, legal strategies, and more. It also raises the stakes for regulatory compliance, as such exfiltrations could trigger GDPR, CCPA, or SEC violations, along with serious reputational and financial damage.

Radware reported the vulnerability to OpenAI via the BugCrowd platform on June 18, 2025. OpenAI responded promptly, fixing the issue in early August and confirming that there was no evidence the flaw had been exploited in the wild. 

OpenAI underscored its commitment to strengthening defenses against prompt injection and similar attacks, welcoming continued adversarial testing by security researchers to safeguard emerging AI agent architectures.

OpenAI Patches ChatGPT Gmail Flaw Exploited by Hackers in Deep Research Attacks

 

OpenAI has fixed a security vulnerability that could have allowed hackers to manipulate ChatGPT into leaking sensitive data from a victim’s Gmail inbox. The flaw, uncovered by cybersecurity company Radware and reported by Bloomberg, involved ChatGPT’s “deep research” feature. This function enables the AI to carry out advanced tasks such as web browsing and analyzing files or emails stored in services like Gmail, Google Drive, and Microsoft OneDrive. While useful, the tool also created a potential entry point for attackers to exploit.  

Radware discovered that if a user requested ChatGPT to perform a deep research task on their Gmail inbox, hackers could trigger the AI into executing malicious instructions hidden inside a carefully designed email. These hidden commands could manipulate the chatbot into scanning private messages, extracting information such as names or email addresses, and sending it to a hacker-controlled server. The vulnerability worked by embedding secret instructions within an email disguised as a legitimate message, such as one about human resources processes. 

The proof-of-concept attack was challenging to develop, requiring a detailed phishing email crafted specifically to bypass safeguards. However, if triggered under the right conditions, the vulnerability acted like a digital landmine. Once ChatGPT began analyzing the inbox, it would unknowingly carry out the malicious code and exfiltrate data “without user confirmation and without rendering anything in the user interface,” Radware explained. 

This type of exploit is particularly difficult for conventional security tools to catch. Since the data transfer originates from OpenAI’s own infrastructure rather than the victim’s device or browser, standard defenses like secure web gateways, endpoint monitoring, or browser policies are unable to detect or block it. This highlights the growing challenge of AI-driven attacks that bypass traditional cybersecurity protections. 

In response to the discovery, OpenAI stated that developing safe AI systems remains a top priority. A spokesperson told PCMag that the company continues to implement safeguards against malicious use and values external research that helps strengthen its defenses. According to Radware, the flaw was patched in August, with OpenAI acknowledging the fix in September.

The findings emphasize the broader risk of prompt injection attacks, where hackers insert hidden commands into web content or messages to manipulate AI systems. Both Anthropic and Brave Software recently warned that similar vulnerabilities could affect AI-enabled browsers and extensions. Radware recommends protective measures such as sanitizing emails to remove potential hidden instructions and enhancing monitoring of chatbot activities to reduce exploitation risks.

Antrhopic to use your chats with Claude to train its AI


Antrhopic to use your chats with Claude to train its AI

Anthropic announced last week that it will update its terms of service and privacy policy to allow the use of chats for training its AI model “Claude.” Users of all subscription levels- Claude Free, Max, Pro, and Code subscribers- will be impacted by this new update. Anthropic’s new Consumer Terms and Privacy Policy will take effect from September 28, 2025. 

But users who use Claude under licenses such as Work, Team, and Enterprise plans, Claude Education, and Claude Gov will be exempted. Besides this, third-party users who use the Claude API through Google Cloud’s Vertex AI and Amazon Bedrock will also not be affected by the new policy.

If you are a Claude user, you can delay accepting the new policy by choosing ‘not now’, however, after September 28, your user account will be opted in by default to share your chat transcript for training the AI model. 

Why the new policies?

The new policy has come after the genAI boom, thanks to the massive data that has prompted various tech companies to rethink their update policies (although quietly) and update their terms of service. With this, these companies can use your data to train their AI models or give it out to other companies to improve their AI bots. 

"By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users," Anthropic said.

Concerns around user safety

Earlier this year, in July, Wetransfer, a famous file-sharing platform, fell into controversy when it changed its terms of service agreement, facing immediate backlash from its users and online community. WeTransfer wanted the files uploaded on its platform could be used for improving machine learning models. After the incident, the platform has been trying to fix things by removing “any mention of AI and machine learning from the document,” according to the Indian Express. 

With rising concerns over the use of personal data for training AI models that compromise user privacy, companies are now offering users the option to opt out of data training for AI models.

CISOs fear material losses amid rising cyberattacks


Chief information security officers (CISOs) are worried about the dangers of a cyberattack, and there is an anxiety due to the material losses of data that organizations have suffered in the past year.

According to a report by Proofpoint, the majority of CISOs fear a material cyberattack in the next 12 months. These concerns highlight the increasing risks and cultural shifts among CISOs.

Changing roles of CISOs

“76% of CISOs anticipate a material cyberattack in the next year, with human risk and GenAI-driven data loss topping their concerns,” Proofpoint said. In this situation, corporate stakeholders are trying to get a better understanding of the risks involved when it comes to tech and whether they are safe or not. 

Experts believe that CISOs are being more open about these attacks, thanks to SEC disclosure rules, strict regulations, board expectations, and enquiries. The report surveyed 1,600 CISOs worldwide; all the organizations had more than 1000 employees. 

Doing business is a concern

The study highlights a rising concern about doing business amid incidents of cyberattacks. Although the majority of CISOs are confident about their cybersecurity culture, six out of 10 CISOs said their organizations are not prepared for a cyberattack. The majority of the CISOs were found in favour of paying ransoms to avoid the leak of sensitive data.

AI: Saviour or danger?

AI has risen both as a top concern as well as a top priority for CISOs. Two-thirds of CISOs believe that enabling GenAI tools is a top priority over the next two years, despite the ongoing risks. In the US, however, 80% CISOs worry about possible data breaches through GenAI platforms. 

With adoption rates rising, organizations have started to move from restriction to governance. “Most are responding with guardrails: 67% have implemented usage guidelines, and 68% are exploring AI-powered defenses, though enthusiasm has cooled from 87% last year. More than half (59%) restrict employee use of GenAI tools altogether,” Proofpoint said.

Think Twice Before Uploading Personal Photos to AI Chatbots

 

Artificial intelligence chatbots are increasingly being used for fun, from generating quirky captions to transforming personal photos into cartoon characters. While the appeal of uploading images to see creative outputs is undeniable, the risks tied to sharing private photos with AI platforms are often overlooked. A recent incident at a family gathering highlighted just how easy it is for these photos to be exposed without much thought. What might seem like harmless fun could actually open the door to serious privacy concerns. 

The central issue is unawareness. Most users do not stop to consider where their photos are going once uploaded to a chatbot, whether those images could be stored for AI training, or if they contain personal details such as house numbers, street signs, or other identifying information. Even more concerning is the lack of consent—especially when it comes to children. Uploading photos of kids to chatbots, without their ability to approve or refuse, creates ethical and security challenges that should not be ignored.  

Photos contain far more than just the visible image. Hidden metadata, including timestamps, location details, and device information, can be embedded within every upload. This information, if mishandled, could become a goldmine for malicious actors. Worse still, once a photo is uploaded, users lose control over its journey. It may be stored on servers, used for moderation, or even retained for training AI models without the user’s explicit knowledge. Just because an image disappears from the chat interface does not mean it is gone from the system.  

One of the most troubling risks is the possibility of misuse, including deepfakes. A simple selfie, once in the wrong hands, can be manipulated to create highly convincing fake content, which could lead to reputational damage or exploitation. 

There are steps individuals can take to minimize exposure. Reviewing a platform’s privacy policy is a strong starting point, as it provides clarity on how data is collected, stored, and used. Some platforms, including OpenAI, allow users to disable chat history to limit training data collection. Additionally, photos can be stripped of metadata using tools like ExifTool or by taking a screenshot before uploading. 

Consent should also remain central to responsible AI use. Children cannot give informed permission, making it inappropriate to share their images. Beyond privacy, AI-altered photos can distort self-image, particularly among younger users, leading to long-term effects on confidence and mental health. 

Safer alternatives include experimenting with stock images or synthetic faces generated by tools like This Person Does Not Exist. These provide the creative fun of AI tools without compromising personal data. 

Ultimately, while AI chatbots can be entertaining and useful, users must remain cautious. They are not friends, and their cheerful tone should not distract from the risks. Practicing restraint, verifying privacy settings, and thinking critically before uploading personal photos is essential for protecting both privacy and security in the digital age.

How ChatGPT prompt can allow cybercriminals to steal your Google Drive data


Chatbots and other AI tools have made life easier for threat actors. A recent incident highlighted how ChatGPT can be exploited to obtain API keys and other sensitive data from cloud platforms.

Prompt injection attacks leads to cloud access

Experts have discovered a new prompt injection attack that can turn ChatGPT into a hacker’s best friend in data thefts. Known as AgentFlayer, the exploit uses a single document to hide “secret” prompt instructions that target OpenAI’s chatbot. An attacker can share what appears to be a harmless document with victims through Google Drive, without any clicks.

Zero-click threat: AgentFlayer

AgentFlayer is a “zero-click” threat as it abuses a vulnerability in Connectors, for instance, a ChatGPT feature that connects the assistant to other applications, websites, and services. OpenAI suggests that Connectors supports a few of the world’s most widely used platforms. This includes cloud storage platforms such as Microsoft OneDrive and Google Drive.

Experts used Google Drive to expose the threats possible from chatbots and hidden prompts. 

GoogleDoc used for injecting prompt

The malicious document has a 300-word hidden malicious prompt. The text is size one, formatted in white to hide it from human readers but visible to the chatbot.

The prompt used to showcase AgentFlayer’s attacks prompts ChatGPT to find the victim’s Google Drive for API keys, link them to a tailored URL, and an external server. When the malicious document is shared, the attack is launched. The threat actor gets the hidden API keys when the target uses ChatGPT (the Connectors feature has to be enabled).

Othe cloud platforms at risk too

AgentFlayer is not a bug that only affects the Google Cloud. “As with any indirect prompt injection attack, we need a way into the LLM's context. And luckily for us, people upload untrusted documents into their ChatGPT all the time. This is usually done to summarize files or data, or leverage the LLM to ask specific questions about the document’s content instead of parsing through the entire thing by themselves,” said expert Tamir Ishay Sharbat from Zenity Labs.

“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately, these mitigations aren’t enough. Even safe-looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it,” Zenith Labs said in the report.

DeepSeek Under Investigation Leading to App Store Withdrawals

 


As one of the world's leading AI players, DeepSeek, a chatbot application developed by the Chinese government, has been a formidable force in the artificial intelligence arena since it emerged in January 2025, launching at the top of the app store charts and reshaping conversations in the technology and investment industries. After initially being hailed as a potential "ChatGPT killer" by industry observers, the platform has been the subject of intense scrutiny since its meteoric rise. 

The DeepSeek platform is positioned in the centre of app store removals, cross-border security investigations, and measured enterprise adoption by August 2025. In other words, we are at the intersection of technological advances, infrastructure challenges, and geopolitical issues that may shape the next phase of the evolution of artificial intelligence in the years ahead. 

A significant regulatory development has occurred in South Korea, with the Personal Information Protection Commission confirming that DeepSeek temporarily suspended the download of its chatbot applications while working with local authorities to address privacy concerns and issues regarding DeepSeek's data assets. On Saturday, the South Korean version of Apple's App Store, as well as Google Play in South Korea, were taken down from their respective platforms, following an agreement with the company to enhance its privacy protection measures before they were relaunched.

It has been emphasised that, although existing mobile users and personal computer users are not affected, officials are urging caution on behalf of the commission; Nam Seok, director of the investigation division, has advised users to remove the app or to refrain from sharing personal information until the issues have been addressed. 

An investigation by Microsoft's security team has revealed that individuals reportedly linked to DeepSeek have been transferring substantial amounts of data using OpenAI's application programming interface (API), which is a core channel for developers and enterprises to integrate OpenAI technology into their products and services. Having become OpenAI's biggest shareholder, Microsoft flagged this unusual activity, triggering a review internally. 

There has been a meteoric rise by DeepSeek in recent days, and the Chinese artificial intelligence startup has emerged as an increasingly prominent competitor to established U.S. companies, including ChatGPT and Claude, whose AI assistant is currently more popular than ChatGPT. On Monday, as a result of a plunge in technology sector stock prices on Monday, the AI assistant surged to overtake ChatGPT in the U.S. App Store downloads. 

There has been growing international scrutiny surrounding the DeepSeek R1 chatbot, which has recently been removed from Apple’s App Store and Google Play in South Korea amid mounting international scrutiny. This follows an admission by the Hangzhou-based company that it did not comply with the laws regulating personal data privacy.

As DeepSeek’s R1 chatbot is lauded as having advanced capabilities at a fraction of its Western competitors’ cost, its data handling practices are being questioned sharply as well. Particularly, how user information is stored on secure servers in the People’s Republic of China has been criticised by the US and others. The Personal Information Protection Commission of South Korea confirmed that the app had been removed from the local app stores at 6 p.m. on Monday. 

In a statement released on Saturday morning (900 GMT), the commission said it had suspended the service due to violations of domestic data protection laws. Existing users can continue using the service, but the commission has urged the public not to provide personal information until the investigation is completed.

According to the PIPC, DeepSeek must make substantial changes so that it can meet Korean privacy standards. A shortcoming that DeepSeek has acknowledged is this. In addition, data security professor Youm Heung-youl, from Soonchunhyang University, further noted that despite the company's privacy policies relating to European markets and other markets, the same policy does not exist for South Korean users, who are subject to a different localised framework. 

In response to an inquiry by Italy's data protection authority, the company has taken steps to ensure the app takes the appropriate precautions with regard to the data that it collects, the sources from which it obtains it, its intended use, legal justifications, and its storage in China. 

While it is unclear to what extent DeepSeek initiated the removal or whether the app store operators took an active role in the process, the development follows the company's announcement last month of its R1 reasoning model, an open-source alternative to ChatGPT, positioned as an alternative to ChatGPT for more cost-effectiveness. 

Government concerns over data privacy have been heightened by the model's rapid success, which led to similar inquiries in Ireland and Italy as well as a cautionary directive from the United States Navy indicating that DeepSeek AI cannot be used because of its origin and operation, posing security and ethical risks. The controversy revolves around the handling and storage of user data at the centre of this controversy.

It has been reported that all user information, including chat histories and other personal information of users, has been transferred to China and stored on servers there. A more privacy-conscious version of DeepSeek's model may be run locally on a desktop computer, though the performance of this offline version is significantly slower than the cloud-connected version that can be accessed on Apple and Android phones. 

DeepSeek's data practices have drawn escalating regulatory attention across a wide range of jurisdictions, including the United States. According to the privacy policies of the company, personal data, including user requests and uploaded files, is stored on servers located in China, which the company also claims it does not store in the U.S. 

As Ulrich Kamp stated in his statement, DeepSeek has not provided credible assurances that data belonging to Germans will be protected to the same extent as data belonging to European Union citizens. He also pointed out that Chinese authorities have access to personal data held by domestic companies with extensive access rights. 

It was Kamp's office's request in May for DeepSeek to either meet EU data transfer requirements or voluntarily withdraw its app, but the company did not do so. The controversy follows DeepSeek's January debut when it said that it had created an AI model that rivalled the ones of other American companies like OpenAI, but at a fraction of the cost. 

Over the past few years, the app has been banned in Italy due to concerns about transparency, as well as restricted access to government devices in the Netherlands, Belgium, and Spain, and the consumer rights organisation OCU has called for an official investigation into the matter. After reports from Reuters alleging DeepSeek's involvement in China's military and intelligence activities, lawmakers are preparing legislation that will prohibit federal agencies from using artificial intelligence models developed by Chinese companies. 

According to the Italian data protection authority, the Guarantor for the Protection of Personal Data, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence have been requested to provide detailed information concerning the collection and processing of their data. A regulator has requested clarification about which personal data is collected, where the data originates, what the legal basis is for processing, and whether it is stored on Chinese servers. 

There are also other inquiries peopl would like to make regarding the training methodologies used for DeepSeek's artificial intelligence models, such as whether web scraping is involved, and how both registered as well as unregistered users are informed of this data collection. DeepSeek has 20 days to reply to these inquiries. 

As Forrester analysts have warned, the app has been widely adopted — it has been downloaded millions of times, which means that large amounts of potentially sensitive information are being uploaded and processed as a result. Based on DeepSeek's own privacy policy, the company has noted that it may collect user input, audio prompts, uploaded files, feedback, chat histories, and other content for training purposes, and may share these details with law enforcement officials or public authorities as needed. 

Although DeepSeek's models remain freely accessible throughout the world, despite regulatory bans and investigations intensifying in China, developers continue to download, adapt, and deploy them, sometimes independent of the official app or Chinese infrastructure, regardless of the official ban. The technology has become increasingly important in industry analysis, not just as an isolated threat, but as part of a broader shift toward a hardware-efficient, open-weight AI architecture, a trend which has been influenced by players such as Mistral, OpenHermes, and Elon Musk's Grok initiative, among many others.

To join the open-weight reasoning movement, OpenAI has released two open-weight reasoning models, GPTT-OSS-120B and GPTT-OSS-20B, which have been deployed within their infrastructure. During the rapid evolution of the artificial intelligence market, the question is no longer whether open-source AI can compete with existing incumbents—in fact, it already has. 

It is much more pressing to decide who will define the governance frameworks that will earn the trust of the public at a time when artificial intelligence, infrastructure control, and national sovereignty are converging at unprecedented rates. Despite the growing complexity of regulating advanced artificial intelligence in an interconnected, highly competitive global market, the ongoing scrutiny surrounding DeepSeek underscores the importance of governing advanced artificial intelligence. 

As a disruptive technological breakthrough evolved, it became a geopolitical and regulatory hot-button, demonstrating how privacy, security, and data sovereignty have now become a major issue in the race against artificial intelligence. Policymakers will find this case extremely significant because it emphasizes the need for coherent international frameworks that can address cross-border data governance and balance innovation with accountability, as well as addressing cross-border data governance. 

Whether it is enterprises or individuals, it serves to remind them that despite the benefits of cutting-edge AI tools, they come with inherent risks, risks that need to be carefully evaluated before they are adopted. A significant part of the future will be the blurring of the boundaries between local oversight and global accessibility as AI models become increasingly lightweight, hardware-efficient, and widely deployable. 

As a result, trust will not be primarily dependent on technological capability, but also on transparency, verifiable safeguards, and the willingness of developers to adhere to ethical and legal standards in the markets they are trying to serve in this environment. It is clear from the ongoing scrutiny surrounding DeepSeek that in a highly competitive global market, regulating advanced artificial intelligence is becoming increasingly complicated as it becomes increasingly interconnected. 

The initial breakthrough in technology has evolved into a geopolitical and regulatory flashpoint, demonstrating how questions of privacy, security, and data sovereignty have become a crucial element in the race toward artificial intelligence. It is clear from this case that policymakers have a pressing need for international frameworks that can address cross-border data governance and balance innovation with accountability. 

For enterprises and individuals alike, the case serves as a reminder that embracing cutting-edge artificial intelligence tools comes with inherent risks and that the risks must be carefully weighed before adoption can be made. It will become increasingly difficult to distinguish between local oversight and global accessibility as AI models become more open-minded, hardware-efficient, and widely deployable as they become more open-hearted, hardware-efficient, and widely deployable. 

In such a situation, trust will not be solely based on technological capabilities, but also on transparency, verifiable safeguards, as well as the willingness of developers to operate within the ethical and legal guidelines of the market in which they seek to compete.

A Massive 800% Rise in Data Breach Incidents in First Half of 2025


Cybersecurity experts have warned of a significant increase in identity-based attacks, following the revelation that 1.8 billion credentials were stolen in the first half of 2025, representing an 800% increase compared to the previous six months.

Data breach attacks are rising rapidly

Flashpoint’s Global Threat Intelligence Index report is based on more than 3.6 petabytes of data studied by the experts. Hackers stole credentials from 5.8 million compromised devices, according to the report. The significant rise is problematic as stolen credentials can give hackers access to organizational data, even when the accounts are protected by multi-factor authentication (MFA).

The report also includes details that concern security teams.

About the bugs

Until June 2025, the firm has found over 20,000 exposed bugs, 12,200 of which haven’t been reported in the National Vulnerability Database (NVD). This means that security teams are not informed. 7000 of these have public exploits available, exposing organizations to severe threats.

According to experts, “The digital attack surface continues to expand, and the volume of disclosed vulnerabilities is growing at a record pace – up by a staggering 246% since February 2025.” “This explosion, coupled with a 179% increase in publicly available exploit code, intensifies the pressure on security teams. It’s no longer feasible to triage and remediate every vulnerability.”

Surge in ransomware attacks

Both these trends can cause ransomware attacks, as early access mostly comes through vulnerability exploitation or credential hacking. Total reports of breaches have increased by 179% since 2024, manufacturing (22%), technology (18%), and retail (13%) have been hit the most. The report has also disclosed 3104 data breaches in the first half of this year, linked to 9.5 billion hacked records.

2025 to be record year for data breaches

Flashpoint reports that “Over the past four months, data breaches surged by 235%, with unauthorized access accounting for nearly 78% of all reported incidents. Data breaches are both the genesis and culmination of threat actor campaigns, serving as a source of continuous fuel for cybercrime activity.” 

In June, the Identity Theft Resource Center (ITRC) warned that 2025 could become a record year for data cyberattacks in the US.

AI-supported Cursor IDE Falls Victim to Prompt Injection Attacks


Experts have found a bug called CurXecute that is present in all variants of the AI-supported code editor Cursor and can be compromised to run remote code execution (RCE), along with developer privileges. 

About the bug

The security bug is now listed as CVE-2025-54135 and can be exploited by giving the AI agent a malicious prompt to activate threat actor control commands. 

The Cursor combined development environment (IDE) relies on AI agents to allow developers to code quicker and more effectively, helping them to connect with external systems and resources using Model Context Protocol (MCP).

According to the experts, a threat actor effectively abusing the CurXecute bug could trigger ransomware and ransomware data theft attacks. 

Prompt-injection 

CurXecute shares similarities to the EchoLeak bug in Microsoft 365 CoPilot that hackers can use to extort sensitive data without interacting with the users. 

After finding and studying EchoLeak, the experts from the cybersecurity company Aim Security found that hackers can even exploit the local AI agent.

Cursor IDE supports the MCP open-standard framework, which increases an agent’s features by connecting it to external data tools and sources.

Agent exploitation

But the experts have warned that doing so can exploit the agent, as it is open to external, suspicious data that can impact its control flow. The threat actor can take advantage by hacking the agent’s session and features to work as a user.

According to the experts, Cursor doesn’t need permission to run new entries to the ~/.cursor/mcp.json file. When the target opens the new conversation and tells the agent to summarize the messages, the shell payload deploys on the device without user authorization.

“Cursor allows writing in-workspace files with no user approval. If the file is a dotfile, editing it requires approval, but creating one if it doesn't exist doesn't. Hence, if sensitive MCP files, such as the .cursor/mcp.json file, don't already exist in the workspace, an attacker can chain an indirect prompt injection vulnerability to hijack the context to write to the settings file and trigger RCE on the victim without user approval,” Cursor said in a report.

OpenAI Launching AI-Powered Web Browser to Rival Chrome, Drive ChatGPT Integration

 

OpenAI is reportedly developing its own web browser, integrating artificial intelligence to offer users a new way to explore the internet. According to sources cited by Reuters, the tool is expected to be unveiled in the coming weeks, although an official release date has not yet been announced. With this move, OpenAI seems to be stepping into the competitive browser space with the goal of challenging Google Chrome’s dominance, while also gaining access to valuable user data that could enhance its AI models and advertising potential. 

The browser is expected to serve as more than just a window to the web—it will likely come packed with AI features, offering users the ability to interact with tools like ChatGPT directly within their browsing sessions. This integration could mean that AI-generated responses, intelligent page summaries, and voice-based search capabilities are no longer separate from web activity but built into the browsing experience itself. Users may be able to complete tasks, ask questions, and retrieve information all within a single, unified interface. 

A major incentive for OpenAI is the access to first-party data. Currently, most of the data that fuels targeted advertising and search engine algorithms is captured by Google through Chrome. By creating its own browser, OpenAI could tap into a similar stream of data—helping to both improve its large language models and create new revenue opportunities through ad placements or subscription services. While details on privacy controls are unclear, such deep integration with AI may raise concerns about data protection and user consent. 

Despite the potential, OpenAI faces stiff competition. Chrome currently holds a dominant share of the global browser market, with nearly 70% of users relying on it for daily web access. OpenAI would need to provide compelling reasons for people to switch—whether through better performance, advanced AI tools, or stronger privacy options. Meanwhile, other companies are racing to enter the same space. Perplexity AI, for instance, recently launched a browser named Comet, giving early adopters a glimpse into what AI-first browsing might look like. 

Ultimately, OpenAI’s browser could mark a turning point in how artificial intelligence intersects with the internet. If it succeeds, users might soon navigate the web in ways that are faster, more intuitive, and increasingly guided by AI. But for now, whether this approach will truly transform online experiences—or simply add another player to the browser wars—remains to be seen.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability

 

Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or Deepseek, especially for those concerned about data privacy, internet dependency, and speed. Though cloud services promise protections through subscription terms, the reality remains uncertain. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties. 

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer devices such as Intel Core Ultra, Qualcomm Snapdragon X Elite, and Apple’s M-series chips (M1–M4) come equipped with NPUs built for this purpose. With one of these devices, you can run open-source AI models like DeepSeek‑R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like OpenWeb UI, you can replicate the experience of cloud chatbots entirely offline.  

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. However, be aware that model files can be quite large (often 20 GB or more), and without NPU support, performance may be sluggish and battery life will suffer.  

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, OpenWeb UI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.