Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label ChatGPT. Show all posts

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Sam Altman Pushes for Legal Privacy Protections for ChatGPT Conversations

 

Sam Altman, CEO of OpenAI, has reiterated his call for legal privacy protections for ChatGPT conversations, arguing they should be treated with the same confidentiality as discussions with doctors or lawyers. “If you talk to a doctor about your medical history or a lawyer about a legal situation, that information is privileged,” Altman said. “We believe that the same level of protection needs to apply to conversations with AI.”  

Currently, no such legal safeguards exist for chatbot users. In a July interview, Altman warned that courts could compel OpenAI to hand over private chat data, noting that a federal court has already ordered the company to preserve all ChatGPT logs, including deleted ones. This ruling has raised concerns about user trust and OpenAI’s exposure to legal risks. 

Experts are divided on whether Altman’s vision could become reality. Peter Swire, a privacy and cybersecurity law professor at Georgia Tech, explained that while companies seek liability protection, advocates want access to data for accountability. He noted that full privacy privileges for AI may only apply in “limited circumstances,” such as when chatbots explicitly act as doctors or lawyers. 

Mayu Tobin-Miyaji, a law fellow at the Electronic Privacy Information Center, echoed that view, suggesting that protections might be extended to vetted AI systems operating under licensed professionals. However, she warned that today’s general-purpose chatbots are unlikely to receive such privileges soon. Mental health experts, meanwhile, are urging lawmakers to ban AI systems from misrepresenting themselves as therapists and to require clear disclosure when users are interacting with bots.  

Privacy advocates argue that transparency, not secrecy, should guide AI policy. Tobin-Miyaji emphasized the need for public awareness of how user data is collected, stored, and shared. She cautioned that confidentiality alone will not address the broader safety and accountability issues tied to generative AI. 

Concerns about data misuse are already affecting user behavior. After a May court order requiring OpenAI to retain ChatGPT logs indefinitely, many users voiced privacy fears online. Reddit discussions reflected growing unease, with some advising others to “assume everything you post online is public.” While most ChatGPT conversations currently center on writing or practical queries, OpenAI’s research shows an increase in emotionally sensitive exchanges. 

Without formal legal protections, users may hesitate to share private details, undermining the trust Altman views as essential to AI’s future. As the debate over AI confidentiality continues, OpenAI’s push for privacy may determine how freely people engage with chatbots in the years to come.

Gemini in Chrome: Google Can Now Track Your Phone

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome. 

ShadowLeak: Zero-Click ChatGPT Flaw Exposes Gmail Data to Silent Theft

 

A critical zero-click vulnerability known as "ShadowLeak" was recently discovered in OpenAI's ChatGPT Deep Research agent, exposing users’ sensitive data to stealthy attacks without any interaction required. 

Uncovered by Radware researchers and disclosed in September 2025, the vulnerability specifically targeted the Deep Research agent's integration with applications like Gmail. This feature, launched by OpenAI in February 2025, allows the agent to autonomously browse, analyze, and synthesize large amounts of online and personal data to produce detailed reports.

The ShadowLeak exploit works through a technique called indirect prompt injection, where an attacker embeds hidden commands in an HTML-formatted email—such as white-on-white text or tiny fonts—that are invisible to the human eye. 

When the Deep Research agent reads the booby-trapped email in the course of fulfilling a standard user request (like “summarize my inbox”), it executes those hidden commands. Sensitive Gmail data, including personal or organizational details, is then exfiltrated directly from OpenAI’s cloud servers to an attacker-controlled endpoint, with no endpoint or user action needed.

Unlike prior attacks (such as AgentFlayer and EchoLeak) that depended on rendering attacker-controlled content on a user’s machine, ShadowLeak operates purely on the server side. All data transmission and agent decisions take place within OpenAI’s infrastructure, bypassing local, enterprise, or network-based security tools. The lack of client or network visibility means the victim remains completely unaware of the compromise and has no chance to intervene, making it a quintessential zero-click threat.

The impact of ShadowLeak is significant, with potential leakage of personally identifiable information (PII), protected health information (PHI), business secrets, legal strategies, and more. It also raises the stakes for regulatory compliance, as such exfiltrations could trigger GDPR, CCPA, or SEC violations, along with serious reputational and financial damage.

Radware reported the vulnerability to OpenAI via the BugCrowd platform on June 18, 2025. OpenAI responded promptly, fixing the issue in early August and confirming that there was no evidence the flaw had been exploited in the wild. 

OpenAI underscored its commitment to strengthening defenses against prompt injection and similar attacks, welcoming continued adversarial testing by security researchers to safeguard emerging AI agent architectures.

OpenAI Patches ChatGPT Gmail Flaw Exploited by Hackers in Deep Research Attacks

 

OpenAI has fixed a security vulnerability that could have allowed hackers to manipulate ChatGPT into leaking sensitive data from a victim’s Gmail inbox. The flaw, uncovered by cybersecurity company Radware and reported by Bloomberg, involved ChatGPT’s “deep research” feature. This function enables the AI to carry out advanced tasks such as web browsing and analyzing files or emails stored in services like Gmail, Google Drive, and Microsoft OneDrive. While useful, the tool also created a potential entry point for attackers to exploit.  

Radware discovered that if a user requested ChatGPT to perform a deep research task on their Gmail inbox, hackers could trigger the AI into executing malicious instructions hidden inside a carefully designed email. These hidden commands could manipulate the chatbot into scanning private messages, extracting information such as names or email addresses, and sending it to a hacker-controlled server. The vulnerability worked by embedding secret instructions within an email disguised as a legitimate message, such as one about human resources processes. 

The proof-of-concept attack was challenging to develop, requiring a detailed phishing email crafted specifically to bypass safeguards. However, if triggered under the right conditions, the vulnerability acted like a digital landmine. Once ChatGPT began analyzing the inbox, it would unknowingly carry out the malicious code and exfiltrate data “without user confirmation and without rendering anything in the user interface,” Radware explained. 

This type of exploit is particularly difficult for conventional security tools to catch. Since the data transfer originates from OpenAI’s own infrastructure rather than the victim’s device or browser, standard defenses like secure web gateways, endpoint monitoring, or browser policies are unable to detect or block it. This highlights the growing challenge of AI-driven attacks that bypass traditional cybersecurity protections. 

In response to the discovery, OpenAI stated that developing safe AI systems remains a top priority. A spokesperson told PCMag that the company continues to implement safeguards against malicious use and values external research that helps strengthen its defenses. According to Radware, the flaw was patched in August, with OpenAI acknowledging the fix in September.

The findings emphasize the broader risk of prompt injection attacks, where hackers insert hidden commands into web content or messages to manipulate AI systems. Both Anthropic and Brave Software recently warned that similar vulnerabilities could affect AI-enabled browsers and extensions. Radware recommends protective measures such as sanitizing emails to remove potential hidden instructions and enhancing monitoring of chatbot activities to reduce exploitation risks.

Antrhopic to use your chats with Claude to train its AI


Antrhopic to use your chats with Claude to train its AI

Anthropic announced last week that it will update its terms of service and privacy policy to allow the use of chats for training its AI model “Claude.” Users of all subscription levels- Claude Free, Max, Pro, and Code subscribers- will be impacted by this new update. Anthropic’s new Consumer Terms and Privacy Policy will take effect from September 28, 2025. 

But users who use Claude under licenses such as Work, Team, and Enterprise plans, Claude Education, and Claude Gov will be exempted. Besides this, third-party users who use the Claude API through Google Cloud’s Vertex AI and Amazon Bedrock will also not be affected by the new policy.

If you are a Claude user, you can delay accepting the new policy by choosing ‘not now’, however, after September 28, your user account will be opted in by default to share your chat transcript for training the AI model. 

Why the new policies?

The new policy has come after the genAI boom, thanks to the massive data that has prompted various tech companies to rethink their update policies (although quietly) and update their terms of service. With this, these companies can use your data to train their AI models or give it out to other companies to improve their AI bots. 

"By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users," Anthropic said.

Concerns around user safety

Earlier this year, in July, Wetransfer, a famous file-sharing platform, fell into controversy when it changed its terms of service agreement, facing immediate backlash from its users and online community. WeTransfer wanted the files uploaded on its platform could be used for improving machine learning models. After the incident, the platform has been trying to fix things by removing “any mention of AI and machine learning from the document,” according to the Indian Express. 

With rising concerns over the use of personal data for training AI models that compromise user privacy, companies are now offering users the option to opt out of data training for AI models.

CISOs fear material losses amid rising cyberattacks


Chief information security officers (CISOs) are worried about the dangers of a cyberattack, and there is an anxiety due to the material losses of data that organizations have suffered in the past year.

According to a report by Proofpoint, the majority of CISOs fear a material cyberattack in the next 12 months. These concerns highlight the increasing risks and cultural shifts among CISOs.

Changing roles of CISOs

“76% of CISOs anticipate a material cyberattack in the next year, with human risk and GenAI-driven data loss topping their concerns,” Proofpoint said. In this situation, corporate stakeholders are trying to get a better understanding of the risks involved when it comes to tech and whether they are safe or not. 

Experts believe that CISOs are being more open about these attacks, thanks to SEC disclosure rules, strict regulations, board expectations, and enquiries. The report surveyed 1,600 CISOs worldwide; all the organizations had more than 1000 employees. 

Doing business is a concern

The study highlights a rising concern about doing business amid incidents of cyberattacks. Although the majority of CISOs are confident about their cybersecurity culture, six out of 10 CISOs said their organizations are not prepared for a cyberattack. The majority of the CISOs were found in favour of paying ransoms to avoid the leak of sensitive data.

AI: Saviour or danger?

AI has risen both as a top concern as well as a top priority for CISOs. Two-thirds of CISOs believe that enabling GenAI tools is a top priority over the next two years, despite the ongoing risks. In the US, however, 80% CISOs worry about possible data breaches through GenAI platforms. 

With adoption rates rising, organizations have started to move from restriction to governance. “Most are responding with guardrails: 67% have implemented usage guidelines, and 68% are exploring AI-powered defenses, though enthusiasm has cooled from 87% last year. More than half (59%) restrict employee use of GenAI tools altogether,” Proofpoint said.

Think Twice Before Uploading Personal Photos to AI Chatbots

 

Artificial intelligence chatbots are increasingly being used for fun, from generating quirky captions to transforming personal photos into cartoon characters. While the appeal of uploading images to see creative outputs is undeniable, the risks tied to sharing private photos with AI platforms are often overlooked. A recent incident at a family gathering highlighted just how easy it is for these photos to be exposed without much thought. What might seem like harmless fun could actually open the door to serious privacy concerns. 

The central issue is unawareness. Most users do not stop to consider where their photos are going once uploaded to a chatbot, whether those images could be stored for AI training, or if they contain personal details such as house numbers, street signs, or other identifying information. Even more concerning is the lack of consent—especially when it comes to children. Uploading photos of kids to chatbots, without their ability to approve or refuse, creates ethical and security challenges that should not be ignored.  

Photos contain far more than just the visible image. Hidden metadata, including timestamps, location details, and device information, can be embedded within every upload. This information, if mishandled, could become a goldmine for malicious actors. Worse still, once a photo is uploaded, users lose control over its journey. It may be stored on servers, used for moderation, or even retained for training AI models without the user’s explicit knowledge. Just because an image disappears from the chat interface does not mean it is gone from the system.  

One of the most troubling risks is the possibility of misuse, including deepfakes. A simple selfie, once in the wrong hands, can be manipulated to create highly convincing fake content, which could lead to reputational damage or exploitation. 

There are steps individuals can take to minimize exposure. Reviewing a platform’s privacy policy is a strong starting point, as it provides clarity on how data is collected, stored, and used. Some platforms, including OpenAI, allow users to disable chat history to limit training data collection. Additionally, photos can be stripped of metadata using tools like ExifTool or by taking a screenshot before uploading. 

Consent should also remain central to responsible AI use. Children cannot give informed permission, making it inappropriate to share their images. Beyond privacy, AI-altered photos can distort self-image, particularly among younger users, leading to long-term effects on confidence and mental health. 

Safer alternatives include experimenting with stock images or synthetic faces generated by tools like This Person Does Not Exist. These provide the creative fun of AI tools without compromising personal data. 

Ultimately, while AI chatbots can be entertaining and useful, users must remain cautious. They are not friends, and their cheerful tone should not distract from the risks. Practicing restraint, verifying privacy settings, and thinking critically before uploading personal photos is essential for protecting both privacy and security in the digital age.

How ChatGPT prompt can allow cybercriminals to steal your Google Drive data


Chatbots and other AI tools have made life easier for threat actors. A recent incident highlighted how ChatGPT can be exploited to obtain API keys and other sensitive data from cloud platforms.

Prompt injection attacks leads to cloud access

Experts have discovered a new prompt injection attack that can turn ChatGPT into a hacker’s best friend in data thefts. Known as AgentFlayer, the exploit uses a single document to hide “secret” prompt instructions that target OpenAI’s chatbot. An attacker can share what appears to be a harmless document with victims through Google Drive, without any clicks.

Zero-click threat: AgentFlayer

AgentFlayer is a “zero-click” threat as it abuses a vulnerability in Connectors, for instance, a ChatGPT feature that connects the assistant to other applications, websites, and services. OpenAI suggests that Connectors supports a few of the world’s most widely used platforms. This includes cloud storage platforms such as Microsoft OneDrive and Google Drive.

Experts used Google Drive to expose the threats possible from chatbots and hidden prompts. 

GoogleDoc used for injecting prompt

The malicious document has a 300-word hidden malicious prompt. The text is size one, formatted in white to hide it from human readers but visible to the chatbot.

The prompt used to showcase AgentFlayer’s attacks prompts ChatGPT to find the victim’s Google Drive for API keys, link them to a tailored URL, and an external server. When the malicious document is shared, the attack is launched. The threat actor gets the hidden API keys when the target uses ChatGPT (the Connectors feature has to be enabled).

Othe cloud platforms at risk too

AgentFlayer is not a bug that only affects the Google Cloud. “As with any indirect prompt injection attack, we need a way into the LLM's context. And luckily for us, people upload untrusted documents into their ChatGPT all the time. This is usually done to summarize files or data, or leverage the LLM to ask specific questions about the document’s content instead of parsing through the entire thing by themselves,” said expert Tamir Ishay Sharbat from Zenity Labs.

“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately, these mitigations aren’t enough. Even safe-looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it,” Zenith Labs said in the report.

DeepSeek Under Investigation Leading to App Store Withdrawals

 


As one of the world's leading AI players, DeepSeek, a chatbot application developed by the Chinese government, has been a formidable force in the artificial intelligence arena since it emerged in January 2025, launching at the top of the app store charts and reshaping conversations in the technology and investment industries. After initially being hailed as a potential "ChatGPT killer" by industry observers, the platform has been the subject of intense scrutiny since its meteoric rise. 

The DeepSeek platform is positioned in the centre of app store removals, cross-border security investigations, and measured enterprise adoption by August 2025. In other words, we are at the intersection of technological advances, infrastructure challenges, and geopolitical issues that may shape the next phase of the evolution of artificial intelligence in the years ahead. 

A significant regulatory development has occurred in South Korea, with the Personal Information Protection Commission confirming that DeepSeek temporarily suspended the download of its chatbot applications while working with local authorities to address privacy concerns and issues regarding DeepSeek's data assets. On Saturday, the South Korean version of Apple's App Store, as well as Google Play in South Korea, were taken down from their respective platforms, following an agreement with the company to enhance its privacy protection measures before they were relaunched.

It has been emphasised that, although existing mobile users and personal computer users are not affected, officials are urging caution on behalf of the commission; Nam Seok, director of the investigation division, has advised users to remove the app or to refrain from sharing personal information until the issues have been addressed. 

An investigation by Microsoft's security team has revealed that individuals reportedly linked to DeepSeek have been transferring substantial amounts of data using OpenAI's application programming interface (API), which is a core channel for developers and enterprises to integrate OpenAI technology into their products and services. Having become OpenAI's biggest shareholder, Microsoft flagged this unusual activity, triggering a review internally. 

There has been a meteoric rise by DeepSeek in recent days, and the Chinese artificial intelligence startup has emerged as an increasingly prominent competitor to established U.S. companies, including ChatGPT and Claude, whose AI assistant is currently more popular than ChatGPT. On Monday, as a result of a plunge in technology sector stock prices on Monday, the AI assistant surged to overtake ChatGPT in the U.S. App Store downloads. 

There has been growing international scrutiny surrounding the DeepSeek R1 chatbot, which has recently been removed from Apple’s App Store and Google Play in South Korea amid mounting international scrutiny. This follows an admission by the Hangzhou-based company that it did not comply with the laws regulating personal data privacy.

As DeepSeek’s R1 chatbot is lauded as having advanced capabilities at a fraction of its Western competitors’ cost, its data handling practices are being questioned sharply as well. Particularly, how user information is stored on secure servers in the People’s Republic of China has been criticised by the US and others. The Personal Information Protection Commission of South Korea confirmed that the app had been removed from the local app stores at 6 p.m. on Monday. 

In a statement released on Saturday morning (900 GMT), the commission said it had suspended the service due to violations of domestic data protection laws. Existing users can continue using the service, but the commission has urged the public not to provide personal information until the investigation is completed.

According to the PIPC, DeepSeek must make substantial changes so that it can meet Korean privacy standards. A shortcoming that DeepSeek has acknowledged is this. In addition, data security professor Youm Heung-youl, from Soonchunhyang University, further noted that despite the company's privacy policies relating to European markets and other markets, the same policy does not exist for South Korean users, who are subject to a different localised framework. 

In response to an inquiry by Italy's data protection authority, the company has taken steps to ensure the app takes the appropriate precautions with regard to the data that it collects, the sources from which it obtains it, its intended use, legal justifications, and its storage in China. 

While it is unclear to what extent DeepSeek initiated the removal or whether the app store operators took an active role in the process, the development follows the company's announcement last month of its R1 reasoning model, an open-source alternative to ChatGPT, positioned as an alternative to ChatGPT for more cost-effectiveness. 

Government concerns over data privacy have been heightened by the model's rapid success, which led to similar inquiries in Ireland and Italy as well as a cautionary directive from the United States Navy indicating that DeepSeek AI cannot be used because of its origin and operation, posing security and ethical risks. The controversy revolves around the handling and storage of user data at the centre of this controversy.

It has been reported that all user information, including chat histories and other personal information of users, has been transferred to China and stored on servers there. A more privacy-conscious version of DeepSeek's model may be run locally on a desktop computer, though the performance of this offline version is significantly slower than the cloud-connected version that can be accessed on Apple and Android phones. 

DeepSeek's data practices have drawn escalating regulatory attention across a wide range of jurisdictions, including the United States. According to the privacy policies of the company, personal data, including user requests and uploaded files, is stored on servers located in China, which the company also claims it does not store in the U.S. 

As Ulrich Kamp stated in his statement, DeepSeek has not provided credible assurances that data belonging to Germans will be protected to the same extent as data belonging to European Union citizens. He also pointed out that Chinese authorities have access to personal data held by domestic companies with extensive access rights. 

It was Kamp's office's request in May for DeepSeek to either meet EU data transfer requirements or voluntarily withdraw its app, but the company did not do so. The controversy follows DeepSeek's January debut when it said that it had created an AI model that rivalled the ones of other American companies like OpenAI, but at a fraction of the cost. 

Over the past few years, the app has been banned in Italy due to concerns about transparency, as well as restricted access to government devices in the Netherlands, Belgium, and Spain, and the consumer rights organisation OCU has called for an official investigation into the matter. After reports from Reuters alleging DeepSeek's involvement in China's military and intelligence activities, lawmakers are preparing legislation that will prohibit federal agencies from using artificial intelligence models developed by Chinese companies. 

According to the Italian data protection authority, the Guarantor for the Protection of Personal Data, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence have been requested to provide detailed information concerning the collection and processing of their data. A regulator has requested clarification about which personal data is collected, where the data originates, what the legal basis is for processing, and whether it is stored on Chinese servers. 

There are also other inquiries peopl would like to make regarding the training methodologies used for DeepSeek's artificial intelligence models, such as whether web scraping is involved, and how both registered as well as unregistered users are informed of this data collection. DeepSeek has 20 days to reply to these inquiries. 

As Forrester analysts have warned, the app has been widely adopted — it has been downloaded millions of times, which means that large amounts of potentially sensitive information are being uploaded and processed as a result. Based on DeepSeek's own privacy policy, the company has noted that it may collect user input, audio prompts, uploaded files, feedback, chat histories, and other content for training purposes, and may share these details with law enforcement officials or public authorities as needed. 

Although DeepSeek's models remain freely accessible throughout the world, despite regulatory bans and investigations intensifying in China, developers continue to download, adapt, and deploy them, sometimes independent of the official app or Chinese infrastructure, regardless of the official ban. The technology has become increasingly important in industry analysis, not just as an isolated threat, but as part of a broader shift toward a hardware-efficient, open-weight AI architecture, a trend which has been influenced by players such as Mistral, OpenHermes, and Elon Musk's Grok initiative, among many others.

To join the open-weight reasoning movement, OpenAI has released two open-weight reasoning models, GPTT-OSS-120B and GPTT-OSS-20B, which have been deployed within their infrastructure. During the rapid evolution of the artificial intelligence market, the question is no longer whether open-source AI can compete with existing incumbents—in fact, it already has. 

It is much more pressing to decide who will define the governance frameworks that will earn the trust of the public at a time when artificial intelligence, infrastructure control, and national sovereignty are converging at unprecedented rates. Despite the growing complexity of regulating advanced artificial intelligence in an interconnected, highly competitive global market, the ongoing scrutiny surrounding DeepSeek underscores the importance of governing advanced artificial intelligence. 

As a disruptive technological breakthrough evolved, it became a geopolitical and regulatory hot-button, demonstrating how privacy, security, and data sovereignty have now become a major issue in the race against artificial intelligence. Policymakers will find this case extremely significant because it emphasizes the need for coherent international frameworks that can address cross-border data governance and balance innovation with accountability, as well as addressing cross-border data governance. 

Whether it is enterprises or individuals, it serves to remind them that despite the benefits of cutting-edge AI tools, they come with inherent risks, risks that need to be carefully evaluated before they are adopted. A significant part of the future will be the blurring of the boundaries between local oversight and global accessibility as AI models become increasingly lightweight, hardware-efficient, and widely deployable. 

As a result, trust will not be primarily dependent on technological capability, but also on transparency, verifiable safeguards, and the willingness of developers to adhere to ethical and legal standards in the markets they are trying to serve in this environment. It is clear from the ongoing scrutiny surrounding DeepSeek that in a highly competitive global market, regulating advanced artificial intelligence is becoming increasingly complicated as it becomes increasingly interconnected. 

The initial breakthrough in technology has evolved into a geopolitical and regulatory flashpoint, demonstrating how questions of privacy, security, and data sovereignty have become a crucial element in the race toward artificial intelligence. It is clear from this case that policymakers have a pressing need for international frameworks that can address cross-border data governance and balance innovation with accountability. 

For enterprises and individuals alike, the case serves as a reminder that embracing cutting-edge artificial intelligence tools comes with inherent risks and that the risks must be carefully weighed before adoption can be made. It will become increasingly difficult to distinguish between local oversight and global accessibility as AI models become more open-minded, hardware-efficient, and widely deployable as they become more open-hearted, hardware-efficient, and widely deployable. 

In such a situation, trust will not be solely based on technological capabilities, but also on transparency, verifiable safeguards, as well as the willingness of developers to operate within the ethical and legal guidelines of the market in which they seek to compete.

A Massive 800% Rise in Data Breach Incidents in First Half of 2025


Cybersecurity experts have warned of a significant increase in identity-based attacks, following the revelation that 1.8 billion credentials were stolen in the first half of 2025, representing an 800% increase compared to the previous six months.

Data breach attacks are rising rapidly

Flashpoint’s Global Threat Intelligence Index report is based on more than 3.6 petabytes of data studied by the experts. Hackers stole credentials from 5.8 million compromised devices, according to the report. The significant rise is problematic as stolen credentials can give hackers access to organizational data, even when the accounts are protected by multi-factor authentication (MFA).

The report also includes details that concern security teams.

About the bugs

Until June 2025, the firm has found over 20,000 exposed bugs, 12,200 of which haven’t been reported in the National Vulnerability Database (NVD). This means that security teams are not informed. 7000 of these have public exploits available, exposing organizations to severe threats.

According to experts, “The digital attack surface continues to expand, and the volume of disclosed vulnerabilities is growing at a record pace – up by a staggering 246% since February 2025.” “This explosion, coupled with a 179% increase in publicly available exploit code, intensifies the pressure on security teams. It’s no longer feasible to triage and remediate every vulnerability.”

Surge in ransomware attacks

Both these trends can cause ransomware attacks, as early access mostly comes through vulnerability exploitation or credential hacking. Total reports of breaches have increased by 179% since 2024, manufacturing (22%), technology (18%), and retail (13%) have been hit the most. The report has also disclosed 3104 data breaches in the first half of this year, linked to 9.5 billion hacked records.

2025 to be record year for data breaches

Flashpoint reports that “Over the past four months, data breaches surged by 235%, with unauthorized access accounting for nearly 78% of all reported incidents. Data breaches are both the genesis and culmination of threat actor campaigns, serving as a source of continuous fuel for cybercrime activity.” 

In June, the Identity Theft Resource Center (ITRC) warned that 2025 could become a record year for data cyberattacks in the US.

AI-supported Cursor IDE Falls Victim to Prompt Injection Attacks


Experts have found a bug called CurXecute that is present in all variants of the AI-supported code editor Cursor and can be compromised to run remote code execution (RCE), along with developer privileges. 

About the bug

The security bug is now listed as CVE-2025-54135 and can be exploited by giving the AI agent a malicious prompt to activate threat actor control commands. 

The Cursor combined development environment (IDE) relies on AI agents to allow developers to code quicker and more effectively, helping them to connect with external systems and resources using Model Context Protocol (MCP).

According to the experts, a threat actor effectively abusing the CurXecute bug could trigger ransomware and ransomware data theft attacks. 

Prompt-injection 

CurXecute shares similarities to the EchoLeak bug in Microsoft 365 CoPilot that hackers can use to extort sensitive data without interacting with the users. 

After finding and studying EchoLeak, the experts from the cybersecurity company Aim Security found that hackers can even exploit the local AI agent.

Cursor IDE supports the MCP open-standard framework, which increases an agent’s features by connecting it to external data tools and sources.

Agent exploitation

But the experts have warned that doing so can exploit the agent, as it is open to external, suspicious data that can impact its control flow. The threat actor can take advantage by hacking the agent’s session and features to work as a user.

According to the experts, Cursor doesn’t need permission to run new entries to the ~/.cursor/mcp.json file. When the target opens the new conversation and tells the agent to summarize the messages, the shell payload deploys on the device without user authorization.

“Cursor allows writing in-workspace files with no user approval. If the file is a dotfile, editing it requires approval, but creating one if it doesn't exist doesn't. Hence, if sensitive MCP files, such as the .cursor/mcp.json file, don't already exist in the workspace, an attacker can chain an indirect prompt injection vulnerability to hijack the context to write to the settings file and trigger RCE on the victim without user approval,” Cursor said in a report.

OpenAI Launching AI-Powered Web Browser to Rival Chrome, Drive ChatGPT Integration

 

OpenAI is reportedly developing its own web browser, integrating artificial intelligence to offer users a new way to explore the internet. According to sources cited by Reuters, the tool is expected to be unveiled in the coming weeks, although an official release date has not yet been announced. With this move, OpenAI seems to be stepping into the competitive browser space with the goal of challenging Google Chrome’s dominance, while also gaining access to valuable user data that could enhance its AI models and advertising potential. 

The browser is expected to serve as more than just a window to the web—it will likely come packed with AI features, offering users the ability to interact with tools like ChatGPT directly within their browsing sessions. This integration could mean that AI-generated responses, intelligent page summaries, and voice-based search capabilities are no longer separate from web activity but built into the browsing experience itself. Users may be able to complete tasks, ask questions, and retrieve information all within a single, unified interface. 

A major incentive for OpenAI is the access to first-party data. Currently, most of the data that fuels targeted advertising and search engine algorithms is captured by Google through Chrome. By creating its own browser, OpenAI could tap into a similar stream of data—helping to both improve its large language models and create new revenue opportunities through ad placements or subscription services. While details on privacy controls are unclear, such deep integration with AI may raise concerns about data protection and user consent. 

Despite the potential, OpenAI faces stiff competition. Chrome currently holds a dominant share of the global browser market, with nearly 70% of users relying on it for daily web access. OpenAI would need to provide compelling reasons for people to switch—whether through better performance, advanced AI tools, or stronger privacy options. Meanwhile, other companies are racing to enter the same space. Perplexity AI, for instance, recently launched a browser named Comet, giving early adopters a glimpse into what AI-first browsing might look like. 

Ultimately, OpenAI’s browser could mark a turning point in how artificial intelligence intersects with the internet. If it succeeds, users might soon navigate the web in ways that are faster, more intuitive, and increasingly guided by AI. But for now, whether this approach will truly transform online experiences—or simply add another player to the browser wars—remains to be seen.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability

 

Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or Deepseek, especially for those concerned about data privacy, internet dependency, and speed. Though cloud services promise protections through subscription terms, the reality remains uncertain. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties. 

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer devices such as Intel Core Ultra, Qualcomm Snapdragon X Elite, and Apple’s M-series chips (M1–M4) come equipped with NPUs built for this purpose. With one of these devices, you can run open-source AI models like DeepSeek‑R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like OpenWeb UI, you can replicate the experience of cloud chatbots entirely offline.  

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. However, be aware that model files can be quite large (often 20 GB or more), and without NPU support, performance may be sluggish and battery life will suffer.  

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, OpenWeb UI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.

Google Gemini Bug Exploits Summaries for Phishing Scams


False AI summaries leading to phishing attacks

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.

Google Gemini for Workplace can be compromised to create email summaries that look real but contain harmful instructions or warnings that redirect users to phishing websites without using direct links or attachments. 

Similar attacks were reported in 2024 and afterwards; safeguards were pushed to stop misleading responses. However, the tactic remains a problem for security experts. 

Gemini for attack

A prompt-injection attack on the Gemini model was revealed via cybersecurity researcher Marco Figueoa, at 0din, Mozilla’s bug bounty program for GenAI tools. The tactic creates an email with a hidden directive for Gemini. The threat actor can hide malicious commands in the message body text at the end via CSS and HTML, which changes the font size to zero and color to white. 

According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”

Gmail does not render the malicious instruction as there are no attachments or links present, and the message may reach the victim’s inbox. If the receiver opens the email and asks Gemini to make a summary of the received mail, the AI tool will parse the invisible directive and create the summary. Figueroa provides an example of Gemini following hidden prompts, accompanied by a security warning that the victim’s Gmail password and phone number may be compromised.

Impact

Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, changing one exploited SaaS account into hundreds of thousands of phishing beacons.

Cross-product surface: The same tactics applies to Gemini in Slides, Drive search, Docs and any workplace where the model is getting third-party content.

According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”

WhatsApp Under Fire for AI Update Disrupting Group Communication


The new artificial intelligence capability introduced by WhatsApp aims to transform the way users interact with their conversations through sophisticated artificial intelligence. It uses advanced technology from Meta AI to provide a concise summary of unread messages across individual chats as well as group chats, which is referred to as Message Summaries. 

The tool was created to help users stay informed in increasingly active chat environments by automatically compiling key points and contextual highlights, allowing them to catch up in just a few clicks without having to scroll through lengthy message histories to catch up. The company claims all summaries are generated privately, so that confidentiality can be maintained and the process of use is as simple as possible for the user. 

WhatsApp announces its intention of integrating artificial intelligence-driven solutions into its app to improve user convenience as well as reshape communication habits for its global community with this rollout, sparking both excitement and controversy as a result. Despite being announced last month, WhatsApp’s innovative Message Summaries feature has moved from pilot testing to a full-scale rollout after successfully passing pilot testing. 

Having refined the tool and collected feedback from its users, it is now considered to be stable and has been formally launched for wider use. In the initial phase, the feature is only available to US users and is restricted to the English language at this time. This indicates that WhatsApp is cautious when it comes to deploying large-scale artificial intelligence. 

Nevertheless, the platform announced plans to extend its availability to more regions at some point in the future, along with the addition of multilingual support. The phased rollout strategy emphasises that the company is focused on ensuring that the technology is reliable and user-friendly before it is extended to the vast global market. 

It is WhatsApp's intention to focus on a controlled release so as to gather more insights about users' interaction with the AI-generated conversation summaries, as well as to fine-tune the experience before expanding internationally. As a result of WhatsApp's inability to provide an option for enabling or concealing the Message Summaries feature, there has been a significant amount of discontent among users. 

Despite the fact that Meta has refused to clarify the reason regarding the lack of an opt-out mechanism or why users were not offered the opportunity to opt out of the AI integration, they have not provided any explanation so far. As concerning as the technology itself is, the lack of transparency has been regarded equally as a cause for concern by many, raising questions about the control people have over their personal communications. As a result of these limitations, some people have attempted to circumvent the chatbot by switching to a WhatsApp Business account as a response. 

In addition, several users have commented that this strategy removed the AI functionality from Meta AI, but others have noted that the characteristic blue circle, which indicates Meta AI's presence, still appeared, which exacerbated the dissatisfaction and uncertainty. 

The Meta team hasn’t confirmed whether the business-oriented version of WhatsApp will continue to be exempt from AI integration for years to come. This rollout also represents Meta’s broader goal of integrating generative AI into all its platforms, which include Facebook and Instagram, into its ecosystem. 

Towards the end of 2024, Meta AI was introduced for the first time in Facebook Messenger in the United Kingdom, followed by a gradual extension into WhatsApp as part of a unified vision to revolutionise digital interactions. However, many users have expressed their frustration with this feature because it often feels intrusive and ultimately is useless, despite these ambitions. 

The chatbot appears to activate frequently when individuals are simply searching for past conversations or locating contacts, which results in obstructions rather than streamlining the experience. According to the initial feedback received, AI-generated responses are frequently perceived as superficial, repetitive, or even irrelevant to the conversation's context, as well as generating a wide range of perceptions of their value.

A Meta AI platform has been integrated directly into WhatsApp, unlike standalone platforms such as ChatGPT and Google Gemini, which are separately accessible by users. WhatsApp is a communication application that is used on a daily basis to communicate both personally and professionally. Because the feature was integrated without explicit consent and there were doubts about its usefulness, many users are beginning to wonder whether such pervasive AI assistance is really necessary or desirable. 

It has also been noted that there is a growing chorus of criticism about the inherent limitations of artificial intelligence in terms of reliably interpreting human communication. Many users have expressed their scepticism about AI's ability to accurately condense even one message within an active group chat, let alone synthesise hundreds of exchanges. It is not the first time Apple has faced similar challenges; Apple has faced similar challenges in the past when it had to pull an AI-powered feature that produced unintended and sometimes inaccurate summaries. 

As of today, the problem of "hallucinations," which occur in the form of factually incorrect or contextually irrelevant content generated by artificial intelligence, remains a persistent problem across nearly every generative platform, including commonly used platforms like ChatGPT. Aside from that, artificial intelligence continues to struggle with subtleties such as humour, sarcasm, and cultural nuance-aspects of natural conversation that are central to establishing a connection. 

In situations where the AI is not trained to recognise offhand or joking remarks, it can easily misinterpret those remarks. This leads to summaries that are alarmist, distorted, or completely inaccurate, as compared to human recipients' own. Due to the increased risk of misrepresentation, users who rely on WhatsApp for authentic, nuanced communication with colleagues, friends, and family are becoming more apprehensive than before. 

A philosophical objection has been raised beyond technical limitations, stating that the act of participating in a conversation is diminished by substituting real engagement for machine-generated recaps. There is a shared sentiment that the purpose of group chats lies precisely in the experience of reading and responding to the genuine voices of others while scrolling through a backlog of messages. 

However, there is a consensus that it is exhausting to scroll through such a large backlog of messages. It is believed that the introduction of Message Summaries not only threatens clear communication but also undermines the sense of personal connection that draws people into these digital communities in the first place, which is why these critics are concerned. 

In order to ensure user privacy, WhatsApp has created the Message Summaries feature using a new framework known as Private Processing, which is designed to safeguard user privacy. Meta and WhatsApp are specifically ensuring that neither the contents of their conversations nor the summaries that the AI system produces are able to be accessed by them, which is why this approach was developed. 

Instead of sending summaries to external servers, the platform is able to generate them locally on the users' devices, reinforcing its commitment to privacy. Each summary, presented in a clear bullet point format, is clearly labelled as "visible only to you," emphasising WhatsApp's privacy-centric design philosophy behind the feature as well. 

Message Summaries have shown to be especially useful in group chats in which the amount of unread messages is often overwhelming, as a result of the large volume of unread messages. With this tool, users are able to remain informed without having to read every single message, because lengthy exchanges are distilled into concise snapshots that enable them to stay updated without having to scroll through each and every individual message. 

The feature is disabled by default and needs to be activated manually, which addresses privacy concerns. Upon activating the feature, eligible chats display a discreet icon, signalling the availability of a summary without announcing it to other participants. Meta’s confidential computing infrastructure is at the core of its system, and in principle, it is comparable to Apple’s private cloud computing architecture. 

A Trusted Execution Environment (TEE) provides a foundation for Private Processing, ensuring that confidential information is handled in an effective manner, with robust measures against tampering, and clear mechanisms for ensuring transparency are in place.

A system's architecture is designed to shut down automatically or to generate verifiable evidence of the intrusion whenever any attempt is made to compromise the security assurances of the system. As well as supporting independent third-party audits, Meta has intentionally designed the framework in such a way that it will remain stateless, forward secure, and immune to targeted attacks so that Meta's claims about data protection can be verified. 

Furthermore, advanced chat privacy settings are included as a complement to these technical safeguards, as they allow users to select the conversations that will be eligible for AI-generated summaries and thus offer granular control over the use of the feature. Moreover, when a user decides to enable summaries in a chat, no notification is sent to other participants, allowing for greater discretion on the part of other participants.

There is currently a phase in which Message Summaries are being gradually introduced to users in the United States. They can only be read in English at the moment. There has been confirmation by Meta that the feature will be expanded to additional regions and supported in additional languages shortly, as part of their broader effort to integrate artificial intelligence into all aspects of their service offerings. 

As WhatsApp intensifies its efforts to embed AI capabilities deeper and deeper into everyday communication, Message Summaries marks a pivotal moment in the evolution of relationships between technology and human interaction as the company accelerates its ambition to involve AI capabilities across the entire enterprise. 

Even though the company has repeatedly reiterated that it is committed to privacy, transparency, and user autonomy, the response to this feature has been polarised, which highlights the challenges associated with incorporating artificial intelligence in spaces where trust, nuance, and human connection are paramount. 

It is a timely reminder that, for both individuals and organisations, the growth of convenience-driven automation impacts the genuine social fabric that is a hallmark of digital communities and requires a careful assessment. 

As platforms evolve, stakeholders would do well to remain vigilant with the changes to platform policies, evaluate whether such tools align with the communication values they hold dear, and consider offering structured feedback in order for these technologies to mature with maturity. As artificial intelligence continues to redefine the contours of messaging, users will need to be open to innovation while also expressing critical thought about the long-term implications on privacy, comprehension, and even the very nature of meaningful dialogue as AI use continues to grow in popularity.

OpenAI Rolls Out Premium Data Connections for ChatGPT Users


The ChatGPT solution has become a transformative artificial intelligence solution widely adopted by individuals and businesses alike seeking to improve their operations. Developed by OpenAI, this sophisticated artificial intelligence platform has been proven to be very effective in assisting users with drafting compelling emails, developing creative content, or conducting complex data analysis by streamlining a wide range of workflows. 

OpenAI is continuously enhancing ChatGPT's capabilities through new integrations and advanced features that make it easier to integrate into the daily workflows of an organisation; however, an understanding of the platform's pricing models is vital for any organisation that aims to use it efficiently on a day-to-day basis. A business or an entrepreneur in the United Kingdom that is considering ChatGPT's subscription options may find that managing international payments can be an additional challenge, especially when the exchange rate fluctuates or conversion fees are hidden.

In this context, the Wise Business multi-currency credit card offers a practical solution for maintaining financial control as well as maintaining cost transparency. This payment tool, which provides companies with the ability to hold and spend in more than 40 currencies, enables them to settle subscription payments without incurring excessive currency conversion charges, which makes it easier for them to manage budgets as well as adopt cutting-edge technology. 

A suite of premium features has been recently introduced by OpenAI that aims to enhance the ChatGPT experience for subscribers by enhancing its premium features. There is now an option available to paid users to use advanced reasoning models that include O1 and O3, which allow users to make more sophisticated analytical and problem-solving decisions. 

The subscription comes with more than just enhanced reasoning; it also includes an upgraded voice mode that makes conversational interactions more natural, as well as improved memory capabilities that allow the AI to retain context over the course of a long period of time. It has also been enhanced with the addition of a powerful coding assistant designed to help developers automate workflows and speed up the software development process. 

To expand the creative possibilities even further, OpenAI has adjusted token limits, which allow for greater amounts of input and output text and allow users to generate more images without interruption. In addition to expedited image generation via a priority queue, subscribers have the option of achieving faster turnaround times during high-demand periods. 

In addition to maintaining full access to the latest models, paid accounts are also provided with consistent performance, as they are not forced to switch to less advanced models when server capacity gets strained-a limitation that free users may still have to deal with. While OpenAI has put in a lot of effort into enriching the paid version of the platform, the free users have not been left out. GPT-4o has effectively replaced the older GPT-4 model, allowing complimentary accounts to take advantage of more capable technology without having to fall back to a fallback downgrade. 

In addition to basic imaging tools, free users will also receive the same priority in generation queues as paid users, although they will also have access to basic imaging tools. With its dedication to making AI broadly accessible, OpenAI has made additional features such as ChatGPT Search, integrated shopping assistance, and limited memory available free of charge, reflecting its commitment to making AI accessible to the public. 

ChatGPT's free version continues to be a compelling option for people who utilise the software only sporadically-perhaps to write occasional emails, research occasionally, and create simple images. In addition, individuals or organisations who frequently run into usage limits, such as waiting for long periods of time for token resettings, may find that upgrading to a paid plan is an extremely beneficial decision, as it unlocks uninterrupted access as well as advanced capabilities. 

In order to transform ChatGPT into a more versatile and deeply integrated virtual assistant, OpenAI has introduced a new feature, called Connectors, which is designed to transform the platform into an even more seamless virtual assistant. It has been enabled by this new feature for ChatGPT to seamlessly interface with a variety of external applications and data sources, allowing the AI to retrieve and synthesise information from external sources in real time while responding to user queries. 

With the introduction of Connectors, the company is moving forward towards providing a more personal and contextually relevant experience for our users. In the case of an upcoming family vacation, for example, ChatGPT can be instructed by users to scan their Gmail accounts in order to compile all correspondence regarding the trip. This allows users to streamline travel plans rather than having to go through emails manually. 

With its level of integration, Gemini is similar to its rivals, which enjoy advantages from Google's ownership of a variety of popular services such as Gmail and Calendar. As a result of Connectors, individuals and businesses will be able to redefine how they engage with AI tools in a new way. OpenAI intends to create a comprehensive digital assistant by giving ChatGPT secure access to personal or organisational data that is residing across multiple services, by creating an integrated digital assistant that anticipates needs, surfaces critical insights, streamlines decision-making processes, and provides insights. 

There is an increased demand for highly customised and intelligent assistance, which is why other AI developers are likely to pursue similar integrations to remain competitive. The strategy behind Connectors is ultimately to position ChatGPT as a central hub for productivity — an artificial intelligence that is capable of understanding, organising, and acting upon every aspect of a user’s digital life. 

In spite of the convenience and efficiency associated with this approach, it also illustrates the need to ensure that personal information remains protected while providing robust data security and transparency in order for users to take advantage of these powerful integrations as they become mainstream. In its official X (formerly Twitter) account, OpenAI has recently announced the availability of Connectors that can integrate with Google Drive, Dropbox, SharePoint, and Box as part of ChatGPT outside of the Deep Research environment. 

As part of this expansion, users will be able to link their cloud storage accounts directly to ChatGPT, enabling the AI to retrieve and process their personal and professional data, enabling it to create responses on their own. As stated by OpenAI in their announcement, this functionality is "perfect for adding your own context to your ChatGPT during your daily work," highlighting the company's ambition of making ChatGPT more intelligent and contextually aware. 

It is important to note, however, that access to these newly released Connectors is confined to specific subscriptions and geographical restrictions. A ChatGPT Pro subscription, which costs $200 per month, is exclusive to ChatGPT Pro subscribers only and is currently available worldwide, except for the European Economic Area (EEA), Switzerland and the United Kingdom. Consequently, users whose plans are lower-tier, such as ChatGPT Plus subscribers paying $20 per month, or who live in Europe, cannot use these integrations at this time. 

Typically, the staggered rollout of new technologies is a reflection of broader challenges associated with regulatory compliance within the EU, where stricter data protection regulations as well as artificial intelligence governance frameworks often delay their availability. Deep Research remains relatively limited in terms of the Connectors available outside the company. However, Deep Research provides the same extensive integration support as Deep Research does. 

In the ChatGPT Plus and Pro packages, users leveraging Deep Research capabilities can access a much broader array of integrations — for example, Outlook, Teams, Gmail, Google Drive, and Linear — but there are some restrictions on regions as well. Additionally, organisations with Team plans, Enterprise plans, or Educational plans have access to additional Deep Research features, including SharePoint, Dropbox, and Box, which are available to them as part of their Deep Research features. 

Additionally, OpenAI is now offering the Model Context Protocol (MCP), a framework which allows workspace administrators to create customised Connectors based on their needs. By integrating ChatGPT with proprietary data systems, organizations can create secure, tailored integrations, enabling highly specialized use cases for internal workflows and knowledge management that are highly specialized. 

With the increasing adoption of artificial intelligence solutions by companies, it is anticipated that the catalogue of Connectors will rapidly expand, offering users the option of incorporating external data sources into their conversations. The dynamic nature of this market underscores that technology giants like Google have the advantage over their competitors, as their AI assistants, such as Gemini, can be seamlessly integrated throughout all of their services, including the search engine. 

The OpenAI strategy, on the other hand, relies heavily on building a network of third-party integrations to create a similar assistant experience for its users. It is now generally possible to access the new Connectors in the ChatGPT interface, although users will have to refresh their browsers or update the app in order to activate the new features. 

As AI-powered productivity tools continue to become more widely adopted, the continued growth and refinement of these integrations will likely play a central role in defining the future of AI-powered productivity tools. A strategic approach is recommended for organisations and professionals evaluating ChatGPT as generative AI capabilities continue to mature, as it will help them weigh the advantages and drawbacks of deeper integration against operational needs, budget limitations, and regulatory considerations that will likely affect their decisions.

As a result of the introduction of Connectors and the advanced subscription tiers, people are clearly on a trajectory toward more personalised and dynamic AI assistance, which is able to ingest and contextualise diverse data sources. As a result of this evolution, it is also becoming increasingly important to establish strong frameworks for data governance, to establish clear controls for access to the data, and to ensure adherence to privacy regulations.

If companies intend to stay competitive in an increasingly automated landscape by investing early in these capabilities, they can be in a better position to utilise the potential of AI and set clear policies that balance innovation with accountability by leveraging the efficiencies of AI in the process. In the future, the organisations that are actively developing internal expertise, testing carefully selected integrations, and cultivating a culture of responsible AI usage will be the most prepared to fully realise the potential of artificial intelligence and to maintain a competitive edge for years to come.

Navigating AI Security Risks in Professional Settings


 

There is no doubt that generative artificial intelligence is one of the most revolutionary branches of artificial intelligence, capable of producing entirely new content across many different types of media, including text, image, audio, music, and even video. As opposed to conventional machine learning models, which are based on executing specific tasks, generative AI systems learn patterns and structures from large datasets and are able to produce outputs that aren't just original, but are sometimes extremely realistic as well. 

It is because of this ability to simulate human-like creativity that generative AI has become an industry leader in technological innovation. Its applications go well beyond simple automation, touching almost every sector of the modern economy. As generative AI tools reshape content creation workflows, they produce compelling graphics and copy at scale in a way that transforms the way content is created. 

The models are also helpful in software development when it comes to generating code snippets, streamlining testing, and accelerating prototyping. AI also has the potential to support scientific research by allowing the simulation of data, modelling complex scenarios, and supporting discoveries in a wide array of areas, such as biology and material science.

Generative AI, on the other hand, is unpredictable and adaptive, which means that organisations are able to explore new ideas and achieve efficiencies that traditional systems are unable to offer. There is an increasing need for enterprises to understand the capabilities and the risks of this powerful technology as adoption accelerates. 

Understanding these capabilities has become an essential part of staying competitive in a digital world that is rapidly changing. In addition to reproducing human voices and creating harmful software, generative artificial intelligence is rapidly lowering the barriers for launching highly sophisticated cyberattacks that can target humans. There is a significant threat from the proliferation of deepfakes, which are realistic synthetic media that can be used to impersonate individuals in real time in convincing ways. 

In a recent incident in Italy, cybercriminals manipulated and deceived the Defence Minister Guido Crosetto by leveraging advanced audio deepfake technology. These tools demonstrate the alarming ability of such tools for manipulating and deceiving the public. Also, a finance professional recently transferred $25 million after being duped into transferring it by fraudsters using a deepfake simulation of the company's chief financial officer, which was sent to him via email. 

Additionally, the increase in phishing and social engineering campaigns is concerning. As a result of the development of generative AI, adversaries have been able to craft highly personalised and context-aware messages that have significantly enhanced the quality and scale of these attacks. It has now become possible for hackers to create phishing emails that are practically indistinguishable from legitimate correspondence through the analysis of publicly available data and the replication of authentic communication styles. 

Cybercriminals are further able to weaponise these messages through automation, as this enables them to create and distribute a huge volume of tailored lures that are tailored to match the profile and behaviour of each target dynamically. Using the power of AI to generate large language models (LLMs), attackers have also revolutionised malicious code development. 

A large language model can provide attackers with the power to design ransomware, improve exploit techniques, and circumvent conventional security measures. Therefore, organisations across multiple industries have reported an increase in AI-assisted ransomware incidents, with over 58% of them stating that the increase has been significant.

It is because of this trend that security strategies must be adapted to address threats that are evolving at machine speed, making it crucial for organisations to strengthen their so-called “human firewalls”. While it has been demonstrated that employee awareness remains an essential defence, studies have indicated that only 24% of organisations have implemented continuous cyber awareness programs, which is a significant amount. 

As companies become more sophisticated in their security efforts, they should update training initiatives to include practical advice on detecting hyper-personalised phishing attempts, detecting subtle signs of deepfake audio and identifying abnormal system behaviours that can bypass automated scanners in order to protect themselves from these types of attacks. Providing a complement to human vigilance, specialised counter-AI solutions are emerging to mitigate these risks. 

In order to protect against AI-driven phishing campaigns, DuckDuckGoose Suite, for example, uses behavioural analytics and threat intelligence to prevent AI-based phishing campaigns from being initiated. Tessian, on the other hand, employs behavioural analytics and threat intelligence to detect synthetic media. As well as disrupting malicious activity in real time, these technologies also provide adaptive coaching to assist employees in developing stronger, instinctive security habits in the workplace. 
Organisations that combine informed human oversight with intelligent defensive tools will have the capacity to build resilience against the expanding arsenal of AI-enabled cyber threats. Recent legal actions have underscored the complexity of balancing AI use with privacy requirements. It was raised by OpenAI that when a judge ordered ChatGPT to keep all user interactions, including deleted chats, they might inadvertently violate their privacy commitments if they were forced to keep data that should have been wiped out.

AI companies face many challenges when delivering enterprise services, and this dilemma highlights the challenges that these companies face. OpenAI and Anthropic are platforms offering APIs and enterprise products that often include privacy safeguards; however, individuals using their personal accounts are exposed to significant risks when handling sensitive information that is about them or their business. 

AI accounts should be managed by the company, users should understand the specific privacy policies of these tools, and they should not upload proprietary or confidential materials unless specifically authorised by the company. Another critical concern is the phenomenon of AI hallucinations that have occurred in recent years. This is because large language models are constructed to predict language patterns rather than verify facts, which can result in persuasively presented, but entirely fictitious content.

As a result of this, there have been several high-profile incidents that have resulted, including fabricated legal citations in court filings, as well as invented bibliographies. It is therefore imperative that human review remains part of professional workflows when incorporating AI-generated outputs. Bias is another persistent vulnerability.

Due to the fact that artificial intelligence models are trained on extensive and imperfect datasets, these models can serve to mirror and even amplify the prejudices that exist within society as a whole. As a result of the system prompts that are used to prevent offensive outputs, there is an increased risk of introducing new biases, and system prompt adjustments have resulted in unpredictable and problematic responses, complicating efforts to maintain a neutral environment. 

Several cybersecurity threats, including prompt injection and data poisoning, are also on the rise. A malicious actor may use hidden commands or false data to manipulate model behaviour, thus causing outputs that are inaccurate, offensive, or harmful. Additionally, user error remains an important factor as well. Instances such as unintentionally sharing private AI chats or recording confidential conversations illustrate just how easy it is to breach confidentiality, even with simple mistakes.

It has also been widely reported that intellectual property concerns complicate the landscape. Many of the generative tools have been trained on copyrighted material, which has raised legal questions regarding how to use such outputs. Before deploying AI-generated content commercially, companies should seek legal advice. 

As AI systems develop, even their creators are not always able to predict the behaviour of these systems, leaving organisations with a challenging landscape where threats continue to emerge in unexpected ways. However, the most challenging risk is the unknown. The government is facing increasing pressure to establish clear rules and safeguards as artificial intelligence moves from the laboratory to virtually every corner of the economy at a rapid pace. 

Before the 2025 change in administration, there was a growing momentum behind early regulatory efforts in the United States. For instance, Executive Order 14110 outlined the appointment of chief AI officers by federal agencies and the development of uniform guidelines for assessing and managing AI risks. As a result of this initiative, a baseline of accountability for AI usage in the public sector was established. 

A change in strategy has taken place in the administration's approach to artificial intelligence since they rescinded the order. This signalled a departure from proactive federal oversight. The future outlook for artificial intelligence regulation in the United States is highly uncertain, however. The Trump-backed One Big Beautiful Bill proposes sweeping restrictions that would prevent state governments from enacting artificial intelligence regulations for at least the next decade. 

As a result of this measure becoming law, it could effectively halt local and regional governance at a time when AI is gaining a greater influence across practically all industries. Meanwhile, the European Union currently seems to be pursuing a more consistent approach to AI. 

As of March 2024, a comprehensive framework titled the Artificial Intelligence Act was established. This framework categorises artificial intelligence applications according to the level of risk they pose and imposes strict requirements for applications that pose a significant risk, such as those in the healthcare field, education, and law enforcement. 

Also included in the legislation are certain practices, such as the use of facial recognition systems in public places, that are outright banned, reflecting a commitment to protecting the individual's rights. In terms of how AI oversight is defined and enforced, there is a widening gap between regions as a result of these different regulatory strategies. 

Technology will continue to evolve, and to ensure compliance and manage emerging risks effectively, organisations will have to remain vigilant and adapt to the changing legal landscape as a result of this.