DeepSeek, a chatbot application developed by the Chinese AI startup of the same name, has been a formidable force in the artificial intelligence arena since it emerged in January 2025, launching at the top of the app store charts and reshaping conversations across the technology and investment industries. Initially hailed by industry observers as a potential "ChatGPT killer", the platform has faced intense scrutiny since its meteoric rise.
By August 2025, DeepSeek sits at the centre of app store removals, cross-border security investigations, and cautious enterprise adoption. In other words, the platform stands at the intersection of technological advances, infrastructure challenges, and geopolitical tensions that may shape the next phase of artificial intelligence in the years ahead.
A significant regulatory development has occurred in South Korea, where the Personal Information Protection Commission confirmed that DeepSeek temporarily suspended downloads of its chatbot application while working with local authorities to address privacy concerns and questions about how it handles user data.
On Saturday, the app was removed from Apple's App Store and Google Play in South Korea, following an agreement with the company to strengthen its privacy protections before any relaunch.
The commission emphasised that existing mobile and personal computer users are unaffected, but officials nonetheless urged caution: Nam Seok, director of the commission's investigation division, advised users to delete the app or refrain from sharing personal information until the issues are resolved.
An investigation by Microsoft's security team found that individuals reportedly linked to DeepSeek had transferred substantial amounts of data through OpenAI's application programming interface (API), the main channel through which developers and enterprises integrate OpenAI's technology into their products and services. Microsoft, OpenAI's largest investor, flagged the unusual activity, triggering an internal review.
DeepSeek's rise has been meteoric: the Chinese artificial intelligence startup has emerged as an increasingly prominent competitor to established U.S. products such as OpenAI's ChatGPT and Anthropic's Claude. On Monday, as technology sector stocks plunged, its AI assistant overtook ChatGPT to top U.S. App Store downloads.
DeepSeek's R1 chatbot has been removed from Apple's App Store and Google Play in South Korea amid mounting international scrutiny, following an admission by the Hangzhou-based company that it had not complied with local personal data protection laws.
While DeepSeek's R1 chatbot is lauded for delivering advanced capabilities at a fraction of its Western competitors' cost, its data handling practices are being questioned sharply as well, with the US and others criticising the storage of user information on servers in the People's Republic of China. South Korea's Personal Information Protection Commission confirmed that the app was removed from local app stores at 6 p.m. on Saturday.
In a statement released on Saturday morning (0900 GMT), the commission said it had suspended the service over violations of domestic data protection laws. Existing users may continue using the service, but the commission has urged the public not to provide personal information until the investigation is complete.
According to the PIPC, DeepSeek must make substantial changes to meet Korean privacy standards, a shortcoming the company has acknowledged.
Data security professor Youm Heung-youl of Soonchunhyang University further noted that while the company maintains privacy policies tailored to European and other markets, no equivalent localised policy exists for South Korean users.
In response to an inquiry from Italy's data protection authority, the company has been asked to account for the data the app collects, the sources from which it obtains it, its intended use, the legal justifications for processing, and its storage in China.
While it is unclear whether DeepSeek initiated the removal or the app store operators took an active role, the development follows the company's announcement last month of its R1 reasoning model, an open-source alternative positioned as a more cost-effective rival to ChatGPT.
The model's rapid success has heightened government concerns over data privacy, prompting similar inquiries in Ireland and Italy as well as a cautionary directive from the United States Navy barring use of DeepSeek AI on security and ethical grounds owing to its origin and operation. At the centre of the controversy is the handling and storage of user data.
All user information, including chat histories and other personal data, is reportedly transferred to China and stored on servers there. A more privacy-conscious version of DeepSeek's model can be run locally on a desktop computer, though this offline version performs significantly more slowly than the cloud-connected version available on Apple and Android phones.
DeepSeek's data practices have drawn escalating regulatory attention across a wide range of jurisdictions, including the United States. According to the company's privacy policy, personal data, including user requests and uploaded files, is stored on servers located in China rather than in the U.S.
Data protection commissioner Kamp stated that DeepSeek has not provided credible assurances that German users' data will be protected to the same extent as that of European Union citizens, and pointed out that Chinese authorities have extensive access rights to personal data held by domestic companies.
In May, Kamp's office asked DeepSeek either to meet EU data transfer requirements or to withdraw its app voluntarily; the company did neither. The controversy follows DeepSeek's January debut, when it claimed to have built an AI model rivalling those of American companies such as OpenAI at a fraction of the cost.
Since then, the app has been banned in Italy over transparency concerns and restricted on government devices in the Netherlands, Belgium, and Spain, where the consumer rights organisation OCU has called for an official investigation. In the United States, following Reuters reports alleging DeepSeek's involvement in China's military and intelligence activities, lawmakers are preparing legislation to prohibit federal agencies from using artificial intelligence models developed by Chinese companies.
Italy's data protection authority, the Guarantor for the Protection of Personal Data, has asked Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence to provide detailed information about their collection and processing of user data. The regulator has requested clarification on which personal data is collected, where it originates, the legal basis for processing, and whether it is stored on Chinese servers.
The regulator has also asked about the training methodologies used for DeepSeek's artificial intelligence models, including whether web scraping is involved, and how both registered and unregistered users are informed of this data collection. DeepSeek has 20 days to respond.
Forrester analysts have warned that because the app has been downloaded millions of times, large amounts of potentially sensitive information are being uploaded and processed. DeepSeek's own privacy policy notes that the company may collect user input, audio prompts, uploaded files, feedback, chat histories, and other content for training purposes, and may share these details with law enforcement or public authorities as required.
Despite intensifying regulatory bans and investigations, DeepSeek's models remain freely accessible throughout the world, and developers continue to download, adapt, and deploy them, often independently of the official app or Chinese infrastructure.
Industry analysts increasingly view the technology not as an isolated threat but as part of a broader shift toward hardware-efficient, open-weight AI architectures, a trend also shaped by players such as Mistral, OpenHermes, and Elon Musk's Grok initiative, among others.
Joining the open-weight reasoning movement, OpenAI has released two open-weight reasoning models, gpt-oss-120b and gpt-oss-20b, which organisations can deploy on their own infrastructure.
In this rapidly evolving market, the question is no longer whether open-source AI can compete with incumbents; it already does.
The more pressing question is who will define the governance frameworks that earn public trust at a time when artificial intelligence, infrastructure control, and national sovereignty are converging at unprecedented speed.
The ongoing scrutiny of DeepSeek underscores how complicated governing advanced artificial intelligence has become in an interconnected, highly competitive global market.
What began as a disruptive technological breakthrough has evolved into a geopolitical and regulatory flashpoint, demonstrating how privacy, security, and data sovereignty have become central issues in the artificial intelligence race.
For policymakers, the case underscores the need for coherent international frameworks that can govern cross-border data flows while balancing innovation with accountability.
For enterprises and individuals alike, it is a reminder that cutting-edge AI tools, for all their benefits, carry inherent risks that must be carefully evaluated before adoption.
As AI models become increasingly lightweight, hardware-efficient, and widely deployable, the boundary between local oversight and global accessibility will continue to blur.
In this environment, trust will depend not primarily on technological capability, but on transparency, verifiable safeguards, and the willingness of developers to adhere to the ethical and legal standards of the markets they seek to serve.