
DeepSeek Under Investigation Leading to App Store Withdrawals

 


DeepSeek, a chatbot developed by the Chinese AI startup of the same name, has been a formidable force in the artificial intelligence arena since it emerged in January 2025, launching at the top of the app store charts and reshaping conversations across the technology and investment industries. Initially hailed by industry observers as a potential "ChatGPT killer", the platform has faced intense scrutiny since its meteoric rise.

By August 2025, DeepSeek sits at the centre of app store removals, cross-border security investigations, and measured enterprise adoption; in other words, at the intersection of technological advances, infrastructure challenges, and geopolitical tensions that may shape the next phase of artificial intelligence's evolution.

A significant regulatory development has occurred in South Korea: the Personal Information Protection Commission confirmed that DeepSeek temporarily suspended downloads of its chatbot application while working with local authorities to address privacy concerns over its data practices. The app was pulled from Apple's App Store and Google Play in South Korea on Saturday, under an agreement that the company would strengthen its privacy protections before relaunching.

The commission emphasised that existing mobile and personal computer users are unaffected, but officials are urging caution: Nam Seok, director of the commission's investigation division, advised users to remove the app or to refrain from sharing personal information until the issues have been addressed.

An investigation by Microsoft's security team found that individuals reportedly linked to DeepSeek had been transferring substantial amounts of data through OpenAI's application programming interface (API), the core channel through which developers and enterprises integrate OpenAI's technology into their products and services. As OpenAI's largest investor, Microsoft flagged the unusual activity, triggering an internal review.

DeepSeek's rise has been meteoric: the Chinese artificial intelligence startup has emerged as an increasingly prominent competitor to established U.S. offerings such as OpenAI's ChatGPT and Anthropic's Claude. On Monday, its AI assistant surged past ChatGPT to top U.S. App Store downloads, a milestone that coincided with a plunge in technology-sector stock prices.

The DeepSeek R1 chatbot has now been removed from Apple's App Store and Google Play in South Korea amid mounting international scrutiny, after the Hangzhou-based company admitted that it had not complied with local personal data privacy laws.

While DeepSeek's R1 chatbot is lauded for delivering advanced capabilities at a fraction of its Western competitors' cost, its data handling practices are drawing sharp questions, particularly the storage of user information on servers in the People's Republic of China, which the US and others have criticised. South Korea's Personal Information Protection Commission confirmed that the app was removed from local app stores at 6 p.m. on Monday.

In a statement released on Saturday morning (09:00 GMT), the commission said it had suspended the service over violations of domestic data protection laws. Existing users can continue using the service, but the commission has urged the public not to provide personal information until the investigation is complete.

According to the PIPC, DeepSeek must make substantial changes to meet Korean privacy standards, a shortcoming the company has acknowledged. Youm Heung-youl, a data security professor at Soonchunhyang University, further noted that although the company maintains privacy policies tailored to European and other markets, no such localised framework exists for South Korean users.

In response to an inquiry from Italy's data protection authority, the company has been asked to clarify what data the app collects, the sources from which it obtains it, its intended use, the legal justification for processing, and whether it is stored in China.

It remains unclear whether DeepSeek initiated the removal or whether the app store operators acted on their own. The development follows the company's announcement last month of its R1 reasoning model, an open-source, more cost-effective alternative to ChatGPT.

The model's rapid success has heightened government concerns over data privacy, prompting similar inquiries in Ireland and Italy as well as a cautionary directive from the United States Navy barring the use of DeepSeek AI over security and ethical risks tied to its origin and operation. At the centre of the controversy is the handling and storage of user data.

All user information, including chat histories and other personal data, is reportedly transferred to and stored on servers in China. A more privacy-conscious alternative is to run DeepSeek's model locally on a desktop computer, though this offline version performs significantly more slowly than the cloud-connected version available on Apple and Android phones.
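As an illustration of that local, offline route, the sketch below talks to a DeepSeek-R1 model hosted on the user's own machine through Ollama's default local HTTP API, so prompts never leave the computer. The model tag `deepseek-r1:7b` and the localhost endpoint are assumptions for illustration, not details from the article.

```python
import json
import urllib.request

# Ollama's local HTTP API (assumed default endpoint on the same machine).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate_locally(model: str, prompt: str) -> str:
    """Send a prompt to the locally hosted model; no data leaves the machine."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Assumes the model was fetched beforehand, e.g. `ollama pull deepseek-r1:7b`.
    print(generate_locally("deepseek-r1:7b", "Explain data sovereignty in one sentence."))
```

The trade-off described above applies: a 7B-parameter distillation running on consumer hardware answers far more slowly than the cloud-hosted service, but the data residency question disappears.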

DeepSeek's data practices have drawn escalating regulatory attention across a wide range of jurisdictions, including the United States. According to the company's own privacy policy, personal data, including user requests and uploaded files, is stored on servers located in China.

Ulrich Kamp stated that DeepSeek has not provided credible assurances that Germans' data will be protected to the same extent as that of European Union citizens, and pointed out that Chinese authorities enjoy extensive access rights to personal data held by domestic companies.

Kamp's office had asked DeepSeek in May either to meet EU data-transfer requirements or to withdraw its app voluntarily, but the company did neither. The controversy follows DeepSeek's January debut, when it claimed to have created an AI model rivalling those of American companies such as OpenAI at a fraction of the cost.

In recent months the app has been banned in Italy over transparency concerns, restricted on government devices in the Netherlands, Belgium, and Spain, and targeted by the consumer rights organisation OCU, which has called for an official investigation. Following Reuters reports alleging DeepSeek's involvement in China's military and intelligence activities, U.S. lawmakers are preparing legislation that would prohibit federal agencies from using artificial intelligence models developed by Chinese companies.

Italy's data protection authority, the Garante per la Protezione dei Dati Personali, has asked Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence to provide detailed information on their data collection and processing: which personal data is collected, where it originates, the legal basis for processing, and whether it is stored on Chinese servers.

The regulator has also asked about the training methodologies used for DeepSeek's artificial intelligence models, including whether web scraping is involved and how both registered and unregistered users are informed of this data collection. DeepSeek has 20 days to respond.

Forrester analysts have warned that because the app has been downloaded millions of times, large amounts of potentially sensitive information are being uploaded and processed. DeepSeek's own privacy policy notes that the company may collect user input, audio prompts, uploaded files, feedback, chat histories, and other content for training purposes, and may share these details with law enforcement or public authorities as needed.

Despite intensifying regulatory bans and investigations, DeepSeek's models remain freely accessible throughout the world, and developers continue to download, adapt, and deploy them, often independently of the official app or Chinese infrastructure. Industry analysis increasingly treats the technology not as an isolated threat but as part of a broader shift toward hardware-efficient, open-weight AI architectures, a trend shaped by players such as Mistral, OpenHermes, and Elon Musk's Grok initiative, among others.

OpenAI has joined the open-weight reasoning movement with the release of two open-weight reasoning models, GPT-OSS-120B and GPT-OSS-20B. As the artificial intelligence market evolves rapidly, the question is no longer whether open-source AI can compete with the incumbents; it already does.

The more pressing question is who will define the governance frameworks that earn public trust at a time when artificial intelligence, infrastructure control, and national sovereignty are converging at unprecedented rates.

What began as a disruptive technological breakthrough has evolved into a geopolitical and regulatory flashpoint, demonstrating how privacy, security, and data sovereignty have become central issues in the artificial intelligence race. For policymakers, the case underscores the need for coherent international frameworks that can address cross-border data governance and balance innovation with accountability.

For enterprises and individuals alike, it is a reminder that cutting-edge AI tools carry inherent risks that must be carefully evaluated before adoption. As AI models become increasingly lightweight, hardware-efficient, and widely deployable, the boundary between local oversight and global accessibility will continue to blur.

In this environment, trust will depend not primarily on technological capability, but on transparency, verifiable safeguards, and the willingness of developers to adhere to the ethical and legal standards of the markets they seek to serve.


Fake DeepSeek AI Installers Deliver BrowserVenom Malware



Cybersecurity researchers have issued a warning about a sophisticated campaign targeting users who attempt to access DeepSeek-R1, a widely recognised large language model (LLM). Cybercriminals have launched a malicious operation that exploits unsuspecting users through deceptive tactics, capitalising on soaring global interest in artificial intelligence tools and, more specifically, open-source large language models.


A detailed investigation by Kaspersky has uncovered a newly discovered Windows-based malware strain known as BrowserVenom, which threat actors distribute through a combination of malvertising and phishing techniques. Beyond intercepting and manipulating web traffic, this sophisticated malware enables attackers to stealthily retrieve sensitive data from users, including passwords, browsing history, and personal information.

Cybercriminals are reportedly using Google Ads to redirect users to a fraudulent website, deepseek-platform[.]com, carefully designed to replicate the official DeepSeek homepage. By imitating the branding and layout of a legitimate DeepSeek-R1 model installation, they deceive victims into downloading malicious files.

The emergence of BrowserVenom is significant for the cyber threat landscape because attackers are exploiting the growing interest in artificial intelligence technologies to deliver malware at scale. Beyond highlighting increasingly sophisticated social engineering tactics, the campaign is a pointed reminder to verify the sources of AI-related software and tools.

Security analysts report that the attackers behind BrowserVenom created a deceptive installer posing as the authentic DeepSeek-R1 language model in order to deliver their payload. The installer is carefully disguised to appear authentic and contains BrowserVenom, an advanced piece of malware that reroutes all browser traffic through the attackers' servers.

This redirection capability lets cybercriminals intercept and manipulate internet traffic, giving them direct access to sensitive personal information. BrowserVenom's scope of functionality is especially worrying: once embedded in a system, it can monitor user behaviour, harvest login credentials, retrieve session cookies, and steal financial data, emails, and documents, some of which may be transmitted in plaintext.

With this level of access, cybercriminals have all the information they need to commit financial fraud or identity theft, or to sell stolen data on underground marketplaces. Kaspersky reports that the campaign has already compromised systems in several countries, with confirmed infections in Brazil, Cuba, Mexico, India, Nepal, South Africa, and Egypt, highlighting the threat's global reach.

The primary infection vector is a phishing site designed to look just like DeepSeek's official platform, which induces users to download the trojanised installer. Because BrowserVenom is still spreading, experts warn that it poses a persistent threat to users worldwide, especially those who use open-source AI tools without verifying the authenticity of the source.

A comprehensive investigation of the BrowserVenom campaign reveals a highly orchestrated infection chain that begins at a malicious phishing website hosted at https[:]//deepseek-platform[.]com. The attackers have used malvertising to place sponsored search results atop the page when users search for terms such as "DeepSeek R1".

These deceptive tactics exploit the growing popularity of open-source artificial intelligence models, luring users to a lookalike site that convincingly resembles the DeepSeek homepage. Upon arrival, the fake site silently detects the visitor's operating system.

For Windows users, the primary targets of this attack, the interface displays a single prominent button labelled "Try now", offering a free copy of the DeepSeek-R1 model. The site has been observed serving slightly modified layouts on other platforms, but every version shares the same goal: luring users into clicking and unintentionally initiating an infection. To enhance the campaign's credibility and reduce users' suspicion, the BrowserVenom operators have gone further.

Multiple CAPTCHA mechanisms are integrated into the attack chain at various points. Besides lending the fake DeepSeek-R1 download site a sense of legitimacy, this use of CAPTCHA challenges is itself a form of social engineering, implying the site is secure and trustworthy and reinforcing the illusion of safety. According to cybersecurity researchers, the first CAPTCHA is triggered when a user clicks the "Try now" button on the fraudulent platform.

At this point the victim is presented with a fake CAPTCHA page mimicking a standard bot-verification interface. This is not merely a superficial challenge: an embedded snippet of JavaScript evaluates whether the interaction is being performed by a real person, running several verification checks to identify and block automated access.

Behind this screen sits a layer of heavily obfuscated JavaScript that performs advanced checks to ensure the visitor is a human rather than a security scanner. The attackers have run similar campaigns in the past using dynamic scripts and evasion logic, underscoring the operation's technical sophistication.

Once the CAPTCHA is completed, the user is redirected to a secondary page at proxy1.php, where a "Download now" button appears. Clicking this final prompt downloads the tampered executable AI_Launcher_1.21.exe from
https://r1deepseek-ai[.]com/gg/cc/AI_Launcher_1.21.exe.
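Because the campaign hinges on lookalike domains, a simple hostname check against the reported indicators can catch the phishing and download sites before anything is fetched. The Python sketch below is illustrative only: the `classify_url` helper and the domain lists (refanged from the article's defanged indicators, plus deepseek.com as the assumed legitimate domain) are not part of any official tooling.

```python
from urllib.parse import urlparse

# Domains reported in the campaign (defanged in the article as
# deepseek-platform[.]com and r1deepseek-ai[.]com).
MALICIOUS_DOMAINS = {"deepseek-platform.com", "r1deepseek-ai.com"}
OFFICIAL_DOMAIN = "deepseek.com"  # assumption: DeepSeek's legitimate domain

def classify_url(url: str) -> str:
    """Return 'malicious', 'official', or 'unknown' for a given URL,
    matching the hostname and any of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    if host in MALICIOUS_DOMAINS or any(
        host.endswith("." + d) for d in MALICIOUS_DOMAINS
    ):
        return "malicious"
    if host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN):
        return "official"
    return "unknown"
```

For example, `classify_url("https://deepseek-platform.com/")` returns `"malicious"`, while an unrecognised host falls through to `"unknown"`, the safe default for an allowlist-style check.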

Running this executable installs the browser malware. The entire process, from the initial search to installation, is disguised as a legitimate user experience, illustrating how cybercriminals combine social engineering with technical sophistication to spread malware on an international scale.

After the initial CAPTCHA, users reach a secondary page displaying a download button for what is supposed to be an official DeepSeek installer. Clicking it, however, downloads the trojanised executable AI_Launcher_1.21.exe, which stealthily installs BrowserVenom. A second CAPTCHA is required as part of this process, this time resembling Cloudflare's Turnstile verification, complete with the familiar "I am not a robot" checkbox. The user is thus misled throughout the entire infection chain, sustaining an illusion of safety.

After the second CAPTCHA, the victim is offered a choice between two AI deployment platforms, "Ollama" and "LM Studio", both of which are legitimate tools for running local versions of AI models such as DeepSeek. Whichever option the user selects, the end result is the same: BrowserVenom is silently downloaded and executed in the background.

This sophisticated use of fake CAPTCHAs reflects a broader trend of cybercriminals weaponising familiar security mechanisms to disguise malicious activity. Similar attacks have risen over the past few years, including recent phishing campaigns that used fake Cloudflare CAPTCHA pages to coax users into executing malicious commands.

When the installer is executed, it launches a dual-layered operation that mixes visual legitimacy with covert malicious activity. The user sees a convincing installation interface resembling a large language model deployment tool, while a hidden background process simultaneously deploys the browser malware. This behind-the-scenes sequence is designed to maintain stealth and bypass traditional security measures.

The infection relies on a crucial evasion technique: the installer runs an AES-encrypted PowerShell command that adds the user's directory to Windows Defender's scan exclusions. By removing the malware's working path from routine antivirus oversight, the attackers greatly improve the odds that it installs undetected.
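Since the trick works by carving a directory out of Defender's scan scope, one practical detection step is to audit the exclusion list (on Windows, the `Get-MpPreference` cmdlet exposes it via its `ExclusionPath` property). The Python sketch below is a hypothetical heuristic, not official tooling: the `flag_suspicious_exclusions` helper and its prefix list are assumptions for illustration, and a flagged path is a lead to investigate, not proof of infection.

```python
import ntpath

# Directory prefixes where malware commonly hides; an antivirus exclusion
# covering them is a red flag worth a closer look (heuristic assumption).
SUSPICIOUS_PREFIXES = (
    r"c:\users",  # per-user profile directories, as abused in this campaign
)

def flag_suspicious_exclusions(exclusion_paths):
    """Return the subset of Defender exclusion paths that cover
    user-writable locations (compared case-insensitively, Windows-style)."""
    flagged = []
    for path in exclusion_paths:
        norm = ntpath.normpath(path).lower()
        if any(norm.startswith(prefix) for prefix in SUSPICIOUS_PREFIXES):
            flagged.append(path)
    return flagged
```

Fed the output of `Get-MpPreference`, this would single out an exclusion like `C:\Users\alice\AppData` while leaving an exclusion under `C:\Program Files` alone.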

The installer then downloads additional payloads via obfuscated scripts, further complicating detection and analysis. Finally, the BrowserVenom payload is injected directly into system memory, a technique that avoids writing malicious code to disk and thereby evades signature-based antivirus detection.

Once embedded, BrowserVenom's primary function is to redirect all browser traffic through a proxy server controlled by the attackers. To do so, the malware installs a rogue root certificate that facilitates HTTPS interception and modifies the configuration of browsers across multiple platforms, including Google Chrome, Microsoft Edge, Mozilla Firefox, and other Chromium- and Gecko-based browsers.

These changes let the malware intercept and manipulate secure web traffic without raising users' suspicion. The malware also updates user preferences and browser shortcuts to ensure persistence across reboots and manual removal attempts. Researchers have found Russian-language elements embedded in the phishing website and distribution infrastructure, strongly suggesting that Russian-speaking threat actors are involved in its development.

Confirmed infections have been reported in Brazil, Cuba, Mexico, India, Nepal, South Africa, and Egypt, demonstrating the campaign's global spread and aggressive strategy. The malware communicates with command-and-control (C2) infrastructure at the IP address 141.105.130[.]106 on port 37121, an endpoint hardcoded into the proxy settings it deploys, allowing BrowserVenom to hijack and route victims' traffic through attacker-controlled channels without their knowledge.
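Because that C2 endpoint is hardcoded into the proxy settings the malware writes, scanning browser or system proxy strings for the host:port pair is a straightforward indicator-of-compromise check. The Python sketch below is illustrative: the `proxy_points_to_c2` helper and the `http=host:port` proxy-string format shown are assumptions, refanged from the article's defanged indicator.

```python
import re

# Hardcoded C2 endpoint reported for BrowserVenom (defanged in the
# article as 141.105.130[.]106, port 37121).
C2_HOST, C2_PORT = "141.105.130.106", 37121

def proxy_points_to_c2(proxy_setting: str) -> bool:
    """Check whether a browser/system proxy string (e.g. the Windows
    ProxyServer value 'http=141.105.130.106:37121') references the
    known C2 endpoint."""
    for host, port in re.findall(r"([\d.]+):(\d+)", proxy_setting):
        if host == C2_HOST and int(port) == C2_PORT:
            return True
    return False
```

Run against proxy values harvested from registry keys or browser preference files, a `True` result would warrant isolating the machine and removing the rogue root certificate along with the malware itself.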

Security experts emphasise the growing threat of cyberattacks that exploit the AI boom, particularly the use of popular LLM tools as bait. Users are strongly advised to practise strict digital hygiene: verify URLs, check SSL certificates, and avoid downloading software from unofficial sources or advertisements.

Growing interest in artificial intelligence has led to a surge in abuse by sophisticated cybercriminal networks, making proactive vigilance essential for users across all geographies and industries. The BrowserVenom incident highlights the deceptive tactics now in play and underscores the urgency of greater awareness of AI-related threats.

Today's adversaries blend authentic-looking interfaces, advanced evasion methods, and social engineering into a single seamless attack, so traditional security habits are no longer sufficient. Organisations and individuals alike need a security posture built on real-time threat intelligence, behavioural detection tools, and cautious digital behaviour. As increasingly sophisticated AI reshapes the threat landscape, continuous vigilance is required to keep malicious innovation from staying a step ahead.

DeepSeek-R1 AI Under Fire for Severe Security Risks

 

DeepSeek-R1, an AI model developed in China, is facing intense scrutiny following a study by cybersecurity firm Enkrypt AI, which found it to be 11 times more vulnerable to cybercriminal exploitation compared to other AI models. The research highlights significant security risks, including the AI’s susceptibility to generating harmful content and being manipulated for illicit activities. 

This concern is further amplified by a recent data breach that exposed over a million records, raising alarms about the model’s safety. Since its launch on January 20, DeepSeek has gained immense popularity, attracting 12 million users in just two days—surpassing ChatGPT’s early adoption rate. However, its rapid rise has also triggered widespread privacy and security concerns, leading multiple governments to launch investigations or impose restrictions on its usage.  
Enkrypt AI's security assessment revealed that DeepSeek-R1 is highly prone to manipulation, with 45% of safety tests succeeding in bypassing its security mechanisms. The study found that the model could generate instructions for criminal activities, illegal weapon creation, and extremist propaganda.

Even more concerning, cybersecurity evaluations showed that DeepSeek-R1 failed in 78% of security tests, successfully generating malicious code, including malware and trojans. Compared to OpenAI’s models, DeepSeek-R1 was 4.5 times more likely to be exploited for hacking and cybercrime. 

Sahil Agarwal, CEO of Enkrypt AI, emphasized the urgent need for stronger safety measures and continuous monitoring to mitigate these threats. Due to these security concerns, several countries have initiated regulatory actions. 

Italy was the first to launch an investigation into DeepSeek’s privacy and security risks, followed by France, Germany, the Netherlands, Luxembourg, and Portugal. Taiwan has prohibited government agencies from using the AI, while South Korea has opened a formal inquiry into its data security practices. 

The United States is also responding aggressively, with NASA banning DeepSeek from federal devices. Additionally, lawmakers are considering legislation that could impose severe fines and even jail time for those using the platform in the country. The growing concerns surrounding DeepSeek-R1 come amid increasing competition between the US and China in AI development. 

Both nations are pushing the boundaries of AI for military, economic, and technological dominance. However, Enkrypt AI’s findings suggest that DeepSeek-R1’s vulnerabilities could make it a dangerous tool for cybercriminals, disinformation campaigns, and even biochemical warfare threats. With regulatory scrutiny intensifying worldwide, the AI’s future remains uncertain as authorities weigh the risks associated with its use.