Microsoft has publicly confirmed that its employees are not allowed to use the DeepSeek app. The announcement came from Brad Smith, the company’s Vice Chairman and President, during a recent U.S. Senate hearing. He said the decision was driven by serious concerns about user privacy and the risk of biased content being distributed through the app.
According to Smith, Microsoft does not allow DeepSeek on company devices and hasn’t included the app in its official store either. Although other organizations and even governments have taken similar steps, this is the first time Microsoft has spoken publicly about such a restriction.
The main worry is where the app stores user data. DeepSeek's privacy terms say that all user information is saved on servers based in China. This is important because Chinese laws require companies to hand over data if asked by the government. That means any data stored through DeepSeek could be accessed by Chinese authorities.
Another major issue is how the app answers questions. It’s been noted that DeepSeek avoids topics that the Chinese government sees as sensitive. This has led to fears that the app’s responses might be influenced by government-approved messaging instead of being neutral or fact-based.
Interestingly, even though Microsoft is blocking the app itself, it did make DeepSeek’s AI model, called R1, available through its Azure cloud service earlier this year. That version works differently: developers can download the model and run it on their own servers, so no data is sent back to China. This makes it more secure, at least in terms of data storage.
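For readers wondering what "running it on their own servers" looks like in practice, here is a minimal sketch using the Hugging Face transformers library and one of the openly published distilled R1 checkpoints. The checkpoint name, prompt, and generation settings are illustrative assumptions, not a description of Microsoft's actual Azure setup.

```python
# A minimal sketch of self-hosted inference with a distilled DeepSeek R1
# checkpoint from Hugging Face. The checkpoint id and settings below are
# illustrative assumptions; pick whatever variant your hardware can hold.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # assumed checkpoint id
    device_map="auto",  # place weights on an available GPU, or fall back to CPU
)

prompt = "Summarize the trade-offs of running a language model locally."
output = generator(prompt, max_new_tokens=256, do_sample=False)

# Both the prompt and the generated text stay on this machine.
print(output[0]["generated_text"])
```

Because inference runs entirely on hardware the developer controls, nothing in the exchange is transmitted to DeepSeek’s servers, which is the data-residency benefit of the Azure-hosted version.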
However, there are still other risks involved. Even if the model is hosted outside China, it might still share biased content or produce low-quality or unsafe code.
At the Senate hearing, Smith added that Microsoft took extra steps to make the model safer before making it available. He said the company made internal changes to reduce any harmful behavior from the model, but didn’t go into detail about what those changes were.
When DeepSeek was first added to Azure, Microsoft said the model had passed safety checks and undergone thorough testing to make sure it met the company’s standards.
Some people have pointed out that DeepSeek could be seen as a competitor to Microsoft’s own chatbot, Copilot. But Microsoft doesn’t block every competing chatbot. For example, Perplexity is available in the Windows app store. Still, some other popular apps, like Google’s Chrome browser and its Gemini chatbot, weren’t found during a search of the store.
DeepSeek is far from the only chatbot raising these questions. AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. But while these tools offer convenience and creativity, they also pose significant privacy risks: the same technology that powers lifelike interactions can store, analyze, and potentially resurface user data, raising serious concerns about data security and ethical use.
Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.
For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt out through the privacy settings, everything they share, from casual remarks to sensitive details like financial data, can be logged and analyzed. Although OpenAI says it anonymizes and aggregates user data for further study, the risk of unintended exposure remains.
Despite assurances of data security, breaches have occurred. In May 2023, hackers exploited a vulnerability in ChatGPT’s Redis library, compromising the personal data of around 101,000 users. This breach underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.
Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.
Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider: