Why Microsoft Says DeepSeek Is Too Dangerous to Use

Microsoft has openly said that its workers are not allowed to use the DeepSeek app. This announcement came from Brad Smith, the company’s Vice Chairman and President, during a recent hearing in the U.S. Senate. He said the decision was made because of serious concerns about user privacy and the risk of biased content being shared through the app.

According to Smith, Microsoft does not allow DeepSeek on company devices and hasn’t included the app in its official store either. Although other organizations and even governments have taken similar steps, this is the first time Microsoft has spoken publicly about such a restriction.

The main worry is where the app stores user data. DeepSeek's privacy policy states that user information is stored on servers in China. That matters because Chinese law can compel companies to hand over data when the government requests it, which means anything stored through DeepSeek could be accessible to Chinese authorities.

Another major issue is how the app answers questions. Observers have noted that DeepSeek avoids topics the Chinese government considers sensitive, which has raised fears that its responses may reflect government-approved messaging rather than neutral, fact-based information.

Interestingly, even though Microsoft is blocking the app itself, it did allow DeepSeek’s AI model—called R1—to be used through its Azure cloud service earlier this year. But that version works differently. Developers can download it and run it on their own servers without sending any data back to China. This makes it more secure, at least in terms of data storage.
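For illustration only, here is a minimal sketch of what that self-hosted setup can look like. It assumes the Hugging Face transformers library and one of the publicly released distilled R1 checkpoints; the model ID below is an example and is not something named in Microsoft's announcement. The point is that prompts and outputs stay on hardware the developer controls.

# Minimal sketch: running an open-weight DeepSeek-R1 distilled checkpoint locally,
# so prompts and responses never leave the machine hosting the model.
# Assumes the `transformers` and `torch` packages are installed; the model ID
# below is one publicly released distilled variant, used purely as an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Build a chat-style prompt and generate a reply entirely on local hardware.
messages = [{"role": "user", "content": "Why does self-hosting a model keep data private?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Whether the weights run on Azure or on a private server, the article's point is the same: the data path is controlled by whoever hosts the model, not by DeepSeek.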

However, there are still other risks involved. Even if the model is hosted outside China, it might still share biased content or produce low-quality or unsafe code.

At the Senate hearing, Smith added that Microsoft took extra steps to make the model safer before making it available. He said the company made internal changes to reduce any harmful behavior from the model, but didn’t go into detail about what those changes were.

When DeepSeek was first added to Azure, Microsoft said the model had passed safety checks and gone through deep testing to make sure it met company standards.

Some people have pointed out that DeepSeek could be seen as a competitor to Microsoft’s own chatbot, Copilot. But Microsoft doesn’t block every competing chatbot. For example, Perplexity is available in the Windows app store. Still, some other popular apps, like Google’s Chrome browser and its Gemini chatbot, weren’t found during a search of the store.
