Microsoft has blocked its employees from using the DeepSeek chat app, citing concerns about data security and propaganda. Brad Smith, Microsoft’s president, announced the ban during a Senate hearing on AI policy. Smith said DeepSeek stores user data on servers in China and that its answers could reflect Chinese government propaganda. For the same reasons, Microsoft has also kept the DeepSeek app out of its app store.

Why Microsoft Banned DeepSeek
Microsoft worries that user questions and documents sent to DeepSeek’s app could end up on servers in China. Under Chinese law, the companies operating those servers must cooperate with intelligence agencies when asked, which Microsoft fears could expose corporate or personal data. The company also sees a risk that DeepSeek’s replies are shaped by the Chinese government’s views.

Microsoft’s Steps to Secure AI
Despite the app ban, Microsoft offers DeepSeek’s R1 model on its Azure cloud service. For that hosted version, Microsoft said it ran safety evaluations and “red teaming” exercises to find risks. During the hearing, Smith said his team had gone inside the model and “changed” it to remove harmful side effects. These steps let Microsoft deliver the model while reducing the risks of propaganda or data leaks.

How Users Can Stay Safe
Companies concerned about their data can run DeepSeek’s open-source model on their own servers. Self-hosting keeps sensitive information inside their own infrastructure rather than sending it to servers in China. Even so, they still need to watch for the model’s biases and errors when it generates code or advice.
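As a minimal sketch of what self-hosting could look like: many local inference servers (vLLM and Ollama, for example) expose an OpenAI-compatible chat endpoint, so a company can query a locally deployed R1 model over its own network. The endpoint URL and model tag below are assumptions for illustration, not a documented Microsoft or DeepSeek setup.

```python
import json
import urllib.request

# Assumed address of a self-hosted, OpenAI-compatible inference server.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"
# Assumed model tag; the actual name depends on how the server was configured.
MODEL_NAME = "deepseek-r1"


def build_chat_request(prompt: str) -> dict:
    """Build a chat-completion payload; the prompt never leaves the local network."""
    return {
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def ask_local_model(prompt: str) -> str:
    """Send the payload to the self-hosted server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint points at the company’s own machine, sensitive prompts and documents stay in-house; the model’s answers, of course, still need the same scrutiny for bias and errors as any other output.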

A Careful Path Forward
Microsoft still allows other AI chat apps, such as Perplexity, in its store. It is taking a measured approach, banning only the app that raises the most serious security concerns. As AI becomes more common at work, companies will increasingly debate how to protect their data and keep chatbots trustworthy.