
Microsoft’s Close Partner and Collaborator, OpenAI, Suggests DeepSeek May Have Stolen IP and Violated Its Terms of Service

Microsoft today announced that R1, DeepSeek’s so-called reasoning model, is available on Azure AI Foundry, Microsoft’s platform that brings a number of enterprise AI services together under a single banner. In a blog post, Microsoft said that the version of R1 on Azure AI Foundry has “undergone rigorous red teaming and safety evaluations,” including “automated assessments of model behavior and extensive security reviews to mitigate potential risks.”
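Microsoft’s post doesn’t include usage details, but models in the Foundry catalog are generally reachable through the azure-ai-inference Python SDK. A minimal sketch of a chat call against an R1 deployment might look like the following; note that the endpoint, key, and exact model identifier here are placeholder assumptions to verify against your own deployment:

```python
# Minimal sketch: querying an R1 deployment via the azure-ai-inference SDK.
# The endpoint URL and API key are placeholders; "DeepSeek-R1" is the name
# Microsoft lists in the catalog, but check it against your deployment.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder
)

response = client.complete(
    model="DeepSeek-R1",
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize today's announcement in one sentence."),
    ],
)

print(response.choices[0].message.content)
```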

DeepSeek’s reasoning model is the talk of the town, and Microsoft may have been persuaded to bring it into its cloud fold while it still holds allure. However, this move comes amidst controversy surrounding DeepSeek’s potential abuse of OpenAI’s services. According to security researchers working for Microsoft, DeepSeek may have exfiltrated a large amount of data using OpenAI’s API in the fall of 2024. Microsoft, which also happens to be OpenAI’s largest shareholder, notified OpenAI of the suspicious activity.

Despite the controversy, Microsoft still wants DeepSeek’s shiny new models on its cloud platform. The addition of R1 is a curious one, considering that Microsoft itself initiated a probe into DeepSeek’s potential abuse of its and OpenAI’s services.

In the near future, Microsoft said, customers will be able to run “distilled” flavors of R1 locally on Copilot+ PCs, Microsoft’s brand of Windows hardware that meets certain AI readiness requirements. “As we continue expanding the model catalog in Azure AI Foundry, we’re excited to see how developers and enterprises leverage […] R1 to tackle real-world challenges and deliver transformative experiences,” Microsoft continued in the post.
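Microsoft hasn’t said which distilled variants will ship for Copilot+ PCs, but DeepSeek has already published distilled checkpoints of R1 on Hugging Face. As a rough illustration only (the model ID comes from DeepSeek’s public releases, not Microsoft’s local builds, and the generation settings are assumptions), one of the smaller checkpoints can be run locally with the transformers library:

```python
# Rough local-inference sketch using one of DeepSeek's published distilled
# checkpoints, not the Copilot+ builds Microsoft has yet to detail.
# The prompt and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is 17 * 24? Think step by step."}],
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```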

However, the move has raised concerns about the accuracy and reliability of R1. According to a test by information-reliability organization NewsGuard, R1 provides inaccurate answers or non-answers 83% of the time when asked about news-related topics. A separate test found that R1 refuses to answer 85% of prompts related to China, possibly a consequence of the government censorship to which AI models developed in the country are subject.

It is unclear whether Microsoft made any modifications to the model to improve its accuracy or to counteract its censorship.

