Here is the rewritten content without changing its meaning, retaining the original length, and keeping the proper headings and titles:
A strong security foundation is essential for a successful AI transformation. As AI development and adoption accelerate, organizations require visibility into their emerging AI applications and tools. Microsoft Security offers comprehensive protection, including threat protection, posture management, data security, compliance, and governance, to secure AI applications built and used by organizations. These capabilities also extend to securing and governing AI applications built with the DeepSeek R1 model, providing visibility and control over the use of the separate DeepSeek consumer app.
Secure and Govern AI Apps Built with the DeepSeek R1 Model on Azure AI Foundry and GitHub
Develop with Trustworthy AI
Recently, we announced the availability of DeepSeek R1 on Azure AI Foundry and GitHub, joining a diverse portfolio of over 1,800 models. Customers are now building production-ready AI applications with Azure AI Foundry, addressing their unique security, safety, and privacy requirements. Like other models in Azure AI Foundry, DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. Microsoft’s hosting safeguards for AI models ensure customer data remains within Azure’s secure boundaries.
With Azure AI Content Safety, built-in content filtering is available by default to detect and block malicious, harmful, or ungrounded content, with opt-out options for flexibility. Additionally, the safety evaluation system enables customers to efficiently test their applications before deployment. These safeguards provide a secure, compliant, and responsible environment for enterprises to build and deploy AI solutions. For more information, visit Azure AI Foundry and GitHub.
Start with Security Posture Management
AI workloads introduce new cyberattack surfaces and vulnerabilities, particularly when developers leverage open-source resources. Therefore, it’s crucial to start with security posture management to discover all AI inventories, including models, orchestrators, grounding data sources, and associated risks. When building AI workloads with DeepSeek R1 or other AI models, Microsoft Defender for Cloud’s AI security posture management capabilities help security teams gain visibility into AI workloads, discover AI cyberattack surfaces and vulnerabilities, detect cyberattack paths, and receive recommendations to strengthen their security posture against cyberthreats.

By mapping out AI workloads and synthesizing security insights, Defender for Cloud surfaces contextualized security issues and suggests risk-based security recommendations tailored to prioritize critical gaps across AI workloads. Relevant security recommendations also appear within the Azure AI resource itself in the Azure portal, providing developers or workload owners with direct access to recommendations and enabling them to remediate cyberthreats faster.
Safeguard DeepSeek R1 AI Workloads with Cyberthreat Protection
While a strong security posture reduces the risk of cyberattacks, the complex and dynamic nature of AI requires active monitoring in runtime. No AI model is exempt from malicious activity and can be vulnerable to prompt injection cyberattacks and other cyberthreats. Monitoring the latest models is critical to ensuring AI applications are protected.
Integrated with Azure AI Foundry, Defender for Cloud continuously monitors DeepSeek AI applications for unusual and harmful activity, correlates findings, and enriches security alerts with supporting evidence. This provides security operations center (SOC) analysts with alerts on active cyberthreats, such as jailbreak cyberattacks, credential theft, and sensitive data leaks. For example, when a prompt injection cyberattack occurs, Azure AI Content Safety prompt shields can block it in real-time, and the alert is sent to Microsoft Defender for Cloud, where the incident is enriched with Microsoft Threat Intelligence.

Additionally, these alerts integrate with Microsoft Defender XDR, allowing security teams to centralize AI workload alerts into correlated incidents to understand the full scope of a cyberattack, including malicious activities related to their generative AI applications.

Secure and Govern the Use of the DeepSeek App
In addition to the DeepSeek R1 model, DeepSeek provides a consumer app hosted on its local servers, which may not align with organizational requirements, posing risks of data leaks and policy violations. Microsoft Security provides capabilities to discover the use of third-party AI applications and offers controls for protecting and governing their use.
Secure and Gain Visibility into DeepSeek App Usage
Microsoft Defender for Cloud Apps provides ready-to-use risk assessments for over 850 Generative AI apps, including the DeepSeek app. This enables organizations to discover the use of these apps, assess their security, compliance, and legal risks, and set up controls accordingly. For example, security teams can tag high-risk AI apps as unsanctioned and block user access to them.

Comprehensive Data Security
Microsoft Purview Data Security Posture Management (DSPM) for AI provides visibility into data security and compliance risks, such as sensitive data in user prompts and non-compliant usage, and recommends controls to mitigate the risks. For example, DSPM for AI reports can offer insights on the type of sensitive data being pasted to Generative AI consumer apps, enabling data security teams to create and fine-tune their data security policies to protect that data and prevent data leaks.

Prevent Sensitive Data Leaks and Exfiltration
The leakage of organizational data is a top concern for security leaders regarding AI usage, highlighting the importance of implementing controls to prevent users from sharing sensitive information with external third-party AI applications.
Microsoft Purview Data Loss Prevention (DLP) enables organizations to prevent users from pasting sensitive data or uploading files containing sensitive content into Generative AI apps from supported browsers. DLP policies can adapt to insider risk levels, applying stronger restrictions to high-risk users and less stringent restrictions to low-risk users. By leveraging these capabilities, organizations can safeguard their sensitive data from potential risks associated with using external third-party AI applications.

This overview highlights some of the capabilities to help secure and govern AI apps built on Azure AI Foundry and GitHub, as well as AI apps used by organizations. For more information and to get started with securing AI apps, refer to the additional resources below.
Learn More with Microsoft Security
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to stay up-to-date on security matters. Follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
Source Link