New generative AI models with diverse capabilities emerge nearly every week. As these models are integrated into AI systems, it is essential to conduct a thorough risk assessment that balances leveraging these advancements with maintaining robust security. At Microsoft, we are dedicated to creating a secure and trustworthy AI development platform, enabling users to explore and innovate with confidence.
This article will focus on a critical aspect of our approach: securing the models and runtime environment. We will discuss how we protect against potentially compromised models that could affect AI systems, larger cloud estates, or even Microsoft’s infrastructure.
How Microsoft protects data and software in AI systems
Before delving into model security, it is essential to address a common misconception about data usage in AI systems. Microsoft does not utilize customer data to train shared models, nor does it share logs or content with model providers. Our AI products and platforms are part of our standard offerings, subject to the same terms and trust boundaries as our other products. Customer model inputs and outputs are considered sensitive content and are handled with the same protection as documents and email messages. Our AI platform offerings, including Azure AI Foundry and Azure OpenAI Service, are fully hosted by Microsoft on our own servers, with no runtime connections to model providers. While we offer features like model fine-tuning, these allow customers to create customized models for their own use, which remain within their tenant.
Regarding model security, it is crucial to remember that models are software applications running in Azure Virtual Machines (VMs) and accessed through an API. They have no special capability to break out of the VM or compromise Microsoft’s infrastructure. Azure AI Foundry inherits Azure’s existing defenses against software-based attacks, and Azure’s “zero-trust” architecture means the platform does not assume that applications running on it are secure by default.
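To make that concrete, reaching a hosted model looks like calling any other authenticated web service exposed by a resource in your own Azure subscription. The sketch below is a minimal illustration using a plain REST call; the endpoint, deployment name, and API version are placeholders, and a real application would more likely use an Azure SDK client.

```python
import os
import requests

# Placeholder values -- substitute your own Azure resource, deployment, and key.
ENDPOINT = "https://<your-resource>.openai.azure.com"
DEPLOYMENT = "<your-deployment-name>"
API_VERSION = "2024-02-01"  # illustrative API version
API_KEY = os.environ["AZURE_OPENAI_API_KEY"]


def call_model(prompt: str) -> str:
    """Send a chat completion request to a model deployed in the customer's tenant."""
    url = f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions"
    response = requests.post(
        url,
        params={"api-version": API_VERSION},
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(call_model("Summarize the shared responsibility model in one sentence."))
```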
However, it is possible to conceal malware within an AI model, posing a risk to customers similar to that posed by malware in other software. To mitigate this risk, we scan and test our highest-visibility models before release (a simplified sketch of one such check follows the list below), including:
- Malware analysis: Scanning AI models for embedded malicious code that could serve as an infection vector and launchpad for malware.
- Vulnerability assessment: Scanning for common vulnerabilities and exposures (CVEs) and zero-day vulnerabilities targeting AI models.
- Backdoor detection: Scanning model functionality for evidence of supply chain attacks and backdoors, such as arbitrary code execution and network calls.
- Model integrity: Analyzing an AI model’s layers, components, and tensors to detect tampering or corruption.
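Microsoft does not publish the internals of its scanning pipeline, but one common class of check in this space is straightforward to illustrate: inspecting a pickle-serialized model file for imports that legitimate weights should never need, since a malicious pickle can execute arbitrary code when loaded. The sketch below is a simplified, hypothetical example of such a check, not Microsoft’s implementation.

```python
import pickletools
from pathlib import Path

# Module prefixes that legitimate model weights should never need to import.
# This blocklist is illustrative, not exhaustive.
SUSPICIOUS_IMPORTS = ("os", "sys", "subprocess", "builtins", "socket", "importlib")


def scan_pickle(path: Path) -> list[str]:
    """Return suspicious global imports found in a pickle stream."""
    findings: list[str] = []
    recent_strings: list[str] = []  # heuristic: track strings pushed before STACK_GLOBAL
    data = path.read_bytes()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE") and isinstance(arg, str):
            recent_strings = (recent_strings + [arg])[-2:]
        if opcode.name == "GLOBAL" and isinstance(arg, str):
            # GLOBAL carries "module name" as a single space-separated string.
            module = arg.split(" ")[0]
            if module.split(".")[0] in SUSPICIOUS_IMPORTS:
                findings.append(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) == 2:
            # STACK_GLOBAL takes module and name from the two most recent strings.
            module, name = recent_strings
            if module.split(".")[0] in SUSPICIOUS_IMPORTS:
                findings.append(f"{module}.{name}")
    return findings


if __name__ == "__main__":
    for hit in scan_pickle(Path("model_weights.pkl")):
        print(f"Suspicious import in pickle stream: {hit}")
```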
Scanned models carry an indication on their model card, so customers can confirm a model has been scanned without taking any additional action. For high-visibility models like DeepSeek R1, we conduct more thorough scans, including source code examination and red team testing, to verify the model’s security before release.
Defending and governing AI models
As security professionals, we recognize that no scanning process can detect all malicious activity. This challenge is much the same as with any other third-party software, and organizations should address it the same way: through a combination of trust in intermediaries like Microsoft and their own trust in the model provider. For a more secure experience, customers can use Microsoft’s security products to defend and govern the models they deploy. More information on this topic can be found in our article: Securing DeepSeek and other AI systems with Microsoft Security.
When evaluating models, customers should consider not only security but also whether the model fits their specific use case by testing it as part of their complete system. This comprehensive approach to securing AI systems will be discussed in more detail in an upcoming blog post.
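What that system-level testing looks like will vary by application, but a minimal sketch might fold the deployed model into the same automated checks the rest of the system already runs. The pytest-style example below assumes a hypothetical call_model helper (such as the REST sketch earlier) and asserts only on properties the use case requires.

```python
# A minimal, illustrative system-level check. call_model() stands in for however the
# application invokes its deployed model (for example the REST sketch shown earlier);
# real evaluations would cover many more prompts, safety cases, and cost/latency budgets.
from my_app.model_client import call_model  # hypothetical module and helper

GROUNDING_PROMPT = (
    "Answer only from the passage below. If the answer is not in the passage, reply 'Not found'.\n"
    "Passage: Contoso opened its first store in 1995.\n"
    "Question: When did Contoso open its first store?"
)


def test_model_answers_from_context():
    # The use case requires grounded answers, so check that a known fact is returned.
    answer = call_model(GROUNDING_PROMPT)
    assert "1995" in answer, f"Model failed the grounding check: {answer!r}"


def test_model_declines_when_context_is_missing():
    # The use case also requires the model to admit when the passage has no answer.
    prompt = GROUNDING_PROMPT.replace(
        "When did Contoso open its first store?", "Who founded Contoso?"
    )
    answer = call_model(prompt)
    assert "not found" in answer.lower(), f"Model invented an answer: {answer!r}"
```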
Using Microsoft Security to secure AI models and customer data
In summary, the key aspects of our approach to securing models on Azure AI Foundry include:
- Microsoft conducts several security investigations for key AI models before hosting them in the Azure AI Foundry model catalog and continues to monitor for changes that could affect a model’s trustworthiness. Customers can use model card information, along with their trust in the model builder, to assess their risk posture toward any model.
- All models hosted on Azure are isolated within the customer tenant boundary, with no access to or from the model provider, including close partners like OpenAI.
- Customer data is not used to train models, nor is it made available outside of the Azure tenant, unless the customer designs their system to do so.
Learn more with Microsoft Security
To learn more about Microsoft Security solutions, visit our website. Stay up-to-date with our expert coverage on security matters by bookmarking the Security blog. Follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.