
Azure AI Foundry’s latest innovations are empowering businesses to optimize their AI investments and differentiate themselves in a competitive landscape. New tools like Azure AI Agent Service and Microsoft Fabric data agents enhance operational efficiency, while NVIDIA NIM microservices boost performance and cost optimization. Discover how these advancements can transform your AI strategy.

Over the last couple of years, I have witnessed tech teams transition from being excited yet overwhelmed by the rapid pace of AI advancements to now leveraging the cutting-edge capabilities of Azure AI Foundry to drive innovation.

This rapid transformation highlights the critical role of a robust enterprise AI platform in helping you push the boundaries of AI. We continuously add new capabilities to Azure AI Foundry to empower your teams. As a result, business leaders in the era of AI have a lot to consider, and it’s easy to get lost in the myriad of new developments.

Today, I am excited to share some of the most significant Azure AI Foundry innovations announced in recent weeks. These innovations improve operational efficiency, maximize investments, and enable you to focus on differentiating in a competitive landscape.

AI agents have the potential to transform every business process, revolutionizing productivity by automating routine tasks and enabling employees to focus on more strategic work. We have announced several agentic capabilities and tools on Azure AI Foundry to help you efficiently deploy AI agents in your organization.

New knowledge tools with Azure AI Agent Service securely ground AI agent outputs with enterprise knowledge, providing accurate, relevant, and contextually aware responses. Azure AI Agent Service offers a wide range of knowledge tools for various data types, including unstructured, structured, private, licensed, and public web data.
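The pattern these knowledge tools automate, retrieving relevant enterprise content and grounding the model's answer in it, can be sketched in plain Python. The toy document store, keyword scoring, and function names below are illustrative stand-ins, not the Azure AI Agent Service API:

```python
# Conceptual sketch of retrieval-grounded responses, the pattern the
# knowledge tools automate. The document store and naive keyword scoring
# are toy stand-ins, not the Azure AI Agent Service implementation.

DOCUMENTS = [
    {"id": "hr-001", "text": "Employees accrue 1.5 vacation days per month."},
    {"id": "it-042", "text": "VPN access requires multi-factor authentication."},
]

def retrieve(query: str, docs: list[dict], top_k: int = 1) -> list[dict]:
    """Rank documents by keyword overlap with the query (illustrative only)."""
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, docs: list[dict]) -> str:
    """Prepend retrieved enterprise context so the model answers from it."""
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_grounded_prompt("How many vacation days do employees get?", DOCUMENTS)
print(prompt)
```

In the managed service, the retrieval step is handled by the knowledge tool appropriate to the data type (file search, web grounding, and so on), so your application only supplies the query and the data connection.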

Additionally, Microsoft Fabric data agents were announced at the Microsoft Fabric Community Conference to enable developers using Azure AI Agent Service to connect customized, conversational agents created in Microsoft Fabric. These data agents can reason over and unlock insights from various enterprise structured and semantic data sources, facilitating better data-driven decisions. Fabric data agents retrieve, understand, and synthesize data from OneLake, determining when to use specific data and how to combine it.

By combining Fabric’s sophisticated enterprise data analysis capabilities with Azure AI Foundry’s cutting-edge GenAI technology, you can create custom conversational AI agents that leverage domain expertise. The Fabric-Foundry pathway connects your data teams with your development teams, putting them on a common, secure, and enterprise-ready AI platform.

One customer utilizing the Microsoft Fabric-Azure AI Foundry bridge is NTT DATA. NTT DATA uses data agents in Microsoft Fabric to query HR and back-office operations data conversationally, gaining a clearer picture of what is happening within the organization.

We also recently announced two more capabilities to empower businesses to deploy AI not just as an assistant, but as an active digital workforce:

The Responses API enables AI-powered apps to seamlessly retrieve information, process data, and then act. It simplifies complex tasks, allowing your business to operate more efficiently and ultimately reduce costs.
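A single Responses API request can carry the user input alongside tool definitions, which is what lets one call cover the retrieve-process-act cycle. The sketch below shows the general shape of such a request as a plain payload; the model name and the `file_report` function are illustrative examples, not a guaranteed contract:

```python
import json

# Illustrative shape of a Responses API request: one payload combines the
# user input with tool definitions, so the service can retrieve, process,
# and act in a single round trip. Model and tool names are examples only.
request = {
    "model": "gpt-4o",                      # assumed deployment name
    "input": "Summarize yesterday's support tickets and file a report.",
    "tools": [
        {"type": "file_search"},            # retrieve information
        {
            "type": "function",             # act via a custom function
            "name": "file_report",          # hypothetical business action
            "parameters": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
            },
        },
    ],
}
print(json.dumps(request, indent=2))
```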

Computer-using agent, or CUA, is a breakthrough AI model that can navigate software interfaces, execute tasks, and automate workflows. It can open applications, click buttons, fill out forms, and navigate multi-page workflows. CUA can adapt dynamically to changes for smooth operations across both web and desktop applications, integrating disparate systems without API dependencies.
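At its core, a computer-using agent runs a perceive-decide-act loop: capture the screen, ask the model for the next UI action, execute it, and repeat until the task is done. The sketch below stubs out the model and the UI so only the control flow remains; every function here is a hypothetical stand-in, not the actual CUA API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # e.g. "click", "type", "done"
    target: str = ""

def capture_screenshot(state: dict) -> str:
    """Stand-in for grabbing the current screen contents."""
    return state["screen"]

def model_next_action(screenshot: str) -> Action:
    """Stub for the CUA model: picks the next UI action from the screenshot."""
    if screenshot == "login_form":
        return Action("type", "username")
    if screenshot == "username_filled":
        return Action("click", "submit")
    return Action("done")

def execute(action: Action, state: dict) -> None:
    """Stand-in for driving the real UI (clicks, keystrokes)."""
    transitions = {
        ("type", "username"): "username_filled",
        ("click", "submit"): "dashboard",
    }
    state["screen"] = transitions.get((action.kind, action.target), state["screen"])

def run_agent(state: dict, max_steps: int = 10) -> list[Action]:
    """Perceive -> decide -> act until the model reports it is done."""
    trace = []
    for _ in range(max_steps):
        action = model_next_action(capture_screenshot(state))
        trace.append(action)
        if action.kind == "done":
            break
        execute(action, state)
    return trace

state = {"screen": "login_form"}
trace = run_agent(state)
print([(a.kind, a.target) for a in trace])
```

Because the model reasons over screenshots rather than APIs, the same loop works across web and desktop applications, which is what lets CUA integrate systems that expose no programmatic interface.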

Enhancing AI efficiency and performance with Azure AI Foundry

The rapid growth of generative AI technology is accompanied by an increasing number of use cases across your organization, as well as the need for tools to optimize efficiency and performance. Azure AI Foundry includes a suite of governance tools and controls to monitor and manage costs, compliance, performance, and more. We have also added NVIDIA NIM microservices and NVIDIA AgentIQ toolkit to unlock unprecedented efficiency, performance, and cost optimization for your AI projects.

Part of the NVIDIA AI Enterprise software suite, NVIDIA NIM is a set of easy-to-use microservices engineered for secure, reliable, and high-performance AI inferencing. These microservices scale seamlessly on managed Azure compute, providing:

  • Zero-configuration deployment: Get started quickly with out-of-the-box optimization.
  • Seamless Azure integration: Works effortlessly with Azure AI Agent Service and Semantic Kernel.
  • Enterprise-grade reliability: Benefit from NVIDIA AI Enterprise support for continuous performance and security.
  • Scalable inference: Tap into Azure’s NVIDIA accelerated infrastructure for demanding workloads.
  • Optimized workflows: Accelerate applications ranging from large language models to advanced analytics.
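NIM microservices expose an OpenAI-compatible HTTP API, so existing client code typically only needs to be pointed at the NIM endpoint. The sketch below assumes a NIM container serving locally on port 8000 and an example Llama model id; substitute the endpoint and model of your Azure deployment:

```python
import json
import urllib.request

# NIM microservices serve an OpenAI-compatible chat completions endpoint.
# The URL and model id below are placeholders for a locally hosted NIM;
# point them at your Azure-deployed endpoint in practice.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama-3.1-8b-instruct",   # example NIM model id
    "messages": [{"role": "user", "content": "Summarize NIM in one line."}],
    "max_tokens": 64,
}

def call_nim(url: str, body: dict) -> dict:
    """POST a chat completion request to a running NIM container."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Requires a NIM container listening on NIM_URL:
# call_nim(NIM_URL, payload)["choices"][0]["message"]["content"]
```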

Stay agile and performant with Azure OpenAI Service Provisioned spillover

Provisioned throughput unit (PTU) spillover is a new feature in Azure OpenAI Service that helps ensure consistent and efficient performance of AI applications, even during high-usage periods.

Now in public preview, PTU spillover automatically reroutes excess traffic from your provisioned deployments to standard deployments, helping maintain smooth service operation and uninterrupted critical processes. This feature gives you the flexibility to manage unexpected traffic bursts or peak demand seasons without compromising performance, allowing you to adapt to dynamic conditions and maximize your AI investments.
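Spillover happens service-side, but the behavior is easy to picture as a fallback: requests the provisioned deployment rejects for lack of capacity are retried against standard capacity. The sketch below simulates that routing client-side with stub deployments; it illustrates the concept only and is not the actual spillover configuration:

```python
def provisioned_deployment(request: str, capacity: dict) -> str:
    """Stub provisioned (PTU) deployment with a fixed capacity limit."""
    if capacity["in_flight"] >= capacity["limit"]:
        raise RuntimeError("429: provisioned capacity exhausted")
    capacity["in_flight"] += 1
    return f"provisioned:{request}"

def standard_deployment(request: str) -> str:
    """Stub pay-as-you-go deployment that absorbs the overflow."""
    return f"standard:{request}"

def with_spillover(request: str, capacity: dict) -> str:
    """Route to provisioned capacity first; spill excess to standard."""
    try:
        return provisioned_deployment(request, capacity)
    except RuntimeError:
        return standard_deployment(request)

capacity = {"in_flight": 0, "limit": 2}
results = [with_spillover(f"req{i}", capacity) for i in range(3)]
print(results)   # first two served by provisioned capacity, third spills over
```

With the managed feature, this routing is handled for you, so applications keep a single endpoint while the service absorbs the burst.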

New report: Customized generative AI experiences to differentiate your business

One way we see companies using AI to drive innovation is by leveraging customization capabilities to create distinctive experiences or services that help their business stand out in the competitive market.


We recently released a report exploring how customized generative AI experiences can help differentiate your business.