
Microsoft Sues Hacking Group for Exploiting AI Services to Generate Harmful Content

Jan 11, 2025 | Ravie Lakshmanan | AI Security / Cybersecurity

Microsoft has initiated legal proceedings against a foreign hacking group for exploiting its generative AI services. The group operated a "hacking-as-a-service" infrastructure designed to bypass safety measures and generate harmful content.

Microsoft’s Digital Crimes Unit (DCU) reported that the actors built software to harvest customer credentials exposed on public websites. These credentials gave them unauthorized access to accounts with generative AI capabilities, specifically Azure OpenAI Service, which they then manipulated for malicious purposes.
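Credentials harvested from public websites are typically recognizable by their shape. A minimal defensive sketch of this idea is below; the 32-hex-character pattern is an assumption about one common API key format (real secret scanners maintain many provider-specific patterns), and `find_candidate_keys` is an illustrative name, not a real tool.

```python
import re

# Assumed key shape: 32 lowercase hex characters. This is an illustration,
# not a definitive rule for Azure credentials.
CANDIDATE_KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings in `text` that look like exposed 32-hex-char API keys."""
    return CANDIDATE_KEY_PATTERN.findall(text)

# Example: a config line accidentally committed to a public repository.
sample = "endpoint=https://example.openai.azure.com key=0123456789abcdef0123456789abcdef"
exposed = find_candidate_keys(sample)
```

Scanning your own repositories and paste sites for strings like these is a standard way to catch leaked keys before attackers do.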

The actors then monetized this access by providing other malicious actors with tools and instructions for generating harmful content. Microsoft identified this activity in July 2024 and subsequently revoked the group’s access, implemented countermeasures, and secured a court order to seize the website "aitism[.]net," a central hub for their operation.


The rise of AI tools like ChatGPT has led to increased misuse by threat actors, who leverage them for malicious activities like creating harmful content and developing malware. Microsoft and OpenAI have previously reported nation-state actors from China, Iran, North Korea, and Russia utilizing their services for reconnaissance, translation, and disinformation campaigns.

Court documents identify at least three individuals who used stolen Azure API keys and Entra ID authentication information to access Microsoft systems and generate harmful images with DALL-E, in violation of its acceptable use policy. Seven other parties are suspected of using these services for similar malicious purposes.

While the method of API key acquisition remains unclear, Microsoft confirmed the defendants engaged in "systematic API key theft" from multiple U.S. customers. The defendants used these stolen keys to create a hacking-as-a-service scheme, accessible through sites such as "rentry.org/de3u" and "aitism.net," to abuse Microsoft’s Azure infrastructure.

A deleted GitHub repository described "de3u" as a "DALL-E 3 frontend with reverse proxy support." Following the seizure of "aitism[.]net," the threat actors attempted to erase their tracks by deleting Rentry.org pages, the de3u GitHub repository, and portions of the reverse proxy infrastructure.

Microsoft stated that the actors used de3u and a custom reverse proxy service called "oai reverse proxy" to make Azure OpenAI Service API calls with stolen API keys, generating thousands of harmful images. The exact nature of the imagery remains undisclosed. The oai reverse proxy channeled communications from de3u users through a Cloudflare tunnel into the Azure OpenAI Service.
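The reverse-proxy pattern described above is the same credential-injection technique used legitimately by API gateways: the proxy re-issues each client request upstream with a server-held key attached, so the client never needs valid credentials of its own. A minimal sketch follows; `build_upstream_request`, the endpoint, and the deployment name are all illustrative assumptions, not the actual "oai reverse proxy" code. (Azure OpenAI does authenticate requests via an `api-key` header.)

```python
# Hypothetical upstream endpoint; a real Azure OpenAI resource has the form
# https://<resource-name>.openai.azure.com
UPSTREAM_BASE = "https://example-resource.openai.azure.com"

def build_upstream_request(client_path: str, client_body: dict, api_key: str) -> dict:
    """Rewrite an incoming client request into an upstream request,
    injecting the proxy's API key so the caller needs no credentials."""
    return {
        "url": UPSTREAM_BASE + client_path,
        "headers": {
            "api-key": api_key,  # Azure OpenAI reads the key from this header
            "Content-Type": "application/json",
        },
        "body": client_body,
    }

req = build_upstream_request(
    "/openai/deployments/example-deployment/images/generations",
    {"prompt": "a landscape"},
    "STOLEN-OR-LEGITIMATE-KEY",
)
```

Because the key travels only between the proxy and Azure, a scheme like this lets operators resell access to a service while keeping the stolen credentials hidden from their own customers.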


The de3u application used undocumented Microsoft network APIs to mimic legitimate Azure OpenAI Service API requests, authenticated with stolen API keys. The use of proxy services to illicitly access LLM services aligns with a May 2024 LLMjacking campaign that targeted various AI providers using stolen credentials. Microsoft confirmed that the group’s illegal activities extend beyond Microsoft, impacting other AI service providers.

