Microsoft Takes Legal Action Against Cybercriminals Using Generative AI for Harmful Purposes

News Brief

Microsoft’s Digital Crimes Unit is pursuing legal action to disrupt cybercriminals who build malicious tools that evade the security guardrails and guidelines of generative AI (GenAI) services in order to produce harmful content.

Cybercriminal Tactics Evade Security Measures

According to a complaint unsealed in the Eastern District of Virginia, the company goes to great lengths to build and strengthen secure AI products and services, yet cybercriminals continue to evolve their tactics and find ways to bypass those security measures.

Microsoft’s Response

"With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated," said Microsoft in a blog post about the lawsuit.

Threat Actor Group Exploits Exposed Customer Credentials

In the court filings that were unsealed on Jan. 13, Microsoft noted that it had "observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites."

Group’s Malicious Activities

The group used those credentials in an attempt to access accounts with generative AI services and alter the capabilities of those services, then resold this unlawful access to other malicious actors along with instructions on how to use the tools to create harmful content.

Microsoft’s Response to the Threat

Since discovering the group’s activity, Microsoft has revoked its access and strengthened safeguards to prevent this kind of abuse in the future.

Protecting the Public from AI-Generated Threats

Alongside its legal action, the company continues to pursue proactive measures and points to its report, "Protecting the Public From Abusive AI-Generated Content," which offers recommendations to help organizations and governments protect the public from AI-generated threats.
