Microsoft is pressing its commitment to AI safety by amending a lawsuit it filed in December, aiming to unmask four developers who allegedly evaded safety guardrails on its AI tools to create deepfakes of celebrities. A subsequent court order allowed the company to seize a website tied to the scheme, which led to the identification of the individuals involved.
The four developers in question are reportedly part of a global cybercrime network known as Storm-2139. They include Arian Yadegarnia, also known as “Fiz,” from Iran; Alan Krysiak, aka “Drago,” from the United Kingdom; Ricky Yuen, aka “cg-dot,” from Hong Kong; and Phát Phùng Tấn, aka “Asakuri,” from Vietnam.
According to Microsoft, these individuals are not the only ones involved in the scheme, but the company is choosing not to disclose the others at this time to avoid interfering with ongoing investigations. The group allegedly compromised accounts with access to Microsoft’s generative AI tools and managed to bypass the safety controls to create images of their choice. They then sold access to these tools to other parties, who used them to create deepfake nudes of celebrities, among other illicit content.
Following the filing of the lawsuit and the seizure of the group’s website, Microsoft observed that the defendants entered a state of panic. As noted on the company’s blog, “The seizure of this website and the subsequent unsealing of the legal filings in January prompted an immediate reaction from the actors involved, with some group members turning against each other.”
Celebrities, including Taylor Swift, have frequently been targets of deepfake pornography, which involves superimposing a real person’s face onto a nude body. In January 2024, Microsoft had to update its text-to-image models after fake images of Swift appeared online. The ease with which generative AI allows for the creation of such images, even with minimal technical expertise, has led to a significant increase in deepfake scandals in high schools across the U.S., as reported by the New York Times. Recent stories from victims of deepfakes highlight the real-world harm caused by these actions, including feelings of anxiety, fear, and violation.
The AI community continues to debate safety: some see the concerns as genuine, while others argue they are exaggerated to benefit major players like OpenAI by overstating what generative AI can actually do. One camp suggests that keeping AI models closed-source could prevent the worst abuses by limiting users’ ability to disable safety controls. Proponents of open-source models counter that making them freely available to modify and improve is essential for the sector to advance, and that abuse can be addressed without hindering innovation. Either way, these debates can distract from more immediate problems, such as AI generating inaccurate information and flooding the web with low-quality content.
While many fears about AI seem exaggerated and hypothetical, the misuse of AI to create deepfakes is a genuine concern. Legal measures are one approach to addressing these abuses. There have been several arrests in the U.S. of individuals who have used AI to generate deepfakes of minors, and the NO FAKES Act, introduced in Congress last year, would criminalize the generation of images based on someone’s likeness without consent. The United Kingdom already penalizes the distribution of deepfake pornography and is set to make its production a crime as well, as reported by the BBC. Furthermore, Australia has recently made the creation and sharing of non-consensual deepfakes a criminal offense.