
The Dark Side of AI-Generated Art: Experts Warn of Privacy Risks

While the trend of transforming personal photos into Studio Ghibli-style art with AI tools has taken the internet by storm, experts warn of a darker reality: casually shared photos can lead to unforeseen privacy breaches and data misuse. Cybersecurity specialists caution that the terms of service for these tools are often vague, leaving open the question of what happens to user photos after they are processed.

The Trend and Its Risks

The trend took off when OpenAI rolled out native image generation in its GPT-4o model, which lets users recreate personal photos in the artistic style of Studio Ghibli, the Japanese animation studio. Few platforms, however, clearly explain what happens to photos after they are uploaded. And a photo contains more than facial data: hidden metadata such as GPS coordinates, timestamps, and device details can quietly reveal personal information. These AI tools leverage neural style transfer (NST) algorithms, which separate content from artistic style in uploaded photos in order to blend the user’s image with reference artwork.
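
To make the metadata point concrete, here is a minimal sketch in Python (using the Pillow imaging library, one of several tools that can read EXIF data; the file name is hypothetical) that dumps the tags embedded in a typical smartphone photo, including the GPS block:

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def dump_exif(path: str) -> None:
    """Print the EXIF tags embedded in a photo, including any GPS block."""
    image = Image.open(path)
    exif = image.getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        # Map numeric tag IDs to readable names, e.g. DateTime, Make, Model.
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
    # GPS coordinates live in a nested directory; 0x8825 is the GPSInfo tag.
    for tag_id, value in exif.get_ifd(0x8825).items():
        print(f"GPS {GPSTAGS.get(tag_id, tag_id)}: {value}")

dump_exif("holiday_photo.jpg")  # hypothetical file name
```

Run against an unedited phone photo, a script like this will typically print the capture time, the device make and model, and, if location services were on, the latitude and longitude of where the picture was taken.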

Expert Warnings

Quick Heal Technologies CEO Vishal Salvi explains that vulnerabilities such as model inversion attacks, in which adversaries may reconstruct the original picture from its Ghibli-style output, pose significant risks. "Even if companies claim they don’t store your photos, fragments of your data might still end up in their systems. Uploaded images can definitely be repurposed for unintended uses, like training AI models for surveillance or advertising," Salvi cautioned.

McAfee’s Pratim Mukherjee notes that the way these tools are designed makes it easy to overlook what you’re really agreeing to. "Eye-catching results, viral filters, and fast interactions create an experience that feels light, but often comes with hidden privacy risks. When access to something as personal as a camera roll is granted without a second thought, it’s not always accidental. These platforms are often built to encourage quick engagement while quietly collecting data in the background."

Data Breaches and Exploitation

The risk of data breaches looms large, with experts cautioning that stolen user photos could fuel deepfake creation and identity fraud. Vladislav Tushkanov, Group Manager at Kaspersky AI Technology Research Centre, says that even where companies take care to secure the data they collect and store, the protection is not bullet-proof. "Due to technical issues or malicious activity, data can leak, become public or appear for sale at specialised underground websites. Moreover, the account that is used to access the service can be breached if the credentials or user device is compromised," he said.

Mitigating Risks

To mitigate these risks, experts recommend that users exercise caution when sharing personal photos with AI apps. Tushkanov advises users to "combine standard security practices with a bit of common sense," including using strong, unique passwords, enabling two-factor authentication, and being wary of potential phishing websites. Salvi suggests using specialised tools to strip hidden metadata from photos before uploading them. Mukherjee calls for governments to mandate simplified, upfront disclosures regarding data usage.
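
As an illustration of the metadata stripping Salvi recommends, here is a minimal sketch (again Python with Pillow; the file names are hypothetical) that re-saves only the pixel data, dropping EXIF tags such as GPS coordinates, timestamps, and device details:

```python
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Save a copy of the image with pixel data only, no EXIF block."""
    with Image.open(src) as image:
        # Rebuild the image from raw pixels so the EXIF block is never copied.
        clean = Image.new(image.mode, image.size)
        clean.putdata(list(image.getdata()))
        clean.save(dst)

strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")  # hypothetical names
```

Note that this removes only the hidden metadata; the facial data in the pixels themselves, which model inversion attacks target, is untouched.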


Article Details

Published On Apr 7, 2025 at 09:12 AM IST

