
February 22, 2025 | Ravie Lakshmanan | Disinformation / Artificial Intelligence

OpenAI on Friday announced that it had banned a set of accounts that used ChatGPT to develop a suspected artificial intelligence (AI)-powered surveillance tool.

The social media listening tool, likely of Chinese origin, is powered by one of Meta’s Llama models. The accounts in question used OpenAI’s models to generate detailed descriptions and analyze documents for an apparatus capable of collecting real-time data and reports about anti-China protests in the West, which were then shared with Chinese authorities.

Researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley have codenamed the campaign “Peer Review” due to the “network’s behavior in promoting and reviewing surveillance tooling.” The tool is designed to ingest and analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.

In one instance flagged by the company, the actors used ChatGPT to debug and modify source code believed to run the monitoring software, referred to as “Qianyue Overseas Public Opinion AI Assistant.”

Besides using OpenAI’s models as a research tool to surface publicly available information about think tanks in the United States, as well as government officials and politicians in countries such as Australia, Cambodia, and the United States, the cluster was also found to leverage ChatGPT access to read, translate, and analyze screenshots of English-language documents.


Some of the images were announcements of Uyghur rights protests in various Western cities, likely copied from social media. However, the authenticity of these images is currently unknown.

OpenAI also said it disrupted several other clusters that were found abusing ChatGPT for various malicious activities, including:

  • Deceptive Employment Scheme – A North Korea-linked network tied to the fraudulent IT worker scheme, involved in creating personal documentation for fictitious job applicants, such as resumés, online job profiles, and cover letters, as well as in generating convincing responses to explain unusual behaviors.
  • Sponsored Discontent – A network likely of Chinese origin involved in creating social media content in English and long-form articles in Spanish critical of the United States, which were subsequently published by Latin American news websites in Peru, Mexico, and Ecuador.
  • Romance-baiting Scam – A network of accounts involved in translating and generating comments in Japanese, Chinese, and English for posting on social media platforms, including Facebook, X, and Instagram, in connection with suspected Cambodia-origin romance and investment scams.
  • Iranian Influence Nexus – A network of five accounts involved in generating X posts and articles that were pro-Palestinian, pro-Hamas, and pro-Iran, and anti-Israel and anti-U.S., which were shared on websites associated with an Iranian influence operation.
  • Kimsuky and BlueNoroff – A network of accounts operated by North Korean threat actors involved in gathering information related to cyber intrusion tools and cryptocurrency-related topics, and debugging code for Remote Desktop Protocol (RDP) brute-force attacks.
  • Youth Initiative Covert Influence Operation – A network of accounts involved in creating English-language articles for a website named “Empowering Ghana” and social media comments targeting the Ghanaian presidential election.
  • Task Scam – A network of accounts likely originating from Cambodia involved in translating comments between Urdu and English as part of a scam that lures unsuspecting people into jobs performing simple tasks in exchange for a non-existent commission.

The development comes as bad actors increasingly use AI tools to facilitate cyber-enabled disinformation campaigns and other malicious operations.


Last month, the Google Threat Intelligence Group (GTIG) revealed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia had used its Gemini AI chatbot to improve multiple phases of the attack cycle, conduct research into topical events, and perform content creation, translation, and localization.

OpenAI stated, “The unique insights that AI companies can glean from threat actors are particularly valuable if they are shared with upstream providers, such as hosting and software developers, downstream distribution platforms, such as social media companies, and open-source researchers.”

“Equally, the insights that upstream and downstream providers and researchers have into threat actors open up new avenues of detection and enforcement for AI companies.”
