OpenAI Bans Accounts Misusing ChatGPT for Surveillance and Influence Campaigns

Feb 22, 2025 | Ravie Lakshmanan | Disinformation / Artificial Intelligence

OpenAI on Friday revealed that it had banned a set of accounts that used its ChatGPT tool to develop a suspected artificial intelligence (AI)-powered surveillance tool.

The social media listening tool is said to be based on one of Meta's Llama models, with the accounts in question reportedly using the AI company's models to generate detailed descriptions and analyze documents for an apparatus capable of collecting real-time data and reports about anti-China protests in the West, and sharing the findings with Chinese authorities.

The campaign has been codenamed Peer Review owing to the "network's behavior in promoting and reviewing surveillance tooling," researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley noted, adding the tool is designed to ingest and analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.

In one instance, the actors used ChatGPT to debug and modify source code that is believed to run the surveillance tool, referred to as the "Qianyue Overseas Public Opinion AI Assistant."

Besides using its models as a research tool to surface publicly available information about think tanks in the United States, as well as government officials and politicians in countries like Australia, Cambodia, and the United States, the cluster has also been found to use ChatGPT access to read, translate, and analyze screenshots of English-language documents.

Some of the images, purportedly taken from social media, depicted Uyghur rights protests in various Western cities. It's currently not known whether these images were authentic.

OpenAI also said it disrupted several other clusters that were found abusing ChatGPT for a range of malicious activities:

  • Deceptive Employment Scheme – A network from North Korea linked to the fraudulent IT worker scheme that produced fake job application materials, such as resumes, online job profiles, and cover letters, as well as devised convincing explanations for unusual behaviors like avoiding video calls, accessing corporate systems from unauthorized countries, or working irregular hours. Some of the fabricated job applications were then shared on LinkedIn.

  • Sponsored Discontent – A network of accounts likely of Chinese origin that engaged in the creation of long-form social media articles critical of the U.S., which were later published by Latin American news websites in Peru, Mexico, and Ecuador. Some of the activity overlaps with a known activity cluster.
  • A network of accounts that engaged in the translation and generation of comments in Japanese, Chinese, and English for posting on social media platforms such as Facebook, X, and Instagram, in connection with suspected scam activity originating in Cambodia.
  • Iranian Influence Nexus – A network of five accounts that engaged in the generation of X posts and articles that were pro-Palestinian, pro-Hamas, and pro-Iran, as well as anti-Israel and anti-U.S., which were shared on websites linked to Iranian influence operations, such as the International Union of Virtual Media (IUVM). One of the banned accounts was used to produce content for both operations, indicating a "previously unreported relationship."
  • Kimsuky and BlueNoroff – A network of accounts operated by North Korean threat actors that was involved in debugging code for Remote Desktop Protocol (RDP) brute-force attacks, as well as information gathering.
  • Youth Initiative Covert Influence Operation – A network of accounts that was involved in the creation of English-language articles for a website named "Empowering Ghana" and social media comments targeting the Ghana presidential election.
  • Task Scam – A network of accounts likely originating from Cambodia that was involved in translating comments between Urdu and English as part of a scam that lures unsuspecting people into performing simple tasks (such as liking videos or writing reviews) in exchange for a non-existent commission, access to which requires victims to part with their own money.

The development comes as bad actors are increasingly leveraging AI tools to conduct influence campaigns and other malicious operations more effectively.

Last month, Google Threat Intelligence Group (GTIG) revealed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia had used its Gemini AI chatbot to improve multiple phases of the attack cycle, conduct research into topical events, and perform content creation, translation, and localization.

"The unique insights that AI companies can glean from threat actors are particularly valuable if they are shared with upstream providers, such as hosting and software developers, downstream distribution platforms, such as social media companies, and open-source researchers," OpenAI said.

"In the same way, the insights that upstream and downstream researchers have into threat actors open up new avenues of detection and enforcement for AI companies."
