
OpenAI has identified a growing number of China-linked groups using its AI tool, ChatGPT, to conduct covert influence operations. In a report released Thursday, the San Francisco-based company detailed how these actors used the platform to generate politically charged content and support cyber activities. While most efforts were small in scale and reached limited audiences, the findings raise broader questions about AI misuse. “We see consistent attempts to weaponize the tool,” OpenAI noted. The issue underscores concerns about the role of generative AI in spreading disinformation and shaping online narratives.
How ChatGPT Became a Tool for Influence and Intrusion
ChatGPT, OpenAI’s flagship AI system, can generate text, summarize research, write scripts, and even debug code in seconds. While it was designed to assist with everyday tasks, recent misuse has prompted deeper scrutiny. The report revealed how Chinese groups leveraged ChatGPT to automate content creation and assist cyber operations.
In one case, accounts linked to China generated social media posts about contentious geopolitical issues. Some posts criticized content centered on Taiwan, while others spread misleading claims about activists and foreign aid programs. OpenAI responded by banning the accounts involved.
Another group used ChatGPT for more technical tasks, such as modifying scripts and building tools for password cracking and social media automation. These uses, though less visible, showed how generative AI can support multiple phases of digital manipulation. “They’re not just using it for words, it’s part of their toolkit,” OpenAI said.
Tracking the Damage: Small-Scale Attacks, Bigger Concerns
The operations described in the report were mostly low in impact and scale, but the tactics were evolving. Some China-linked groups pushed out divisive political messages, targeting American users with content that fueled both sides of debates. The content included text and AI-generated profile photos, blurring the line between real and fake accounts. One post attacked U.S. tariffs, claiming, “Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid.” While none of it went viral, such content showed how ChatGPT can be misused to create plausible, emotionally charged messaging.
OpenAI emphasized that the misuse was not limited to China, though China-linked operations were a key focus in this report. The company stated that it regularly monitors for violations and updates safeguards to prevent coordinated activity. Still, critics argue that detection often lags behind deployment. China’s foreign ministry did not comment on the findings. Meanwhile, AI experts continue to debate how platforms like ChatGPT should balance open access with responsible use.
OpenAI Blocks Threats, but Risks Remain
OpenAI’s findings highlight both the power and the peril of generative AI in global discourse and digital security. The company’s swift action reflects its stated commitment to transparency, but evolving tactics from foreign actors may outpace its safeguards. As the technology becomes more accessible, the line between helpful automation and harmful misuse will only grow thinner.