Chinese groups exploit ChatGPT for malicious acts, OpenAI warns

OpenAI reports growing Chinese exploitation of its AI systems for covert operations targeting geopolitical narratives. The company’s latest threat intelligence reveals China-linked actors using ChatGPT to generate divisive content, support cyber operations, and manipulate social media discourse on topics relevant to Chinese interests. While these operations remain relatively small-scale and limited in reach, they demonstrate how state-aligned groups are weaponizing generative AI technologies for influence campaigns.

The big picture: OpenAI has identified multiple instances of Chinese groups misusing its technology for covert information operations, detailed in a new report released Thursday.

  • The San Francisco-based AI company has detected and banned ChatGPT accounts generating politically motivated content aligned with Chinese government interests.
  • These operations, while expanding in scope and tactics, have generally remained small-scale with limited audience reach.

Key details: Chinese operators used ChatGPT to create politically charged social media content on topics directly relevant to China’s geopolitical interests.

  • Content included criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and material related to the closure of USAID.
  • Some generated posts criticized U.S. President Trump’s tariff policies with messages like: “Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who’s supposed to keep eating?”

Tactical applications: China-linked threat actors leveraged AI across multiple phases of their cyber operations.

  • OpenAI observed ChatGPT being used for open-source research, script modification, troubleshooting system configurations, and developing specialized tools.
  • These tools included password brute-forcing utilities and social media automation systems designed to amplify influence operations.

Divisive tactics: One China-origin influence operation generated polarized content supporting both sides of contentious U.S. political issues.

  • The operation created AI-generated profile images alongside divisive text content designed to inflame existing tensions in American political discourse.
  • This approach represents a sophisticated evolution in information operations, moving beyond simple propaganda to exploiting societal divisions.

Market context: This threat intelligence comes as OpenAI solidifies its position as a dominant force in the AI industry.

  • The company recently announced a massive $40 billion funding round, valuing the startup at $300 billion.
  • China’s foreign ministry has not yet responded to Reuters’ request for comment on OpenAI’s findings.
