Analysis of Generative AI Integration in State-Aligned Influence Operations

New findings from OpenAI reveal how state-affiliated actors are integrating Large Language Models (LLMs) into influence campaigns. This analysis covers observed tactics from Chinese and Russian operators, including the automation of content generation and administrative workflows.

Triage Security Media Team
2 min read

Recent security analysis indicates that actors associated with Chinese law enforcement have integrated ChatGPT into their operational workflows to manage influence campaigns against political figures, including the Prime Minister of Japan.

On February 25, OpenAI released a report detailing attempts by threat actors to leverage generative AI for malicious purposes. While financially motivated activity remains common, the report identifies a trend of nation-state actors utilizing chatbots to streamline politically motivated campaigns. These activities range from targeted efforts against specific individuals to broader geopolitical messaging.

Operations Linked to Chinese Law Enforcement

The report provides a detailed analysis of a ChatGPT account linked to Chinese law enforcement personnel. This user utilized the platform to draft and refine reports regarding active campaigns against Chinese dissidents and Sanae Takaichi, the current Prime Minister of Japan.

Prime Minister Takaichi, elected president of the Liberal Democratic Party (LDP) last October, is known for her firm stance on national security and foreign policy. Her public statements regarding support for Taiwan and historical human rights issues in the Inner Mongolia Autonomous Region have made her a primary target for these operations.

According to the analysis, the actor queried the chatbot to develop strategies for discrediting Takaichi. The proposed methodology included:

  • Amplification of Negative Sentiment: Generating and posting critical comments on online platforms.

  • Impersonation: Using email accounts to pose as Japanese citizens and send complaints to politicians regarding foreign immigration policies.

  • Fabricated Narratives: Utilizing fake social media accounts to generate political pressure concerning the cost of living and US tariffs, while simultaneously spreading positive content regarding conditions in Inner Mongolia.

The actor continued to use ChatGPT to draft status reports and internal documentation. These documents provided researchers with significant insight into the scale of the operation. One drafted report claimed that approximately 300 individuals in a single province were engaged in similar influence activities. The data also indicated that these actors utilize other AI models, including Qwen and DeepSeek, and employ a range of tactics from online harassment to unauthorized access and the exploitation of personal information.

Evolution of Tactics in Russian Influence Campaigns

The report also identifies "Operation No Bell," a campaign attributed to a Russian threat actor targeting audiences in sub-Saharan Africa. This operation demonstrates a structured approach to leveraging mainstream chatbots for content generation.

The actors used ChatGPT to draft and edit long-form articles and social media posts focusing on geopolitical issues. Specific narratives included advocacy for the President of Angola to receive the Nobel Peace Prize and allegations that Western leaders were directing disinformation toward South Africa.

This campaign achieved a measure of reach, with approximately 53 generated articles appearing on various African news sites. These pieces were published under the byline "Dr. Manuel Godsin," a fictitious scholar from the University of Bergen.

To evade detection, the threat actors employed specific prompting strategies:

  • Style Mimicry: Instructing the model to write in the style of a human journalist to reduce the likelihood of AI detection.

  • Formatting Adjustments: Systematically removing em dashes from the final text, based on the assumption that their presence is a marker of AI-generated content.
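
The second tactic also illustrates how brittle punctuation-based detection can be. As a rough illustration (our own sketch, not drawn from the report; the function names, threshold, and sample text are assumptions), a naive em-dash-density check and the one-line scrubbing step that defeats it might look like this:

```python
# Illustrative only: a naive punctuation-based "AI text" heuristic of the kind
# the actors appear to have been trying to sidestep. The threshold and character
# choices are assumptions for demonstration, not taken from the OpenAI report.

EM_DASH = "\u2014"  # the em dash character

def em_dash_rate(text: str) -> float:
    """Return em dashes per 1,000 characters."""
    if not text:
        return 0.0
    return text.count(EM_DASH) / len(text) * 1000

def looks_machine_written(text: str, threshold: float = 2.0) -> bool:
    """Flag text whose em dash density exceeds an arbitrary threshold.

    This is exactly the kind of brittle signal that the "formatting
    adjustments" tactic defeats: removing the character drops the rate to zero.
    """
    return em_dash_rate(text) >= threshold

if __name__ == "__main__":
    sample = "The campaign \u2014 according to observers \u2014 escalated quickly."
    scrubbed = sample.replace(EM_DASH, ", ")
    print(looks_machine_written(sample))    # True: dense em dashes
    print(looks_machine_written(scrubbed))  # False after trivial scrubbing
```

The takeaway for defenders is that single-character fingerprints are cheap for an operator to remove, so detection needs to rest on behavioral signals rather than surface formatting.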

Mitigation and Response

Following the identification of these operations, OpenAI terminated the associated accounts. The cases underscore the importance of robust monitoring systems capable of detecting patterns indicative of coordinated influence operations. For security teams, the findings reinforce the need to treat generative AI as a component of the modern threat landscape, where automation is used to scale both administrative planning and content distribution.
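
One such behavioral pattern is many nominally independent accounts publishing near-identical text. The sketch below (our own illustration under assumed data shapes, thresholds, and names; it is not OpenAI's detection method) shows how a simple similarity check can surface candidate account pairs for human review:

```python
# Illustrative sketch: flag pairs of accounts whose posts are suspiciously
# similar, one low-cost signal of coordinated inauthentic behavior.
# Function names, the 0.9 threshold, and the sample data are assumptions.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio between two posts (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_coordinated_pairs(posts: dict[str, str], threshold: float = 0.9) -> list[tuple[str, str]]:
    """Return account pairs whose published text is nearly identical.

    posts maps an account identifier to the text that account published.
    """
    return [
        (acct_a, acct_b)
        for (acct_a, text_a), (acct_b, text_b) in combinations(posts.items(), 2)
        if similarity(text_a, text_b) >= threshold
    ]

if __name__ == "__main__":
    sample = {
        "acct_001": "Rising living costs show the government has lost control.",
        "acct_002": "Rising living costs show this government has lost control!",
        "acct_003": "Looking forward to the weekend hiking trip.",
    }
    print(flag_coordinated_pairs(sample))  # [('acct_001', 'acct_002')]
```

In practice a triage pipeline would combine signals like this with posting cadence, account creation times, and shared infrastructure before escalating to an analyst; no single heuristic is sufficient on its own.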

Sources & References