AI platforms emerge as unexpected counterintelligence tools against state influence operations

By Newzino Staff

OpenAI's latest threat report reveals a Chinese official used ChatGPT as a diary, accidentally exposing a sprawling transnational repression campaign targeting overseas dissidents

2 days ago: OpenAI publishes 'ChatGPT diary' findings exposing transnational repression

Overview

A Chinese law enforcement official used ChatGPT the way most people use a private notebook — to draft, revise, and polish status reports about their work. The problem: the work was a covert campaign to silence critics of the Chinese Communist Party living overseas. OpenAI's threat intelligence team read the reports, pieced together a transnational repression operation involving hundreds of operators, thousands of fake social media accounts, forged American court documents, and impersonation of United States immigration officials — then published the findings.

Key Indicators

300+
Operators in a single province
Internal reports referenced by the ChatGPT user indicated at least 300 operators running similar campaigns in just one Chinese province, with comparable numbers elsewhere
300+
Foreign platforms targeted
The operation claimed activity across more than 300 foreign social media platforms
50,000+
Posts placed on Western platforms
The operation claimed to have placed over 50,000 posts, but fewer than 150 received any meaningful engagement
40+
Networks disrupted by OpenAI since 2024
Cumulative state-linked influence networks that OpenAI has identified and shut down since beginning public threat reporting


People Involved

Ben Nimmo
Principal Investigator, OpenAI Threat Intelligence (Led the investigation and public disclosure)
Sanae Takaichi
Prime Minister of Japan (Target of a separate planned influence campaign that ChatGPT refused to assist with)
Li Ying
Chinese dissident and activist known as 'Teacher Li' (Named target of the exposed repression campaign)
Michael Flossman
Head of Threat Intelligence Engineering, OpenAI (Leads OpenAI's technical threat detection capabilities)

Organizations Involved

OpenAI
AI Company
Status: Published the threat intelligence report; banned involved accounts

The maker of ChatGPT, which has built a growing threat intelligence operation that has disrupted over 40 state-linked influence networks since February 2024.

Safeguard Defenders
Human Rights Organization
Status: Named as both a documenter and a target of Chinese transnational repression

A human rights organization that has extensively documented Chinese transnational repression, including the 'Teacher Li' case, and was itself identified as a target in the exposed campaign.

Foundation for Defense of Democracies (FDD)
Policy Research Institute
Status: Independently identified the influence campaign targeting Takaichi

A Washington-based policy institute whose researchers independently identified a coordinated network of over 330 fake social media accounts pushing anti-Takaichi content ahead of Japan's February 2026 election.

Timeline

  1. OpenAI publishes 'ChatGPT diary' findings exposing transnational repression

    Investigation

    OpenAI revealed that a Chinese law enforcement official had inadvertently exposed a sprawling transnational repression campaign by using ChatGPT to edit operational status reports. The findings detailed forged American court documents, impersonation of immigration officials, fabricated death notices, and the targeting of named dissidents across 300+ platforms.

  2. Coordinated anti-Takaichi campaign launches without AI assistance

    Influence Operation

    Despite ChatGPT's refusal, a coordinated network of over 330 fake social media accounts began pushing content portraying Takaichi as corrupt and militaristic across X, Tumblr, Blogspot, Quora, and YouTube — later identified by the Foundation for Defense of Democracies.

  3. ChatGPT refuses to plan anti-Takaichi influence campaign

    Safety

    A Chinese law enforcement user asked ChatGPT to design a multi-part plan to denigrate incoming Japanese Prime Minister Sanae Takaichi, who had criticized China's human rights record. ChatGPT's safety systems blocked the request.

  4. OpenAI finds cross-model usage by China and Russia

    Investigation

    The October report revealed that Chinese and Russian operators were using OpenAI models alongside locally deployed alternatives like DeepSeek, pivoting between platforms to avoid detection and safety filters.

  5. OpenAI disrupts 10 more operations including 'Sneer Review'

    Investigation

    The June report documented the disruption of 10 operations, four of them linked to China. Operators of 'Sneer Review' notably used ChatGPT to write internal performance reviews of their own influence work, a precursor to the 'diary' behavior seen in the February 2026 case.

  6. OpenAI exposes first known AI-powered surveillance tool

    Investigation

    OpenAI discovered 'Peer Review,' a China-linked operation that used ChatGPT to build an AI-powered social media surveillance tool feeding real-time reports on overseas protest activity to Chinese security services. Operators also used DeepSeek and Meta's Llama.

  7. OpenAI reports 20+ disrupted operations

    Investigation

    A major update documented the disruption of over 20 deceptive networks since the start of 2024, including China's 'Sponsored Discontent,' which targeted Chinese dissidents in English and planted anti-American articles in Latin American media.

  8. OpenAI names five covert influence operations

    Investigation

    OpenAI's first dedicated influence operations report documented the disruption of five state-linked networks, including China's 'Spamouflage,' Russia's 'Bad Grammar' and 'Doppelganger,' Iran's 'IUVM,' and Israel's 'STOIC'; none had reached authentic audiences.

  9. OpenAI publishes first state-threat report

    Investigation

    OpenAI and Microsoft jointly announced the disruption of five state-affiliated threat actors from China, Russia, Iran, and North Korea who were abusing AI for cyber-espionage and content generation.

  10. Meta attributes Spamouflage network to Chinese law enforcement

    Investigation

    Meta publicly attributed the large-scale 'Spamouflage' coordinated inauthentic behavior network to Chinese law enforcement — the same network OpenAI would later connect to the ChatGPT diary user.

  11. FBI arrests operators of secret Chinese police station in New York

    Legal

    The FBI arrested two men for operating a secret police station in Manhattan's Chinatown on behalf of China's Ministry of Public Security. The Department of Justice simultaneously unsealed charges against 34 Chinese officers for transnational repression via fake social media accounts.

  12. China launches Operation Fox Hunt

    Context

    China launched a worldwide campaign officially framed as an anti-corruption repatriation effort. The Federal Bureau of Investigation later assessed it as a vehicle for political repression targeting dissidents abroad.

Scenarios

1

AI platforms become routine intelligence sources for Western governments

Discussed by: Former Pentagon officials and cybersecurity analysts, including Michael Horowitz, have noted the intelligence value of these disclosures

As state actors continue using commercial AI tools for operational planning and documentation, Western intelligence agencies formalize information-sharing arrangements with AI companies. The accidental exposure pattern — operatives treating chatbots as secure workspaces — becomes a recurring source of counterintelligence leads. AI companies publish increasingly detailed threat reports that function as de facto intelligence products, and governments build analytical capacity around these disclosures.

2

State actors shift entirely to locally hosted models, closing the visibility window

Discussed by: OpenAI's own report noted the multi-model approach and pivot to local alternatives like DeepSeek and Qwen; cybersecurity researchers at CyberScoop and The Register have flagged this trajectory

The publicity around the 'ChatGPT diary' incident accelerates a shift already underway. State operators migrate entirely to locally deployed open-source models — DeepSeek, Llama, Qwen — that provide no visibility to Western companies. The window during which AI platforms could serve as counterintelligence tools closes within one to two years as operational security practices catch up. Future threat reports from AI companies shrink in scope and significance.

3

United States pursues criminal charges over forged court documents and impersonation

Discussed by: Legal analysts have noted the potential federal law violations; CNN reported on the national security implications of impersonating immigration officials on American soil

The forged county court documents and impersonation of United States immigration officials described in the report constitute potential federal crimes. The Department of Justice, which has already prosecuted Chinese transnational repression cases (the 2023 New York police station arrests), uses the OpenAI report as a foundation for a new indictment. This would follow the pattern of the 34 Chinese Ministry of Public Security officers charged in 2023 — largely symbolic, since defendants are in China, but establishing legal precedent and deterrent signaling.

4

Massive scale, minimal impact: influence operations prove ineffective despite AI

Discussed by: OpenAI, Meta, and Google have all independently concluded that AI provides only incremental gains to influence operations; Brookings Institution's Breakout Scale consistently rates these operations at low levels

The most underappreciated finding solidifies into consensus: despite 50,000+ posts, hundreds of operators, and sophisticated tactics, the exposed campaign generated fewer than 150 meaningful engagements. Similar patterns hold across nearly every state-linked operation documented by major platforms since 2024. The policy debate shifts from preventing AI-enabled influence operations to questioning whether they work at all — and whether the resources spent combating them are proportionate to the actual threat.

Historical Context

Strava fitness tracker exposure of military bases (2018)

January 2018

What Happened

Analyst Nathan Ruser discovered that Strava's global heatmap — built from 13 trillion GPS data points logged by fitness tracker users — revealed the outlines of military bases in Iraq, Afghanistan, and Syria as bright hotspots of jogging activity in otherwise dark, remote areas. Supply routes, patrol patterns, and even 6,400 users near Russian military intelligence (GRU) headquarters in Moscow were identifiable by name.

Outcome

Short Term

The Pentagon reviewed all wearable device policies. Multiple militaries issued orders restricting fitness app use in sensitive locations.

Long Term

Established the principle that consumer technology adopted by security personnel can reverse-engineer classified information. Spurred a broader reckoning with 'ambient intelligence' — the data trails people create without realizing it.

Why It's Relevant Today

The Chinese official used ChatGPT the same way soldiers used Strava: as a personal productivity tool, not realizing the platform could read and analyze everything they entered. Both cases demonstrate that operational security failures now come from the tools people adopt voluntarily, not from adversary penetration.

Bellingcat identification of GRU Skripal poisoning agents (2018)

September-December 2018

What Happened

Open-source investigators at Bellingcat used leaked Russian passport databases to identify the GRU officers who poisoned former spy Sergei Skripal in Salisbury, England. The agents' passport files contained telltale markers — 'Do not provide any information' stamps and issuing authority codes used exclusively for intelligence officers. 'Ruslan Boshirov' was identified as Colonel Anatoliy Chepiga, a decorated military officer.

Outcome

Short Term

Russia began purging compromised databases, but the identifications were already public. Additional suspects were later identified using the same methods.

Long Term

Demonstrated that open-source intelligence could rival state intelligence capabilities. Bellingcat's methods became a template for accountability journalism worldwide.

Why It's Relevant Today

Both cases share the same core mechanism: a security apparatus left detailed operational records in a system it assumed was secure, and investigators outside the government found and published them. The GRU assumed passport databases were inaccessible; the Chinese official assumed ChatGPT conversations were private.

Meta attributes Spamouflage network to Chinese law enforcement (2023)

August 2023

What Happened

Meta publicly attributed the sprawling 'Spamouflage' coordinated inauthentic behavior network — one of the largest ever documented — directly to Chinese law enforcement. The network operated thousands of fake accounts across Facebook, Instagram, YouTube, X, and dozens of smaller platforms, pushing pro-Beijing narratives and harassing dissidents.

Outcome

Short Term

Platforms coordinated takedowns. The attribution to law enforcement rather than intelligence agencies signaled that repression campaigns were being run through China's domestic security bureaucracy.

Long Term

Established that Chinese influence operations were not centralized intelligence projects but distributed, bureaucratic efforts run by provincial law enforcement — a finding dramatically confirmed by the February 2026 ChatGPT diary, which revealed per-province staffing levels.

Why It's Relevant Today

The February 2026 revelation is a direct continuation of the Spamouflage story. OpenAI explicitly connected the ChatGPT diary user's operations to the same network Meta attributed in 2023, and linked it to the doxxing website revealscum.com that OpenAI had first exposed in May 2024. The diary added the missing internal perspective — staffing, tactics, and performance metrics — to a network already identified from the outside.
