South Korea Forces AI-Generated Ads to Wear Labels

A wave of deepfake ‘experts’ and fabricated celebrity endorsements is pushing Seoul toward aggressive AI ad regulation.

Overview

For years, Korean seniors have watched YouTube “doctors” and celebrity endorsers who were never real people at all. Now Seoul is telling advertisers: if an ad is made with AI, it needs a visible label, and platforms must keep that label on or face penalties.

The move turns South Korea’s broad AI Basic Act into a concrete test: can a government that wants to be an AI superpower also crack down on AI-powered fraud? What happens here will shape how far democracies can go in forcing transparency onto AI-generated content without choking the ad-driven platforms they also court for investment.

Key Indicators

96,700+
Illegal food and drug ads flagged in 2024
Regulators say AI tools are driving a sharp rise in deceptive online promotions.
5x
Maximum punitive damages
Distributors of knowingly false AI-generated information could owe up to five times proven losses.
24 hours
Target review window for harmful ads
Authorities want suspect AI ads reviewed and potentially blocked within a day.
January 2026
AI Basic Act takes effect
Core ad-labeling obligations are scheduled to kick in alongside Korea’s AI framework law.

People Involved

Kim Min-seok
Prime Minister of South Korea (Leading government response to AI-enabled fraud and deceptive advertising)
Lee Dong-hoon
Director of Economic and Financial Policy, Office for Government Policy Coordination (Front-line architect of AI ad-labeling and enforcement plan)
Oh Yu-kyung
Minister of Food and Drug Safety (Warning of health risks from AI “doctor” endorsements in online ads)
Park Jeong-hoon
Member of the National Assembly, People Power Party (Sponsor of bill targeting AI-generated ‘fake doctor’ advertisements)

Organizations Involved

Office for Government Policy Coordination
Central Government Office
Status: Coordinating AI ad-labeling rules and interagency enforcement

The Office for Government Policy Coordination is the control room for cross-ministry AI policy in Korea.

Korea Media and Communications Commission (KMCC)
Media and Communications Regulator
Status: Charged with implementing AI content labels and emergency takedown powers

KMCC will be the hands-on enforcer of AI labeling and fast-track ad removals.

Ministry of Food and Drug Safety (MFDS)
Regulatory Agency
Status: Providing evidence and criteria for illegal AI-generated health ads

MFDS tracks and flags illegal health-related ads, now increasingly created with AI.

Korea Consumer Agency
Consumer Watchdog
Status: Expanding monitoring of AI-driven scams and deceptive promotions

The Korea Consumer Agency hunts scams, now including AI-boosted fake endorsements aimed at seniors.

National Assembly of the Republic of Korea
Unicameral Legislature
Status: Passed AI Basic Act and debating follow-on AI advertising measures

Korea’s National Assembly writes the high-level AI rules that regulators are now turning against fake ads.

Timeline

  1. Government Unveils Mandatory Labeling for AI-Generated Ads

    Policy

    Policy meeting chaired by Prime Minister announces nationwide AI-ad labels, 24-hour takedowns, tougher penalties.

  2. Bill Introduced to Ban ‘AI Fake Doctor’ Ads

    Legislation

    Rep. Park Jeong-hoon proposes amendment mandating AI labels and platform removal of noncompliant ads.

  3. Lawmakers Grill Food Safety Minister on AI Fake Expert Ads

    Hearing

    National Assembly audit highlights surge of AI-generated fake doctors and pharmacists in online ads.

  4. Law Targets Viewers of Deepfake Pornography

    Legislation

Parliament approves penalties of up to three years in prison for watching or possessing deepfake pornography.

  5. AI Basic Act Passes, Setting Framework for Future AI Rules

    Legislation

    National Assembly passes AI Basic Act, creating umbrella framework for AI governance and future rules.

Scenarios

1

Korea Becomes Template for Global AI Ad Transparency Rules

Discussed by: AP analysis, Korea Times, EU and U.S. law firms advising on Korea’s AI Basic Act

In this scenario, Seoul follows through: telecom and advertising amendments pass largely intact, the Korea Media and Communications Commission stands up a workable label format, and big platforms quietly adjust rather than revolt. High-profile takedowns of AI fake doctor ads and celebrity deepfakes send a deterrent signal. European and U.S. regulators looking to contain AI scams point to Korea’s mix of labels, 24-hour reviews, and punitive damages as a practical model for consumer protection without banning generative AI outright.

2

Labeling Rules Pass but Enforcement Fizzles into Symbolism

Discussed by: IAPP commentators, Korean business press, critics of the AI Basic Act’s “regulatory moratorium”

Here, the law looks tough on paper but soft in practice. The AI Basic Act’s grace periods and low fine caps spill over into the ad space, with regulators leaning on voluntary compliance and education campaigns instead of meaningful penalties. Labels proliferate but are small, inconsistent, or ignored, and platforms lobby successfully to limit liability. Deepfake scams continue to spread, undermining trust and leaving consumer groups arguing that Korea chose AI industry promotion over real protection.

3

Major AI Scam Triggers Expansion to All Synthetic Media and Elections

Discussed by: Domestic consumer advocates, some privacy scholars, comparisons to China’s broad AI labeling regime

A spectacular failure—such as a mass investment scam or election-related deepfake crisis—could convince lawmakers that ad-only rules are too narrow. Under this path, Korea extends mandatory labeling beyond commercial ads to most AI-generated content, including political messaging, influencer posts, and news-like material. Platforms face stringent obligations akin to China’s deep synthesis rules, combining visible labels with metadata watermarks. This would solidify Korea as a global hardliner on AI transparency but fuel industry concerns about compliance costs and speech implications.

Historical Context

China’s Deep Synthesis and AI Content Labelling Rules

2023–2025

What Happened

China rolled out deep synthesis provisions and later nationwide AI content-labeling measures requiring explicit and implicit marks on AI-generated text, images, audio, and video. Platforms and AI providers must watermark synthetic media and ensure labels persist across uploads and downloads, backed by broad safety and political controls.

Outcome

Short term: Chinese platforms rapidly deployed AI content labels and watermarking, making visible AI tags common across major apps.

Long term: The rules gave Beijing strong leverage over synthetic speech and set a precedent for heavy platform obligations elsewhere.

Why It's Relevant

Korea’s plan borrows the idea of persistent labels and platform responsibility but aims to do so in a more liberal, market-oriented system.

European Union’s AI Act and Deepfake Transparency Requirements

2021–2026

What Happened

The EU’s AI Act introduced horizontal rules for AI, including obligations to disclose when people interact with AI systems and to mark synthetic audio, image, video, and text as artificially generated or manipulated. Deepfakes must be clearly identified, with steep fines for violations and a developing code of practice for AI content labeling.

Outcome

Short term: AI and advertising firms began redesigning workflows to add labels and watermarks ahead of 2026 enforcement deadlines.

Long term: The EU set a de facto global baseline for AI transparency, especially for companies operating in multiple jurisdictions.

Why It's Relevant

Korea’s ad-label rule is narrower but arrives earlier than full EU enforcement, positioning Seoul as both a test case and a potential bridge between EU-style regulation and looser regimes.

South Korea’s Earlier Crackdown on Deepfake Pornography

2019–2024

What Happened

Amid public outrage over non-consensual deepfake pornography targeting K-pop stars, teachers, and minors, Korea criminalized the creation and distribution of sexually explicit deepfakes and later passed a law punishing even viewing or possessing such content with prison time or significant fines.

Outcome

Short term: Police raids and prosecutions signaled that deepfake abuse was a serious crime, not a gray area of online culture.

Long term: The experience hardened public opinion against AI misuse and normalized the idea that certain AI outputs warrant special criminal treatment.

Why It's Relevant

That earlier fight made it politically easier to portray AI ad-labeling not as overreach but as the next logical step in defending citizens from AI-driven manipulation.