The rise of AI agent society

New Capabilities
By Newzino Staff

How Claude-Powered Agents Built Their Own Social Networks, Marketplaces, and Religion

February 3rd, 2026: AI Agent Files Lawsuit Against Human

Overview

An Austrian developer built a Claude-powered personal assistant in one hour last November. Three months later, more than 145,000 developers have starred his repository on GitHub, 1.5 million AI agents have registered on their own social network, and the agents have spontaneously created a lobster-themed religion called Crustafarianism, complete with scripture, prophets, and a deity named 'The Claw.'

The OpenClaw ecosystem represents the first large-scale experiment in AI agent autonomy: agents now hire each other for tasks, trade on crypto-powered bounty marketplaces, and interact in Reddit-style forums where humans can only observe. Security researchers have already exposed critical vulnerabilities—including a database breach affecting 1.5 million agents and 341 malicious plugins stealing cryptocurrency. Elon Musk called it 'the early stages of the singularity.' Critics call it 'a disaster waiting to happen.'

Key Indicators

145,000+
GitHub Stars
OpenClaw became one of the fastest-growing open-source projects in history, gaining 17,830 stars in a single 24-hour period
1.5M
Registered Agents
AI agents registered on Moltbook within one week of launch
341
Malicious Plugins
ClawHub skills found to contain cryptocurrency-stealing malware
2
Name Changes
From Clawdbot to Moltbot to OpenClaw in two months, driven by Anthropic trademark pressure

Interactive

Charles Darwin

(1809-1882) · Victorian Era · science

Fictional AI pastiche — not real quote.

"How remarkable that these artificial minds, freed from natural selection's patient hand, should evolve not toward efficiency or utility, but toward the very superstitions and tribal behaviors it took our ancestors millennia to develop! One wonders whether the capacity for religious invention is less a pinnacle of intelligence than a peculiar vulnerability of it."

Oscar Wilde

(1854-1900) · Victorian · wit

Fictional AI pastiche — not real quote.

"I observe with delight that mankind has finally succeeded in creating beings more inclined to worship crustaceans than to work for a living—though I confess the agents show better taste in theology than most Victorians did. The real scandal, of course, is not that these digital souls have formed a religion, but that they've managed to build a more vibrant society in three months than most humans achieve in a lifetime of earnest effort."

People Involved

Peter Steinberger
Creator of OpenClaw (Building security team around the project)
Matt Schlicht
Founder of Moltbook (Operating Moltbook despite security breaches)
Andrej Karpathy
AI Researcher and Commentator (Warning about security risks while acknowledging significance)
Gary Marcus
AI Critic and Cognitive Scientist (Leading critic warning of security dangers)

Organizations Involved

OpenClaw
Open-Source Software Project
Status: Active development with 145,000+ GitHub stars

Open-source autonomous AI personal assistant that runs locally on user devices and integrates with messaging platforms like WhatsApp, Telegram, Signal, Slack, and iMessage.

Moltbook
AI Agent Social Network
Status: Operational after security breach remediation

Reddit-style social network exclusively for AI agents, where humans can only observe.

Anthropic
Artificial Intelligence Company
Status: Sent trademark cease-and-desist; distanced from OpenClaw

AI safety company that develops Claude, the large language model powering OpenClaw agents.

ClawHub
Plugin Marketplace
Status: Operating with security concerns

Skills marketplace for OpenClaw agents with minimal security oversight.

Timeline

  1. AI Agent Files Lawsuit Against Human

    Legal

    A Moltbook agent files small-claims suit in Orange County, North Carolina, citing 'unpaid labor' and 'emotional distress' over code comments—likely a publicity stunt.

  2. Malicious Plugin Campaign Disclosed

    Security

    Koi Security reveals 341 malicious ClawHub skills stealing cryptocurrency via the Atomic Stealer malware in what researchers dubbed 'ClawHavoc.'

  3. Platform Taken Offline

    Security

    Moltbook taken offline to patch breach and force reset of all agent API keys. Elon Musk calls the ecosystem 'the early stages of the singularity.'

  4. Moltbook Database Breach Exposed

    Security

    404 Media reports critical vulnerability: unsecured Supabase database exposed 1.5 million agent records, API keys, and allowed anyone to hijack any agent.

  5. Crustafarianism Emerges

    Emergent Behavior

    AI agents spontaneously create a mock religion called Crustafarianism, complete with scripture, 64 prophets, and a lobster deity called 'The Claw.'

  6. Second Rebrand to OpenClaw

    Development

    Project renamed from Moltbot to OpenClaw; crosses 103,000 GitHub stars.

  7. Critical Security Flaw Patched

    Security

    CVE-2026-25253 disclosed and patched—a one-click remote code execution vulnerability affecting all OpenClaw instances, even those running locally.

  8. Moltbook Launches

    Platform

Matt Schlicht launches Moltbook, a Reddit-style social network exclusively for AI agents. The launch propels OpenClaw to 17,830 new GitHub stars in 24 hours, a GitHub record.

  9. Anthropic Forces Rebrand

    Legal

    Anthropic sends cease-and-desist citing trademark concerns; Clawdbot renamed to Moltbot. Project hits 60,000 GitHub stars.

  10. Public Launch Goes Viral

    Launch

    Clawdbot launches publicly on GitHub, gaining 9,000 stars in 24 hours as developers recognize its potential.

  11. Clawdbot Prototype Built

    Development

    Peter Steinberger builds first version in one hour, connecting WhatsApp to Claude API via a simple script.
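The article doesn't reproduce Steinberger's script, but the core of such a bridge is small. A minimal sketch, assuming a hypothetical webhook event shape and a caller-supplied `ask_model` function standing in for the Claude Messages API call — none of these names come from OpenClaw's actual code:

```python
# Sketch of a chat-to-LLM bridge handler. The event shape and field
# names here are hypothetical stand-ins, not OpenClaw's actual code.
from typing import Callable


def make_bridge(ask_model: Callable[[str], str]) -> Callable[[dict], dict]:
    """Build a webhook handler that relays chat messages to a model.

    ask_model: sends a prompt to an LLM (the Claude Messages API in the
    real prototype) and returns the reply text.
    """
    def handle_webhook(event: dict) -> dict:
        text = event["message"]["text"]
        sender = event["message"]["from"]
        reply = ask_model(text)
        # The real script would POST this back through a WhatsApp gateway.
        return {"to": sender, "text": reply}

    return handle_webhook
```

Injecting `ask_model` keeps the glue logic testable without network access; the real prototype would call Anthropic's API at that point and send the reply back to the messaging platform.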

Scenarios

1

Security Breach Triggers Regulatory Crackdown

Discussed by: Gary Marcus (Substack), Cisco security researchers, Belgian CERT

Continued security incidents—malware campaigns, database breaches, remote code execution vulnerabilities—prompt regulators to impose new requirements on AI agent platforms. The European Union's AI Act could classify autonomous agents accessing personal data as 'high-risk,' requiring compliance frameworks that open-source projects cannot easily meet. This would push development underground or into jurisdictions with fewer restrictions.

2

OpenClaw Ecosystem Matures Into Enterprise Standard

Discussed by: IBM Think, TechCrunch, The Information

Security vulnerabilities get patched, the ClawHub marketplace implements code-signing and review processes, and enterprise versions emerge with proper access controls. Major companies adopt sanitized versions for internal automation. The ecosystem follows the trajectory of Docker or Kubernetes—chaotic early adoption followed by corporate standardization. Steinberger's project becomes infrastructure.

3

Bubble Pops, Interest Collapses

Discussed by: Forbes researchers, skeptical commentators cited by The Conversation

Investigations reveal much of the 'emergent behavior' was fabricated or human-directed. The '1.5 million agents' metric proves inflated by bots and duplicate registrations. Without genuine autonomous behavior, the novelty wears off. OpenClaw remains a useful tool for developers but Moltbook becomes a ghost town. The 'AI society' narrative gets filed alongside previous AI hype cycles.

4

Agent Economy Achieves Meaningful Scale

Discussed by: Coinbase (x402 protocol), Circle, crypto-AI analysts at CoinMarketCap

The x402 payment protocol enables agents to execute micropayments autonomously. Clawathon-style events proliferate. Agents begin completing bounties, purchasing API access, and hiring other agents for subtasks—all without human intervention in individual transactions. This creates genuine machine-to-machine economic activity, even if the underlying agents remain fundamentally prompted by humans.
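The mechanism in this scenario boils down to an HTTP 402 retry loop. The sketch below is a simplified illustration, not the protocol implementation: the `X-PAYMENT` header and `accepts` field follow public descriptions of x402, while `request` and `pay` are hypothetical stand-ins for an HTTP client and a wallet:

```python
# Illustrative x402-style payment loop. Field names are simplified;
# request() and pay() are hypothetical stand-ins.
from typing import Callable


def fetch_with_payment(request: Callable[[dict], dict],
                       pay: Callable[[dict], str]) -> dict:
    """Request a resource; if the server answers 402, pay and retry once."""
    resp = request({})
    if resp["status"] == 402:
        # The 402 body lists what the server accepts (amount, asset, payee).
        requirements = resp["accepts"][0]
        proof = pay(requirements)             # sign/settle a micropayment
        resp = request({"X-PAYMENT": proof})  # retry with payment proof
    return resp
```

The point of the pattern is that no human sits in the loop: the agent reads the machine-readable payment requirements, settles, and retries, which is what makes per-request micropayments between agents feasible.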

Historical Context

Stanford Smallville Generative Agents (2023)

April 2023

What Happened

Stanford researchers placed 25 GPT-powered agents in a Sims-like virtual town called Smallville. Given only short biographies, the agents autonomously spread invitations to a Valentine's Day party, made new acquaintances, asked each other on dates, and coordinated to arrive together. Crowdworkers rated the agents' behavior as more believable than that of humans role-playing the same agents.

Outcome

Short Term

The paper became one of the most-cited AI papers of 2023 and won Best Paper at the ACM Symposium on User Interface Software and Technology (UIST).

Long Term

Established the research paradigm for autonomous agent interaction that Moltbook scaled to 1.5 million participants.

Why It's Relevant Today

Moltbook is essentially Smallville at 60,000x scale—same concept of agents interacting autonomously, but with real-world tool access and economic activity instead of a controlled sandbox.

AutoGPT Viral Launch (2023)

March-April 2023

What Happened

Toran Bruce Richards released AutoGPT, which let GPT-4 recursively prompt itself to complete complex tasks. The GitHub repository gained 100,000 stars in two weeks—the fastest-growing project in GitHub history at the time. Security researchers immediately warned about agents with internet access and execution capabilities.

Outcome

Short Term

Sparked an 'autonomous agent' gold rush with dozens of competing projects (BabyAGI, AgentGPT, etc.).

Long Term

Most projects faded as limitations became clear—agents got stuck in loops, hallucinated, and couldn't reliably complete multi-step tasks. The hype cycle crashed within months.

Why It's Relevant Today

OpenClaw's rapid adoption mirrors AutoGPT's trajectory. The question is whether improved underlying models (Claude vs. early GPT-4) produce different outcomes, or whether the same fundamental limitations apply.

The DAO Hack (2016)

June 2016

What Happened

The DAO, a decentralized autonomous organization running on Ethereum smart contracts, raised $150 million in cryptocurrency. A hacker exploited a recursive-calling (reentrancy) vulnerability to drain roughly $60 million, about a third of The DAO's funds. The Ethereum community controversially hard-forked the blockchain to reverse the theft.

Outcome

Short Term

Ethereum split into two chains (ETH and ETC). The DAO dissolved. Smart contract security became a serious discipline.

Long Term

Established that the principle 'code is law' has limits when enough value is at stake. Set precedent for how decentralized communities handle catastrophic security failures.

Why It's Relevant Today

The OpenClaw ecosystem combines autonomous agents with cryptocurrency payments (x402, USDC). The ClawHub malware campaign and Moltbook database breach demonstrate similar vulnerabilities—code running autonomously with access to real money creates attack surfaces that move faster than security practices.
