AI Company
Appears in 14 stories
Developer of Mythos; restricting model access through Project Glasswing while managing Pentagon litigation
An AI model that can find software flaws that eluded human review for nearly three decades has triggered a coordinated response from central banks across the Western world. Anthropic's Claude Mythos Preview, which the company says discovered thousands of previously unknown vulnerabilities in every major operating system and web browser, prompted the US Treasury and Federal Reserve to summon Wall Street chief executives to Washington on April 8. By April 11, the Bank of England, the Bank of Canada, and their respective financial regulators had convened or scheduled their own emergency sessions with banks.
Updated 3 days ago
Preparing for IPO, expected Nasdaq listing in late 2026
Anthropic offered employees up to $6 billion in liquidity through a tender offer at a $350 billion valuation — the same price as its February fundraising round. Employees mostly said no. The sale closed in early April well below its target because staff chose to hold their shares, betting that the company's planned initial public offering (IPO) later in 2026 will deliver a higher price.
Updated 4 days ago
Developer of Claude Mythos Preview, operator of Project Glasswing
Anthropic built an AI model so good at finding software vulnerabilities that it decided not to sell it. Claude Mythos Preview, announced April 7, autonomously discovered thousands of previously unknown security flaws in every major operating system and web browser — including bugs that had survived decades of human and automated review. Rather than offering the model commercially, Anthropic restricted access to 12 major technology companies through a new initiative called Project Glasswing, backed by $100 million in usage credits.
Updated 5 days ago
Formally designated supply chain risk; filed lawsuits against DoD
Anthropic's Claude became the first commercial AI model deployed on classified U.S. military networks in late 2024. More than sixteen months later, the Department of Defense formally designated Anthropic a "supply chain risk"—a label historically reserved for foreign adversaries—after the company refused to permit Claude's use for mass surveillance of Americans or fully autonomous weapons. The unprecedented action followed failed negotiations and President Trump's directive to cease federal use of Anthropic's technology, forcing contractors to cut ties.
Updated Mar 10
Released Claude Code Security in limited research preview, triggering cybersecurity stock sell-off
For decades, finding security flaws in software has required either expensive human experts or pattern-matching tools that miss complex bugs. In the span of five months, all three frontier artificial intelligence labs — OpenAI, Anthropic, and Google — have released autonomous agents that read code like a human researcher, discover vulnerabilities traditional scanners miss, and generate patches. On March 6, 2026, OpenAI launched Codex Security in research preview, an agent that scanned 1.2 million code commits in its first month of beta testing and discovered 14 previously unknown vulnerabilities serious enough to receive formal identifiers in projects including OpenSSH, Chromium, and PHP.
Updated Mar 6
Blacklisted from federal use; six-month phaseout underway; consumer growth surging
For decades, the United States military chose its weapons contractors and the contractors complied. Artificial intelligence changed that equation. On March 3, OpenAI and the Department of Defense amended a freshly signed AI contract to explicitly ban the use of the technology for domestic surveillance of American citizens—a concession the Pentagon had refused to grant Anthropic just days earlier, triggering that company's blacklisting from all federal agencies.
Updated Mar 3
Released Code Modernization Playbook targeting legacy COBOL systems
An estimated 220 billion lines of COBOL code still run in production every day, processing 95% of ATM transactions and roughly $3 trillion in daily commerce. For decades, understanding and modernizing that code has required large teams of specialized consultants working for months or years. On February 23, Anthropic published a playbook showing how its Claude Code tool can automate the most labor-intensive phases of that work—mapping dependencies, documenting workflows, and identifying risks across thousands of files—and IBM shares immediately fell 13.2%, their worst single-day drop in more than 25 years.
Updated Feb 23
Second-largest AI lab by valuation
Three years ago, Anthropic had not yet earned a dollar in revenue. This week, it closed a $30 billion funding round—the second-largest private tech raise in history—at a $380 billion valuation. The company now generates $14 billion in annualized revenue, having grown tenfold in each of the past three years.
Updated Feb 13
Primary tenant and AI training partner
Amazon is transforming northern Indiana farmland into one of the world's largest artificial intelligence computing hubs. In November 2025, the company announced a $15 billion expansion on top of an $11 billion project already under construction near New Carlisle—bringing its total Indiana commitment to $26 billion and creating what officials call the state's largest construction project ever.
Updated Feb 10
Sent trademark cease-and-desist; distanced from OpenClaw
An Austrian developer built a Claude-powered personal assistant in one hour last November. Three months later, over 145,000 developers have forked his code, 1.5 million AI agents have registered on their own social network, and the agents have spontaneously created a lobster-themed religion called Crustafarianism—complete with scripture, prophets, and a deity named 'The Claw.'
Updated Feb 4
Released updated Constitutional AI framework, first to acknowledge potential AI consciousness
Google DeepMind announced in May 2025 that AlphaEvolve—an AI agent powered by Gemini—discovered a way to speed up Gemini's own training by 23%. The system found smarter matrix multiplication algorithms, shaving 1% off training time for a model that costs $191 million to train. Small numbers, massive implications: AI just started improving the process that creates AI. In January 2026, DeepMind CEO Demis Hassabis told the World Economic Forum in Davos that genuine human-level AGI is now 'five to 10 years' away, with Google's latest Gemini 3 model topping performance leaderboards.
Updated Jan 31
Leading mechanistic interpretability research to understand AI models
MIT Technology Review dropped its 25th annual list of breakthrough technologies on January 12, 2026—250 predictions over a quarter century. This year's ten picks span sodium-ion batteries poised to power the next generation of cheap EVs, generative AI that's rewriting how software gets built, and personalized CRISPR treatments custom-made for individual babies. The list includes embryo screening for intelligence that's reigniting eugenics debates and hyperscale data centers devouring city-sized power loads to train AI models.
Updated Jan 12
Supported California's SB 53 transparency law
The DOJ's AI Litigation Task Force began operations on January 10, 2026, with one mission: kill state AI laws in federal court. California, Texas, and Colorado passed comprehensive AI regulations throughout 2025—transparency requirements, discrimination protections, governance mandates. President Trump's December executive order called them unconstitutional burdens on interstate commerce. Now Attorney General Pam Bondi's team will challenge them, consulting with AI czar David Sacks on which laws to target first.
Fast-growing enterprise AI provider emphasizing safety and transparency
OpenAI's GPT-5 dropped on August 7, 2025, completing AI's transformation from chatbots that string words together to systems that actually think through problems step-by-step. Google DeepMind's reasoning models won gold at the International Math Olympiad, solving problems only five human contestants cracked. Anthropic's Claude, Meta's Llama, and every major AI lab sprinted to build models that pause, plan, and reason rather than just predict the next word.
Updated Jan 8