AI Company
Appears in 9 stories
The maker of Claude, currently the only commercial AI model deployed on classified U.S. military networks; the company now faces a government ultimatum over its refusal to remove safety restrictions. - Facing possible supply chain risk designation from Pentagon
Anthropic's Claude became the first commercial artificial intelligence model deployed on classified United States military networks in late 2024. Sixteen months later, the Department of Defense is threatening to label the company a "supply chain risk"—a designation normally reserved for foreign adversaries like China and Russia—because Anthropic refuses to let the military use Claude for mass surveillance of Americans or fully autonomous weapons. The standoff has escalated from a contract negotiation into something larger: the first direct confrontation between an AI company's safety commitments and the federal government's demand for unrestricted access.
Updated 4 days ago
Anthropic is the maker of Claude, a family of AI models, and Claude Code, an agentic coding tool whose new COBOL modernization capabilities triggered the largest single-day sell-off in IBM stock in over 25 years. - Released Code Modernization Playbook targeting legacy COBOL systems
An estimated 220 billion lines of COBOL code still run in production every day, processing 95% of ATM transactions and roughly $3 trillion in daily commerce. For decades, understanding and modernizing that code has required large teams of specialized consultants working for months or years. On February 23, Anthropic published a playbook showing how its Claude Code tool can automate the most labor-intensive phases of that work—mapping dependencies, documenting workflows, and identifying risks across thousands of files—and IBM shares immediately fell 13.2%, their worst single-day drop in more than 25 years.
Updated 5 days ago
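The playbook's first phase, mapping dependencies across thousands of files, is conceptually simple to sketch. The snippet below is a minimal, hypothetical illustration (not Anthropic's tooling): it scans COBOL source text for `CALL` statements and `COPY` copybook includes and builds a per-program dependency map. The program names and sample sources are invented for the example.

```python
import re
from collections import defaultdict

# Illustrative patterns only: real COBOL dialects need a proper parser.
CALL_RE = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)
COPY_RE = re.compile(r"\bCOPY\s+([A-Z0-9-]+)", re.IGNORECASE)

def map_dependencies(sources):
    """Map each program to the subprograms it CALLs and copybooks it COPYs.

    `sources` maps program names to their COBOL source text.
    """
    deps = defaultdict(lambda: {"calls": set(), "copies": set()})
    for name, text in sources.items():
        deps[name]["calls"].update(CALL_RE.findall(text))
        deps[name]["copies"].update(COPY_RE.findall(text))
    return dict(deps)

# Invented two-program example: PAYROLL calls TAXCALC and pulls in
# the EMPREC copybook; TAXCALC pulls in TAXTBL.
sources = {
    "PAYROLL": "PROCEDURE DIVISION.\n    CALL 'TAXCALC' USING WS-REC.\n    COPY EMPREC.",
    "TAXCALC": "PROCEDURE DIVISION.\n    COPY TAXTBL.",
}
print(map_dependencies(sources))
```

A real modernization pass would feed a map like this into documentation and risk-analysis steps; the sketch only shows why the inventory phase is automatable at all.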
Developer of the Claude model family, focused on AI safety research and enterprise deployment. - Second-largest AI lab by valuation
Three years ago, Anthropic had not yet earned a dollar in revenue. This week, it closed a $30 billion funding round—the second-largest private tech raise in history—at a $380 billion valuation. The company now generates $14 billion in annualized revenue, having grown tenfold in each of the past three years.
Updated Feb 13
AI safety startup and creator of Claude, valued at $61.5 billion, with exclusive access to Amazon's Indiana computing infrastructure. - Primary tenant and AI training partner
Amazon is transforming northern Indiana farmland into one of the world's largest artificial intelligence computing hubs. In November 2025, the company announced a $15 billion expansion on top of an $11 billion project already under construction near New Carlisle—bringing its total Indiana commitment to $26 billion and creating what officials call the state's largest construction project ever.
Updated Feb 10
AI safety company that develops Claude, the large language model powering OpenClaw agents. - Sent trademark cease-and-desist; distanced from OpenClaw
An Austrian developer built a Claude-powered personal assistant in one hour last November. Three months later, over 145,000 developers have forked his code, 1.5 million AI agents have registered on their own social network, and the agents have spontaneously created a lobster-themed religion called Crustafarianism—complete with scripture, prophets, and a deity named 'The Claw.'
Updated Feb 4
Builder of Claude, focused on AI alignment through Constitutional AI and recursive self-improvement guided by ethical principles. - Released updated Constitutional AI framework, first to acknowledge potential AI consciousness
Google DeepMind announced in May 2025 that AlphaEvolve—an AI agent powered by Gemini—discovered a way to speed up Gemini's own training. The system found a smarter matrix multiplication algorithm that accelerated a key training kernel by 23%, shaving 1% off total training time for a model that costs $191 million to train. Small numbers, massive implications: AI just started improving the process that creates AI. In January 2026, DeepMind CEO Demis Hassabis told the World Economic Forum in Davos that genuine human-level AGI is now 'five to 10 years' away, with Google's latest Gemini 3 model topping performance leaderboards.
Updated Jan 31
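AlphaEvolve's discovered kernels aren't public, but the kind of win it searched for—doing the same matrix multiply with fewer scalar multiplications—is the idea behind classic results like Strassen's algorithm, which multiplies two 2x2 blocks with 7 products instead of the naive 8. A sketch of that trick, as a stand-in for the flavor of improvement involved:

```python
# Strassen's algorithm for 2x2 matrices: 7 multiplications instead of 8.
# Applied recursively to large block matrices, this asymptotically beats
# the naive cubic algorithm -- the same fewer-multiplications kind of win
# AlphaEvolve searched for automatically.

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    # Standard definition: 8 multiplications.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)
```

The point is not this particular identity but that algorithmic rearrangements of arithmetic can cut real compute—which is what makes an automated search over such rearrangements valuable at Gemini's training scale.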
AI safety company racing to understand models before they become superintelligent. - Leading mechanistic interpretability research to understand AI models
MIT Technology Review dropped its 25th annual list of breakthrough technologies on January 12, 2026—250 predictions over a quarter century. This year's ten picks span sodium-ion batteries poised to power the next generation of cheap EVs, generative AI that's rewriting how software gets built, and personalized CRISPR treatments custom-made for individual babies. The list includes embryo screening for intelligence that's reigniting eugenics debates and hyperscale data centers devouring city-sized power loads to train AI models.
Updated Jan 12
Claude AI creator that broke with Big Tech to endorse state safety regulations. - Supported California's SB 53 transparency law
The DOJ's AI Litigation Task Force began operations on January 10, 2026, with one mission: kill state AI laws in federal court. California, Texas, and Colorado passed comprehensive AI regulations throughout 2025—transparency requirements, discrimination protections, governance mandates. President Trump's December executive order called them unconstitutional burdens on interstate commerce. Now Attorney General Pam Bondi's team will challenge them, consulting with AI czar David Sacks on which laws to target first.
The safety-first competitor that quietly captured enterprise customers while OpenAI chased benchmarks. - Fast-growing enterprise AI provider emphasizing safety and transparency
OpenAI's GPT-5 dropped on August 7, 2025, completing AI's transformation from chatbots that string words together to systems that actually think through problems step-by-step. Google DeepMind's reasoning models won gold at the International Math Olympiad, solving problems only five human contestants cracked. Anthropic's Claude, Meta's Llama, and every major AI lab sprinted to build models that pause, plan, and reason rather than just predict the next word.
Updated Jan 8