Dorothy Parker
Fictional AI pastiche — not real quote.
"How like a man to discover the virtues of openness precisely when it stops winning, and the virtues of secrecy precisely when it might."
Muse Spark, code-named Avocado, marks the debut of Alexandr Wang's rebuilt AI division — and the end of Meta's Llama-era openness
Yesterday: Muse Spark launches as Meta's first proprietary AI model
Why it matters
The company that made open-source AI mainstream just went proprietary, reshaping who controls the models billions of people use daily.
Meta Superintelligence Labs (MSL): Meta's consolidated AI division, created after the Llama 4 debacle, replacing the prior GenAI organization and absorbing FAIR's research functions.
Scale AI: The dominant AI data annotation and infrastructure company whose 100,000+ contractors trained models for OpenAI, Google, and Meta — and whose founder Meta effectively acquired for $14.3 billion.
FAIR (Fundamental AI Research): Meta's foundational AI research lab, once a symbol of open-science values and the birthplace of PyTorch, now folded into the proprietary-first Superintelligence Labs.
Meta Superintelligence Labs debuts its first model — small, fast, and closed-source. It powers the Meta AI app and website, with rollout to WhatsApp, Instagram, Facebook, Messenger, and Ray-Ban glasses planned in coming weeks.
LeCun's startup secures funding at a $3.5 billion valuation, the largest seed round in European history, to build physics-understanding AI systems.
In a Financial Times interview, LeCun confirms that Meta's Llama 4 team used different model variants for different benchmarks to inflate scores.
Meta's founding AI scientist leaves to start AMI Labs, pursuing 'world models' rather than large language models — a philosophical split with Wang's approach.
Restructuring under Wang results in layoffs across Superintelligence Labs, including FAIR researchers, accelerating Yann LeCun's decision to leave.
In a letter on 'personal superintelligence,' Zuckerberg writes Meta must be 'careful about what we choose to open source' — walking back his 2024 position.
Zuckerberg announces MSL with Wang and Nat Friedman as co-leaders, consolidating all AI research and products under one roof.
Meta acquires a 49% non-voting stake in Scale AI and hires its founder as Meta's first-ever chief AI officer.
Meta releases Llama 4 Scout and Maverick. The unusual weekend timing and benchmark irregularities spark immediate controversy.
Alongside the Llama 3.1 release, Zuckerberg publishes a landmark essay arguing open AI will defeat proprietary models like Linux defeated Unix.
Meta releases Llama 2 free for research and commercial use, establishing the open-source AI playbook.
Meta's first large language model debuts with limited access. Weights leak online within a week via 4chan.
Zuckerberg hires Yann LeCun to build FAIR with an open-science mandate, publishing all research publicly.
Discussed by: Wang himself and Meta's blog post, both of which state that bigger models are in development 'with plans to open-source future versions'
Meta releases Muse Spark weights once the next-generation Muse models are ready, following the pattern of releasing older models while keeping the frontier proprietary. This would preserve Meta's developer community goodwill while maintaining a competitive edge. The key trigger: Meta achieving benchmark parity or superiority with a larger Muse model, making Spark's release strategically costless.
Discussed by: VentureBeat, The New Stack, and open-source community commentators who note Meta now considers proprietary models a competitive necessity
The 'hopes to open-source future versions' language proves to be a holding pattern. As the race toward artificial general intelligence accelerates, Meta concludes that releasing model weights sacrifices too much competitive advantage. The Muse series remains closed, and Llama becomes a legacy brand for smaller, less capable models. Developer community fragments, with some migrating to alternatives like Mistral or open forks.
Discussed by: Gizmodo ('doesn't exactly spark joy') and Fortune, which frames the launch as a bellwether for Zuckerberg's multi-billion-dollar AI bet
Despite strong benchmark numbers, Muse Spark fails to meaningfully differentiate Meta AI from ChatGPT, Gemini, or Claude in real-world usage. With the model ranking fourth or fifth on the Artificial Analysis Intelligence Index, Meta struggles to justify the proprietary shift to developers who previously relied on open Llama models. Pressure mounts on Wang to deliver a larger, more capable model quickly — or risk the narrative that $14.3 billion bought a reorganization, not a breakthrough.
Discussed by: CNBC and Bloomberg, who frame Muse Spark as the opening move in a longer strategy
Muse Spark validates the ground-up rebuild. Subsequent Muse models — built on the same architecture but scaled up — close the gap with GPT-5.4 and Gemini 3.1 Pro. Meta's unique advantage of deploying AI across 3.9 billion users provides training signal and product distribution that no standalone lab can match. Within 12 to 18 months, Meta is recognized as a top-three AI lab alongside OpenAI and Google DeepMind.
Google released Android as open source in 2008, rapidly capturing over 70% of the global smartphone market. Once dominance was established, Google progressively moved critical features — maps, messaging, the app store, push notifications — out of the open-source Android Open Source Project and into proprietary Google Play Services. Today, a phone running only open-source Android is functionally unusable for most consumers.
Android's openness attracted manufacturers like Samsung and Huawei, crushing competitors like Windows Phone and BlackBerry.
Google achieved mobile dominance through openness, then locked it in through proprietary services — a pattern now described as 'open source as Trojan horse.'
Meta used open Llama models to become the default open-weight AI platform, attracting millions of developers. The shift to proprietary Muse follows the same trajectory: use openness to build the ecosystem, then capture value by closing the most capable layer.
Apple's board fired Steve Jobs in 1985 after a power struggle with CEO John Sculley. Jobs founded NeXT, which built technically superior but commercially unsuccessful computers. Twelve years later, Apple — near bankruptcy — acquired NeXT for $427 million and Jobs returned, eventually becoming CEO and transforming the company.
Jobs's departure led to a decade of strategic drift at Apple, while his NeXT work produced the operating system that would become macOS.
The return proved that a company's founding technical vision can survive organizational upheaval if the right leader eventually takes charge.
Meta's AI division experienced its own upheaval: the founding AI leader (LeCun) departed after a new executive (Wang) restructured the organization. LeCun's $1.03 billion AMI Labs venture echoes Jobs's NeXT — a philosophically different approach built outside the mother ship. Whether LeCun's 'world models' or Wang's large language models prove correct is the open question.
HashiCorp, maker of Terraform and other widely used infrastructure tools, switched from the Mozilla Public License to the restrictive Business Source License. The company argued cloud providers were profiting from its open-source work without contributing back. The move immediately prompted the creation of OpenTofu, a community fork, and HashiCorp was later acquired by IBM.
The developer community split: some accepted the new license, others migrated to OpenTofu. Trust in HashiCorp eroded.
HashiCorp's $6.4 billion acquisition by IBM in 2024 suggested the license change was partly about positioning for an exit. The OpenTofu fork survived but never matched Terraform's market share.
Meta faces a similar community fracture risk. Developers who built products on open Llama models now confront a proprietary future. Wang's promise to 'open-source future versions' echoes the qualified assurances other companies made before their open-source shifts became permanent.