Semiconductor Company
Appears in 8 stories
Nvidia is the dominant supplier of graphics processing units used to train and run artificial intelligence models; its chips power the majority of the world's AI infrastructure, including OpenAI's data centers. - Finalized a $30B equity investment and committed 5GW of Vera Rubin capacity to OpenAI
In October 2024, OpenAI raised $6.6 billion at a $157 billion valuation. Seventeen months later, on February 27, 2026, the maker of ChatGPT closed a record $110 billion funding round at a $730 billion pre-money valuation ($840 billion post-money)—the largest private capital raise in history. Amazon led with a $50 billion commitment ($15 billion upfront, $35 billion contingent on OpenAI achieving AGI or completing an IPO by year-end), while Nvidia and SoftBank each committed $30 billion. The round remains open for additional investors. The deal includes expanded infrastructure partnerships: Amazon will provide $100 billion in additional AWS compute services over eight years (on top of the existing $38 billion commitment), while Nvidia will supply 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity using its Vera Rubin systems.
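The round's arithmetic can be checked with a minimal sketch. All figures come from the story above; the dictionary labels are just conveniences for illustration.

```python
# Quick consistency check of the reported round figures (all amounts in $B).
PRE_MONEY = 730
ROUND_SIZE = 110
POST_MONEY = PRE_MONEY + ROUND_SIZE  # pre-money valuation + new capital = post-money

# Per-investor commitments as reported in the story.
commitments = {
    "Amazon (upfront)": 15,
    "Amazon (contingent on AGI or IPO)": 35,
    "Nvidia": 30,
    "SoftBank": 30,
}

assert POST_MONEY == 840                        # matches the $840B post-money figure
assert sum(commitments.values()) == ROUND_SIZE  # commitments account for the full round
```

The numbers are internally consistent: the $730B pre-money valuation plus the $110B raised yields the reported $840B post-money figure, and the named commitments sum exactly to the round size.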
Updated Yesterday
Semiconductor company that became the dominant supplier of AI training and inference chips. - Dominant AI GPU supplier facing Meta's diversification to AMD; $51.2B data center revenue in Q3 2025
ChatGPT's November 2022 launch triggered the fastest infrastructure buildout in tech history. Datacenter construction spending tripled from $15 billion to $45 billion annually in just two years. Hyperscalers are now on track to spend over $1 trillion in 2026—exceeding the GDP of all but 10 countries—racing to secure power, land, and cooling systems before their rivals. Alphabet shocked markets on February 4, 2026, with guidance of $175-185 billion in 2026 capex, 46-55% above Wall Street estimates of $119.5 billion. Amazon escalated the spending war on February 5 with $200 billion 2026 capex guidance after Q4 revenue of $213.4 billion and AWS growth of 24% to $35.6 billion. Microsoft reported $37.5 billion in capex for Q2 FY2026 (just one quarter), while Meta committed $6 billion to Corning for fiber-optic cables in late January, secured 6.6 gigawatts of nuclear power through three partnerships announced in early January 2026, confirmed a multi-billion Nvidia chip deal, and on February 24 announced a $60-100 billion, 6-gigawatt AMD GPU deal—diversifying away from Nvidia's dominance.
Updated 4 days ago
The primary beneficiary of hyperscaler AI spending, controlling 92% of the discrete GPU market for data centers. - Benefits from hyperscaler capex surge to $650B+ despite sustainability worries
The four largest cloud providers—Microsoft, Meta, Alphabet, and Amazon—guided to over $650 billion in combined AI infrastructure spending for 2026 during their late-January and February earnings reports, up sharply from $350 billion in 2025, and have begun tapping debt markets to fund the buildout. Microsoft and Meta reported on January 28-29 with divergent market reactions: Microsoft shares plunged 12% on $37.5 billion quarterly capex, while Meta surged on $115-135 billion 2026 guidance. Alphabet stunned investors February 4 with $175-185 billion capex plans—doubling last year's spend—while Amazon topped all on February 5 with a $200 billion pledge, 50% above 2025 and $50 billion over expectations, prompting a share selloff despite strong revenue beats.
Updated Feb 10
Dominant supplier of AI accelerators whose production volumes directly drive demand for Advantest's test equipment. - Primary driver of AI chip testing demand
Advantest, a Japanese company most people have never heard of, just posted record quarterly sales—and its stock now moves in near-lockstep with Nvidia's. The reason: every advanced AI chip must pass through test equipment before it ships, and Advantest controls nearly 60% of the global market for the machines that do this. As AI spending explodes, chip testing has quietly become one of the supply chain's tightest chokepoints. Yet the company faces intensifying competition: U.S. rival Teradyne is gaining ground in memory testing, and the entire semiconductor equipment sector is experiencing unprecedented demand as chipmakers race to expand capacity for AI accelerators and high-bandwidth memory.
Updated Feb 4
Leading AI chip manufacturer navigating U.S.-China tech restrictions. - Subject to new semiconductor export rules
China posted a $1.2 trillion trade surplus for 2025—the largest any country has ever recorded. The number is roughly equivalent to the GDP of Indonesia, the world's 16th-largest economy. It comes after seven years of U.S. tariffs designed to shrink that very surplus, and eight days after Canada struck a deal with Beijing that slashed Chinese EV tariffs from 100% to 6.1%, marking a dramatic shift in Western trade policy toward China that prompted Trump to threaten 100% retaliatory tariffs on Canadian goods.
Updated Jan 30
The world's largest AI chip company and dominant buyer of advanced packaging and HBM capacity. - Primary customer driving HBM demand; claimed exclusive HBM4 access through 2026
For decades, chip packaging was the unglamorous final step—stacking and connecting silicon dies after the real engineering was done. Now it's the constraint holding back AI. SK Hynix announced a $12.9 billion investment to build the world's largest advanced packaging facility in South Korea, a bet the company, which controls 61% of the high-bandwidth memory market, cannot afford to lose as competitors circle its lead. At CES 2026, the company unveiled the first 16-layer, 48GB HBM4 module—double the capacity of current-generation memory—requiring silicon wafers thinned to just 30 micrometers, thinner than a human hair.
Updated Jan 15
Controls 90%+ of the AI chip market through GPU dominance and the CUDA software moat. - Acquiring Groq's assets and team for $20B
On Christmas Eve 2025, Nvidia paid $20 billion for Groq's assets—nearly triple the AI chip startup's $6.9 billion valuation from three months earlier. The deal brings Groq's founder Jonathan Ross, who created Google's original Tensor Processing Unit, and his breakthrough inference technology into Nvidia's fold. It's Nvidia's largest acquisition ever, nearly three times the size of its $7 billion Mellanox purchase. By structuring the deal as a "non-exclusive licensing agreement" rather than an outright acquisition, Nvidia bypasses Hart-Scott-Rodino Act merger review requirements that trigger automatic FTC scrutiny—following Microsoft's 2024 playbook with Inflection AI. The deal's unusual structure has drawn immediate analyst warnings about "the fiction of competition" as Groq's leadership and technical talent move to Nvidia while the company nominally continues independently. Adding to the intrigue: 1789 Capital, where Donald Trump Jr. serves as partner, was among Groq's September investors who saw their stake nearly triple in just three months.
Updated Dec 27, 2025
Nvidia is the AI era’s arms dealer—and the political lightning rod for who gets compute. - Seller of the H200 chip; lobbying for access to China while navigating export-control swings
The Trump administration just did the thing Washington has spent years swearing it wouldn’t do: let China buy a near-top-tier Nvidia AI chip again. Now a key China hawk in Congress is demanding the Commerce Department explain, in detail, why this isn’t a strategic own-goal.
Updated Dec 13, 2025