TL;DR: Sam Altman has spent two years building a multi-layered strategy to reduce AI’s dangerous dependence on Nvidia’s GPU supply chain. The plan combines OpenAI’s first custom in-house AI chip (built with Broadcom and TSMC, launching in 2026), a strategic partnership with AMD for training at scale, a $500 billion infrastructure play called Project Stargate, and earlier ambitions for a $5–7 trillion global chip fab network. Together, they represent the most aggressive compute sovereignty push by any AI company in history.
In 2024, Sam Altman reportedly told investors he needed between $5 trillion and $7 trillion to fix the global AI chip supply chain. That number sounded absurd. It still does. But strip away the headline figure and what you find underneath is a calculated, multi-year strategy to make sure OpenAI never again sits on a waiting list for the hardware it needs to build the future.
The strategy has since evolved from moonshot chip factory ambitions into something more grounded: a custom in-house chip designed with Broadcom, manufactured by TSMC on a 3nm process, targeted for mass production in 2026. Alongside it sits a landmark AMD partnership, a 10-gigawatt Nvidia deployment deal, and a $500 billion infrastructure commitment called Project Stargate. Altman isn’t just building AI models anymore. He’s building the substrate those models run on.
Why Do AI Chips Matter So Much to Sam Altman?
Nvidia currently controls approximately 80% of the AI chip market, and its H100 and H200 GPUs are simultaneously the most in-demand and most constrained hardware in the tech industry. For OpenAI, training models like GPT-5.2 and beyond requires tens of thousands of these chips. Every delay in procurement is a delay in capability.
“OpenAI allegedly opted to make its own custom chips to reduce its reliance on Nvidia GPUs, which are both expensive and difficult to source, with many customers facing long lead times due to high demand.”
— Data Center Dynamics, February 2025 [datacenterdynamics]
The economic logic is also straightforward. Custom chips designed specifically for inference (running models) rather than general GPU workloads can deliver meaningfully better performance-per-dollar on the specific operations AI models perform most. Google’s TPUs proved this model works — and Google’s custom chip partner was Broadcom, the same company now building OpenAI’s chip.
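To make the performance-per-dollar argument concrete, here is a back-of-envelope sketch. Every number in it (hourly operating cost, throughput) is a hypothetical placeholder chosen for illustration, not a figure from OpenAI, Nvidia, or Broadcom:

```python
# Back-of-envelope inference economics. All numbers are hypothetical
# placeholders, not real vendor figures.

def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_second: float) -> float:
    """Serving cost in USD per one million generated tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# A hypothetical general-purpose GPU vs. a hypothetical inference-optimized
# custom chip delivering the same throughput at a lower operating cost.
gpu_cost = cost_per_million_tokens(hourly_cost_usd=4.00, tokens_per_second=2500)
asic_cost = cost_per_million_tokens(hourly_cost_usd=1.50, tokens_per_second=2500)

print(f"GPU:  ${gpu_cost:.2f} per 1M tokens")
print(f"ASIC: ${asic_cost:.2f} per 1M tokens")
print(f"Cost reduction: {1 - asic_cost / gpu_cost:.0%}")
```

The lever this illustrates matters at OpenAI's scale: cut the cost of serving a token by half, and the same infrastructure budget buys twice the serving capacity.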
The Custom OpenAI Chip: What We Know
OpenAI’s first proprietary AI chip is the most concrete piece of Altman’s hardware ambitions. [finance.yahoo]
Key technical details:
- Designed with: Broadcom [techcrunch]
- Manufactured by: TSMC
- Process node: 3nm (one of TSMC’s most advanced)
- Architecture: Systolic array with high-bandwidth memory, the same fundamental approach as Google’s TPUs rather than Nvidia’s general-purpose GPU design
- Primary purpose: AI inference (running models in production) [techcrunch]
- Production timeline: Mass shipments targeted for 2026 [finance.yahoo]
“OpenAI is finalizing the design for its first custom AI training chip and is currently in the tape-out phase, the final process before a semiconductor is manufactured.”
— Data Center Dynamics, February 2025
The tape-out phase is significant. It means the chip design is finalized and handed off to the foundry, with the first physical samples to follow. This is not a roadmap item or a press release; it is a chip that is physically being made.
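The "systolic array" term in the spec list above is worth unpacking. Below is a minimal, cycle-stepped simulation of an output-stationary systolic array computing a matrix product. This is an illustrative sketch of the general dataflow only, not OpenAI's or Google's actual design:

```python
def systolic_matmul(A, B):
    """Cycle-stepped simulation of an output-stationary systolic array
    computing C = A @ B. Processing element (i, j) accumulates C[i][j];
    values of A flow rightward and values of B flow downward, one PE per
    cycle, injected at the edges on the classic skewed (diagonal) schedule."""
    M, K, N = len(A), len(B), len(B[0])
    C = [[0] * N for _ in range(M)]
    a = [[0] * N for _ in range(M)]  # A operand currently held in each PE
    b = [[0] * N for _ in range(M)]  # B operand currently held in each PE
    for t in range(M + K + N - 2):   # enough cycles to drain the wavefront
        for i in reversed(range(M)):
            for j in reversed(range(N)):
                # Shift in from the left neighbor (or inject at the edge).
                a[i][j] = a[i][j - 1] if j > 0 else (A[i][t - i] if 0 <= t - i < K else 0)
                # Shift in from the top neighbor (or inject at the edge).
                b[i][j] = b[i - 1][j] if i > 0 else (B[t - j][j] if 0 <= t - j < K else 0)
                C[i][j] += a[i][j] * b[i][j]  # multiply-accumulate in place
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The appeal for inference hardware is that every processing element does one multiply-accumulate per cycle with only nearest-neighbor communication, which is exactly the operation profile of a neural network's matrix multiplies.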
The AMD Partnership: Training at Scale
While the custom Broadcom chip focuses on inference, OpenAI also turned to AMD as its second major chip supplier for model training, alongside Nvidia.
At AMD’s product launch event in June 2025, Altman appeared on stage alongside AMD CEO Lisa Su to endorse the MI400 Helios rack-scale system — AMD’s answer to Nvidia’s DGX and GB200 systems.
“When you initially started sharing the specifications, I thought it was unbelievable; it sounded utterly insane. It’s going to be an extraordinary advancement.”
— Sam Altman on AMD’s MI400, June 2025 [cnbc]
AMD’s MI400 chips assemble into a full rack-scale configuration called Helios, presenting as a single unified system — critical for hyperscale AI training clusters. The AMD deal gives OpenAI a second sourcing option for training compute, reducing the leverage any single vendor holds over its roadmap.
“This is not just a supply agreement. It is a multi-generational collaboration. AMD’s engineers are now working arm-in-arm with OpenAI’s teams to shape not just the next chip, but the entire product roadmap through 2030 and beyond.”
— AMD CEO Lisa Su, 2025 [youtube]
The Nvidia Relationship: Frenemies With 10 Gigawatts
Despite building alternatives, OpenAI isn’t walking away from Nvidia. In September 2025, the two companies announced a strategic partnership to deploy 10 gigawatts of Nvidia systems — a staggering commitment representing some of the largest AI infrastructure investment ever announced by a single partnership.[nvidianews.nvidia]
The first deployment — a 1-gigawatt facility running Nvidia’s Vera Rubin platform — is scheduled for the second half of 2026. To put 1 gigawatt in perspective: that’s enough electricity to power roughly 750,000 homes, all dedicated to running AI compute.
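As a sanity check on that household comparison (the result depends heavily on the assumed average consumption; the figure below is an assumption, and US averages vary by source and year):

```python
# Rough check of "1 GW is roughly 750,000 homes". The per-home consumption
# number is an assumption, not an official statistic.
annual_kwh_per_home = 11_700                  # assumed average US household use
avg_kw_per_home = annual_kwh_per_home / 8760  # 8,760 hours in a year
homes_per_gw = 1_000_000 / avg_kw_per_home    # 1 GW = 1,000,000 kW
print(f"{homes_per_gw:,.0f} homes")           # on the order of 750,000
```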
This is the paradox of Altman’s chip strategy in a single deal: even as OpenAI builds the infrastructure to reduce Nvidia dependence, it simultaneously signs agreements to deploy more Nvidia hardware than almost anyone in history. The goal isn’t to eliminate Nvidia — it’s to make sure OpenAI is never solely dependent on any single supplier.
Project Stargate: The $500 Billion Infrastructure Bet
On January 21, 2025, Altman stood at the White House alongside President Trump, Oracle’s Larry Ellison, and SoftBank’s Masayoshi Son to announce Project Stargate — a joint venture committing up to $500 billion in AI infrastructure investment in the United States by 2029.
The initial tranche: $100 billion deployed immediately, going toward up to 20 large AI data centers across the country. The full $500 billion covers data centers, power infrastructure, networking, and yes — chips.
“We discussed, and he said: More is better,” Son told Forbes. “More is better.”
— Masayoshi Son on Sam Altman’s infrastructure ambitions, Forbes 2026 [almcorp]
Altman also stated OpenAI would spend approximately $1.4 trillion, mostly on AI chips and data centers, over the next eight years — a figure that makes the original $7 trillion chip factory dream feel like a stepping stone rather than an endpoint.
The Abandoned $7 Trillion Vision (And What Replaced It)
The original plan was even bigger. In early 2024, Altman was shopping a concept to global investors: raise $5 to $7 trillion to build a worldwide network of chip fabrication plants, dramatically expanding global semiconductor manufacturing capacity. [blumorpho]
Prospective backers included SoftBank and Abu Dhabi’s G42, with discussions also involving Microsoft. The goal was nothing less than restructuring the entire global chip supply chain — not just buying chips, but building the factories that make them. [reuters]
“Sam Altman has announced his ambition to overhaul the semiconductor industry, aiming to address supply-demand challenges and advance AI chip production, which could require between $5–7 trillion.”
— Blumorpho, March 2024 [blumorpho]
That plan was eventually abandoned in favor of the more focused Broadcom collaboration. But the ambition behind it — the conviction that AI’s future depends on compute sovereignty, not just model quality — never went away. It just found more practical channels.
What Does This Mean for the AI Chip Race?
| Strategy | Partner | Purpose | Timeline / Status |
|---|---|---|---|
| Custom inference chip | Broadcom + TSMC | Reduce cost-per-token | Mass production 2026 |
| Training chips | AMD MI400 Helios | Scale model training | 2025–2030 |
| Nvidia deployment | Nvidia Vera Rubin | Immediate scale | H2 2026 |
| Project Stargate | Oracle, SoftBank | US AI infrastructure | $100B now, $500B by 2029 |
| India AI Summit | Government of India | Global chip supply | $18B approved |
The pattern is clear: Altman is not betting on one chip partner or one architecture. He’s building redundancy and optionality into OpenAI’s compute stack at every layer. Custom chips for efficiency. AMD for training diversity. Nvidia for raw scale. Stargate for long-term infrastructure ownership.
India: The Next Frontier for AI Chips
In February 2026, Altman attended India’s AI Impact Summit alongside Sundar Pichai and Demis Hassabis. India has approved $18 billion in chip projects as part of its push to build a domestic semiconductor supply chain.
Altman told the summit: “India has a chance to leapfrog” in AI development — a statement that positions India not just as a market for OpenAI, but as a potential manufacturing and talent partner for the next phase of AI infrastructure.
FAQs

**What is OpenAI’s first custom AI chip?**
OpenAI’s first custom chip is designed with Broadcom and manufactured by TSMC on a 3nm process. It uses a systolic array architecture with high-bandwidth memory and is targeted primarily at AI inference workloads.

**When will the chip ship?**
Mass shipments are targeted for 2026, following a tape-out phase confirmed in early 2025.

**Why is OpenAI reducing its reliance on Nvidia?**
Nvidia GPUs are expensive, hard to source due to high demand, and not optimized specifically for AI inference. Custom chips reduce per-token costs and single-vendor dependency.

**What is Project Stargate?**
A joint venture between OpenAI, Oracle, and SoftBank committing up to $500 billion in US AI infrastructure by 2029, with an immediate $100 billion deployment for up to 20 data centers.

**Did Altman really try to raise $5–7 trillion?**
Yes. In early 2024, Altman sought $5–7 trillion to build a global network of chip fabrication plants. That plan was abandoned in favor of partnering with existing manufacturers like Broadcom and TSMC.

**What is OpenAI’s relationship with AMD?**
OpenAI is a strategic customer of AMD’s MI400 chips for model training. Sam Altman appeared on stage with AMD CEO Lisa Su in June 2025, confirming a “multi-generational collaboration” extending through 2030.
