Executive summary

OpenAI is building a vertically integrated power stack across compute, infrastructure, and platform distribution. In the span of weeks, it (1) locked in multi-gigawatt GPU supply with AMD and NVIDIA, (2) accelerated its Stargate data-center build-out with Oracle/SoftBank toward ~10 GW, (3) deepened outsourced capacity with CoreWeave to ~$22.4B in total commitments, and (4) shifted ChatGPT from product to platform, allowing third-party apps to run inside ChatGPT. The combined moves reduce single-vendor risk, secure energy-and-silicon pipelines, and create a new distribution layer that could siphon attention and transactions away from mobile app stores.


1) The compute pincer: dual-sourcing at gigawatt scale

AMD deal (6 GW + warrants): OpenAI agreed to deploy 6 GW of AMD Instinct GPUs, starting with 1 GW of MI450 in 2H’26. In exchange, OpenAI received warrants for up to ~10% of AMD (≈160M shares) that vest on delivery/performance milestones—an unusual customer equity incentive designed to align execution on a multi-year, multi-generation roadmap. For AMD, it’s a validation loop; for OpenAI, it’s pricing power + supply assurance outside NVIDIA.

NVIDIA deal (10 GW + $100B): Simultaneously, NVIDIA and OpenAI set a framework for 10 GW of NVIDIA systems, with NVIDIA signaling up to $100B of progressive investment as GW units deploy. The first NVIDIA gigawatt is slated for 2H’26 on Vera Rubin systems. This is less “choosing” and more portfolio hedging: OpenAI wants both vendors to compete on TCO and delivery.

Implications:

  • Cost curve: Dual vendors sharpen pricing leverage on $/FLOP and service SLAs, important as model sizes, context windows, and agentic workloads expand.
  • Execution risk: Both roadmaps hinge on late-2026 ramps; delays in silicon, packaging, or power availability could compress delivery schedules and margin assumptions.
  • Capital optics: Equity-linked incentives (AMD warrants) plus vendor investment (NVIDIA’s $100B intent) blur the line between supplier financing and strategic partnership—beneficial now, potentially contentious under antitrust scrutiny later.

2) Infrastructure at grid scale: Stargate’s march toward 10 GW

Five new U.S. sites with Oracle/SoftBank push Stargate to “nearly 7 GW” planned capacity and >$400B in cumulative build over the next three years, en route to a frequently quoted $500B total plan. New builds span Texas, New Mexico, and a Midwest location, expanding on the flagship Abilene, TX site. The limiting factors now shift from GPUs to power, cooling, transmission interconnects, and permitting—the utility domain.

What to watch:

  • Interconnect queues & lead times: Multi-GW campuses need multi-year grid work; any slippage cascades into compute utilization and revenue timing.
  • Thermal innovations: Achieving PUE/WUE targets at GW scale likely forces liquid cooling, warm-water loops, and on-site generation/PPA structures.
  • Balance-sheet engineering: $400–$500B implies mixed project finance, vendor financing, and customer pre-pays—not just equity. Expect evolving risk-sharing between OpenAI, Oracle, SoftBank, and possibly utilities.
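For readers unfamiliar with the efficiency metrics above, PUE and WUE are simple ratios, and a toy calculation shows why GW-scale campuses fight for every basis point. All figures below are illustrative assumptions, not reported values for any Stargate site.

```python
# Minimal sketch of the two data-center efficiency metrics mentioned above.
# PUE (power usage effectiveness) = total facility energy / IT equipment energy
# WUE (water usage effectiveness) = site water use (liters) / IT energy (kWh)
# All numbers are hypothetical, chosen only to illustrate the arithmetic.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Lower is better; 1.0 would mean zero cooling/overhead energy."""
    return total_facility_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    """Liters of water consumed per kWh delivered to IT equipment."""
    return water_liters / it_kwh

# Hypothetical 1 GW campus over one hour: 1,000 MWh of IT load,
# 120 MWh of cooling/overhead, and 200,000 liters of water.
it = 1_000_000          # kWh of IT load
overhead = 120_000      # kWh of cooling, power conversion, lighting
water = 200_000         # liters

print(f"PUE: {pue(it + overhead, it):.2f}")    # prints "PUE: 1.12"
print(f"WUE: {wue(water, it):.2f} L/kWh")      # prints "WUE: 0.20 L/kWh"
```

At these assumed numbers, every 0.01 of PUE saved on a 1 GW campus is roughly 10 MW of continuous load—which is why liquid cooling and warm-water loops become forcing functions at this scale.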

3) Elastic capacity: CoreWeave as the “buffer cloud”

OpenAI’s third leg is outsourced GPU cloud. Contracts with CoreWeave now total ~$22.4B after a fresh $6.5B expansion in September (on top of $11.9B in March and $4B in May). Practically, this gives OpenAI burst capacity and geographic diversification while Stargate ramps and vendor data-center slots fill. It also spreads supply-chain risk (CoreWeave’s close ties to NVIDIA help backstop hardware flow).

Tradeoffs: Outsourced margins are thinner than owned capacity but reduce time-to-serve. In a land-grab for enterprise AI spend, time beats margin—at least until fixed assets come online.
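The time-versus-margin tradeoff can be made concrete with a toy break-even model. Every figure here is a hypothetical assumption, not an actual OpenAI or CoreWeave contract term:

```python
# Toy break-even comparison: renting GPU capacity vs. owning it.
# All numbers are illustrative assumptions for the rent-vs-own tradeoff.

def rented_cost(hours: float, rate_per_gpu_hour: float) -> float:
    """Cumulative opex of renting one GPU from a cloud provider."""
    return hours * rate_per_gpu_hour

def owned_cost(hours: float, capex_per_gpu: float, opex_per_gpu_hour: float) -> float:
    """Upfront capex plus cumulative operating cost of one owned GPU."""
    return capex_per_gpu + hours * opex_per_gpu_hour

# Hypothetical terms: $2.50/GPU-hr rented vs. $30k capex + $0.80/GPU-hr owned.
RATE, CAPEX, OPEX = 2.50, 30_000.0, 0.80

# Owning wins once cumulative rent exceeds capex plus owned opex:
# break-even hours = CAPEX / (RATE - OPEX)
breakeven = CAPEX / (RATE - OPEX)
print(f"Owning pays off after ~{breakeven:,.0f} GPU-hours "
      f"(~{breakeven / 24 / 365:.1f} years at full utilization)")
```

Under these assumed terms the crossover lands around two years out—exactly the horizon over which Stargate's fixed assets come online, which is why renting to win the land-grab now is rational even at thinner margins.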


4) Platform endgame: ChatGPT becomes a distribution surface

OpenAI used DevDay to reposition ChatGPT as an “app OS”: developers can now ship apps that run in-chat (Spotify, Canva, Coursera, Booking, Expedia, Figma, Zillow to start), with an Apps SDK, directory, and planned monetization. This attacks the mobile app store moat by keeping discovery, intent, and transaction inside the assistant. For OpenAI, every conversation becomes inventory (commerce), telemetry (product), and training signal (models).

Why it matters:

  • Frictionless funnels: Move from “open app → log in → context” to “stay in chat with persistent context.”
  • Take-rate potential: If OpenAI intermediates discovery and checkout, an ad + affiliate + fee stack emerges—complementing API revenue.
  • Developer calculus: For some services, building a great ChatGPT app may offer better CAC/LTV than fighting app store SEO and paid acquisition.
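The take-rate point above is back-of-envelope arithmetic. A quick sketch shows how the revenue stack composes—every number is a hypothetical assumption, since OpenAI has not disclosed fee structures:

```python
# Back-of-envelope: what an in-chat app store revenue stack could look like.
# All figures are hypothetical assumptions for illustration only.

def platform_revenue(gmv: float, take_rate: float,
                     ad_revenue: float, affiliate_revenue: float) -> float:
    """Transaction fees plus ad and affiliate income routed through the assistant."""
    return gmv * take_rate + ad_revenue + affiliate_revenue

# Assume $10B of annual GMV routed through ChatGPT apps at a 15% fee,
# plus $1B in ads and $0.5B in affiliate income.
rev = platform_revenue(10e9, 0.15, 1e9, 0.5e9)
print(f"Hypothetical platform revenue: ${rev / 1e9:.1f}B")
```

The point of the sketch is structural, not the totals: fee, ad, and affiliate streams are additive and all three ride on the same intermediated checkout, which is what makes the app-store comparison apt.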

5) Financial architecture: new forms of vendor financing

Pull the threads together and you see a capital stack engineered for speed:

  • Supplier equity incentives (AMD warrants) lower effective compute cost and align delivery.
  • Supplier investment (NVIDIA’s up to $100B) secures pipeline and may share upside.
  • Project finance & PPAs under Stargate push heavy capex off OpenAI’s core balance sheet.
  • Contracted cloud (CoreWeave) converts capex to opex for near-term elasticity.

Press accounts peg the aggregate scope into the high-hundreds of billions, underscoring how AI’s bottleneck is now energy + silicon + land + grid, not just clever algorithms.


6) Strategic risks

  1. Grid & permitting bottlenecks: Power availability and interconnect delays can slip timelines more than chip yields. (Watch ERCOT queue data, local approvals.)
  2. Execution across two GPU vendors: Cross-gen compatibility, framework tuning, and ops tooling must keep heterogeneous fleets efficient; otherwise utilization drops and training costs spike.
  3. Regulatory/antitrust scrutiny: The web of equity, supply, and capacity deals (NVIDIA ↔ OpenAI; NVIDIA ↔ CoreWeave; OpenAI ↔ AMD warrants) will draw attention if rivals claim foreclosure or preferential access.
  4. Unit economics at scale: If inference moves agentic (always-on, tool-using, memory-heavy), token margins could compress unless models become markedly more compute-efficient.
  5. Platform governance: Running third-party apps inside ChatGPT raises new security, privacy, and take-rate debates—more akin to an OS than a website.

7) Competitive landscape: who must respond?

  • Microsoft/Azure: Beneficiary via Azure-OpenAI demand, but must counter ChatGPT’s intra-assistant app store with its own Copilot extensibility and keep sovereign/regulatory options open.
  • Google: Gemini’s product moat hinges on tight Google Workspace/Android integration; expect deeper “Gemini inside” across Search/Play and an aggressive Android-sidecar strategy to blunt ChatGPT’s OS-like pull.
  • AWS: Will double down on being the arms dealer (Trainium/Inferentia + Bedrock + partner clouds) and court AMD/NVIDIA neutrality.
  • Meta: Open Llama strategy plus in-app agents (Messenger/WhatsApp/Instagram) provides huge distribution if it can match quality and enterprise trust.
  • Apple: If iOS remains a hard boundary, an AI-intents layer or App Store policy shifts could become necessary to keep commerce from leaking to chat surfaces.

8) Scenarios (2026–2028)

Bull case:

  • On-time 2H’26 ramps for MI450 and Vera Rubin; Stargate sites energize in phased waves; CoreWeave buffers peak demand. ChatGPT-apps reach meaningful GMV/ad revenue, subsidizing inference. OpenAI’s blended $/FLOP drops 30–40% vs 2025.

Base case:

  • Minor silicon or interconnect slippage; OpenAI juggles workloads across CoreWeave and early Stargate phases. Platform monetization starts but remains <15% of revenue. Margins improve, but free cash flow remains constrained by build-out.

Bear case:

  • Grid or permitting delays + silicon slips force heavy reliance on outsourced clouds at premium pricing. Antitrust probes slow vendor-investment structures; ChatGPT-app adoption lags due to governance or Apple/Google resistance.

9) What to track (concrete KPIs)

  • GW energized vs. contracted by quarter, and average PUE/WUE of new sites.
  • Vendor mix: % of training hours on AMD vs. NVIDIA and realized $ per trained token.
  • CoreWeave utilization & backlog tied to OpenAI launches (proxy for demand surges).
  • ChatGPT-app metrics: DAU engaging with apps, GMV routed, and developer payout structures (take-rate clarity).
  • Regulatory filings/inquiries around vendor financing and cloud capacity concentration.

10) Bottom line

OpenAI is no longer just a model company. It’s orchestrating a full-stack AI utility—securing silicon, financing and standing up multi-GW data-center capacity, and converting ChatGPT into a commerce-and-workflows surface. If execution holds through 2026’s ramps, this architecture could reset industry cost curves and redirect app distribution economics toward assistants. The risk is equally industrial: power, permits, and physics will decide whether the software vision lands on time.
