Nvidia to Invest Up to $100 Billion in OpenAI, Setting Private Funding Record

Nvidia, the artificial intelligence industry’s most valuable chipmaker, will commit as much as $100 billion to OpenAI beginning in 2026.

The deal ties the investment to the rollout of “at least 10 gigawatts of Nvidia systems for OpenAI’s next-generation AI infrastructure to train and run its next generation of models on the path to deploying superintelligence,” according to a Monday (Sept. 22) press release. OpenAI will buy millions of Nvidia AI processors.

The investment will support the progressive rollout of that infrastructure, including data center and power capacity, with the first gigawatt of Nvidia systems to be deployed in the second half of next year, the release said.

The deal is the largest private-company investment on record, provided Nvidia invests the full $100 billion, the Financial Times reported Monday. The partnership cements Nvidia’s dominance in AI compute by ensuring its chips remain at the core of OpenAI’s stack for training and inference.

OpenAI will build the infrastructure primarily in the United States, the report said. The project will use Nvidia’s upcoming Vera Rubin platform, the successor to its Blackwell chips.

Each phase of deployment will trigger a new tranche from Nvidia, starting with $10 billion when the first gigawatt is deployed, according to the report. Later tranches will be priced at OpenAI’s prevailing valuation, which currently stands at $500 billion.

The investment gives OpenAI both capital and a secured supply of the hardware it needs to continue scaling. The company has signed several large agreements in recent months, including a $300 billion, five-year contract with Oracle to supply compute infrastructure for training and inference workloads. That deal requires more than 4 gigawatts of electricity and positions Oracle as a central infrastructure provider for OpenAI.

For Nvidia, the arrangement secures billions of dollars in chip sales and ensures that its technology remains embedded in OpenAI’s stack, anchoring demand as the market shifts from training massive models to serving them efficiently at scale.

For OpenAI, the partnership provides a long-term hedge against supply shortages and rising hardware costs. The Oracle contract locks in cloud services for multiple years, while the Nvidia deal ensures hardware availability. By layering in Nvidia’s staged equity commitment, OpenAI reduces execution risk while retaining access to processors.

Some execution risk remains, however. Before the first deployment, technological, regulatory and power challenges could emerge. OpenAI is also hedging by exploring custom chips with Broadcom, which could reduce its reliance on Nvidia in the long term. But the deal, as structured, keeps Nvidia central to OpenAI’s near-term roadmap and signals to the market that compute remains the resource shaping the pace of generative AI adoption.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.