NVIDIA Corporation’s new favorite customer is the AI startup era

NVIDIA’s gigawatt AI deal shows where NVDA goes next

TL;DR

  • NVIDIA partnered with Mira Murati’s Thinking Machines Lab on March 10, 2026, including an NVIDIA investment and a commitment to deploy at least one gigawatt of Vera Rubin systems starting in early 2027.
  • GTC runs March 16–19, 2026 in San Jose, with Jensen Huang’s keynote on March 16—an important moment for NVIDIA to frame the next phase of AI infrastructure.
  • Reports of an open-source enterprise AI agent platform point to NVIDIA pushing deeper into software that could drive longer-lasting inference demand.

#RealTalk

NVIDIA isn’t just selling chips anymore—it’s helping decide which AI companies get built, then supplying the infrastructure they’ll run on. That’s powerful, but it also means the story increasingly depends on how fast the real world can build data centers and power.

Bottom Line

For investors, today’s news reinforces what NVIDIA is optimizing for in 2026: locking in multi-year compute demand and expanding the software layer that makes that demand sticky. The company’s narrative is shifting from “best GPU” to “default AI infrastructure,” and GTC next week is where it will try to make that feel inevitable.

NVIDIA’s new kind of whale

On March 10, 2026, NVIDIA Corporation (NVDA) did the most NVIDIA thing imaginable: it didn’t just sell chips—it helped create a customer big enough to justify buying a small power plant’s worth of compute.

The headline is a long-term partnership with Thinking Machines Lab, an AI startup founded by former OpenAI CTO Mira Murati. The companies say the deal includes a significant NVIDIA investment and a commitment for Thinking Machines to deploy at least one gigawatt of NVIDIA’s next-generation Vera Rubin systems, with deployment starting in early 2027.

That “one gigawatt” detail is doing a lot of work. It’s not a cute KPI. It’s a signal that AI has moved from “we need GPUs” to “we need infrastructure,” and NVIDIA increasingly sits in the role of the arms dealer, the architect, and—sometimes—the venture backer.

Why NVIDIA keeps funding the future

If you’re trying to understand modern NVIDIA, don’t picture a chip company hustling boxes into a supply chain. Picture a platform company trying to make its ecosystem impossible to leave.

Investing in a frontier-model startup while also locking in a multi-year hardware deployment is a very specific kind of flywheel. It seeds demand (capital), secures demand (purchase commitments), and then creates a developer and software orbit around that demand (tooling, frameworks, “reference” deployments everyone else copies).

The tell is that the partnership is explicitly about frontier training and “customizable AI at scale.” That’s the new vibe in enterprise AI: fewer generic demos, more “our models, our data, our rules.” NVIDIA benefits if the world decides the right way to do that is inside its hardware + networking + software stack.

GTC is next week, and it’s not just a keynote

This news also lands one week before NVIDIA’s GTC conference in San Jose, running March 16–19, 2026. Jensen Huang’s keynote is scheduled for March 16 at 11 a.m. local time.

GTC used to be where NVIDIA showed off what it built. Now it’s where NVIDIA shows off what the world is building on top of NVIDIA. That distinction matters because it changes how durable the story feels. A product announcement can be copied. An ecosystem with momentum is harder to dislodge.

And if you’re wondering why the market treats GTC like a cultural event, it’s because GTC is one of the few moments when NVIDIA can narrate the next chapter of AI infrastructure in public: what comes after today’s dominant systems, how developers should think about deploying AI, and where the bottlenecks are moving.

The “agents” era is the next demand engine

The other thread today: reports that NVIDIA is preparing an open-source AI agent platform for enterprises—described as a way for companies to dispatch AI “agents” to do tasks for employees, with security and privacy tools included.

This isn’t just a software curiosity. If the next wave of AI is more automated—agents that schedule, triage, buy, monitor, and coordinate—then inference (actually running models in the real world) becomes the star. That tends to mean more persistent compute demand, more networking, more tooling, and more reasons for enterprises to standardize on a vendor they trust.

NVIDIA wants to be that vendor. Not because everyone loves vendor lock-in (they don’t), but because businesses love buying “the thing that works” when the stakes are high and the roadmap is clear.

The backdrop: NVIDIA’s scale is getting unreal

All of this is happening after NVIDIA’s most recent reported quarter (ended January 25, 2026) showed $68.1 billion in revenue, with data center revenue of $62.3 billion. For the full fiscal year 2026, the company reported $215.9 billion in revenue.

Those numbers aren’t just “big.” They change what NVIDIA is. At this scale, the company’s biggest challenge isn’t whether AI is real—it’s whether the industry can keep building enough power, data centers, and software to keep the machine fed.

That’s why a gigawatt partnership and an enterprise agent platform belong in the same story. NVIDIA isn’t betting on one product cycle. It’s trying to own the rails of the AI economy.