NVIDIA Corporation and the new rules of AI spending

TL;DR

  • Google expects $175–$185B in 2026 capex (vs $91.4B in 2025), underscoring how infrastructure-heavy the AI era is becoming.
  • NVIDIA’s Rubin platform pitch is about lowering the cost of running AI at scale, not just flexing raw performance.
  • NVIDIA deepened its CoreWeave relationship, including a $2B investment, as “AI factory” buildouts accelerate into the late 2020s.

#RealTalk

NVIDIA’s bull case in 2026 is less about a single blockbuster GPU and more about owning the full stack of what it takes to run AI cheaply and reliably, every hour of the day.

Bottom Line

For investors, NVDA is increasingly a story about platform lock-in and AI operating economics—plus real headline risk from export rules. The question isn’t whether AI demand exists; it’s whether NVIDIA can keep translating that demand into durable, repeatable infrastructure dependence across clouds, enterprises, and regulated markets.

The vibe shift: AI isn’t a “chip story” anymore

On February 5, 2026, NVIDIA Corporation (NVDA) sits in a familiar spot: still the poster child for the AI boom, still a lightning rod for hot takes, and still somehow turning the very unsexy world of data-center infrastructure into a mainstream market narrative. But the interesting twist in 2026 isn’t just “GPUs good, AI growing.” It’s that the center of gravity is moving from raw horsepower to the economics of running AI every day.

Training giant models made NVIDIA famous. Now inference—the constant, always-on work of generating answers, images, code, recommendations, and agent actions—is becoming the bill that never stops arriving.

Google’s capex bombshell is basically an NVIDIA headline

If you wanted a single datapoint that captures how aggressively Big Tech is leaning into AI infrastructure, Alphabet’s latest guidance did it. In its Q4 2025 earnings cycle (reported in early February 2026), Google said it expects 2026 capital expenditures of $175–$185 billion, versus $91.4 billion in 2025. That’s not a minor tweak; that’s a strategic re-architecture of what “normal” spending looks like for a consumer internet giant.
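A quick back-of-envelope check of that jump, using only the figures cited above (taking the midpoint of the guidance range is my own simplification, not anything Alphabet reported):

```python
# Alphabet capex figures cited above (billions of USD).
capex_2025 = 91.4
guide_low, guide_high = 175.0, 185.0

# Use the guidance midpoint as a rough point estimate (a simplification).
midpoint = (guide_low + guide_high) / 2   # 180.0
growth = midpoint / capex_2025 - 1        # ~0.97, i.e. capex roughly doubles

print(f"2026 midpoint: ${midpoint:.0f}B, implied growth: {growth:.0%}")
# prints "2026 midpoint: $180B, implied growth: 97%"
```

In other words, even the low end of the range implies spending nearly doubles year over year.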

Even with Google designing its own TPUs, that kind of buildout matters for NVIDIA because the limiting factor in AI has shifted from “Who has the best model?” to “Who can actually stand up enough compute, networking, and power to serve demand?” When hyperscalers spend like this, the winners are the companies selling the picks, shovels, and the operating system for modern AI factories.

Rubin is NVIDIA saying: the next bottleneck is everything around the GPU

At CES 2026, Jensen Huang didn’t show up to pitch “a faster chip” like it’s 2017. NVIDIA’s message was: the platform is the product.

NVIDIA introduced the Rubin platform (announced in January 2026) as the successor line to Blackwell, bundling GPUs with NVIDIA’s Vera CPUs, NVLink 6, new networking gear, DPUs, and an AI-focused storage approach designed to speed up long-context inference. NVIDIA’s claim is blunt: the goal is to push the cost per AI token down dramatically, because that’s what decides whether AI features become cheap utilities or expensive demos.

For investors, this is the quiet strategy behind the loud hype: if inference becomes the “electric bill” of the AI era, then performance-per-watt, networking throughput, and reliability start to matter as much as peak benchmark numbers.

CoreWeave and the rise of “AI landlords”

The other storyline NVIDIA is leaning into is who gets to own the real estate of AI compute. On January 26, 2026, NVIDIA and CoreWeave (CRWV) announced an expanded partnership tied to building out more than 5 gigawatts of “AI factories” by 2030—and NVIDIA said it invested $2 billion in CoreWeave shares at $87.20 per share.
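The deal terms above imply a rough stake size. This is only arithmetic on the two numbers in the announcement; the actual share count NVIDIA received depends on the final agreement:

```python
# Implied share count from the deal terms cited above (an estimate only;
# actual issued shares depend on the final agreement and rounding).
investment = 2_000_000_000   # NVIDIA's stated $2B investment
price_per_share = 87.20      # stated purchase price per CRWV share

implied_shares = investment / price_per_share
print(f"Implied stake: ~{implied_shares / 1e6:.1f}M CRWV shares")
# prints "Implied stake: ~22.9M CRWV shares"
```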

That’s not just a vote of confidence. It’s NVIDIA reinforcing an ecosystem where specialized clouds and data-center operators become the distribution layer for NVIDIA platforms, especially for customers that don’t want to wait in line for hyperscaler capacity.

Export controls: the demand is global, the rules are not

The “AI is everywhere” reality runs into geopolitics fast. A Reuters report on February 4, 2026, said the U.S. administration was willing to let China’s ByteDance buy NVIDIA’s H200 chips, but that approval hinged on conditions, including proposed know-your-customer (KYC) requirements and usage terms NVIDIA had not agreed to as drafted.

That’s the tension investors can’t ignore: AI demand is huge, but the addressable market can be shaped overnight by licensing, compliance, and who is allowed to buy what.

What to watch next

Three questions matter more than whether NVDA had a good week:

  • Does inference growth keep outpacing training growth in 2026, making efficiency and systems integration the headline features?
  • Do hyperscalers like Google keep escalating capex without spooking investors into demanding a pullback?
  • Do export restrictions tighten further, or stabilize into a predictable (even if frustrating) rulebook?

NVIDIA is trying to be more than the best chip company in AI. It’s trying to be the default architecture for turning AI into a reliable, cost-manageable service. And that’s a bigger ambition than winning the next benchmark chart.