Nvidia just laid out what’s next for the tech that made it the world’s most valuable company

By Lisa Eadicicco, CNN

Las Vegas (CNN) — Nvidia just provided a closer look at its new computing platform for AI data centers, Vera Rubin, a release that could have major ramifications for the future of AI given the industry’s massive reliance on the company’s tech.

Nvidia previously announced some details about Vera Rubin but laid out how the system will work and revealed its launch timing during the CES tech conference in Las Vegas on Monday. Vera Rubin is currently in production and the first products running on it will arrive in the second half of 2026, the company said.

Nvidia has become the poster child for the AI boom, with the pervasiveness of its AI chips and platforms propelling it to briefly become the world’s first $5 trillion company last year. But the company is also combating fears of an AI bubble amid growing competition and a push by tech companies to make their own AI chips to decrease reliance on Nvidia.

Nvidia CEO Jensen Huang, clad in his signature leather jacket, addressed the question of where AI funding is coming from – a point central to the bubble debate – in his opening remarks on stage at the theater inside the Fontainebleau Las Vegas. He said companies are shifting research and development budgets from classical computing methods to artificial intelligence.

“People ask, where is the money coming from? That’s where the money is coming from,” he said.

The Vera Rubin platform is an attempt by Nvidia to position itself as the answer to the computing challenges posed by increasingly demanding AI models – such as whether existing infrastructure can handle increasingly complicated AI queries. The company claims in a press release that its upcoming AI server rack, called Vera Rubin NVL72, “provides more bandwidth than the entire internet.”

With Vera Rubin, Nvidia says it’s developed a new type of storage system to help AI models process more complex, context-heavy requests more quickly and capably. Existing types of storage and memory used by traditional computers and even the graphics processing units powering data centers won’t be enough as companies like Google, OpenAI and Anthropic shift from offering simple chatbots to full-fledged AI helpers.

Huang walked through what the transition from chatbots to agents looks like on Monday. In a video demonstration, a person built a personal assistant by connecting a friendly-looking tabletop robot to multiple AI models running on Nvidia’s DGX Spark desktop computer. The robot was able to do things like recount the user’s to-do list and even tell the dog to get off the couch.

Huang said creating such an assistant would have been unimaginable several years ago but is “utterly trivial” now that developers can rely on large language models rather than traditional programming tools to build apps and services.

In other words, the old way simply won’t cut it as AI grows more sophisticated and “reasons” on tasks that take multiple steps like these, Nvidia claims.

“The bottleneck is shifting from compute to context management,” Dion Harris, Nvidia’s senior director of high-performance computing and AI hyperscale solutions, said on a call with reporters ahead of the press conference.

“Storage can no longer be an afterthought,” he added.

Ahead of CES, Nvidia also entered into a licensing agreement with Groq, a company that specializes in inference, another sign that it’s investing heavily in that branch of AI.

“Instead of a one-shot answer, inference is now a thinking process,” Huang said, referring to the process AI models go through to “think” and “reason” through answers and accomplish tasks.

All of the major cloud providers – Microsoft, Amazon Web Services, Google Cloud and CoreWeave – will be among the first to deploy Vera Rubin, Nvidia said in its press release. Computing companies like Dell and Cisco are expected to incorporate the new chips into their data centers, and AI labs such as OpenAI, Anthropic, Meta and xAI are likely to embrace the new tech for training and to provide more sophisticated answers to queries.

Nvidia also deepened its push into autonomous vehicles with new models called Alpamayo, and into “physical AI,” the type of AI that powers robots and other real-world machinery, building on the vision it laid out during its GTC conference in October.

But Nvidia’s progress and prevalence also means it shoulders the burden of consistently surpassing Wall Street’s high expectations and assuaging concerns that spending on AI infrastructure is far outpacing tangible demand.

Meta, Microsoft and Amazon, among others, have spent tens of billions in capital expenditures this year alone, and McKinsey & Company expects companies to invest nearly $7 trillion in data center infrastructure globally by 2030. And much of the support being poured into AI seemingly involves a relatively small group of companies trading money and technology back and forth in what’s known as “circular funding.”

Google and OpenAI have also been leaning into developing their own chips, allowing them to better tailor hardware to the specific needs of their models. Nvidia has also been facing growing competition from AMD, and chipmaker Qualcomm also recently announced it’s getting into the data center business.

“Nobody wants to be beholden to Nvidia,” Ben Barringer, global head of technology research at investment firm Quilter Cheviot, said in a previous CNN interview when asked about other companies like Google potentially challenging Nvidia in AI chips. “They are trying to diversify their chip footprint.”

The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
