
    NVIDIA Announces Rubin Platform at CES 2026 with 10x Inference Cost Reduction

    NVIDIA introduced its Rubin platform at CES 2026 in Las Vegas, marking the next generation of AI computing infrastructure. The platform unites six new chips: the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet Switch. Together, they are designed to deliver up to a 10x reduction in inference token costs compared with previous generations. DGX SuperPOD with Rubin-based systems will ship in the second half of 2026.

    What’s New in the Rubin Platform

    The Rubin platform debuts five major technology upgrades built specifically for agentic AI, mixture-of-experts models, and long-context reasoning. Sixth-generation NVIDIA NVLink delivers 3.6TB/s per GPU and 260TB/s per Vera Rubin NVL72 rack, enabling massive parallel workloads without model partitioning. The NVIDIA Vera CPU features 88 custom Olympus cores with full Armv9.2 compatibility and ultrafast NVLink-C2C connectivity.
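    As a quick back-of-envelope check (a minimal sketch, assuming the rack figure is simply the per-GPU bandwidth aggregated across all 72 GPUs in an NVL72 rack), the quoted per-rack number follows directly from the per-GPU number:

        # Back-of-envelope check: 72 GPUs at 3.6TB/s each, rounded to the quoted 260TB/s.
        gpus_per_rack = 72          # Vera Rubin NVL72
        nvlink_per_gpu_tbs = 3.6    # TB/s per GPU, sixth-generation NVLink

        rack_bandwidth_tbs = gpus_per_rack * nvlink_per_gpu_tbs
        print(f"{rack_bandwidth_tbs:.1f} TB/s per rack")  # 259.2 TB/s, quoted as 260TB/s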

    The Rubin GPU provides 50 petaflops of NVFP4 compute with a third-generation Transformer Engine featuring hardware-accelerated compression. Each DGX Rubin NVL8 system delivers 5.5x NVFP4 FLOPS compared to NVIDIA Blackwell systems. CEO Jensen Huang stated during his CES 2026 keynote that Rubin arrives “at exactly the right moment, as AI computing demand for both training and inference is going through the roof”.
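    As a rough illustration only (assuming the per-GPU NVFP4 rating scales linearly across the eight GPUs in a DGX Rubin NVL8, and that the quoted 5.5x factor applies to that aggregate), the system-level figures multiply out as follows; the implied Blackwell baseline is a derived estimate, not an official specification:

        # Rough sanity check of DGX Rubin NVL8 compute; assumptions noted above.
        nvfp4_per_gpu_pflops = 50        # Rubin GPU NVFP4 rating
        gpus_per_system = 8              # DGX Rubin NVL8
        speedup_vs_blackwell = 5.5       # quoted factor

        rubin_system_pflops = nvfp4_per_gpu_pflops * gpus_per_system           # 400 petaflops NVFP4 per system
        implied_blackwell_pflops = rubin_system_pflops / speedup_vs_blackwell  # ~72.7 petaflops (derived, not official)
        print(rubin_system_pflops, round(implied_blackwell_pflops, 1))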

    Why the 10x Cost Reduction Matters

    The 10x reduction in inference token generation cost addresses the growing expense of deploying large language models and AI agents at scale. As AI models expand in size, context length, and reasoning depth, inference costs have become a critical bottleneck for enterprise adoption.

    This economic shift makes real-time AI coding assistants, million-token video processing, and enterprise-scale agentic AI financially viable for organizations of all sizes. The improved cost efficiency enables companies to process significantly more AI inference workloads without proportional infrastructure investment increases.
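    A minimal sketch makes the economics concrete. The token volume and per-million-token price below are hypothetical assumptions for illustration, not NVIDIA figures; only the 10x factor comes from the announcement:

        # Illustrative inference budget: hypothetical workload and pricing.
        tokens_per_month = 5_000_000_000     # assumed monthly inference volume
        usd_per_million_tokens = 2.00        # assumed current token cost
        reduction_factor = 10                # headline Rubin claim

        current_bill = tokens_per_month / 1_000_000 * usd_per_million_tokens
        projected_bill = current_bill / reduction_factor
        print(f"${current_bill:,.0f}/month -> ${projected_bill:,.0f}/month")  # $10,000 -> $1,000

    Put another way, under these assumptions the same monthly budget would cover roughly ten times the token volume.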

    DGX SuperPOD Configurations

    NVIDIA offers two Rubin-based DGX SuperPOD deployment options:

    DGX Vera Rubin NVL72

    • 8 rack systems with 576 Rubin GPUs total
    • 28.8 exaflops of FP4 performance
    • 600TB of fast memory
    • 36 Vera CPUs, 72 Rubin GPUs, and 18 BlueField-4 DPUs per rack
    • Unified memory space across entire rack

    DGX Rubin NVL8

    • 64 systems with 512 Rubin GPUs total
    • Liquid-cooled form factor with x86 CPUs
    • 8 Rubin GPUs per system with sixth-gen NVLink
    • Designed as an efficient on-ramp for existing AI projects

    Both configurations integrate BlueField-4 DPUs for secure infrastructure, NVIDIA Mission Control for automated orchestration, and support 800Gb/s networking via Quantum-X800 InfiniBand or Spectrum-X Ethernet.
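    The headline totals for both configurations are internally consistent. Here is a short arithmetic check, assuming the totals are simple products of the per-rack and per-system counts and that the 28.8-exaflop figure is the per-GPU NVFP4 rating aggregated across all 576 GPUs:

        # Consistency check of the two DGX SuperPOD configurations.
        nvl72_gpus = 8 * 72                       # 8 racks x 72 GPUs = 576, as quoted
        nvl72_exaflops = nvl72_gpus * 50 / 1000   # 50 PF NVFP4 per GPU -> 28.8 EF, as quoted
        nvl8_gpus = 64 * 8                        # 64 systems x 8 GPUs = 512, as quoted
        print(nvl72_gpus, nvl72_exaflops, nvl8_gpus)  # 576 28.8 512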

    What Comes Next

    NVIDIA DGX SuperPOD systems built on the Rubin platform will become available in the second half of 2026. CEO Jensen Huang confirmed at CES 2026 that the next-generation chips are “in full production,” signaling an aggressive rollout timeline.

    Third-generation NVIDIA Confidential Computing will debut on Vera Rubin NVL72 as the first rack-scale platform maintaining data security across CPU, GPU, and NVLink domains. NVIDIA Mission Control software, currently available for Blackwell systems, will extend support to Rubin-based DGX systems for enterprise infrastructure automation.

    The platform’s second-generation RAS Engine enables real-time health monitoring, fault tolerance, and 3x faster servicing through modular cable-free trays. Partners will begin rolling out Rubin-based products and services throughout the latter half of 2026.

    Frequently Asked Questions

    What is the NVIDIA Rubin platform?

    The NVIDIA Rubin platform is a next-generation AI computing architecture comprising six chips: Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet Switch, designed to reduce inference costs by 10x while accelerating agentic AI and long-context models.

    When will DGX SuperPOD with Rubin be available?

    NVIDIA DGX SuperPOD systems powered by Rubin architecture will ship in the second half of 2026, with CEO Jensen Huang confirming the chips are already in full production as of January 2026.

    How does Rubin compare to Blackwell performance?

    Each DGX Rubin NVL8 system delivers 5.5x the NVFP4 FLOPS of NVIDIA Blackwell systems, with inference running five times faster, overall speed roughly tripled, and significantly improved performance per watt.

    What is DGX Vera Rubin NVL72?

    DGX Vera Rubin NVL72 is a rack-scale AI system combining 36 Vera CPUs, 72 Rubin GPUs, and 18 BlueField-4 DPUs with 260TB/s of NVLink throughput, enabling unified memory and compute space across the entire rack without model partitioning.
