Quick Brief
- The Deployment: Nokia and Hypertec Group completed Nibi, a 752-node supercomputer with 134,400 Intel Xeon 6 cores, 288 Nvidia H100 GPUs, and 25 petabytes of flash storage at the University of Waterloo.
- The Impact: The system will support 4,000+ Canadian researchers annually across 19 SHARCNET partner institutions, advancing work in health, climate science, and AI model training.
- The Context: This marks Nokia’s first AI-HPC data center fabric deployment in North America, leveraging Ethernet-based interconnects and Hypertec’s immersion cooling technology amid Canada’s $925.6M sovereign AI infrastructure investment.
Nokia and Hypertec Group celebrated the successful deployment of Nibi, an advanced supercomputing cluster at the University of Waterloo designed to serve more than 4,000 researchers annually. The system integrates Nokia’s Data Center Fabric networking with Hypertec’s AI-HPC architecture and immersion cooling technology, positioning Canada to compete globally in sovereign AI research infrastructure. Nibi now operates as part of SHARCNET (Shared Hierarchical Academic Research Network), Canada’s largest HPC consortium by institutional count, spanning 19 academic partners.
Technical Architecture: 288 H100 GPUs and Ethernet Fabric
Nibi comprises 752 nodes powered by 134,400 Intel Xeon 6 (Granite Rapids) cores and 288 Nvidia H100 GPUs distributed across 36 GPU-equipped nodes (eight H100s per node). The supercomputer delivers 25 petabytes of purely flash-based storage from VAST Data and computational power equivalent to roughly 35,000 desktop computers working simultaneously. Hypertec served as system architect and prime integrator, implementing single-phase liquid immersion cooling that enables the GPUs to operate at sustained high capacity.
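As a back-of-the-envelope check, the published figures are internally consistent, assuming the Xeon 6 6972P is the 96-core part (an assumption not stated in the announcement). A hypothetical sanity-check sketch:

```python
# Sanity-check the published Nibi hardware figures (illustrative only).
gpu_nodes = 36
gpus_per_node = 8
total_gpus = gpu_nodes * gpus_per_node
print(total_gpus)  # 288, matching the stated H100 count

total_cores = 134_400
cores_per_socket = 96   # assumed core count for the Xeon 6 6972P
sockets_per_node = 2
implied_cpu_nodes = total_cores // (cores_per_socket * sockets_per_node)
print(implied_cpu_nodes)  # 700 dual-socket nodes account for all 134,400 cores
```

Under that assumption, 700 of the 752 nodes carry the full dual-socket CPU complement, which squares with the 36 GPU nodes making up only a small fraction of the cluster.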
The deployment uses Nokia’s 200/400 Gigabit Ethernet-based interconnects, providing 400 Gbit/s non-blocking connectivity between GPU nodes and 200 Gbit/s for CPU nodes. John Morton, director of Technology for SHARCNET, confirmed that the move to Ethernet-based networking provides the “scalability, reliability, and performance needed to support a wide range of demanding workloads”. While InfiniBand maintains advantages in ultra-low latency for certain HPC applications, Ethernet offers lower hardware and maintenance costs plus an open vendor ecosystem, factors driving its adoption in AI data centers.
Market Position: Canada’s Sovereign AI Infrastructure Push
Nibi’s activation aligns with Canada’s federal commitment of $925.6 million over five years, starting in 2025-26, to develop sovereign AI compute capacity, ensuring domestic researchers access secure, competitive infrastructure without relying on foreign cloud resources. The University of Toronto received $42.5 million in federal AI infrastructure funding through the Canadian Sovereign AI Compute Strategy, demonstrating coordinated national investment. This push comes as the global AI infrastructure market expands from $90 billion in 2026 to a projected $465 billion, with the AI supercomputer segment alone growing at a 22.29% CAGR from $1.91 billion (2025) to $14.22 billion by 2035.
Hypertec’s role as system architect reflects Canada’s domestic capacity to design and deploy globally competitive AI research infrastructure. The company operates across 80 countries and specializes in immersion-cooled AI deployments where traditional air cooling faces challenges managing heat from high-density GPU configurations. Nokia’s participation through its Data Center Fabric platform positions the networking giant in the emerging Ethernet-for-AI market.
Infrastructure Specifications
| Component | Specification | Capability |
|---|---|---|
| Compute Cores | 134,400 total (2 × Xeon 6 6972P @ 2.4 GHz per node) | Batch computing, cloud VMs, research workloads |
| GPU Acceleration | 288 Nvidia H100 SXM GPUs (8 per node, 36 nodes) | Large AI model training, deep learning |
| Storage | 25 petabytes flash-based (VAST Data) | All-SSD architecture for high performance |
| Cooling System | Immersion cooling (single-phase liquid) | Sustained GPU performance |
| Network Fabric | Nokia 200/400G Ethernet (400 Gbit/s GPU, 200 Gbit/s CPU) | Scalable, automated architecture |
| Memory | Standard nodes: 768 GB DDR5; GPU nodes: 2 TB DDR5 | High-memory configurations available |
| Research Capacity | 4,000+ users annually | 19 SHARCNET partner institutions |
Nokia’s Canadian Expansion Strategy
The Nibi deployment reinforces Nokia’s growing Canadian footprint following the groundbreaking of its nearly 750,000-square-foot Ottawa campus in Kanata North Tech Park. The campus investment accelerates R&D in AI-powered networks, data center networks, quantum-safe infrastructure, and 6G technologies while supporting more than 2,700 employees nationwide. Nokia’s Canadian presence traces to 1989 through Newbridge Networks, which later joined the company via its acquisition of Alcatel-Lucent. The company positions its Data Center Fabric as a modern network operating system designed for AI-era workloads with automated architecture.
Research Applications and Institutional Access
Nibi enables SHARCNET faculty, students, postdocs, and research fellows across Canadian academic institutions to access computational power for health sciences, climate modeling, engineering simulations, and AI model development. The system’s name, meaning “water” in Anishinaabemowin (Ojibwe), reflects consultation with local Indigenous communities regarding the deployment’s sustainable cooling approach. SHARCNET resources remain available to any Canadian academic researcher, democratizing access to top-tier computing previously concentrated in a few institutions.
Waterloo’s deployment follows broader federal AI compute initiatives, including the University of Toronto’s $42.5 million allocation for AI infrastructure serving researchers nationwide. The federal government authorized the Minister of Artificial Intelligence and Digital Innovation to identify strategic infrastructure projects and enabled the Canada Infrastructure Bank to invest in AI facilities.
Regulatory and Competitive Landscape
Canada’s sovereign AI strategy includes $925.6 million allocated over five years for compute capacity development. The federal budget also allocated $25 million over six years for Statistics Canada’s AI and Technology Measurement Program (TechStat) to assess economic and workforce impacts. These policy mechanisms position government as both funder and coordinator of distributed academic AI infrastructure competing with centralized hyperscaler cloud offerings from AWS, Microsoft Azure, and Google Cloud.
Cloud deployment dominated 61% of the AI supercomputer market in 2025, though on-premise systems remain preferred for sovereign applications requiring data localization. While public sector and academic deployments like Nibi prioritize research access and data sovereignty, commercial hyperscalers focus on cloud-based AI-as-a-service with on-demand scalability.
Infrastructure Technology Shift: Ethernet Networking
Nibi’s adoption of Ethernet-based interconnects represents a broader industry trend driven by cost efficiency and vendor diversity. InfiniBand maintains performance advantages through ultra-low latency and RDMA (Remote Direct Memory Access) for extreme-scale HPC workloads, supporting speeds up to 1,600 Gb/s. However, Ethernet, now available at 400GbE and 800GbE with RoCEv2 for RDMA, delivers competitive performance at lower hardware and maintenance costs with a broader vendor ecosystem.
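To put Nibi’s fabric line rates in perspective, a rough transfer-time estimate at the quoted 400 Gbit/s and 200 Gbit/s link speeds (illustrative arithmetic only; it ignores protocol overhead, encoding, and congestion):

```python
def transfer_seconds(payload_gb: float, link_gbit_s: float) -> float:
    """Idealized time to move payload_gb gigabytes over a link_gbit_s link."""
    return payload_gb * 8 / link_gbit_s  # 8 bits per byte

# Moving an 80 GB payload (roughly one H100's HBM capacity):
print(transfer_seconds(80, 400))  # 1.6 s on a 400 Gbit/s GPU-node link
print(transfer_seconds(80, 200))  # 3.2 s on a 200 Gbit/s CPU-node link
```

Even as an upper-bound idealization, the math illustrates why non-blocking 400 Gbit/s links matter for multi-node AI training, where full model states are exchanged between GPU nodes on every synchronization step.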
Hypertec’s immersion cooling implementation addresses thermal limitations in high-density GPU configurations where traditional air cooling faces challenges. By submerging servers in dielectric liquid, data centers can direct more energy toward computation rather than cooling overhead. This thermal efficiency enables urban or space-constrained deployments.
Frequently Asked Questions (FAQs)
What is the Nibi supercomputer at University of Waterloo?
Nibi is a 752-node AI-HPC cluster with 134,400 Intel Xeon 6 cores, 288 Nvidia H100 GPUs, and 25 petabytes of storage serving 4,000+ researchers annually across SHARCNET institutions.
How does Nokia Data Center Fabric differ from InfiniBand?
Nokia’s Data Center Fabric uses 200/400G Ethernet-based networking, trading InfiniBand’s ultra-low latency for lower costs and an open vendor ecosystem while offering scalable performance for diverse AI workloads through automated architecture.
What is immersion cooling in AI data centers?
Immersion cooling submerges servers in dielectric liquid to dissipate heat more efficiently than air, enabling sustained high-performance GPU operations in dense configurations.
How many institutions access SHARCNET resources?
SHARCNET includes 19 academic partner institutions, making it Canada’s largest HPC consortium by institutional count, with resources available to any Canadian academic researcher.
What is Canada investing in sovereign AI infrastructure?
Canada committed $925.6 million over five years starting in 2025-26 for sovereign AI compute capacity, plus $42.5 million to University of Toronto and other strategic deployments.

