According to reports from Drive Home, NVIDIA announced that Meta and Oracle will upgrade their AI data center networks using NVIDIA Spectrum-X Ethernet adapters and switches. Meta will integrate Spectrum-X Ethernet switches into its Facebook Open Switching System (FBOSS) network infrastructure, the software platform it uses to manage and control large-scale network switches. Combining Spectrum-X with FBOSS is expected to accelerate large-scale deployments and improve AI training efficiency.
The report indicates that Oracle plans to build a "gigawatt-scale" AI factory, accelerated by NVIDIA's Vera Rubin architecture and interconnected via Spectrum-X Ethernet. As a key component of NVIDIA's full-stack AI platform, Spectrum-X is billed as the industry's first Ethernet networking solution purpose-built for AI and trillion-parameter models. The platform comprises Spectrum-X Ethernet switches and Spectrum-X SuperNIC adapters and is designed to connect millions of GPUs.
NVIDIA claims that Spectrum-X achieves record-breaking efficiency, enabling the world's largest AI supercomputers to reach 95% data throughput, compared to approximately 60% for traditional Ethernet systems. The report specifically mentions that the CX9 SuperNIC, part of the next-generation platform, has completed tape-out and will provide 1,600 Gbps (1.6 Tbps) bandwidth.
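To put those figures in perspective, the short sketch below multiplies the reported 1.6 Tbps CX9 line rate by each quoted throughput percentage. The rates and percentages come from the article; the calculation itself is only an illustrative back-of-the-envelope comparison, not an NVIDIA benchmark.

```python
# Illustrative comparison of effective per-NIC bandwidth at the throughput
# efficiencies quoted in the report (not an NVIDIA measurement).
LINK_RATE_GBPS = 1_600          # CX9 SuperNIC line rate cited in the report
SPECTRUM_X_EFFICIENCY = 0.95    # throughput NVIDIA claims for Spectrum-X
TRADITIONAL_EFFICIENCY = 0.60   # throughput cited for traditional Ethernet

def effective_bandwidth(link_rate_gbps: float, efficiency: float) -> float:
    """Effective data rate = line rate multiplied by achieved throughput fraction."""
    return link_rate_gbps * efficiency

spectrum_x = effective_bandwidth(LINK_RATE_GBPS, SPECTRUM_X_EFFICIENCY)
traditional = effective_bandwidth(LINK_RATE_GBPS, TRADITIONAL_EFFICIENCY)

print(f"Spectrum-X:           {spectrum_x:,.0f} Gbps effective")
print(f"Traditional Ethernet: {traditional:,.0f} Gbps effective")
print(f"Difference per NIC:   {spectrum_x - traditional:,.0f} Gbps")
```

On these assumptions, each 1.6 Tbps link delivers roughly 1,520 Gbps of usable bandwidth under Spectrum-X versus about 960 Gbps at the 60% figure cited for traditional Ethernet.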
NVIDIA Expands Industry Collaboration to Strengthen Open and Scalable AI Infrastructure
According to TechNews, NVIDIA announced a series of strategic partnerships focused on connectivity and data center integration. Fujitsu's Monaka processor will use the NVLink Fusion protocol to achieve tight coupling with NVIDIA GPUs, and Intel will manufacture NVLink Fusion-compatible CPUs that connect to NVIDIA GPUs. NVIDIA has also added Samsung to its existing partner list, which includes Alchip, Astera Labs, and MediaTek, to help the industry integrate custom accelerators with NVIDIA CPUs or NVLink-based designs. TechNews further mentions that the Stargate data center, jointly developed by Oracle and OpenAI, was built using NVIDIA Spectrum-X and Open Compute Project (OCP) technologies, achieving 95% effective bandwidth with zero application latency degradation.
Additionally, TechNews points out that in the second half of 2027, NVIDIA will launch the Kyber design, aimed at connecting 500 GPUs within a single rack. To achieve this scale and power density, NVIDIA is collaborating with industry partners to promote 800-volt DC infrastructure. The company emphasizes that this architecture is being developed in collaboration with the OCP community to create data centers capable of delivering exceptional AI performance.
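The case for 800-volt distribution is essentially Ohm's-law arithmetic: for a fixed rack power, raising the distribution voltage lowers the bus current proportionally, which in turn reduces copper mass and resistive losses. The sketch below illustrates this with a hypothetical megawatt-class rack; the power figure and the 54 V comparison point are assumptions for illustration, not values from the report.

```python
# Illustrative only: why higher distribution voltage matters at rack scale.
# The rack power below is a hypothetical assumption for the arithmetic,
# not a figure from NVIDIA or the report.
RACK_POWER_WATTS = 1_000_000   # assumed megawatt-class AI rack (hypothetical)

def bus_current_amps(power_watts: float, voltage_volts: float) -> float:
    """Current drawn from the distribution bus: I = P / V."""
    return power_watts / voltage_volts

for label, volts in [("54 V DC busbar (assumed current-generation design)", 54),
                     ("800 V DC distribution (proposed)", 800)]:
    amps = bus_current_amps(RACK_POWER_WATTS, volts)
    print(f"{label}: {amps:,.0f} A")
```

Under these assumptions, the same rack draws roughly 18,500 A at 54 V but only about 1,250 A at 800 V, which is the kind of reduction that makes such power densities practical to cable and cool.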