SAN JOSE, Calif., Aug. 11, 2025 /PRNewswire/ — Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, HPC, Cloud, Storage, and 5G/Edge, today announced the expansion of its NVIDIA Blackwell system portfolio with a new 4U DLC-2 liquid-cooled NVIDIA HGX B200 system, now ready for volume shipment, and an air-cooled 8U front I/O system. Tailored to the most demanding large-scale AI training and cloud-scale inference workloads, Supermicro’s new systems streamline the deployment, management, and maintenance of air- and liquid-cooled AI infrastructure and are designed to support the upcoming NVIDIA HGX B300 platform. The front I/O design allows easy access, simplifies cabling, improves thermal efficiency and compute density, and reduces operational expenses (OPEX).
“Supermicro’s DLC-2-enabled NVIDIA HGX B200 system leads our portfolio, achieving greater power savings and faster time-to-online for AI Factory deployments,” said Charles Liang, CEO and president, Supermicro. “Our Building Block architecture enables us to quickly deliver solutions exactly as our customers request. Supermicro’s extensive portfolio can now offer precisely optimized NVIDIA Blackwell solutions for a diverse range of AI infrastructure environments, whether deploying into an air- or liquid-cooled facility.”
For more information, please visit https://www.supermicro.com/en/accelerators/nvidia
Supermicro’s DLC-2 represents the next generation of Direct Liquid Cooling solutions, engineered to meet the escalating demands of AI-optimized data centers. This comprehensive cooling architecture delivers significant operational and cost benefits for high-density computing environments.
Supermicro now offers one of the broadest portfolios of NVIDIA HGX B200 solutions, with the two new front I/O systems joining six rear I/O systems and allowing customers to choose the CPU, memory, networking, storage, and cooling configuration best suited to their workloads. The new 4U and 8U front I/O NVIDIA HGX B200 systems build on proven solutions for large-scale AI training and inference, addressing major deployment pain points including networking, cabling, and thermals.
“Advanced infrastructure is accelerating the AI industrial revolution for every industry,” said Kaustubh Sanghani, vice president of GPU product management at NVIDIA. “Based on the latest NVIDIA Blackwell architecture, Supermicro’s new front I/O B200 systems equip enterprises to deploy and scale AI at unprecedented speed—delivering breakthrough innovation, efficiency, and operational excellence.”
Modern AI data centers demand high scalability, which requires substantial node-to-node connectivity. The system’s 8 high-performance 400G NVIDIA ConnectX®-7 NICs and 2 NVIDIA BlueField®-3 DPUs are placed at the front of the system so that networking cables, storage drive bays, and management can all be accessed from the cold aisle. NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum™-X Ethernet platforms are fully supported to ensure the highest-performing compute fabric.
In addition to system architecture improvements, Supermicro has fine-tuned components to maximize efficiency, performance, and cost savings for AI data center workloads. Upgraded memory expansion with 32 DIMM slots delivers greater flexibility for system memory configuration, enabling large-capacity memory implementations. Large system memory complements the NVIDIA HGX B200’s HBM3e GPU memory by eliminating CPU-GPU bottlenecks, optimizing large workload processing, enhancing multi-job efficiency in virtualized environments, and accelerating data preprocessing.
All Supermicro 4U liquid-cooled and 8U or 10U air-cooled systems are optimized for the NVIDIA HGX B200 8-GPU platform, with each GPU connected via 5th Generation NVLink® at 1.8TB/s and a combined total of 1.4TB of HBM3e GPU memory per system. NVIDIA’s Blackwell platform delivers up to 15x faster real-time inference and 3x faster LLM training compared to the Hopper generation of GPUs. The new front I/O systems, with dual-socket Intel® Xeon® 6 6700 Series processors of up to 350W, deliver high performance and efficiency for a wide range of AI workloads.
The newly introduced 4U front I/O liquid-cooled system features front-accessible NICs, DPUs, storage, and management components. It pairs dual-socket Intel® Xeon® 6 6700 Series processors with P-cores (up to 350W) with an NVIDIA HGX B200 8-GPU configuration (180GB HBM3e per GPU). The system supports 32 DIMM slots for up to 8TB of DDR5 RDIMM at 5200MT/s or up to 4TB at 6400MT/s, plus 8 hot-swap E1.S NVMe storage drive bays and 2 M.2 NVMe boot drives. Network connectivity includes 8 single-port NVIDIA ConnectX®-7 NICs or NVIDIA BlueField®-3 SuperNICs and 2 dual-port NVIDIA BlueField®-3 DPUs.
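For readers who want to sanity-check the memory figures above, the minimal Python sketch below reproduces the arithmetic behind the quoted totals. The GPU count, per-GPU HBM3e capacity, and DIMM slot count come from this release; the per-DIMM capacities (256GB at 5200MT/s, 128GB at 6400MT/s) are illustrative assumptions chosen to match the stated system maximums, not figures from the announcement.

```python
# Back-of-the-envelope check of the memory totals quoted in this release.

GPUS_PER_SYSTEM = 8
HBM3E_PER_GPU_GB = 180  # per-GPU HBM3e capacity stated for the HGX B200 configuration

total_hbm_gb = GPUS_PER_SYSTEM * HBM3E_PER_GPU_GB
print(f"Aggregate HBM3e: {total_hbm_gb} GB (~{total_hbm_gb / 1000:.1f} TB)")  # ~1.4TB

DIMM_SLOTS = 32
# Assumed per-DIMM capacities that would reach the quoted system maximums:
dimm_configs_gb = {"5200MT/s": 256, "6400MT/s": 128}
for speed, size_gb in dimm_configs_gb.items():
    total_tb = DIMM_SLOTS * size_gb / 1024
    print(f"{DIMM_SLOTS} x {size_gb}GB RDIMM @ {speed} = {total_tb:.0f} TB")
```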
Supermicro designed this liquid-cooled system as the building block for densely populated AI factories that can reach cluster sizes beyond thousands of nodes, delivering up to 40% data center power savings compared to air cooling.
The 8U front I/O air-cooled system shares the same front-accessible architecture and core specifications while providing a streamlined solution for AI factories without liquid-cooling infrastructure. It features a compact 8U form factor (compared to Supermicro’s 10U system) with a reduced-height CPU tray while maintaining the full 6U-height GPU tray to maximize air-cooling performance.
About Super Micro Computer, Inc.
Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro’s motherboard, power, and chassis design expertise further enables our development and production, driving next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).
Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.
All other brands, names, and trademarks are the property of their respective owners.
Photo – https://mma.prnewswire.com/media/2747027/Super_Micro_Computer__Front_IO_B200_Solutions.jpg
Logo – https://mma.prnewswire.com/media/1443241/Supermicro_Logo.jpg