
March 18, 2025 – Giga Computing, a subsidiary of GIGABYTE and an industry leader in generative AI servers and advanced cooling technologies, today announced its participation at NVIDIA GTC 2025, where it will bring to market its best GPU-based solutions for generative AI, media acceleration, and large language models (LLMs).
To this end, GIGABYTE booth #1409 at NVIDIA GTC showcases GIGAPOD, a rack-scale turnkey AI solution that offers both air- and liquid-cooling designs for the NVIDIA HGX™ B300 NVL16 system. Also on display at the booth is a compute node from the newly announced NVIDIA GB300 NVL72 rack-scale solution, and, for modularized compute architecture, two servers supporting the newly announced NVIDIA RTX PRO™ 6000 Blackwell Server Edition.
Complete AI solution – GIGAPOD
Drawing on its depth of expertise in hardware and system design, Giga Computing combines infrastructure hardware, platform software, and architecting services to deliver scalable units composed of GIGABYTE GPU servers with NVIDIA GPU baseboards, running GIGABYTE POD Manager, a powerful software suite designed to enhance operational efficiency, streamline management, and optimize resource utilization. A GIGAPOD scalable unit is designed as either nine air-cooled racks or five liquid-cooled racks. These two approaches serve the same goal: one powerful GPU cluster, built at scale on NVIDIA HGX™ Hopper and Blackwell GPU platforms, to meet the demands of all AI data centers.
One Rack for Liquid, and One for Air
Computing demands continue to reach new heights, and server chassis have evolved to support them. The air-cooled GIGAPOD configuration uses an 8U GIGABYTE G893-series server that supports the NVIDIA Blackwell architecture, including the NVIDIA HGX™ B300 NVL16. For customers who prefer fewer modifications to their data centers and are accustomed to air-cooled systems, the G893-series delivers up to thirty-two GPUs in a single rack, as seen at the booth. For customers such as hyperscalers chasing greater energy efficiency, GIGAPOD offers a 4U G4L3-series server with cold plates on all eight GPUs and two CPUs. This direct liquid cooling (DLC) technology allows for greater compute density, with up to eight G4L3 servers and sixty-four GPUs in a single rack.
Built with NVIDIA Blackwell Ultra, HGX B300 NVL16 leads the new era of AI with optimized compute and increased memory, delivering breakthrough performance for AI reasoning, agentic AI, and video inference applications for every data center.
Exascale Computing in a Single Rack – NVIDIA GB300 NVL72
Computex 2024 was the first time Giga Computing showcased its NVIDIA GB200 NVL72 solution, and at NVIDIA GTC 2025 the development continues with its successor, the NVIDIA GB300 NVL72. At the booth, a liquid-cooled GB300 compute node demonstrates what is possible with liquid cooling technology.
Built with NVIDIA Blackwell Ultra, GB300 NVL72 leads the new era of AI with optimized compute, increased memory, and high-performance networking, delivering breakthrough performance for AI reasoning, agentic AI, and video inference applications.
AI-first Optimization with NVIDIA MGX™ Servers
To better illustrate the layout and design of an NVIDIA MGX™-based server, the GIGABYTE booth has two distinct servers on the wall. Both follow a modular architecture that is NVIDIA-optimized for AI and HPC, seamlessly integrating the latest hardware such as the NVIDIA Grace™ CPU Superchip, NVIDIA BlueField®-3 DPU, NVIDIA ConnectX®-7 NIC, and more. Debuting is the NVIDIA RTX PRO™ 6000 Blackwell Server Edition, of which the GIGABYTE XL44-SX0 supports up to eight in a single server for exceptional AI and graphics performance. Also supporting the new GPU is the GIGABYTE XV23-VC0, which uses the NVIDIA Grace™ CPU Superchip. Both servers support a mix of workloads, including virtualization, cloud computing, and AI, backed by deep NVIDIA software stack optimizations.
The NVIDIA RTX PRO™ 6000 Blackwell Server Edition is a powerful data center GPU for AI and visual computing. It accelerates demanding enterprise workloads, including AI, scientific computing, graphics, and video applications.
GIGAPOD: Redefining the Future Data Center by Scaling Beyond Servers and Software Stack (Session S74201)
GIGAPOD redefines the future data center by providing scalable computing resources, tailored cluster performance, ease of deployment, verified optimal performance, data storage platforms, networking solutions, and the POD Manager software platform to manage infrastructure at scale and AI workloads for development and applications.
For GIGAPOD to be more than just high-performance hardware, the complete integration process must be understood, starting with the integration of platform software. That is where Dr. Eric Ming-Chiang Chen comes in: he will discuss how GIGABYTE POD Manager achieves next-gen cluster system success at scale.
Visit GIGABYTE booth #1409 to learn more about rack-scale data centers and how Giga Computing has grown to be a service provider for hyperscalers.
For queries or more information, please contact sales.