Description
G593-ZD2 HGX | 5U | AMD EPYC™ 9654 x 2 | 1024GB | H100 80GB x 8 | 1.9TB U.2 | ConnectX®-5 InfiniBand/Ethernet 100Gb | 3 YOS
An advanced AI HGX server from GIGABYTE with 8 × NVIDIA H100 GPUs interconnected via NVSwitch.
Powering the Next Generation of Server Architecture and Energy Efficiency
The path to AMD’s 5nm ‘Zen 4’ architecture was paved with many successful generations of EPYC innovations and chiplet designs, and AMD EPYC 9004 Series processors continue this progression. Adding a host of new features targeting a wide range of workloads, the new family of EPYC processors delivers even better CPU performance and performance per watt, on a platform with 2x the throughput of PCIe 4.0 lanes and support for 50% more memory channels. For this new platform, GIGABYTE has products ready to get the most out of EPYC-based systems, with support for fast PCIe Gen5 accelerators, Gen5 NVMe drives, and high-performance DDR5 memory.
4th Gen AMD EPYC Processors for SP5 Socket
5nm architecture
Compute density increased with more transistors packed in less space
128 CPU cores
Dedicated Zen 4 and Zen 4c cores targeted at different workloads
Large L3 cache
Select CPUs have 3x or more L3 cache for technical computing
SP5 compatibility
All 9004 series processors are supported on one platform
12 channels
Memory capacity can reach 6TB per socket
DDR5 memory
Increased memory throughput and higher DDR5 capacity per DIMM
PCIe 5.0 lanes
Increased I/O throughput, reaching 128GB/s of bandwidth across a PCIe x16 link
CXL 1.1+ support
Disaggregated compute architecture possible via Compute Express Link
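The bandwidth and capacity figures above can be reproduced from first principles. This is a rough sketch under two assumptions: the 128GB/s figure counts both directions of a PCIe 5.0 x16 link, and the 6TB-per-socket figure assumes 12 channels populated with 2 DIMMs each at 256GB per RDIMM.

```python
# Back-of-the-envelope check of the platform figures quoted above.
# Assumptions: PCIe 5.0 runs at 32 GT/s per lane with 128b/130b encoding,
# and the "128GB/s" marketing figure counts both link directions.

PCIE5_GTS = 32           # GT/s per lane
ENCODING = 128 / 130     # 128b/130b line-encoding efficiency

per_lane_GBs = PCIE5_GTS * ENCODING / 8   # ~3.94 GB/s per lane, one direction
x16_one_way = per_lane_GBs * 16           # ~63 GB/s
x16_both_ways = x16_one_way * 2           # ~126 GB/s, marketed as 128 GB/s

# Memory capacity: 12 DDR5 channels x 2 DIMMs per channel x 256GB RDIMMs
capacity_TB = 12 * 2 * 256 / 1024         # 6.0 TB per socket

print(f"PCIe 5.0 x16 bidirectional: ~{x16_both_ways:.0f} GB/s")
print(f"Max memory per socket: {capacity_TB:.0f} TB")
```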
Select GIGABYTE for the AMD EPYC 9004 platform
G593-ZD2-AAX1 Product Overview
G593-ZD2-AAX1 Block Diagram
High Performance
Supports NVIDIA HGX™ H100 8-GPU
The NVIDIA H100 Tensor Core GPU enables an order-of-magnitude leap for large-scale AI and HPC with unprecedented performance, scalability and security for every data center. With NVIDIA AI Enterprise for streamlined AI development and deployment, and the NVIDIA NVLink Switch System enabling direct communication between up to 256 GPUs, H100 accelerates everything from exascale workloads, with a dedicated Transformer Engine for trillion-parameter language models, down to right-sized Multi-Instance GPU (MIG) partitions.
Power Efficiency
Automatic Fan Speed Control
GIGABYTE servers are enabled with Automatic Fan Speed Control to achieve the best cooling and power efficiency. Individual fan speeds will be automatically adjusted according to temperature sensors strategically placed in the servers.
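The principle behind this kind of control can be illustrated with a minimal sketch. This is not GIGABYTE's actual firmware logic; the temperature thresholds and duty-cycle range are invented for illustration. The hottest sensor reading is mapped linearly onto a fan duty cycle between a quiet floor and full speed.

```python
def fan_duty(sensor_temps_c, t_low=30.0, t_high=75.0,
             min_duty=20.0, max_duty=100.0):
    """Map the hottest sensor reading to a fan duty cycle (percent).

    Below t_low the fans idle at min_duty; above t_high they run at
    max_duty; in between, the duty cycle scales linearly. All thresholds
    here are illustrative, not GIGABYTE's actual fan curve.
    """
    hottest = max(sensor_temps_c)
    if hottest <= t_low:
        return min_duty
    if hottest >= t_high:
        return max_duty
    frac = (hottest - t_low) / (t_high - t_low)
    return min_duty + frac * (max_duty - min_duty)

# Example: CPU zone at 52.5C, GPU zone at 41C -> midpoint of the curve
print(fan_duty([52.5, 41.0]))  # 60.0
```

In practice a firmware controller would also smooth transitions and apply hysteresis so the fans do not oscillate around a threshold; the linear mapping above captures only the core idea of tying fan speed to the hottest zone.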