
Magnum IO

Multi-GPU, Multi-Node Network and Storage IO Optimization Stack


Meeting the Bandwidth Demands of Compute-Intensive Workloads

GPUs provide the horsepower required by compute-intensive workloads, but their data consumption increases the demand for input/output (IO) bandwidth. NVIDIA Magnum IO™ is NVIDIA’s set of APIs that integrates computing, networking, file systems, and storage to maximize IO performance for multi-GPU, multi-node accelerated systems. It interfaces with CUDA-X™ libraries to accelerate IO across a broad range of workloads, from AI to visualization.

GPU-Optimized Networking and Storage IO Performance

Magnum IO integrates infrastructure elements to maximize storage and network IO performance and functionality. Key benefits include:

Optimized IO Performance: It bypasses the CPU to enable direct IO between GPU memory, the network, and storage.

System Balance and Utilization: It relieves CPU contention to create a more balanced GPU-accelerated system and delivers peak IO bandwidth with up to 10X fewer CPU cores.

Seamless Integration: It provides optimized implementations for current and future platforms, whether data transfers are latency-sensitive, bandwidth-sensitive, or collective operations.

IO Optimization Stack

Magnum IO includes innovative IO optimization technologies such as NCCL, NVIDIA® GPUDirect® RDMA, and NVIDIA Fabric Manager. GPUDirect Storage is a key feature of the stack. It opens a direct data path between GPU memory and storage, avoiding the CPU altogether. This direct path can increase bandwidth, decrease latency, and reduce the utilization load on both the CPU and the GPU. Addressing the IO problem with Magnum IO drives toward a balanced system.
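To make the direct path concrete, here is a minimal sketch of a GPUDirect Storage read using the cuFile API (the C interface that ships with GPUDirect Storage). It assumes a CUDA-capable system with the GDS driver installed; the file path is a placeholder, and error checking is abbreviated for brevity.

```c
/* Hedged sketch: reading a file directly into GPU memory with cuFile.
 * Assumes CUDA + GPUDirect Storage are installed; "/path/to/data.bin"
 * is a hypothetical placeholder. A production reader should check
 * every returned status code. */
#include <cuda_runtime.h>
#include <cufile.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const size_t size = 1 << 20;          /* read 1 MiB */
    void *gpu_buf = NULL;

    cuFileDriverOpen();                   /* initialize the GDS driver */

    /* O_DIRECT lets the transfer bypass the page cache */
    int fd = open("/path/to/data.bin", O_RDONLY | O_DIRECT);

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    cudaMalloc(&gpu_buf, size);
    cuFileBufRegister(gpu_buf, size, 0);  /* register GPU memory for DMA */

    /* Data moves storage -> GPU memory directly; no CPU bounce buffer */
    ssize_t n = cuFileRead(handle, gpu_buf, size,
                           /*file_offset=*/0, /*buf_offset=*/0);
    printf("read %zd bytes directly into GPU memory\n", n);

    cuFileBufDeregister(gpu_buf);
    cudaFree(gpu_buf);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

Compared with a conventional read, there is no intermediate host buffer and no `cudaMemcpy`: the DMA engine moves bytes from storage into the registered GPU buffer, which is what frees the CPU cores the benefits above refer to.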


Learn more about the technologies powering Magnum IO.