Benchmarking scale-out AI fabrics with Cisco N9000 + AMD Pensando™ Pollara 400 NICs


The “AI paradox” is a rising hurdle for enterprise leaders: investing hundreds of thousands in highly capable GPUs, only to watch them sit idle while waiting for data. As enterprises scale from pilot to production, the real bottleneck isn’t compute; it’s the hidden cost of an inefficient network. In scale-out architectures, tens of thousands of GPUs must synchronize to complete a single training iteration. When the network can’t keep pace with the bursty demands of modern AI training, GPUs stall and job completion time (JCT) spikes. We’ve partnered with AMD to deliver a validated, end-to-end AI infrastructure that eliminates these bottlenecks and transforms the network into a high-performance engine for innovation.

Fabric as the foundation: The Cisco and AMD AI performance blueprint

As AI workloads expand across distributed clusters, the network must scale linearly to prevent packet loss and retransmissions. This performance is only verifiable through rigorous, real-world benchmarking. At Cisco, we prioritize systemic, deterministic performance that goes beyond individual component specifications.

Our reference architecture features AMD Instinct™ MI300X GPUs, AMD Pensando™ Pollara 400 NICs, Cisco Silicon One G200-powered N9364E-SG2 switches, and Cisco 800G OSFP optics. Deploying is only half the challenge; operating at scale is the other. Cisco Nexus Dashboard provides the granular, real-time visibility needed for day-0 through day-N operations.

Figure 1: Cisco N9000 Series Switches with AMD Instinct™ GPU accelerators and AMD Pensando™ AI NICs, unified with Cisco Nexus One in a fully integrated stack. N9000 Series switches are included in the AMD reference architecture for AI cluster design.

By combining these technologies, we lower JCT and maximize GPU utilization, ensuring AI infrastructure stays secure, compliant, and continuously optimized.

Benchmarking the architecture

We benchmarked two Clos topologies (2×2 and 4×2) with Cisco N9364E-SG2 switches (each with 51.2 Tbps throughput and 64 ports of 800 GbE), 128 AMD Instinct™ MI300X Series GPUs (16 servers × 8 GPUs), 128 AMD Pensando™ Pollara 400 AI NICs (16 servers × 8 NICs), and the AMD ROCm™ 6.3/7.0.3 software ecosystem.

2×2 Clos topology

This design fully subscribes each leaf switch, forcing the switch into high-congestion states to test fabric resilience (a back-of-the-envelope subscription check follows Figure 2):

  • 2x leaf and 2x spine (4x Cisco N9364E-SG2) switches
  • 8 servers (8x AMD Instinct™ MI300X Series GPUs) connected to each leaf switch
  • 8x AMD Pensando™ Pollara 400G NICs per server
  • Switch side: Cisco OSFP 800G DR8 optics
Figure 2: 2×2 Clos topology
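For context on what “fully subscribes each leaf switch” means here, the following is a minimal sketch of the per-leaf bandwidth math; the split of leaf ports between server-facing downlinks and spine-facing uplinks is an assumption for illustration, not the validated port map.

```python
# Minimal sketch: per-leaf subscription check for the 2x2 topology.
# Assumes each leaf dedicates half of its 64 x 800G ports to server-facing
# downlinks and half to spine-facing uplinks; the real port map may differ.
SERVERS_PER_LEAF = 8
NICS_PER_SERVER = 8
NIC_GBPS = 400
UPLINK_PORTS = 32            # assumed 800G ports toward the spines
UPLINK_GBPS = 800

downlink_demand = SERVERS_PER_LEAF * NICS_PER_SERVER * NIC_GBPS   # 25,600 Gbps
uplink_capacity = UPLINK_PORTS * UPLINK_GBPS                      # 25,600 Gbps

ratio = downlink_demand / uplink_capacity
print(f"Downlink demand: {downlink_demand} Gbps, uplink capacity: {uplink_capacity} Gbps")
print(f"Subscription ratio: {ratio:.2f}:1 (1.00 means the leaf is fully subscribed)")
```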

4×2 Clos topology

This design focuses on the efficacy of advanced load-balancing techniques for efficient load distribution across synchronous bursts in the GPU scale-out fabric:

  • 4x leaf and 2x spine (6x Cisco N9364E-SG2) switches
  • 4 servers (8x AMD Instinct™ MI300X Series GPUs) connected to each leaf switch
  • 8x AMD Pensando™ Pollara 400G NICs per server
  • Switch side: Cisco OSFP 800G DR8 optics
Figure 3: 4×2 Clos topology

Benchmarking tools

We measured scale-out fabric performance using a comprehensive toolset, including:

  • IBPerf measures RDMA performance over the scale-out fabric under varying congestion scenarios. We used this tool to compare performance between GPUs connected across a single leaf and across the leaf-spine fabric (a sample sweep is sketched below).
  • MLPerf is an industry-standard benchmark used to measure real workload performance. The performance output translates to ROI on fully validated designs from Cisco and AMD.
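As a rough illustration of how a point-to-point RDMA bandwidth sweep of this kind can be orchestrated, here is a minimal sketch in the spirit of perftest’s ib_write_bw. The host names, RDMA device name, queue-pair counts, and the assumption that a listener is already running on each target host are illustrative; this is not the exact harness used to produce the results below.

```python
# Minimal sketch: sweep RDMA write-bandwidth tests (perftest-style ib_write_bw)
# across assumed host pairs and parse the average Gb/s from the tool's output.
import re
import subprocess

PAIRS = [("server-01", "server-09"), ("server-02", "server-10")]  # assumed host pairs
DEVICE = "ionic_0"        # assumed RDMA device name; check with ibv_devices on the host
QPS = [4, 32]             # queue-pair counts exercised in the benchmarks
DURATION_S = 30

def run_client(server_host: str, client_host: str, qps: int) -> float:
    """Run ib_write_bw from client_host toward a listener already running on
    server_host and return the reported average bandwidth in Gb/s."""
    cmd = [
        "ssh", client_host,
        "ib_write_bw", "-d", DEVICE, "-q", str(qps),
        "--report_gbits", "-D", str(DURATION_S), server_host,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # perftest prints a results row; the second-to-last field is BW average [Gb/sec].
    data_rows = [line for line in out.splitlines() if re.match(r"\s*\d", line)]
    return float(data_rows[-1].split()[-2])

if __name__ == "__main__":
    for qps in QPS:
        for srv, cli in PAIRS:
            print(srv, cli, qps, run_client(srv, cli, qps), "Gb/s (avg)")
```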

Network fabric performance benchmarking results

We evaluated scale-out fabric performance using comprehensive testing and standard KPIs.

Single-hop IBPerf testing evaluates performance within a localized fabric domain, typically within a single leaf switch. This establishes a baseline for link utilization, buffer tuning effectiveness, and NIC-to-switch performance prior to introducing multi-hop variables.

These tests measure the Remote Direct Memory Access (RDMA) session throughput between two GPUs connected through a Cisco N9364E-SG2 leaf switch. The results capture P01 (1st percentile) and P99 (99th percentile) bandwidth while all sessions are active concurrently. P01 bandwidth represents the throughput of the slowest session, a critical metric for synchronized AI/ML workload performance, while P99 represents the throughput of the fastest session. A minimal delta between P01 and P99 bandwidth, with each close to the link bandwidth, demonstrates the efficacy of the GPU interconnect.
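To make the P01/P99 analysis concrete, here is a minimal sketch of how the two tail metrics are derived from per-session bandwidth samples; the values below are synthetic placeholders, not measured results.

```python
# Minimal sketch: P01/P99 spread of per-session RDMA bandwidth versus the
# 400 Gbps link rate. Sample values are synthetic placeholders.
import numpy as np

LINK_GBPS = 400.0
rng = np.random.default_rng(0)
session_bw_gbps = rng.uniform(380.0, 395.0, size=64)   # one value per concurrent session

p01 = np.percentile(session_bw_gbps, 1)    # slowest sessions gate synchronized collectives
p99 = np.percentile(session_bw_gbps, 99)   # fastest sessions
print(f"P01 = {p01:.1f} Gb/s, P99 = {p99:.1f} Gb/s, delta = {p99 - p01:.1f} Gb/s")
print(f"P01 reaches {100 * p01 / LINK_GBPS:.1f}% of the link rate")
```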

In the 2-leaf/2-spine (2×2) topology, each leaf switch handles 32 bidirectional sessions, effectively saturating the leaf switch. The 4-leaf/2-spine (4×2) topology handles 16 bidirectional sessions per leaf. Across both topologies and varying queue pair (QP) counts (4 QPs and 32 QPs), the P01 and P99 bandwidths are close to each other, with both approaching the link bandwidth of 400 Gbps.

Figure 4: Single-hop RDMA bandwidth performance across varying leaf-spine topologies and queue pair counts

This performance shows that the AMD Pensando™ Pollara NIC and Cisco N9364E-SG2 switches deliver a highly efficient solution for demanding workloads. The tight delta between P01 and P99 metrics across different scales and configurations demonstrates that this architecture maintains deterministic performance regardless of cluster size or queue pair density.

Bisectional IBPerf testing evaluates cross-fabric traffic traversing multiple tiers to measure bisection bandwidth, path symmetry, cross-spine load balancing, and congestion propagation.

These tests measure RDMA session throughput between two GPUs connected through leaf and spine Cisco N9364E-SG2 switches. The results show P01 and P99 bandwidth measurements with all sessions concurrently active. In the 2×2 topology, there are 32 bidirectional sessions per leaf, while the 4×2 topology has 16 bidirectional sessions per leaf. All of these sessions cross the spine, and each session’s traffic traverses three hops (leaf-spine-leaf) to stress the full fabric. This test validates the efficiency of the fabric’s load-balancing algorithm; any traffic polarization would leave some links underutilized while others become congested, ultimately degrading RDMA session performance. Tests were conducted using 4 and 32 QPs.
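As a simple illustration of how traffic polarization shows up in telemetry, the sketch below compares per-uplink utilization on a leaf. The interface names and utilization values are assumptions for illustration; real numbers would come from switch counters or Nexus Dashboard.

```python
# Minimal sketch: flag traffic polarization across leaf-to-spine uplinks.
# Effective load balancing keeps per-uplink utilization nearly uniform;
# a large max/min spread (or coefficient of variation) suggests polarization.
from statistics import mean, pstdev

# Assumed per-uplink utilization in Gbps (placeholder values).
uplink_util_gbps = {
    "Ethernet1/57": 730.0,
    "Ethernet1/58": 742.5,
    "Ethernet1/59": 725.8,
    "Ethernet1/60": 738.1,
}

vals = list(uplink_util_gbps.values())
cov = pstdev(vals) / mean(vals)      # coefficient of variation across uplinks
spread = max(vals) / min(vals)       # worst-case imbalance
print(f"CoV = {cov:.3f}, max/min = {spread:.3f}")
if spread > 1.2:
    print("Possible polarization: some uplinks carry a disproportionate share of traffic")
else:
    print("Utilization is well balanced across uplinks")
```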

Figure 5: Bisection RDMA bandwidth stability comparison for 2-leaf/2-spine and 4-leaf/2-spine architectures across varying queue pair counts

The results show that P01 and P99 bandwidths are similar, with each close to the link bandwidth of 400 Gbps, mirroring the performance observed in single-hop testing. This confirms that the Cisco N9364E-SG2 switches and AMD Pensando™ Pollara NIC provide a high-performance, resilient GPU interconnect capable of sustaining consistently deterministic performance under stress.

Congestive IBPerf testing creates high-contention conditions using a 31:1 communication pattern, where 31 GPUs communicate with a single GPU. It evaluates queue buildup, Explicit Congestion Notification (ECN) effectiveness, Data Center Quantized Congestion Notification (DCQCN) response curves, tail latency, and fabric stability under worst-case AI communication patterns.

Incast scenarios are among the most challenging conditions for a scale-out AI fabric. These tests measure P01 and P99 bandwidths under incast conditions, which arise during collective communications such as all-to-all. If the scale-out fabric hardware, design, and tuning aren’t optimal, the result is a substantial degradation in JCT for training workloads. Because it’s difficult to synchronize all sessions to start concurrently, we use the Quantile Range Method to analyze the results: it analyzes the bandwidth samples taken during incast congestion instead of all bandwidth samples.
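One way to picture this kind of windowed analysis is sketched below: samples are restricted to the central portion of the run, when the incast sessions overlap, before the percentiles are computed. The windowing quantiles and sample values are assumptions for illustration; the exact Quantile Range Method used in these benchmarks may differ in detail.

```python
# Minimal sketch: quantile-range style analysis of an incast run.
# Keep only samples from the central portion of the run (when incast
# congestion is fully established), then compute P01/P99 over that window.
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 60.0, 2000))        # sample timestamps in seconds (synthetic)
bw = rng.uniform(370.0, 398.0, 2000)             # per-sample bandwidth in Gb/s (synthetic)

lo, hi = np.quantile(t, [0.10, 0.90])            # assumed window: central 80% of the run
steady = bw[(t >= lo) & (t <= hi)]               # samples taken under full incast load

p01, p99 = np.percentile(steady, [1, 99])
print(f"window {lo:.1f}s-{hi:.1f}s: P01 = {p01:.1f} Gb/s, P99 = {p99:.1f} Gb/s")
```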

Figure 6: RDMA incast 31:1 congestion performance. Comparison of P01 and P99 bandwidth across high-contention 31:1 incast traffic

In this test, each of the 128 GPUs establishes 31 RDMA sessions to 31 other GPUs across the leaf-spine fabric, resulting in a total of 3,968 (31 × 128 = 3,968) concurrently active sessions in the scale-out fabric. The delta between P01 and P99 bandwidth is very tight, and each bandwidth is close to the link bandwidth of 400 Gbps, which is a strong proof point of the Cisco N9364E-SG2 switches’ ability to handle extreme congestion and a testament to the Cisco and AMD validated design.

MLPerf Training and Inference benchmarking tests establish standardized metrics to evaluate the performance of training and inference workloads. By enforcing strict guidelines on models, datasets, and allowable optimizations, these benchmarks provide a level playing field for fair comparison among competing AI infrastructure solutions.

The MLPerf tests from MLCommons are designed to provide a common benchmarking methodology for measuring application-level KPIs, which are the primary indicators of performance for end users. For inference, the Llama 2 70B results show clear throughput scaling as the configuration expands from two to four nodes. The training benchmarks provide representative data for Llama 2 70B (on two nodes) and Llama 3.1 8B (on eight nodes).

Figure 7: MLPerf training and inference key performance metrics for Llama 2 and Llama 3.1 models, detailing throughput and JCT across multi-node configurations

These findings support our core claim: the Cisco validated architecture is not just theoretically sound; benchmarking shows it can handle the most demanding AI inference and training workloads.

A real-world deployment of the Cisco and AMD AI solution architecture

The Cisco-AMD partnership delivers real-world impact, notably powering G42’s large-scale AI clusters. This end-to-end solution, integrating AMD GPUs, Cisco UCS servers, N9000 800G switches, and Nexus Dashboard, provides the secure, scalable performance required for cutting-edge AI workloads.

“As AI workloads scale, network performance becomes a critical enabler of cluster efficiency. The AMD Pensando™ Pollara 400 AI NIC, with its fully programmable, fault-resilient design, delivers consistent performance for GPU scale-out training. In collaboration with Cisco N9000 switching, we’re advancing Ethernet to the next level, helping maximize GPU utilization and accelerate job completion.”

—Yousuf Khan, Corporate Vice President, Networking Technology and Solutions Group, AMD

Operationalizing intelligence: A new standard for performance at scale

In the age of massive-scale AI, an organization’s infrastructure is either its greatest competitive advantage or its most significant bottleneck. When the stakes involve mission-critical training, fine-tuning, and inferencing, a unified, fully validated ecosystem is a must. Cisco and AMD are changing the equation, delivering a deterministic, high-performance fabric that turns your network into a catalyst for innovation.

Connect with a Cisco AI networking specialist today to design a deployment tailored to your specific workloads.

Additional resources:
