Classification of MPI-1 Benchmarks

Intel® MPI Benchmarks introduces the following classes of benchmarks: Single Transfer, Parallel Transfer, and Collective.

The following lists show the MPI-1 benchmarks in each class:

Single Transfer

PingPong
PingPongSpecificSource
PingPing
PingPingSpecificSource

Parallel Transfer

Sendrecv
Exchange
Multi-PingPong
Multi-PingPing
Multi-Sendrecv
Multi-Exchange
Uniband
Biband
Multi-Uniband
Multi-Biband

Collective

Bcast
Multi-Bcast
Allgather
Multi-Allgather
Allgatherv
Multi-Allgatherv
Alltoall
Multi-Alltoall
Alltoallv
Multi-Alltoallv
Scatter
Multi-Scatter
Scatterv
Multi-Scatterv
Gather
Multi-Gather
Gatherv
Multi-Gatherv
Reduce
Multi-Reduce
Reduce_scatter
Multi-Reduce_scatter
Allreduce
Multi-Allreduce
Barrier
Multi-Barrier

Each class interprets results in a different way.

Single Transfer Benchmarks

Single transfer benchmarks involve two processes that actively communicate; all other processes wait for the communication to complete. Each benchmark is run with varying message lengths. The timing is averaged between the two processes. The basic MPI data type for all messages is MPI_BYTE.

Throughput values are measured in MBps and can be calculated as follows:

throughput = X/2^20 * 10^6/time = X/1.048576/time

where

time is the measured time, in μsec

X is the message length, in bytes
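As a sanity check, the formula above can be evaluated directly. The helper below is only an illustration of the arithmetic, not part of the benchmark suite; the function name is invented for this example:

```python
def throughput_mbps(x_bytes, time_usec):
    """Single transfer throughput in MBps (1 MB = 2^20 bytes).

    x_bytes   -- message length X, in bytes
    time_usec -- measured time, in microseconds
    """
    # X / 2^20 converts bytes to MB; 10^6 / time converts per-usec to per-second
    return x_bytes / 2**20 * 10**6 / time_usec

# Example: a 1 MiB message transferred in 500 usec
print(throughput_mbps(2**20, 500.0))  # 2000.0 MBps
```

The two forms of the formula agree because X/2^20 * 10^6 = X/1.048576, so dividing by 1.048576 is just a shortcut for the unit conversion.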

Parallel Transfer Benchmarks

Parallel transfer benchmarks involve more than two active processes. Each benchmark runs with varying message lengths. The timing is averaged over multiple samples. The basic MPI data type for all messages is MPI_BYTE. The throughput calculations of the benchmarks take into account the multiplicity nmsg of messages outgoing from or incoming to a particular process. For the Sendrecv benchmark, a particular process sends and receives X bytes, so the turnover is 2X bytes and nmsg=2. For the Exchange benchmark, the turnover is 4X bytes and nmsg=4.

Throughput values are measured in MBps and can be calculated as follows:

throughput = nmsg*X/2^20 * 10^6/time = nmsg*X/1.048576/time,

where

time is the measured time, in μsec

X is the message length, in bytes

nmsg is the number of messages outgoing from or incoming to a particular process
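The nmsg multiplier can likewise be checked with a small calculation. As before, this helper is only an illustration of the formula, with an invented name:

```python
def parallel_throughput_mbps(nmsg, x_bytes, time_usec):
    """Parallel transfer throughput in MBps (1 MB = 2^20 bytes).

    nmsg      -- number of messages per process (e.g. 2 for Sendrecv, 4 for Exchange)
    x_bytes   -- message length X, in bytes
    time_usec -- measured time, in microseconds
    """
    # nmsg * X bytes cross a process boundary during the measured interval
    return nmsg * x_bytes / 2**20 * 10**6 / time_usec

# Sendrecv: turnover 2X bytes per process (nmsg = 2)
print(parallel_throughput_mbps(2, 2**20, 1000.0))  # 2000.0 MBps
# Exchange: turnover 4X bytes per process (nmsg = 4)
print(parallel_throughput_mbps(4, 2**20, 1000.0))  # 4000.0 MBps
```

Note that for the same X and time, Exchange reports twice the throughput of Sendrecv purely because twice as many bytes cross the process boundary.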

Collective Benchmarks

Collective benchmarks measure MPI collective operations. Each benchmark is run with varying message lengths. The timing is averaged over multiple samples. The basic MPI data type for all messages is MPI_BYTE for pure data movement functions and MPI_FLOAT for reductions.

Collective benchmarks show bare timings only; no throughput values are computed.