Benchmarking OpenMPI with OSU micro-benchmarks #45

@heatherkellyucl

Description

From #44 we want to know which Spack variants to build our main OpenMPI with. We are going to use the C MPI benchmarks from https://mvapich.cse.ohio-state.edu/benchmarks/ to compare performance on our OmniPath clusters.

Our existing mpi/openmpi/4.1.1/gnu-4.9.2 should be below acceptable performance (we assume!), since it uses only vader, the shared-memory BTL.

Compiling the OSU microbenchmarks on Young

```shell
# wget couldn't validate the certificate
wget --no-check-certificate https://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-7.2.tar.gz
mkdir openmpi-4.1.1_vader
cd openmpi-4.1.1_vader
tar -xvf ../osu-micro-benchmarks-7.2.tar.gz

# modules for the existing install
module purge
module load gcc-libs/4.9.2
module load compilers/gnu/4.9.2
module load numactl/2.0.12
module load psm2/11.2.185/gnu-4.9.2
module load mpi/openmpi/4.1.1/gnu-4.9.2
module load gnuplot

cd osu-micro-benchmarks-7.2
./configure CC=mpicc CXX=mpicxx --prefix=/home/cceahke/Scratch/mpi_benchmarks/openmpi-4.1.1_vader/osu-micro-benchmarks-7.2_install
make
make install
```
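To sanity-check the build before doing any real measurements, a quick single-node run of one of the startup benchmarks works (a sketch: `osu_hello` is one of the binaries installed under `startup/`, and the path assumes the install prefix from the configure line above; this needs the same modules loaded and is meant to be run on a cluster node, not verified locally):

```shell
# Run the startup "hello" test with two ranks on one node to confirm
# the benchmarks link and launch against the loaded OpenMPI.
OSU=../osu-micro-benchmarks-7.2_install/libexec/osu-micro-benchmarks/mpi
mpirun -np 2 "$OSU"/startup/osu_hello
```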

Now we have directories full of benchmarks:

```shell
ls ../osu-micro-benchmarks-7.2_install/libexec/osu-micro-benchmarks/mpi/
collective/ one-sided/  pt2pt/      startup/
ls ../osu-micro-benchmarks-7.2_install/libexec/osu-micro-benchmarks/mpi/pt2pt/
osu_bibw  osu_bw  osu_latency  osu_latency_mp  osu_latency_mt  osu_mbw_mr  osu_multi_lat  persistent/
```

Going to start with point-to-point then look at some collectives.
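A sketch of what the point-to-point runs will look like (hostfile handling and exact launch flags are assumptions, to be replaced by whatever the jobscript on Young provides; `--map-by node` is the Open MPI option that places one rank per host so the traffic actually crosses the OmniPath fabric rather than going through shared memory):

```shell
# osu_latency and osu_bw each use exactly 2 ranks; map one rank to each
# of two nodes so we measure the interconnect, not vader.
OSU=../osu-micro-benchmarks-7.2_install/libexec/osu-micro-benchmarks/mpi
mpirun -np 2 --map-by node "$OSU"/pt2pt/osu_latency
mpirun -np 2 --map-by node "$OSU"/pt2pt/osu_bw
```

Both benchmarks print a table of message size against latency (us) or bandwidth (MB/s), which gnuplot (loaded above) can plot for comparison across builds.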
