
MPI (Message Passing Interface)

The software stack contains several MPI implementations. We recommend using OpenMPI in combination with the GNU compiler. To load OpenMPI, use

module load 2024r1
module load openmpi

This also makes available a large number of libraries and applications in the software stack that depend on MPI.
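
As a quick sanity check that compiler and MPI are set up correctly, you can build and run a small test program. The sketch below is only illustrative (file names and task counts are arbitrary); run the last step inside a SLURM job allocation.

cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpicc -o hello_mpi hello_mpi.c   # OpenMPI compiler wrapper around gcc
srun -n 4 ./hello_mpi            # or: mpirun -np 4 ./hello_mpi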

Be aware that, while different MPI implementations are API-compatible (functions like MPI_Send have the same standardized interface), they are not ABI (Application Binary Interface) compatible: a program compiled with, e.g., OpenMPI cannot be linked against libraries built with IntelMPI or NVidia's MPI. All software in a single software stack is therefore compiled against the same OpenMPI version to ensure compatibility.
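
If you are unsure which MPI library a prebuilt binary was linked against, you can inspect it directly. The commands below are a sketch; my_app is a placeholder name.

ldd ./my_app | grep -i mpi   # lists the MPI shared libraries the binary resolves to
mpicc --showme               # prints the compile/link line used by the OpenMPI wrapper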

CUDA support

The OpenMPI version in the software stack on GPU nodes is compiled with CUDA support (allowing MPI calls on CUDA memory via GPUDirect). However, please note that the GPUs on DelftBlue do not have direct InfiniBand connections, so any data transferred between GPUs must go via the PCIe bus and the host CPU. You should therefore not expect a performance gain from using GPUDirect.
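
You can verify whether the OpenMPI module you have loaded was built with CUDA support by querying its build configuration; a value of true indicates CUDA-aware support:

ompi_info --parsable --all | grep mpi_built_with_cuda_support:value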

Other MPI libraries

We do not recommend building your own MPI library on DelftBlue: the performance and correct functioning of an MPI library depend on an entire stack of software "under the hood", including network drivers and low-level communication primitives. If you would like to use a vendor-specific MPI library, the following two options are available:

IntelMPI

Available via module load intel/oneapi-all. Beware that

  • Intel has deprecated the icc/icpc/ifort series of compilers.
  • The mpicc/mpicxx/mpif90 wrappers call the GNU compilers (gcc/g++/gfortran) by default.
  • The mpiicc/mpiicpc/mpiifort wrappers use the deprecated compilers. To switch to the new LLVM-based compilers (icx, icpx and ifx for C, C++ and Fortran, respectively), see the sketch below.
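
One common way to do this, assuming a recent IntelMPI release (source file names are placeholders, and the exact options may differ per version), is to tell each wrapper which underlying compiler to invoke:

mpiicc   -cc=icx    my_prog.c     # C
mpiicpc  -cxx=icpx  my_prog.cpp   # C++
mpiifort -fc=ifx    my_prog.f90   # Fortran

Newer IntelMPI releases may also ship dedicated wrappers (mpiicx, mpiicpx, mpiifx) that call the LLVM-based compilers directly.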

For more details on IntelMPI with LLVM-based compilers, see here.

NVidia's NVHPC toolkit

If you want to use the nvhpc compilers for building your GPU application, note that the toolkit provides its own MPI implementation. Simply run module load nvhpc; the standard wrappers (mpicc etc.) should then work.
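
As a minimal sketch (the source file name is a placeholder, and the module version on the system may differ):

module load nvhpc
mpicc -o my_gpu_app my_gpu_app.c   # the NVHPC wrapper typically invokes nvc
mpirun -np 2 ./my_gpu_app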

A benchmark example

The following job script downloads and compiles the Intel MPI benchmark suite and runs a few tests using OpenMPI. If you want to use another MPI library, we recommend running this script with your compiler/MPI combination first to make sure everything works as expected.
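
The script below is a sketch: the partition, resource requests and build options are assumptions and should be adapted to your account and needs; the benchmark sources come from the intel/mpi-benchmarks repository on GitHub.

#!/bin/bash
#SBATCH --job-name=imb-openmpi
#SBATCH --partition=compute        # adapt to a partition you have access to
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=1
#SBATCH --time=00:30:00
#SBATCH --mem-per-cpu=1G

module load 2024r1
module load openmpi

# Download and build the Intel MPI Benchmarks with the loaded MPI wrappers.
git clone https://github.com/intel/mpi-benchmarks.git
cd mpi-benchmarks
make CC=mpicc CXX=mpicxx IMB-MPI1

# Run a few point-to-point and collective benchmarks across all tasks.
srun ./IMB-MPI1 PingPong Allreduce Bcast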