Fortran with coarrays

gfortran

gfortran requires a separate package, OpenCoarrays, in order to support multi-image coarrays on top of OpenMPI. OpenCoarrays is now available as a module in the 2023r1 software stack:

  1. Load necessary modules:
    module load 2023r1
    module load openmpi
    module load opencoarrays
    
  2. Compile your code:
    gfortran -fcoarray=lib yourcode.f90 -o yourcode.x -L${OPENCOARRAYS_ROOT}/lib64 -lcaf_mpi
    
  3. Run your code locally:

    mpirun -np 8 ./yourcode.x
    

    The simple hello-world code (a minimal example is shown after this list) works, but gives the following warning:

    tag_match.c:62   UCX  WARN  unexpected tag-receive descriptor 0xc52200 was not matched
    

    The warning is likely connected to the fact that this coarray implementation relies on one-sided communication, whereas MPI expects two-sided communication.

    Unfortunately, these warning messages are printed to STDOUT rather than STDERR, so they cannot be discarded by redirecting STDERR. Instead, they can be filtered out of STDOUT with grep -v:

    mpirun -np 8 ./yourcode.x | grep -v 'UCX'
    

    Alternatively, if you intend to run on a single node only, disabling the UCX functionality altogether might be an option:

    export OMPI_MCA_pml=^ucx
    
  4. Example submission script:

    #!/bin/bash
    #
    #SBATCH --job-name="gfortran_coarray_test"
    #SBATCH --time=01:00:00
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8
    #SBATCH --cpus-per-task=1
    #SBATCH --partition=compute
    #SBATCH --mem-per-cpu=1G
    
    module load 2023r1
    module load openmpi
    module load opencoarrays
    
    # export OMPI_MCA_pml=^ucx
    
    srun ./yourcode.x | grep -v 'UCX' > output.log
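
For reference, a minimal coarray hello-world program matching the compile and run commands above could look like this (this_image() and num_images() are standard Fortran 2008 intrinsics):

    program hello_coarray
      implicit none
      ! Each image (an MPI rank under OpenCoarrays) prints one line.
      write (*, '(a,i0,a,i0)') 'Hello from image ', this_image(), &
                               ' of ', num_images()
    end program hello_coarray

With 8 images, this prints 8 lines, one per image, in arbitrary order.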
    

Compile OpenCoarrays

OpenCoarrays is available as a module in the 2022r2 and 2023r1 software stacks. However, it is also possible to compile your own version:

  1. Load necessary modules:

    module load 2022r1
    module load compute
    module load gcc/11.2.0-midhpa4
    module load openmpi
    module load cmake
    

  2. Compile OpenCoarrays locally in your home directory:

    cd ~
    mkdir tools
    cd tools
    
    wget https://github.com/sourceryinstitute/OpenCoarrays/releases/download/2.9.2/OpenCoarrays-2.9.2.tar.gz
    tar xvzf OpenCoarrays-2.9.2.tar.gz
    
    cd OpenCoarrays-2.9.2
    
    mkdir build
    mkdir install
    
    cd build
    
    cmake ${HOME}/tools/OpenCoarrays-2.9.2/ -DCMAKE_INSTALL_PREFIX=${HOME}/tools/OpenCoarrays-2.9.2/install/
    
    make -j 4
    make test       # optional: run the OpenCoarrays test suite
    make install
    

  3. Add the OpenCoarrays library directory to your LD_LIBRARY_PATH:

    export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${HOME}/tools/OpenCoarrays-2.9.2/install/lib64"
    

  4. Compile your code:

    gfortran -fcoarray=lib yourcode.f90 -o yourcode.x -L${HOME}/tools/OpenCoarrays-2.9.2/install/lib64 -lcaf_mpi
    

  5. Run your code locally:

    mpirun -np 8 ./yourcode.x
    

    As before, the simple hello-world code works but prints the UCX warning shown above; it can be filtered out with grep -v 'UCX'.
    
  6. Example submission script:

    #!/bin/sh
    #
    #SBATCH --job-name="gfortran_coarray_test"
    #SBATCH --time=01:00:00
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8
    #SBATCH --cpus-per-task=1
    #SBATCH --partition=compute
    #SBATCH --mem-per-cpu=1G
    
    # LD_LIBRARY_PATH must still contain the OpenCoarrays lib64 directory
    # (step 3); by default, sbatch propagates the submitting shell's environment.
    srun ./yourcode.x > output.log
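
If the build above completed successfully, OpenCoarrays has also installed two wrapper scripts, caf (a compiler wrapper) and cafrun (a launcher wrapper), into the bin directory of the install prefix. As a possible shortcut, these let you compile and run without spelling out the library flags yourself; the paths below assume the install location used in step 2:

    export PATH="${PATH}:${HOME}/tools/OpenCoarrays-2.9.2/install/bin"
    
    caf yourcode.f90 -o yourcode.x    # wraps gfortran -fcoarray=lib ... -lcaf_mpi
    cafrun -n 8 ./yourcode.x          # wraps mpirun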
    

ifort

In theory, ifort supports multi-image coarrays out of the box. The following works for single-node shared memory:

  1. Load necessary modules:

    module load 2022r2
    module load intel/oneapi-all
    

  2. Compile your code:

    ifort -free -e08 -coarray=shared -o yourcode.x yourcode.f90
    

  3. Export the necessary environment variables:

    export I_MPI_FABRICS=shm
    export I_MPI_DEVICE=shm
    

  4. Run your code locally:

    ./yourcode.x
    

    By default, this will occupy all available CPUs. To control how many images are started, set:

    export FOR_COARRAY_NUM_IMAGES=8
    

    The simple hello-world code works, but gives the following warning:

    MPI startup(): Warning: I_MPI_PMI_LIBRARY will be ignored since the hydra process manager was found
    
  5. Example submission script (works on a single node only!):

    #!/bin/sh
    #
    #SBATCH --job-name="ifort_coarray_test"
    #SBATCH --time=01:00:00
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8
    #SBATCH --cpus-per-task=1
    #SBATCH --partition=compute
    #SBATCH --mem-per-cpu=1G
    
    module load 2022r2
    module load intel/oneapi-all
    
    export I_MPI_PMI_LIBRARY=/cm/shared/apps/slurm/current/lib64/libpmi2.so
    
    export I_MPI_FABRICS=shm
    export I_MPI_DEVICE=shm
    
    export FOR_COARRAY_NUM_IMAGES=8
    
    ./yourcode.x > output.log
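
Beyond hello world, the following sketch can be used to check that data actually moves between images: each image stores its index in a scalar coarray, and after a barrier reads its left neighbour's copy with a one-sided get (the access pattern behind the UCX warning discussed in the gfortran section). It compiles unchanged with ifort -coarray=shared or with gfortran/OpenCoarrays:

    program ring_exchange
      implicit none
      integer :: me, n, left
      integer :: val[*]        ! scalar coarray: one copy per image
    
      me  = this_image()
      n   = num_images()
      val = me                 ! each image stores its own index
    
      sync all                 ! ensure every image has written val
    
      ! One-sided get: read the left neighbour's copy directly,
      ! without the neighbour taking part in the communication.
      left = merge(n, me - 1, me == 1)
      write (*, '(a,i0,a,i0,a,i0)') 'image ', me, ' got ', val[left], &
                                    ' from image ', left
    end program ring_exchange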