# Fortran with coarrays

## gfortran

`gfortran` requires a separate package, called OpenCoarrays, in order to support multiple-image coarrays via `openmpi`. OpenCoarrays is now available as a module in the 2023r1 software stack:
- Load the necessary modules:
- Compile your code:
- Run your code locally:
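The three steps above can be sketched as follows. This is an illustration, not the verbatim commands from this page: the module names are taken from the example submission script further down, `yourcode.f90` is a placeholder for your source file, and `caf`/`cafrun` are the compiler wrapper and launcher that OpenCoarrays installs.

```shell
# Load the 2023r1 stack plus Open MPI and OpenCoarrays
# (module names as in the example submission script):
module load 2023r1
module load openmpi
module load opencoarrays

# Compile with the OpenCoarrays compiler wrapper:
caf yourcode.f90 -o yourcode.x

# Run locally with 4 images:
cafrun -n 4 ./yourcode.x
```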
The simple "hello world" code works, but it prints the following warning:

The warning is likely connected to the fact that this coarrays implementation relies on one-sided communication, whereas MPI expects two-sided communication.

Unfortunately, these warning messages are printed to STDOUT, so simply redirecting STDERR does not help. The warning messages can instead be filtered out of STDOUT with `grep -v`. Alternatively, if you intend to stay within a single node only, disabling the UCX functionality altogether might be an option:
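Both workarounds can be sketched as below. This assumes, as in the example submission script further down, that every warning line contains the string `UCX`; `yourcode.x` is a placeholder executable.

```shell
# Keep only the output lines that do not mention UCX:
./yourcode.x | grep -v 'UCX' > output.log

# Or, for single-node runs, disable the UCX component of Open MPI entirely:
export OMPI_MCA_pml=^ucx
```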
- Example submission script:
```bash
#!/bin/bash
#
#SBATCH --job-name="gfortran_coarray_test"
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --partition=compute
#SBATCH --mem-per-cpu=1G

module load 2023r1
module load openmpi
module load opencoarrays

# export OMPI_MCA_pml=^ucx

srun ./yourcode.x | grep -v 'UCX' > output.log
```
### Compile OpenCoarrays

OpenCoarrays is now available as a module in the 2022r2 software stack. However, it is also possible to compile your own version:
- Load the necessary modules:
- Compile OpenCoarrays locally in your home directory:

  ```shell
  cd ~
  mkdir tools
  cd tools
  wget https://github.com/sourceryinstitute/OpenCoarrays/releases/download/2.9.2/OpenCoarrays-2.9.2.tar.gz
  tar xvzf OpenCoarrays-2.9.2.tar.gz
  cd OpenCoarrays-2.9.2
  mkdir build
  mkdir install
  cd build
  cmake ${HOME}/tools/OpenCoarrays-2.9.2/ -DCMAKE_INSTALL_PREFIX=${HOME}/tools/OpenCoarrays-2.9.2/install/
  make -j 4
  make test
  make install
  ```
- Add the OpenCoarrays library to your `LD_LIBRARY_PATH`:
- Compile your code:
- Run your code locally:
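With the locally built OpenCoarrays from the previous step, these steps might look like the following sketch. The install prefix matches the `cmake` command above, but the `lib64` subdirectory name (versus `lib`) is an assumption; check your actual install tree. `yourcode.f90` is a placeholder source file.

```shell
# Make the locally installed OpenCoarrays library findable at run time
# (ASSUMPTION: the library lands in lib64; on some systems it is lib):
export LD_LIBRARY_PATH=${HOME}/tools/OpenCoarrays-2.9.2/install/lib64:${LD_LIBRARY_PATH}

# Compile and run with the wrappers from the local install:
${HOME}/tools/OpenCoarrays-2.9.2/install/bin/caf yourcode.f90 -o yourcode.x
${HOME}/tools/OpenCoarrays-2.9.2/install/bin/cafrun -n 4 ./yourcode.x
```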
The simple "hello world" code works, but it prints the following warning:
- Example submission script:
## ifort

In theory, `ifort` supports multiple-image coarrays out of the box. The following works for single-node memory sharing:
- Load the necessary modules:
- Compile your code:
- Export the necessary environment variables:
- Run your code locally:
This will occupy all available CPUs. If you want to control how many images you are requesting, set the `FOR_COARRAY_NUM_IMAGES` environment variable:
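As a sketch of the steps above: the module names and environment variable are taken from the example submission script below, `-coarray=shared` is the `ifort` option selecting single-node shared-memory coarrays, and `yourcode.f90` is a placeholder source file.

```shell
# Load the Intel toolchain (module names as in the example submission script):
module load 2022r2
module load intel/oneapi-all

# Compile for single-node, shared-memory coarrays:
ifort -coarray=shared yourcode.f90 -o yourcode.x

# By default the executable starts one image per available CPU;
# FOR_COARRAY_NUM_IMAGES overrides that count:
export FOR_COARRAY_NUM_IMAGES=8
./yourcode.x
```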
The simple "hello world" code works, but it prints the following warning:
- Example submission script (only works on one node!):
```bash
#!/bin/sh
#
#SBATCH --job-name="ifort_coarray_test"
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --partition=compute
#SBATCH --mem-per-cpu=1G

module load 2022r2
module load intel/oneapi-all

export I_MPI_PMI_LIBRARY=/cm/shared/apps/slurm/current/lib64/libpmi2.so
export I_MPI_FABRICS=shm
export I_MPI_DEVICE=shm
export FOR_COARRAY_NUM_IMAGES=8

./yourcode.x > output.log
```