Intel Compilers¶
DelftBlue software stack¶
Intel compiler suite modules from the DelftBlue software stack can be enabled as follows. Load the `intel/oneapi-all` module:
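```
# load the Intel oneAPI compiler suite from the DelftBlue software stack
# (if required, first load the appropriate software-stack release module for your environment)
module load intel/oneapi-all
```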
This will make all relevant Intel compiler suite components available: it loads the Intel compilers (e.g. `icc` and `ifort`) and libraries (`mkl`), and you also get an Intel-specific implementation of MPI.
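A quick way to verify that the tools are on your `PATH` after loading the module (a minimal check; the reported paths and versions depend on the installed release):

```
# the compilers and MPI wrappers should now resolve to the oneAPI installation
which icc ifort mpiicc mpiifort
# MKL is made available through the MKLROOT environment variable (see below)
echo $MKLROOT
```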
'New' and 'Old' Intel compilers
The latest versions of the Intel compiler suite ship both "classic" and "new" versions of the C/C++ and Fortran compilers. The `intel/oneapi-all` module includes most of them out of the box. For instance, you will get:
C/C++: the "classic" compilers `icc` and `icpc`, and the "new" LLVM-based compilers `icx`, `icpx` and `dpcpp`. For example:

`icx`:

```
[<netid>@login04 ~]$ which icx
/beegfs/apps/generic/intel/oneapi/compiler/latest/linux/bin/icx
[<netid>@login04 ~]$ icx -v
Intel(R) oneAPI DPC++/C++ Compiler 2022.0.0 (2022.0.0.20211123)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /mnt/shared/apps/generic/intel/oneapi/compiler/2022.0.2/linux/bin-llvm
Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/8
Selected GCC installation: /usr/lib/gcc/x86_64-redhat-linux/8
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Selected multilib: .;@m64
```

`icpx`:

```
[<netid>@login04 ~]$ which icpx
/beegfs/apps/generic/intel/oneapi/compiler/latest/linux/bin/icpx
[<netid>@login04 ~]$ icpx -v
Intel(R) oneAPI DPC++/C++ Compiler 2022.0.0 (2022.0.0.20211123)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /mnt/shared/apps/generic/intel/oneapi/compiler/2022.0.2/linux/bin-llvm
Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/8
Selected GCC installation: /usr/lib/gcc/x86_64-redhat-linux/8
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Selected multilib: .;@m64
```

`dpcpp`:

```
[<netid>@login04 ~]$ which dpcpp
/beegfs/apps/generic/intel/oneapi/compiler/latest/linux/bin/dpcpp
[<netid>@login04 ~]$ dpcpp -v
Intel(R) oneAPI DPC++/C++ Compiler 2022.0.0 (2022.0.0.20211123)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /mnt/shared/apps/generic/intel/oneapi/compiler/latest/linux/bin-llvm
Found candidate GCC installation: /usr/lib/gcc/x86_64-redhat-linux/8
Selected GCC installation: /usr/lib/gcc/x86_64-redhat-linux/8
Candidate multilib: .;@m64
Candidate multilib: 32;@m32
Selected multilib: .;@m64
```
Fortran: `ifort` (classic) and `ifx` (new).
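As an illustration only (the source file names are hypothetical), the classic and new compilers are invoked in the same way:

```
# classic compilers
icc   -O2 hello.c   -o hello_c
ifort -O2 hello.f90 -o hello_f
# new LLVM-based compilers
icx -O2 hello.c   -o hello_c_new
ifx -O2 hello.f90 -o hello_f_new
```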
Important
Please make sure that you are using the correct compiler commands and MPI wrappers for the Intel suite! For example:

- `icc` instead of `gcc`
- `ifort` instead of `gfortran`
- `mpiicc` instead of `mpicc`
- `mpiifort` instead of `mpif90`
More info on Intel MPI wrappers
See also our MPI page.
Intel MPI provides two sets of MPI wrappers: `mpiicc`, `mpiicpc`, `mpiifort`, which use the Intel compilers, and `mpicc`, `mpicxx`, `mpif90`, which use the GNU compilers. Use the `-show` option (e.g. `mpif90 -show`) to display the underlying compiler behind each of the MPI compiler commands.
The correspondence between the MPI wrappers and the back-end compilers is:

- `mpiifort` uses Intel `ifort`
- `mpiicc` uses Intel `icc`
- `mpiicpc` uses Intel `icpc`
- `mpif90` uses GNU `gfortran`
- `mpicc` uses GNU `gcc`
- `mpicxx` uses GNU `g++`
Unless you really need to use GNU compilers, we strongly suggest using the wrappers based on the Intel compilers.
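As a minimal sketch (the source file names are hypothetical):

```
# show which back-end compiler a wrapper invokes
mpiifort -show

# build MPI programs with the Intel-based wrappers
mpiicc   -O2 hello_mpi.c   -o hello_mpi_c
mpiifort -O2 hello_mpi.f90 -o hello_mpi_f
```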
Loading the `intel/oneapi-all` module sets the `$MKLROOT` environment variable:
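```
# on DelftBlue this points to the oneAPI MKL installation
echo $MKLROOT
# /beegfs/apps/generic/intel/oneapi/mkl/latest
```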
The `mkl` libraries can be found in `$MKLROOT/lib/intel64`, i.e. in `/beegfs/apps/generic/intel/oneapi/mkl/latest/lib/intel64`:
```
[<netid>@login03 intel64]$ ls -l
total 3680190
-rwxr-xr-x 1 root root 50137440 Nov 12 05:40 libmkl_avx2.so.2
-rwxr-xr-x 1 root root 66658456 Nov 12 05:40 libmkl_avx512.so.2
-rwxr-xr-x 1 root root 53034392 Nov 12 05:40 libmkl_avx.so.2
-rw-r--r-- 1 root root 1273566 Nov 12 05:40 libmkl_blacs_intelmpi_ilp64.a
lrwxrwxrwx 1 root root 32 Jan 14 01:24 libmkl_blacs_intelmpi_ilp64.so -> libmkl_blacs_intelmpi_ilp64.so.2
-rwxr-xr-x 1 root root 523704 Nov 12 05:40 libmkl_blacs_intelmpi_ilp64.so.2
-rw-r--r-- 1 root root 754342 Nov 12 05:40 libmkl_blacs_intelmpi_lp64.a
lrwxrwxrwx 1 root root 31 Jan 14 01:24 libmkl_blacs_intelmpi_lp64.so -> libmkl_blacs_intelmpi_lp64.so.2
-rwxr-xr-x 1 root root 320552 Nov 12 05:40 libmkl_blacs_intelmpi_lp64.so.2
-rw-r--r-- 1 root root 1292782 Nov 12 05:40 libmkl_blacs_openmpi_ilp64.a
lrwxrwxrwx 1 root root 31 Jan 14 01:24 libmkl_blacs_openmpi_ilp64.so -> libmkl_blacs_openmpi_ilp64.so.2
-rwxr-xr-x 1 root root 532928 Nov 12 05:40 libmkl_blacs_openmpi_ilp64.so.2
-rw-r--r-- 1 root root 773558 Nov 12 05:40 libmkl_blacs_openmpi_lp64.a
lrwxrwxrwx 1 root root 30 Jan 14 01:24 libmkl_blacs_openmpi_lp64.so -> libmkl_blacs_openmpi_lp64.so.2
-rwxr-xr-x 1 root root 321552 Nov 12 05:40 libmkl_blacs_openmpi_lp64.so.2
-rw-r--r-- 1 root root 658964 Nov 12 05:40 libmkl_blas95_ilp64.a
-rw-r--r-- 1 root root 657516 Nov 12 05:40 libmkl_blas95_lp64.a
-rw-r--r-- 1 root root 217682 Nov 12 05:40 libmkl_cdft_core.a
lrwxrwxrwx 1 root root 21 Jan 14 01:24 libmkl_cdft_core.so -> libmkl_cdft_core.so.2
-rwxr-xr-x 1 root root 168912 Nov 12 05:40 libmkl_cdft_core.so.2
-rw-r--r-- 1 root root 577561212 Nov 12 05:40 libmkl_core.a
lrwxrwxrwx 1 root root 16 Jan 14 01:24 libmkl_core.so -> libmkl_core.so.2
-rwxr-xr-x 1 root root 73999168 Nov 12 05:40 libmkl_core.so.2
-rwxr-xr-x 1 root root 42416560 Nov 12 05:40 libmkl_def.so.2
-rw-r--r-- 1 root root 28405344 Nov 12 05:40 libmkl_gf_ilp64.a
lrwxrwxrwx 1 root root 20 Jan 14 01:24 libmkl_gf_ilp64.so -> libmkl_gf_ilp64.so.2
-rwxr-xr-x 1 root root 13272328 Nov 12 05:40 libmkl_gf_ilp64.so.2
-rw-r--r-- 1 root root 34401180 Nov 12 05:40 libmkl_gf_lp64.a
lrwxrwxrwx 1 root root 19 Jan 14 01:24 libmkl_gf_lp64.so -> libmkl_gf_lp64.so.2
-rwxr-xr-x 1 root root 17047584 Nov 12 05:40 libmkl_gf_lp64.so.2
-rw-r--r-- 1 root root 44071556 Nov 12 05:40 libmkl_gnu_thread.a
lrwxrwxrwx 1 root root 22 Jan 14 01:24 libmkl_gnu_thread.so -> libmkl_gnu_thread.so.2
-rwxr-xr-x 1 root root 30979016 Nov 12 05:40 libmkl_gnu_thread.so.2
-rw-r--r-- 1 root root 28410638 Nov 12 05:40 libmkl_intel_ilp64.a
lrwxrwxrwx 1 root root 23 Jan 14 01:24 libmkl_intel_ilp64.so -> libmkl_intel_ilp64.so.2
-rwxr-xr-x 1 root root 13277104 Nov 12 05:40 libmkl_intel_ilp64.so.2
-rw-r--r-- 1 root root 34415724 Nov 12 05:40 libmkl_intel_lp64.a
lrwxrwxrwx 1 root root 22 Jan 14 01:24 libmkl_intel_lp64.so -> libmkl_intel_lp64.so.2
-rwxr-xr-x 1 root root 17056672 Nov 12 05:40 libmkl_intel_lp64.so.2
-rw-r--r-- 1 root root 90467410 Nov 12 05:40 libmkl_intel_thread.a
lrwxrwxrwx 1 root root 24 Jan 14 01:24 libmkl_intel_thread.so -> libmkl_intel_thread.so.2
-rwxr-xr-x 1 root root 64858584 Nov 12 05:40 libmkl_intel_thread.so.2
-rw-r--r-- 1 root root 7489080 Nov 12 05:40 libmkl_lapack95_ilp64.a
-rw-r--r-- 1 root root 7417664 Nov 12 05:40 libmkl_lapack95_lp64.a
-rwxr-xr-x 1 root root 50321512 Nov 12 05:40 libmkl_mc3.so.2
-rwxr-xr-x 1 root root 48742776 Nov 12 05:40 libmkl_mc.so.2
-rw-r--r-- 1 root root 51336890 Nov 12 05:40 libmkl_pgi_thread.a
lrwxrwxrwx 1 root root 22 Jan 14 01:24 libmkl_pgi_thread.so -> libmkl_pgi_thread.so.2
-rwxr-xr-x 1 root root 38037904 Nov 12 05:40 libmkl_pgi_thread.so.2
lrwxrwxrwx 1 root root 14 Jan 14 01:24 libmkl_rt.so -> libmkl_rt.so.2
-rwxr-xr-x 1 root root 8695128 Nov 12 05:40 libmkl_rt.so.2
-rw-r--r-- 1 root root 12244638 Nov 12 05:40 libmkl_scalapack_ilp64.a
lrwxrwxrwx 1 root root 27 Jan 14 01:24 libmkl_scalapack_ilp64.so -> libmkl_scalapack_ilp64.so.2
-rwxr-xr-x 1 root root 7718648 Nov 12 05:40 libmkl_scalapack_ilp64.so.2
-rw-r--r-- 1 root root 12334148 Nov 12 05:40 libmkl_scalapack_lp64.a
lrwxrwxrwx 1 root root 26 Jan 14 01:24 libmkl_scalapack_lp64.so -> libmkl_scalapack_lp64.so.2
-rwxr-xr-x 1 root root 7736496 Nov 12 05:40 libmkl_scalapack_lp64.so.2
-rw-r--r-- 1 root root 38539080 Nov 12 05:40 libmkl_sequential.a
lrwxrwxrwx 1 root root 22 Jan 14 01:24 libmkl_sequential.so -> libmkl_sequential.so.2
-rwxr-xr-x 1 root root 29005200 Nov 12 05:40 libmkl_sequential.so.2
-rw-r--r-- 1 root root 757816350 Nov 12 05:40 libmkl_sycl.a
lrwxrwxrwx 1 root root 16 Jan 14 01:24 libmkl_sycl.so -> libmkl_sycl.so.2
-rwxr-xr-x 1 root root 1144881912 Nov 12 05:40 libmkl_sycl.so.2
-rw-r--r-- 1 root root 111751372 Nov 12 05:40 libmkl_tbb_thread.a
lrwxrwxrwx 1 root root 22 Jan 14 01:24 libmkl_tbb_thread.so -> libmkl_tbb_thread.so.2
-rwxr-xr-x 1 root root 40617024 Nov 12 05:40 libmkl_tbb_thread.so.2
-rwxr-xr-x 1 root root 15038968 Nov 12 05:40 libmkl_vml_avx2.so.2
-rwxr-xr-x 1 root root 14364256 Nov 12 05:40 libmkl_vml_avx512.so.2
-rwxr-xr-x 1 root root 15887648 Nov 12 05:40 libmkl_vml_avx.so.2
-rwxr-xr-x 1 root root 7756240 Nov 12 05:40 libmkl_vml_cmpt.so.2
-rwxr-xr-x 1 root root 8766704 Nov 12 05:40 libmkl_vml_def.so.2
-rwxr-xr-x 1 root root 14619984 Nov 12 05:40 libmkl_vml_mc2.so.2
-rwxr-xr-x 1 root root 14628344 Nov 12 05:40 libmkl_vml_mc3.so.2
-rwxr-xr-x 1 root root 14775632 Nov 12 05:40 libmkl_vml_mc.so.2
drwxr-xr-x 3 root root 1 Mar 23 11:57 locale
```
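As a hedged example, one common way to link against the threaded MKL libraries listed above (LP64 interface, classic Intel compilers; consult Intel's MKL Link Line Advisor for your exact combination):

```
# link a Fortran program against OpenMP-threaded MKL (LP64 interface)
ifort my_prog.f90 -o my_prog \
    -L${MKLROOT}/lib/intel64 \
    -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core \
    -liomp5 -lpthread -lm -ldl
```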
To use the MKL implementation of FFTW, export the path to the corresponding `include` folder:
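A minimal sketch, assuming the standard MKL layout in which the FFTW3 wrapper headers live under `$MKLROOT/include/fftw`:

```
# make MKL's fftw3.h visible to the C/C++ compilers
export CPATH=$MKLROOT/include/fftw:$CPATH
```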
Error messages and solutions¶
Catastrophic error: could not set locale¶
Sometimes, the Intel compiler suite may be sensitive to the regional settings ("locale"). For instance, the regional settings on your local machine might differ from those on the server. In this case, you might get a compilation error along the lines of `Catastrophic error: could not set locale`.
This can be fixed by exporting the correct regional environment variables:
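For example (a typical fix; the locale value below is an assumption and should be one that is actually available on the server, see `locale -a`):

```
# force a UTF-8 locale for the compile session
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
```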
Slurm out-of-memory (OOM) error¶
You might encounter the following error when submitting jobs via `slurm`:

```
slurmstepd: error: Detected 2 oom-kill event(s) in StepId=1170.0. Some of your processes may have been killed by the cgroup out-of-memory handler.
```
You need to set the `--mem-per-cpu` value in the submission script. This value is the amount of memory in MB that `slurm` allocates per allocated CPU, and it defaults to 1 MB. If your job's memory use exceeds this limit, the job gets killed with an OOM error message. Set this value to a reasonable amount, i.e. the expected memory use plus a little head room.
Example: add the following line to the submission script:
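```
# request 1 GB of memory per allocated CPU core
#SBATCH --mem-per-cpu=1G
```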
This allocates 1 GB per CPU.
All threads are pinned to the same CPU¶
Important: The current setup of the `intel` module requires you to explicitly tell Intel MPI to use the Slurm PMI, and to use `srun` instead of `mpirun`, so that Intel MPI threads are correctly pinned to CPU cores!

Use the following in your submission script:
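A sketch of a typical configuration (the path to Slurm's PMI library is an assumption and may differ on the system; adjust it to the actual location of `libpmi2.so`/`libpmi.so`):

```
# tell Intel MPI to use Slurm's PMI library, so that srun handles process placement
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so
```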
Then invoke your binary with `srun` (instead of `mpirun`!).
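For example (the binary name is hypothetical):

```
srun ./my_mpi_program
```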
Catastrophic error: Unable to read...¶
This error might occur on parallel file systems, such as BeeGFS. A work-around would be to run your compilation in a directory located on a non-parallel file system, for example in `/tmp`. Please do not forget to remove your files from there once you are finished!
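A sketch of such a workflow (all file and directory names are hypothetical):

```
# build in /tmp to avoid the BeeGFS read error, then copy the result back
mkdir -p /tmp/$USER/build
cp ~/my_project/main.c /tmp/$USER/build/
cd /tmp/$USER/build
icc -O2 main.c -o my_prog
cp my_prog ~/my_project/
# clean up after yourself
rm -rf /tmp/$USER/build
```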
A more detailed discussion of this problem can be found here.