
Julia

A standard version of Julia is available on DelftBlue in the generic stack:

module load julia

Most packages can be installed as usual. Just keep in mind that compute nodes have no internet connection, so the Pkg.add command should be run either on a login node or on a visual node.
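
For example, to add a registered package from a login node, you can run Pkg.add from the command line (the package name Plots is only an illustrative choice here; any registered package is installed the same way):

julia -e 'using Pkg; Pkg.add("Plots")'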

There are two important packages that require some special treatment as they rely on system software: MPI and CUDA.

Installing the CUDA package

Install Julia's CUDA package on a visual node, so that Julia "sees" a GPU during the installation. You can do this by submitting the following batch script:

setup-julia-cuda.slurm
#!/bin/bash
#SBATCH --partition visual
#SBATCH -n 1
#SBATCH -c 4
#SBATCH -t 00:10:00
#SBATCH --mem-per-cpu=1GB
#SBATCH --account=innovation

module load julia

# This makes sure the installation will work on various GPU node types.
# This is important because the gpu-v100 partition has AMD CPUs whereas
# the gpu-a100 partition has Intel CPUs.
export JULIA_CPU_TARGET="generic"
# Install CUDA.jl -- This will only work correctly on the "visual" partition,
# because login nodes do not have a GPU and GPU nodes do not have an internet
# connection to download the requirements.
julia -e 'using Pkg; Pkg.add(name="CUDA");'

Note: This installs the complete CUDA toolkit in your home directory. While this is not the most elegant approach, we find it the most robust one: other setups tend to make Julia try to re-install components on the GPU nodes, which fails because those nodes have no internet connection. When following this recipe, do not load a cuda module from the software stack.

To test if your CUDA is working correctly, you can submit this little demo program:

hello_cuda.jl
using CUDA

# Check that CUDA is functional, print system info and which GPU is used
println("CUDA is functional: ", CUDA.functional())
println(CUDA.versioninfo())
println("Using GPU: ", CUDA.name(CUDA.device()))

# try allocating some device memory, and doing a computation:
d_a = [1,2,3,4,5] |> CuArray 
print(d_a)

d_b = d_a.^2
print(d_b)

You can submit it with this job script:

run-julia-cuda.slurm
#!/bin/bash
#SBATCH --partition gpu-a100-small
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --gpus-per-task=1
#SBATCH -t 00:10:00
#SBATCH --mem-per-cpu=1GB
#SBATCH --account=innovation

module load julia

julia hello_cuda.jl

Installing the MPI package

In order to use MPI in a Julia program, you first have to tell Julia to use the system-provided MPI.

Steps to take:

  1. Load modules:

    module load 2025 openmpi julia
    

  2. Install and configure Julia package "MPIPreferences":

    julia --project -e 'using Pkg; Pkg.add("MPIPreferences")'
    julia --project -e 'using MPIPreferences; MPIPreferences.use_system_binary()'
    

    Output should look like this:

      [<NetID>@login02 ~]$ julia --project -e 'using Pkg; Pkg.add("MPIPreferences")'
        Installing known registries into `~/.julia`
          Updating registry at `~/.julia/registries/General.toml`
        Resolving package versions...
        Installed MPIPreferences  v0.1.10
        Installed Preferences ──── v1.4.1
          Updating `/scratch/dpalagin/.julia/environments/v1.8/Project.toml`
        [3da0fdf6] + MPIPreferences v0.1.10
          Updating `/scratch/dpalagin/.julia/environments/v1.8/Manifest.toml`
        [3da0fdf6] + MPIPreferences v0.1.10
        [21216c6a] + Preferences v1.4.1
        [ade2ca70] + Dates
        [8f399da3] + Libdl
        [de0858da] + Printf
        [fa267f1f] + TOML v1.0.0
        [4ec0a83e] + Unicode
        Precompiling project...
        2 dependencies successfully precompiled in 2 seconds
    
      [<NetID>@login02 ~]$ julia --project -e 'using MPIPreferences; MPIPreferences.use_system_binary()'
       Info: MPI implementation identified
         libmpi = "libmpi"
         version_string = "Open MPI v4.1.4, package: Open MPI feverdij@file01 Distribution, ident: 4.1.4, repo rev: v4.1.4, May 26, 2022\0"
         impl = "OpenMPI"
         version = v"4.1.4"
         abi = "OpenMPI"
       Info: MPIPreferences changed
         binary = "system"
         libmpi = "libmpi"
         abi = "OpenMPI"
         mpiexec = "mpiexec"
         preloads = Any[]
         preloads_env_switch = nothing
    
  3. Install Julia package "MPI":

    julia --project -e 'using Pkg; Pkg.add("MPI")'
    

    Output should look like this:

      [<NetID>@login02 ~]$ julia --project -e 'using Pkg; Pkg.add("MPI")'
          Updating registry at `~/.julia/registries/General.toml`
        Resolving package versions...
        Installed MPICH_jll ─────────── v4.2.0+0
        Installed MicrosoftMPI_jll ──── v10.1.4+2
        Installed Hwloc_jll ─────────── v2.10.0+0
        Installed OpenMPI_jll ───────── v4.1.6+0
        Installed PkgVersion ────────── v0.3.3
        Installed JLLWrappers ───────── v1.5.0
        Installed PrecompileTools ───── v1.2.0
        Installed MPI ───────────────── v0.20.19
        Installed MPItrampoline_jll ─── v5.3.2+0
        Installed Requires ──────────── v1.3.0
        Installed DocStringExtensions  v0.9.3
        Downloaded artifact: Hwloc
        Downloaded artifact: OpenMPI
          Updating `/scratch/dpalagin/.julia/environments/v1.8/Project.toml`
        [da04e1cc] + MPI v0.20.19
          Updating `/scratch/dpalagin/.julia/environments/v1.8/Manifest.toml`
        [ffbed154] + DocStringExtensions v0.9.3
        [692b3bcd] + JLLWrappers v1.5.0
        [da04e1cc] + MPI v0.20.19
        [eebad327] + PkgVersion v0.3.3
        [aea7be01] + PrecompileTools v1.2.0
        [ae029012] + Requires v1.3.0
        [e33a78d0] + Hwloc_jll v2.10.0+0
        [7cb0a576] + MPICH_jll v4.2.0+0
        [f1f71cc9] + MPItrampoline_jll v5.3.2+0
        [9237b28f] + MicrosoftMPI_jll v10.1.4+2
       [fe0851c0] + OpenMPI_jll v4.1.6+0
        [0dad84c5] + ArgTools v1.1.1
        [56f22d72] + Artifacts
        [2a0f44e3] + Base64
        [8ba89e20] + Distributed
        [f43a241f] + Downloads v1.6.0
        [7b1f6079] + FileWatching
        [b77e0a4c] + InteractiveUtils
        [4af54fe1] + LazyArtifacts
        [b27032c2] + LibCURL v0.6.3
        [76f85450] + LibGit2
        [56ddb016] + Logging
        [d6f4376e] + Markdown
        [ca575930] + NetworkOptions v1.2.0
        [44cfe95a] + Pkg v1.8.0
        [3fa0cd96] + REPL
        [9a3f8284] + Random
        [ea8e919c] + SHA v0.7.0
        [9e88b42a] + Serialization
        [6462fe0b] + Sockets
        [a4e569a6] + Tar v1.10.1
        [cf7118a7] + UUIDs
        [e66e0078] + CompilerSupportLibraries_jll v0.5.2+0
        [deac9b47] + LibCURL_jll v7.84.0+0
        [29816b5a] + LibSSH2_jll v1.10.2+0
        [c8ffd9c3] + MbedTLS_jll v2.28.0+0
        [14a3606d] + MozillaCACerts_jll v2022.2.1
        [83775a58] + Zlib_jll v1.2.12+3
        [8e850ede] + nghttp2_jll v1.48.0+0
        [3f19e933] + p7zip_jll v17.4.0+0
              Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated -m`
      Precompiling project...
        16 dependencies successfully precompiled in 10 seconds. 1 already precompiled.
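
After completing these steps, you can check from a login node which MPI library MPI.jl actually picked up. A minimal check (with the same modules loaded as above) could look like this; MPI.versioninfo() prints the active MPIPreferences settings and the underlying library:

julia --project -e 'using MPI; MPI.versioninfo()'

The report should mention the system binary and the Open MPI version provided by the module, rather than one of the bundled MPICH/OpenMPI artifacts.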
    

Running a parallel example

Now let's take the following example:

hello_mpi.jl
# examples/01-hello.jl
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
print("Hello world, I am rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))\n")

MPI.Finalize()

Here's a simple job script that you can submit with sbatch --account=<your account> julia_mpi.slurm:

julia_mpi.slurm
#!/bin/bash
#SBATCH --job-name="julia_mpi"
#SBATCH --output="julia_mpi.out"
#SBATCH --partition=compute
#SBATCH --time=00:01:00
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G

module load 2025 openmpi julia

# inside a SLURM job, srun replaces mpirun/mpiexec:
srun julia hello_mpi.jl

The output should look like this:

Hello world, I am rank 3 of 8
Hello world, I am rank 6 of 8
Hello world, I am rank 7 of 8
Hello world, I am rank 2 of 8
Hello world, I am rank 0 of 8
Hello world, I am rank 4 of 8
Hello world, I am rank 5 of 8
Hello world, I am rank 1 of 8
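
If you want to go one step further and check that the ranks actually exchange data (rather than just print), a minimal sketch using a collective reduction could look like the following; the file name hello_allreduce.jl is only illustrative, and you can reuse the job script above by changing the file name on the srun line:

hello_allreduce.jl
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

# Each rank contributes its own rank number; after the Allreduce every
# rank holds the sum 0 + 1 + ... + (nranks - 1).
total = MPI.Allreduce(rank, MPI.SUM, comm)

if rank == 0
    println("Sum of all ranks: $total (expected $(div(nranks * (nranks - 1), 2)))")
end

MPI.Finalize()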

You might see a UCX warning like this:

[1708683313.603040] [login03:3022707:0]          parser.c:2038 UCX  WARN  unused environment variables: UCX_MEMTYPE_CACHE (maybe: UCX_MEMTYPE_CACHE?); UCX_ERROR_SIGNALS (maybe: UCX_ERROR_SIGNALS?)
[1708683313.603040] [login03:3022707:0]          parser.c:2038 UCX  WARN  (set UCX_WARN_UNUSED_ENV_VARS=n to suppress this warning)

To disable this warning, simply export the following:

export UCX_WARN_UNUSED_ENV_VARS=n