OpenFOAM

OpenFOAM (OF) is an open-source software package for Computational Fluid Dynamics (CFD). Starting with the 2024r1 software stack, we provide multiple versions, because physical models may change between versions and some users need a specific one.
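
If you are unsure which versions are installed, the following sketch (assuming the Lmod module system used on DelftBlue) lists the matching modules:

module spider openfoam-org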

In the examples below, we use OF 10. Every OF installation has a number of tutorials, and below we show how to run the first one of them on DelftBlue.

Specific things to keep in mind

  1. Some OF programs (e.g. for meshing) run sequentially. If you add such a step to a job script that requests multiple tasks (i.e., MPI processes) and a value for --mem-per-cpu, the sequential program only gets a fraction of the total memory allotted to the job. If these steps take long or need a lot of memory, run them on the login node or in a separate single-task job (see the sketch after this list). Below, we run the sequential meshing and partitioning for a small case inside the job script to keep the example compact.
  2. OpenFOAM may produce quite a lot of output (i.e., simulation states per time step). In the job script below we therefore work in the /scratch directory.
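
For the first point, a minimal sketch of such a separate, single-task pre-processing job (time, memory and paths are placeholders to adapt):

preprocess.slurm
#!/bin/bash
#SBATCH --job-name="cavity-pre"
#SBATCH --output="cavity-pre.out"
#SBATCH --partition=compute
#SBATCH --time=00:30:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=8G

module load 2024r1 openmpi openfoam-org/10

# sequential pre-processing only; the parallel solver gets its own job.
# This assumes the case files have already been copied to /scratch.
cd /scratch/$USER/cavity
blockMesh
decomposePar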

Running the cavity tutorial

Note: The full tutorial can be found in the OpenFOAM user guide. To visualize the results, you can use paraview on our visual nodes.
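
A minimal sketch of that visualization step (it assumes paraview is available there and uses ParaView's built-in OpenFOAM reader, which opens a case via an empty marker file):

cd /scratch/$USER/cavity
touch cavity.foam
paraview cavity.foam

After a parallel run, first merge the per-processor results with reconstructPar (see the end of the job script below).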

Start by loading the OF version you want:

module load 2024r1 openmpi openfoam-org/10

This sets a number of environment variables starting with WM_ that we will use below. To list them, type

env | grep WM_
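
The output will look roughly like the following (the path is a placeholder; the exact list depends on the installation):

WM_PROJECT_DIR=/path/to/OpenFOAM-10
WM_PROJECT_VERSION=10
...

To see the tutorial folders, type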

ls $WM_PROJECT_DIR/tutorials

The job script below copies the incompressible cavity flow tutorial files, meshes and partitions the domain, and runs the simulation with 8 MPI processes:

cavity.slurm
#!/bin/bash
#SBATCH --job-name="cavity"
#SBATCH --output="cavity.out"
#SBATCH --partition=compute
#SBATCH --time=00:10:00
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G

module load 2024r1 openmpi openfoam-org/10

# pick a tutorial
TUTORIAL_DIR=$WM_PROJECT_DIR/tutorials/incompressible/icoFoam/cavity/cavity

# copy OpenFOAM tutorial files to our scratch directory (remember that
# scratch is large and fast but temporary storage)
rsync -a ${TUTORIAL_DIR} /scratch/$USER/

# to run in parallel, we need to partition the mesh first,
# and for that we need a decomposeParDict file.
# We copy one from another tutorial. That file happens to ask for 8 partitions;
# if you want a different number of MPI processes, edit its numberOfSubdomains
# entry beforehand (below we set it explicitly after changing directory).
cp $WM_PROJECT_DIR/tutorials/incompressible/pisoFoam/LES/motorBike/motorBike/system/decomposeParDict /scratch/$USER/cavity/system/


# go to our working directory
cd /scratch/$USER/cavity
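
# make sure the partition count matches --ntasks: we set numberOfSubdomains
# explicitly with foamDictionary (a standard OpenFOAM utility for editing
# dictionary files), using the task count that slurm exports
foamDictionary -entry numberOfSubdomains -set $SLURM_NTASKS system/decomposeParDict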

# run the sequential meshing program.
# If this (or the next) step fails with an out-of-memory error, run it
# beforehand on the login node, or in a separate job script with
# --ntasks=1 --mem-per-cpu=<something larger>.
blockMesh
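
# (optional) sanity-check the mesh before partitioning; checkMesh reports
# mesh statistics and quality problems
checkMesh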

# partition the mesh
decomposePar

# run the solver, in this case icoFoam, a transient solver for incompressible,
# laminar flow.
# Don't forget the "-parallel" option: without it, srun starts 8 independent
# serial copies of the solver that all run the full case and clobber each
# other's output.
# We do not have to pass -n 8 to srun; it follows from our SBATCH settings.
srun icoFoam -parallel
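
# after the run, the results are spread over the processor* directories;
# merge them back into regular time directories for post-processing
# (e.g. with paraview)
reconstructPar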