Python¶
Python is widely used for scientific programming, including on the DelftBlue cluster. To run your Python scripts you need python (or a specific version of python), most likely some Python packages, and in many cases a Python virtual environment. To ease the maintenance of these parts, several tools have been created and can be used on DelftBlue. In this documentation you will find instructions on how to use these solutions:
- System-installed Python: an older Python version provided by the operating system
- Python module: a fairly recent Python version, with many Python packages available as modules
- Conda: a complete solution for Python versions, packages and virtual environments
- uv: a complete solution for Python versions, packages and virtual environments
Depending on your choice you could end up installing packages locally. If that is the case, please read the instructions in the following warning:
Installing packages
Please be aware: package managers such as uv, pip or conda are known to install a lot of tiny files locally. This is important for several reasons:
- These local installations might occupy a lot of space and often use your `/home` directory as their default destination. You might want to redirect them from `/home` to `/scratch` (see below for more info).
- These local installations might rely on the `/tmp` folder as intermediate storage for unpacking/compiling. Please be aware that the collectively used `/tmp` might get overfilled! More info here.
- `/home` and `/scratch` rely on the parallel file system BeeGFS. While this file system provides high speed for truly parallel jobs (many processes reading/writing from/to one big file), it might struggle with processes generating a lot of tiny files. As such, installing packages via `pip` or `conda` might take noticeably longer than you would expect. This is normal, and only manifests itself once, during the installation. Once installed, accessing these packages should be very fast.
- Only the login nodes have a direct connection to the internet. Especially if you want to install GPU-enabled packages, make sure to read the installation instructions of the package to see how to specify that you want GPU support, as login nodes don't have GPUs. You can also use `pip` on the login nodes just to download the package archive, and then install it on the GPU node within an (interactive) job.
System installed Python¶
This Python version is mainly installed for system-related tasks and should not be used for scientific programming. You can, however, use this version without any setup or configuration, so it might come in handy as a quick tool for some small scripting. The program python is located at /usr/bin/python and as of this writing (27/11/2025) is version 3.6.8. You can use packages installed in the system:
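As a minimal sketch (using the standard-library module `platform` as a stand-in; system-installed site packages import the same way):

```shell
# System Python: no modules or environments needed
python3 -c "import platform; print(platform.python_version())"
```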
And install packages locally in your home directory (~/.local/lib/python3.6/site-packages) with this command (using numpy as an example):
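A sketch of the command, with numpy as the example package:

```shell
python3 -m pip install --user numpy
```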
Note: the `--user` flag installs the package in your home directory instead of the system location.
You can create a virtual environment using the venv package:
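A minimal sketch, using `test_project` as an example project directory:

```shell
mkdir -p test_project && cd test_project
# create the virtual environment in the "env" subdirectory
python3 -m venv env
source env/bin/activate
```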
This will create the directory test_project/env with a link to the system Python version. After activating with `source env/bin/activate`, environment variables are set so that you use the local versions of python and pip. All package installations are then done locally, inside the test_project/env directory.
In this virtual environment you can also install your packages:
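For example, assuming the environment created above is active:

```shell
pip install numpy
```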
Python module¶
A recent python version is available in every DelftBlue software stack.
In these instructions the latest stack 2025 is used (27/11/2025). If a newer stack is available please replace 2025 with the newer version.
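Loading the stack and the python module looks like this:

```shell
module load 2025 python
```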
Then we have:
[<netid>@login01 ~]$ python
Python 3.11.9 (main, Sep 20 2025, 05:09:43) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
From this you can see that Python version 3.11.9 is available in stack 2025.
Python packages¶
For every new version of Python a large set of packages is built and also made available via the module system. Every package module starts with py- followed by the name of the package. Below is an example of the available packages in stack 2025:
[<netid>@login01 ~]$ module load py-
Display all 248 possibilities? (y or n)
py-absl-py py-kiwisolver py-pyproject-metadata
py-absl-py/1.4.0 py-kiwisolver/1.4.5 py-pyproject-metadata/0.7.1
py-aiohttp py-libclang py-python-dateutil
py-aiohttp/3.9.5 py-libclang/14.0.6 py-python-dateutil/2.8.2
py-aiosignal py-llvmlite py-pythran
py-aiosignal/1.2.0 py-llvmlite/0.41.1 py-pythran/0.16.1
py-astunparse py-locket py-pytz
py-astunparse/1.6.3 py-locket/1.0.0 py-pytz/2023.3
py-attrs py-mako py-pyyaml
py-attrs/23.1.0 py-mako/1.2.4 py-pyyaml/6.0.2
py-beniget py-markdown py-reportlab
py-beniget/0.4.1 py-markdown/3.4.1 py-reportlab/4.0.4
py-build py-markdown-it-py py-requests
py-build/1.2.1 py-markdown-it-py/3.0.0 py-requests/2.32.3
py-calver py-markupsafe py-rich
py-calver/2022.6.26 py-markupsafe/2.1.3 py-rich/13.7.1
py-certifi py-matplotlib py-rst2pdf
py-certifi/2023.7.22 py-matplotlib/3.9.2 py-rst2pdf/0.100
py-cffi py-mdurl py-scikit-learn
py-cffi/1.17.1 py-mdurl/0.1.2 py-scikit-learn/1.5.2
py-charset-normalizer py-meson-python py-scipy
py-charset-normalizer/3.3.0 py-meson-python/0.16.0 py-scipy/1.14.1
py-click py-ml-dtypes py-setuptools
py-click/8.1.7 py-ml-dtypes/0.4.0 py-setuptools/69.2.0
py-cloudpickle py-mpmath py-setuptools-scm
py-cloudpickle/3.0.0 py-mpmath/1.3.0 py-setuptools-scm/8.0.4
py-contourpy py-msgpack py-shapely
py-contourpy/1.3.0 py-msgpack/1.0.5 py-shapely/2.0.6
py-coverage py-multidict py-six
py-coverage/7.2.6 py-multidict/6.0.4 py-six/1.16.0
py-cppy py-namex py-smartypants
py-cppy/1.2.1 py-namex/0.0.8 py-smartypants/2.0.1
py-cycler py-networkx py-sortedcontainers
py-cycler/0.11.0 py-networkx/2.7.1 py-sortedcontainers/2.4.0
py-cython py-networkx/3.1 py-sympy
py-cython/0.29.36 py-numba py-sympy/1.13.0
py-cython/3.0.11 py-numba/0.58.1 py-tblib
py-dask py-numpy py-tblib/1.6.0
py-dask/2024.7.1 py-numpy/1.26.3 py-tensorboard
py-distributed py-numpy/1.26.4 py-tensorboard/2.17.1
py-distributed/2024.7.1 py-opt-einsum py-tensorboard-data-server
For example, if we need numpy, scipy, and matplotlib, we need to load the following modules:
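Based on the module names in the listing above, that would be:

```shell
module load 2025 python py-numpy py-scipy py-matplotlib
```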
Then we have:
[<netid>@login01 ~]$ python
Python 3.11.9 (main, Sep 20 2025, 05:09:43) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import scipy
>>> import matplotlib
>>>
Using pip in a virtual environment¶
If the package you need is not available as a module, you can use pip to install this package only inside a virtual environment.
Similar to the explanation above with the system installed Python, you can also create a virtual environment with the Python-module:
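For example, using the same test_project layout as above:

```shell
module load 2025 python
python -m venv test_project/env
source test_project/env/bin/activate
```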
Warning
Mixing module-loaded packages and your locally installed packages might lead to dependency conflicts!
Avoiding version clash
For example, the standard py-numpy package is at the moment of writing version 1.26.4 (stack 2025). You can update this locally with the following command:
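Inside the activated virtual environment:

```shell
pip install --upgrade numpy
```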
And this will make a more recent (2.3.5) version available:
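You can check the active version like this (assuming numpy imports successfully):

```shell
python -c "import numpy; print(numpy.__version__)"
```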
However, if you then load the standard py-scipy package (e.g., version 1.14.1, stack 2025), it will re-enable the default numpy version:
(env)[<netid>@login03 ~]$ pip list
Package Version
------------- -------
...
numpy 1.26.4
scipy 1.14.1
...
Fun fact: watch what happens when you unload this py-scipy module...
In this case, you might want to install your own updated scipy version locally as well to avoid the version conflict:
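Again inside the virtual environment:

```shell
pip install --upgrade scipy
```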
And then we have both more recent versions of numpy and scipy installed locally:
Mismatched interpreter paths on GPU nodes¶
If you create your virtual environment (test_project above) on the login node and then try to activate the environment on a GPU-node, the symbolic links test_project/env/bin/python and test_project/env/bin/python3 may point to non-existent locations. In that case, simply reload the python module after activating the environment, or remove and re-create the symbolic link like this:
[<netid>@login03 ~]$ srun --partition=gpu-a100-small -n 1 -c 1 --mem-per-cpu=1GB -t 00:05:00 --pty bash
[...]
[<netid>@gpu020 ~]$ module load 2025 python
[<netid>@gpu020 ~]$ rm test_project/env/bin/python*
[<netid>@gpu020 ~]$ ln -s `which python` test_project/env/bin/python
[<netid>@gpu020 ~]$ ln -s `which python3` test_project/env/bin/python3
[<netid>@gpu020 ~]$ exit
Conda¶
A similar result can be achieved by loading the miniconda3 module:
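```shell
module load miniconda3
```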
Local conda environment on login nodes
First, load the miniconda module:
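```shell
module load miniconda3
```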
Then, create your own conda environment:
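A sketch, with my-conda-env as an example environment name:

```shell
conda create --name my-conda-env
```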
After this, you might need to re-login! Otherwise, you might encounter the CommandNotFoundError: Your shell has not been properly configured to use 'conda activate' error.
After you re-login, you can activate your new environment:
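```shell
conda activate my-conda-env
```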
You should see the environment activated, which is indicated by the prefix to the login prompt:
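With the example environment name from above, the prompt will look something like this:

```shell
(my-conda-env) [<netid>@login03 ~]$
```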
Now you can install your own conda packages:
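For example, with numpy as the example package:

```shell
conda install numpy
```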
To de-activate your environment, simply issue the following:
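```shell
conda deactivate
```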
To remove your environment, issue the following:
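```shell
conda remove --name my-conda-env --all
```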
To list all environments, issue the following:
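```shell
conda env list
```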
Warning
Even though conda activate [my-conda-env] works on the login node prompt, it might fail on the worker nodes.
The problem is that conda init adds the path of the currently active conda installation to your .bashrc, which is probably not what you want: the active conda installation might differ depending on whether you are on a compute or a GPU node, and it might not actually work on the worker nodes.
It may be best to avoid conda init altogether and directly call the conda.sh that comes with the installed version. This can be done with the following command, which obtains the installation path from conda info and sources conda.sh directly:
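A sketch of this approach (the path is taken from `conda info --base` on your system; my-conda-env is an example environment name):

```shell
module load miniconda3
# source conda.sh directly instead of relying on conda init / .bashrc
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate my-conda-env
```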
Why unset is necessary before conda init
If there are multiple versions/instances of conda on the system, PATH may resolve to the wrong python executable when running conda activate. To avoid this, unset the conda settings before activating your environment.
After running this command, conda activate works on all nodes.
Warning
miniconda3 might conflict with the vendor-preinstalled git!!! To avoid this conflict, load the new openssh and git modules from the DelftBlue software stack!
uv¶
Manually setting up virtual environments and installing packages with venv and pip can be hard to maintain. Conda tends to require extra work to maintain virtual environments and can sometimes be very slow to resolve and install packages.
Recently, a new tool, uv, has been introduced to address these challenges. It is built with Rust, leading to very fast execution times. It takes a simple approach to virtual environments, while providing a very rich toolbox for maintenance.
Please read the information on the uv website:
For now (until uv is centrally available), you will need to install uv locally in your home directory using this command:
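The uv documentation provides a standalone installer; at the time of writing it is invoked like this:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```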
and follow the instructions after installation.
When creating a new uv-project, you configure the Python version that is needed. If this version is not yet available, uv will install this Python version locally:
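A sketch, with test_project as an example project name and Python 3.12 as an example version:

```shell
uv init test_project --python 3.12
cd test_project
```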
Then set up a virtual environment:
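```shell
uv venv
```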
And add packages:
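For example, with numpy and scipy as example packages:

```shell
uv add numpy scipy
```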
To ease the use of the right python and pip, you can do:
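One way is to activate the uv-managed environment (by default created in `.venv`), or to prefix commands with `uv run`:

```shell
source .venv/bin/activate
# or, without activating:
uv run python
```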
That's all!
git with uv¶
The whole uv system is also set up with git in mind! When sharing your git repository built with uv, the receiving party only has to do the following after cloning:
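Assuming the project files listed in the note below are committed:

```shell
uv sync
```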
With that, the right Python version will be installed, together with the packages needed in the repository.
Note: you must commit pyproject.toml, uv.lock and .python-version for this to work. Do not include .venv!
FAQ¶
Python only prints STDOUT in a file after the job is finished¶
Example situation: I am running a Python code that contains print statements via Slurm. Normally when I run the Python code directly via python program.py the print statements appear in the terminal. When I run my program via Slurm, the print statements are written either in the output file specified in the submission script, or in the slurm-XXX.out. However, sometimes the contents of the slurm-XXX.out only appear after the job is actually finished, and not during the run, as I would expect.
This behaviour has to do with the buffering of Python's print output. You can either add flush=True to the print call to force the output to be written immediately:
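For example:

```python
# flush=True writes each line to stdout immediately, instead of waiting
# for the buffer to fill or the program to end, so progress shows up
# in the Slurm output file during the run
for i in range(3):
    print(f"step {i} done", flush=True)
```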
Or, if you know what you are doing, you can run python unbuffered:
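Here program.py is a placeholder for your own script:

```shell
# -u disables stdout/stderr buffering for the whole run
python -u program.py
```

Setting `PYTHONUNBUFFERED=1` in your job script has the same effect.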