Customizing Nvidia Containers

Introduction:

Nvidia provides many containers that are optimized for their GPUs. These containers are handy, but sometimes they don't include everything we need. For instance, Nvidia's TensorFlow containers don't include the matplotlib package. This page describes how to build a new container using an original Nvidia container as a starting point. In the example below, we start with a TensorFlow 2 container provided by Nvidia and create a new Singularity container that has everything from the original container plus the matplotlib package.

Nvidia GPU-optimized Container Catalog

To see the list of containers that Nvidia provides, go to https://catalog.ngc.nvidia.com/ . From there, you can search for containers; for instance, searching for TensorFlow returns a list of matching container images.

Clicking on the TensorFlow container brings up detailed information about it, including instructions for using it.

The path to the container can be found by clicking the "Copy Image Path" button and choosing one of the versions of the container. This path will be used when we create the Definition file. For example, at the time of writing, the most recent TensorFlow 2 image is nvcr.io/nvidia/tensorflow:22.07-tf2-py3, and this goes into the new Definition file on the second line, which starts with "From: ".

Creating the Definition file with the Nvidia Container path:

On Katahdin, open a text editor and create a new file called "new_container.def" with the following contents, where the second line contains the path you copied from the NGC site in the previous step:


Bootstrap: docker
From: nvcr.io/nvidia/tensorflow:22.07-tf2-py3

%post
    apt-get update
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        coreutils

    pip install matplotlib
    echo "Done"

The "pip install matplotlib" line is what adds the new package to the container. You can add other things to the container using the "apt-get" command, additional pip commands, and other methods as well.
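For instance, a %post section along the following lines installs an extra apt package and several extra pip packages. The package names here (graphviz, seaborn, pandas) are purely illustrative; substitute whatever your workflow actually needs:

```
%post
    # Refresh the package index before installing anything with apt-get
    apt-get update
    # Example apt package (illustrative only)
    DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
        graphviz
    # Example pip packages (also illustrative)
    pip install matplotlib seaborn pandas
    echo "Done"
```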

Save the file and then run the following commands to create the new container. The end result will be a file with a ".simg" extension placed in your home directory. The process starts by sshing to a system that has Nvidia GPUs. This is not strictly necessary, but it takes the load off of Katahdin.


ssh grtx-1

module load singularity

export TMPDIR=$XDG_RUNTIME_DIR 

singularity build --fakeroot $HOME/new_container.simg new_container.def

The third line sets the TMPDIR variable for the singularity command to use. The main benefit is that the XDG_RUNTIME_DIR variable points to a tmpfs volume that gets created when you ssh to the grtx-1 system. Because tmpfs lives in RAM, it is very fast, so pointing TMPDIR at it speeds up the container build tremendously.

The singularity command runs the "build" subcommand to build the .simg file, using the .def file to know how to build it. The "--fakeroot" parameter is needed so that regular, non-root accounts can build the container.
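Once the build finishes, it is worth a quick check that the new package made it into the image. A sketch of one way to do this, assuming the .simg path used in the build step above:

```shell
# Run python inside the new container and import the freshly added package
module load singularity
singularity exec --nv $HOME/new_container.simg \
    python -c "import matplotlib; print(matplotlib.__version__)"
```

If matplotlib was installed correctly, this prints its version number instead of an ImportError.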

Once the container has been created, you can use the container in a Slurm job with the following in your job submission script:


module load singularity

singularity run --nv new_container.simg python my_python_script.py


where "my_python_script.py" is the name of your python script.
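As a concrete, purely hypothetical example, my_python_script.py could be a minimal script that exercises the newly added matplotlib package. The Agg backend is selected so the script runs without a display, as it would inside a batch job:

```python
# Minimal sketch of my_python_script.py: draw a simple plot and save it
# to a file, confirming matplotlib works inside the container.
import matplotlib
matplotlib.use("Agg")  # headless backend; no display available in a Slurm job
import matplotlib.pyplot as plt

xs = list(range(10))
ys = [x * x for x in xs]

fig, ax = plt.subplots()
ax.plot(xs, ys)
ax.set_xlabel("x")
ax.set_ylabel("x squared")
fig.savefig("plot.png")
print("saved plot.png")
```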