Utilize GPUs in Docker

To use GPUs inside Docker containers, the host system must be prepared first.

For more information, see the official NVIDIA Container Toolkit documentation.

Install the NVIDIA Container Toolkit

1. Configure the production repository

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

2. Update packages

sudo apt-get update

3. Install the NVIDIA Toolkit packages

sudo apt-get install -y nvidia-container-toolkit

4. Configure the NVIDIA Toolkit for Docker

sudo nvidia-ctk runtime configure --runtime=docker --set-as-default

5. Restart Docker

sudo systemctl restart docker
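When Docker is configured as the runtime, nvidia-ctk registers the NVIDIA runtime in /etc/docker/daemon.json. The result typically looks like the following sketch; your file may contain additional settings, and the exact contents depend on your toolkit version:

```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
```

The "default-runtime" entry is only written when --set-as-default is passed; without it, containers must request the NVIDIA runtime explicitly.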

Referencing the GPUs for Docker deployments

There are multiple ways to reference the available GPUs. Since our flavors currently provide only one GPU per node, the selection can be left at "all". However, you can also target a particular GPU via its ID, which you can find with nvidia-smi.

# docker-compose
...
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [compute, utility]
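If you prefer to pin a service to a specific GPU instead of reserving all of them, the Compose specification also accepts device_ids (mutually exclusive with count). The index '0' below is an example; use the indices reported by nvidia-smi:

```yaml
# docker-compose: reserve only GPU 0 (example index)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [compute, utility]
```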

# Docker
docker run -it --rm --gpus all ubuntu nvidia-smi
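For docker run, a single device can be targeted the same way. The steps below are a sketch that assumes at least one GPU is visible on the host; the index 0 is an example, and the inner quotes are needed because the value contains an equals sign:

```shell
# List available GPUs with their indices and UUIDs
nvidia-smi -L

# Run a container on GPU 0 only (index taken from the listing above)
docker run -it --rm --gpus '"device=0"' ubuntu nvidia-smi
```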