Utilize GPUs in Docker
To use GPUs within a Docker setup, the system must be prepared first.
For more information, see the original NVIDIA guides.
Install the NVIDIA Container Toolkit
1. Configure the production repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
2. Update packages
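On Debian/Ubuntu systems (which the repository setup in step 1 assumes), this step refreshes the package index so the newly added NVIDIA repository is picked up:

```shell
# Refresh the package lists, including the NVIDIA Container Toolkit repository
sudo apt-get update
```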
3. Install the NVIDIA Toolkit packages
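Per the NVIDIA Container Toolkit installation guide, the toolkit packages are installed with:

```shell
# Install the NVIDIA Container Toolkit from the repository configured above
sudo apt-get install -y nvidia-container-toolkit
```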
4. Configure NVIDIA Toolkit for containerd
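The toolkit ships the `nvidia-ctk` utility, which can rewrite the containerd configuration to register the NVIDIA runtime:

```shell
# Add the NVIDIA runtime to the containerd configuration
sudo nvidia-ctk runtime configure --runtime=containerd
```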
5. Restart containerd
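On systemd-based systems, containerd is restarted so the new runtime configuration takes effect:

```shell
# Restart containerd to load the updated runtime configuration
sudo systemctl restart containerd
```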
Referencing the GPUs for Docker deployments
There are multiple ways to reference the available GPUs. Since our flavors currently provide only one GPU per node, the selection can simply be set to all.
However, you can also target a particular GPU via its ID, which you can find with nvidia-smi.
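As a sketch of both options with `docker run` (the CUDA base image tag here is only an example; any GPU-enabled image works):

```shell
# Expose all GPUs to the container (sufficient for single-GPU nodes)
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Target a specific GPU by its index (or UUID) as reported by nvidia-smi;
# note the nested quoting required by the --gpus device syntax
docker run --rm --gpus '"device=0"' nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```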