====== Use NVIDIA CUDA Toolkit within a Docker Container ======
  * These drivers act as the bridge between the operating system and the NVIDIA GPU hardware, ensuring optimal communication and performance.
+ | |||
+ | See: [[Ubuntu: | ||
+ | |||
+ | ---- | ||
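Before moving on to the Docker-side steps, it can be worth confirming that the driver is actually visible on the host. A minimal guarded check (the query flags are standard **nvidia-smi** options; the check is safe to run on machines without a GPU):

```shell
# Guarded host-side sanity check: lists the GPUs if the NVIDIA driver is
# installed, otherwise prints a hint instead of failing.
if command -v nvidia-smi >/dev/null 2>&1; then
  status="driver present"
  nvidia-smi --query-gpu=name,driver_version --format=csv
else
  status="driver missing"
  echo "nvidia-smi not found - install the NVIDIA drivers first"
fi
echo "status: $status"
```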
+ | |||
====== Install the NVIDIA Container Toolkit ======

This toolkit extends Docker so that NVIDIA GPU capabilities can be used within containers.

===== Download the NVIDIA GPG key =====

<code bash>
# The temporary path is arbitrary; any writable location works.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey -o /tmp/nvidia-gpgkey
</code>

----

===== Dearmor the GPG key and save it =====

<code bash>
gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg /tmp/nvidia-gpgkey
</code>
+ | |||
----

===== Download the NVIDIA container toolkit list file =====

<code bash>
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list -o /etc/apt/sources.list.d/nvidia-container-toolkit.list
</code>

----

===== Modify the list file to include the signature =====

<code bash>
sed -i 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' /etc/apt/sources.list.d/nvidia-container-toolkit.list
</code>

----
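To see what the sed expression actually does, here is a dry run on a sample line in the format the upstream list file uses (the sample line itself is illustrative; the real file is the one downloaded in the previous step):

```shell
# Dry run of the signed-by rewrite on a sample list-file line.
sample='deb https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /'
rewritten=$(printf '%s\n' "$sample" | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g')
echo "$rewritten"
# -> deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/deb/amd64 /
```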
+ | |||
+ | ===== Update the package database ===== | ||
+ | |||
+ | <code bash> | ||
+ | apt update | ||
+ | </ | ||
+ | |||
----

====== Configuring Docker for NVIDIA Support ======

With the **NVIDIA Container Toolkit** in place, the next essential task is configuring Docker to recognize and utilize NVIDIA GPUs.

Configure the Docker runtime to use the NVIDIA Container Toolkit by using the **nvidia-ctk** command.

  * The Docker configuration file will be modified to use the NVIDIA runtime.

<code bash>
nvidia-ctk runtime configure --runtime=docker
</code>

<WRAP info>
**NOTE:** This command updates the Docker daemon configuration to register the NVIDIA runtime.

  * As a result, Docker becomes aware of the NVIDIA runtime and can access GPU features.

</WRAP>
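For reference, after the configure step **/etc/docker/daemon.json** normally gains a runtimes entry along these lines (exact contents can differ between toolkit versions):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```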
+ | |||
+ | |||
+ | ====== Restart the Docker daemon ====== | ||
+ | |||
+ | <code bash> | ||
+ | systemctl restart docker | ||
+ | </ | ||
+ | |||
+ | ---- | ||
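A quick way to confirm the restart picked up the new runtime is to ask the daemon which runtimes it advertises. This guarded sketch degrades gracefully when no Docker daemon is reachable:

```shell
# List the runtimes the Docker daemon knows about; "nvidia" should appear
# after a successful configure + restart. Prints a note if docker is absent.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  runtimes=$(docker info --format '{{range $name, $r := .Runtimes}}{{$name}} {{end}}')
else
  runtimes="(no reachable docker daemon)"
fi
echo "runtimes: $runtimes"
```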
+ | |||
====== Running the NVIDIA CUDA Docker Image ======

With all the required setup in place, the exciting part begins: running a Docker container with NVIDIA GPU support.

NVIDIA maintains a series of CUDA images on Docker Hub.

Pull the specific NVIDIA CUDA image:

<code bash>
docker pull nvidia/cuda:12.2.0-base-ubuntu22.04
</code>

<WRAP info>
**NOTE:** Always check for the latest tags at [[https://hub.docker.com/r/nvidia/cuda/tags|Docker Hub]].

  * Use that instead of the 12.2.0-base-ubuntu22.04 tag shown here.

</WRAP>

----
+ | |||
====== Run the Docker container with GPU support ======

<code bash>
docker run --gpus all -it nvidia/cuda:12.2.0-base-ubuntu22.04
</code>

<WRAP info>
**NOTE:** The --gpus all flag exposes every available GPU to the container, and -it opens an interactive session.

  * Once inside, use NVIDIA utilities like **nvidia-smi** to confirm GPU access.
</WRAP>

----
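The --gpus flag also accepts finer-grained values than all (these forms follow Docker's documented syntax; the image tag matches the pull above). This sketch only prints the candidate commands rather than executing them, since it assumes no GPU is present where it runs:

```shell
# Print, rather than run, the common --gpus variants.
IMG="nvidia/cuda:12.2.0-base-ubuntu22.04"
for flags in "--gpus all" "--gpus 1" "--gpus \"device=0\""; do
  echo "docker run --rm $flags -it $IMG"   # all GPUs / any one GPU / GPU index 0
done
```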
docker/gpu/use_nvidia_cuda_toolkit_within_a_docker_container.txt · Last modified: 2025/05/21 16:19 by peter