
Local Development with NVIDIA GPUs

Rendering images can be accelerated with GPU compute. In the Rendered.ai cloud platform, we use GPU compute on deployed channels to improve channel runtimes. Linux developers who have an NVIDIA GPU on their local machine can use it when developing their channels by installing the NVIDIA Container Toolkit and configuring the devcontainer.json file in the codebase. Windows users only need to configure the devcontainer.json file.

NVIDIA Container Toolkit

The NVIDIA Container Toolkit allows users to build and run GPU-accelerated Docker containers. More information about the library can be found at https://github.com/NVIDIA/nvidia-docker.

To install the toolkit, first install the NVIDIA driver and Docker Engine for your Linux distribution, then follow the NVIDIA Container Toolkit instructions at https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker.
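
For reference, the Debian/Ubuntu flow from that guide looks roughly like the sketch below as of recent toolkit releases. Treat it as illustrative; package repository URLs and commands can change between versions, so defer to the linked guide for your distribution.

BASH
# Add NVIDIA's package repository and signing key (Debian/Ubuntu; see the install guide)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit, register the NVIDIA runtime with Docker, and restart the daemon
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker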

After installing the NVIDIA Container Toolkit, you can test whether Docker has access to your GPU with the following command. It downloads an officially supported NVIDIA Docker image and runs the nvidia-smi command to query the device.

Command

BASH
docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

Example Output

BASH
test@test ~ % docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
Unable to find image 'nvidia/cuda:11.0.3-base-ubuntu20.04' locally
11.0.3-base-ubuntu20.04: Pulling from nvidia/cuda
d5fd17ec1767: Already exists 
ea7643e57386: Pull complete 
622a04926279: Pull complete 
18fcb7509e42: Pull complete 
21e5db7c1fa2: Pull complete 
Digest: sha256:1db9418b1c9070cdcbd2d0d9980b52bd5cd20216265405fdb7e089c7ff96a494
Status: Downloaded newer image for nvidia/cuda:11.0.3-base-ubuntu20.04
Wed Jun  8 16:26:02 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.129.06   Driver Version: 470.129.06   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:09:00.0  On |                  N/A |
|  0%   41C    P8    12W / 180W |    514MiB /  8116MiB |     10%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Next we will configure our VSCode Dev Container startup instructions to use the GPU.

VSCode Development Container

VSCode Development Containers use a devcontainer.json configuration file to tell VSCode how to start up and run the Docker container. In the devcontainer.json file, we need to modify the runArgs list to include the "--gpus all" portion of the command above. Take a look at the example below:

CODE
"runArgs": ["--network","host","-v","/var/run/docker.sock:/var/run/docker.sock", "--privileged=true","--gpus","all"],

After making this change, we can start or restart the Dev Container. In VSCode, press F1 to bring up the Command Palette, then select Remote-Containers: Rebuild and Reopen in Container. We can then test that the Dev Container has access to the GPU by running the nvidia-smi command from a terminal within the container.

BASH
(anatools) anadev@test:/workspaces/example$ nvidia-smi
Wed Jun  8 18:06:36 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.129.06   Driver Version: 470.129.06   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:09:00.0  On |                  N/A |
|  0%   45C    P8    12W / 180W |    575MiB /  8116MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
(anatools) anadev@test:/workspaces/example$ 

Congratulations, you are now ready to develop with GPU acceleration!
