Nvidia GPU not found when running ACE microservices Docker containers under WSL
I am currently exploring the first ACE workflow with Docker containers, following the documentation provided by NVIDIA. I am using WSL2 (with Ubuntu) on Windows 11 with an NVIDIA GeForce RTX 3070. I went through the development setup: the current NVIDIA driver is version 535 with CUDA 12.2, and Docker Desktop is configured to use WSL2 as its backend. I have pulled the three Docker containers (animation-graph-microservice, omniverse-renderer, and audio2face) and downloaded the avatar scene from NGC. When I try to run the Animation Graph Microservice (with the local avatar scene directory attached as a volume, per the documentation), I get the following error:

Fatal Error: Can't find libGLX_nvidia.so.0

nvidia-smi returns correct information and the NVIDIA Container Toolkit is installed. The test NVIDIA Docker image also runs without any problems.
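
For reference, the command I am using looks roughly like the sketch below; the image name, tag, and mount paths are placeholders rather than the exact values from the ACE documentation:

# start the Animation Graph Microservice with GPU access and the avatar scene mounted
# (image name, tag, and paths are placeholders, not the exact values from the docs)
docker run --rm --gpus all \
  -v /home/<user>/avatar-scene:/workspace/avatar-scene \
  nvcr.io/<org>/animation-graph-microservice:<tag>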
Access GPUs within container
I have been wanting to move my PyTorch project to the GPU for training. I am working inside a Docker container, so I tested torch.cuda.is_available(), which returned False.
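
Concretely, the check looks like the sketch below, run against my already-started container; <container-name> is a placeholder for my own container:

# run the check inside the existing container to see what PyTorch reports
# (<container-name> is a placeholder)
docker exec -it <container-name> \
  python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

On my setup this prints False for torch.cuda.is_available().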