My GPUs don’t work properly when using nn.DataParallel
I’m trying to run my model on multiple GPUs but only one GPU is being used.
I am trying to move the model onto 3 of my GPUs (1, 2, and 3). Another user is occupying GPU 0, so I am using cuda:1 as the default device. Here’s my code:
import torch
import torch.nn as nn

device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu')
model = model.to(device)   # parameters need to live on device_ids[0] before wrapping
device_ids = [1, 2, 3]     # using GPUs 1, 2, 3
model = nn.DataParallel(model, device_ids=device_ids)
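As I understand it, DataParallel should replicate the module onto every GPU in device_ids and split each input batch along dimension 0 during the forward pass. A minimal, self-contained sketch of what I expect to happen (the linear layer and tensor shapes are placeholders made up for this post, not my real model) would be:

import torch
import torch.nn as nn

device = torch.device('cuda:1')

# Placeholder module standing in for my model
model = nn.Linear(128, 10).to(device)
model = nn.DataParallel(model, device_ids=[1, 2, 3])

# A batch of 96 samples; DataParallel should scatter it in chunks of 32
# to GPUs 1, 2, and 3, run the replicas in parallel, and gather the
# outputs back on device_ids[0] (cuda:1)
inputs = torch.randn(96, 128, device=device)
outputs = model(inputs)
print(outputs.shape)  # torch.Size([96, 10]), gathered on cuda:1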
The model should be running on these 3 GPUs, but only cuda:1 is active, as shown in the picture below: