PyTorch: using multiple GPUs
# The easiest solution is to wrap your model in nn.DataParallel like so:
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Model(input_size, output_size)  # Model is any nn.Module subclass
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # DataParallel splits each input batch across the available GPUs
    model = nn.DataParallel(model)
model.to(device)
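For context, a minimal self-contained sketch of how the wrapped model is then used: nn.DataParallel scatters the input batch along dimension 0, runs a model replica on each GPU, and gathers the outputs back onto the default device. The Model class, the sizes, and the random input below are illustrative assumptions, not part of the original snippet.

# Minimal usage sketch (hypothetical Model, sizes, and data for illustration)
import torch
import torch.nn as nn

input_size, output_size, batch_size = 5, 2, 30  # illustrative sizes

class Model(nn.Module):
    # Hypothetical placeholder model: a single linear layer
    def __init__(self, input_size, output_size):
        super().__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, x):
        # Under DataParallel, each GPU replica receives a slice of the batch
        return self.fc(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.to(device)

inputs = torch.randn(batch_size, input_size, device=device)
outputs = model(inputs)  # batch scattered across GPUs, outputs gathered back
print("input size:", tuple(inputs.size()), "-> output size:", tuple(outputs.size()))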