How to switch PyTorch between CPU and GPU

  • 2021-09-12 01:45:29
  • OfStack

In PyTorch, when the GPUs on a server are occupied, we often want to debug the code on the CPU first, so we need a way to switch between GPU and CPU.

Method 1: x.to(device)

Treat device as a configurable parameter; loading it with argparse is recommended (see the sketch after the two snippets below).

When using the GPU:


device = 'cuda'
x = x.to(device)  # x is a tensor; .to() returns a copy on the target device, so reassign it

When using the CPU:


device = 'cpu'
x = x.to(device)  # same call; only the device string changes
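Putting the two together, here is a minimal argparse sketch (the --device flag name is an assumption for illustration, not from the original):


import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument('--device', default='cuda', help="'cuda' or 'cpu'")
args = parser.parse_args()

x = torch.randn(3)
x = x.to(args.device)  # switch devices from the command line, no code change needed

Launching with --device cpu then debugs on the CPU without touching the code.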

Method 2: x.cuda() + CUDA_VISIBLE_DEVICES

Many posts say that x.cuda() and x.to('cuda') are equivalent, but the disadvantage of x.cuda() is that it cannot switch to the CPU dynamically. However, you can switch using the CUDA_VISIBLE_DEVICES environment variable at launch.
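A minimal sketch of the difference (assuming x is a tensor):


import torch

x = torch.randn(3)
x = x.to('cpu')  # .to() accepts any device string, so the target can be switched
x = x.cuda()     # always targets a GPU; raises a RuntimeError when no GPU is visible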

To see this, create a Python script t.py on the server:


import torch
print(torch.cuda.device_count())  # number of GPUs visible to the program
print(torch.cuda.is_available())  # whether a GPU is available

First, look at a normal run:

Execute python t.py. Since the server has two GPUs, the output is the result we want:

2
True

If you want to use only one GPU, you just need to prepend an environment variable when running the script:

For example, to use GPU 0, run CUDA_VISIBLE_DEVICES=0 python t.py. The output confirms that only one GPU is visible to the program:

1
True

Next, what if we want to use the CPU?

Run CUDA_VISIBLE_DEVICES="" python t.py. The output shows that although the server has two GPUs, the program now sees none of them, purely through this launch-time setting:

0
False

Therefore, to get back to the point: when we use x.cuda() to move data to the GPU, we only need to wrap it in a torch.cuda.is_available() check. When we want to use the CPU, we control it through the environment variable at launch:


if torch.cuda.is_available():
    x = x.cuda()  # only runs when a GPU is visible to the program
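Equivalently, combining this check with Method 1, a common pattern (a sketch, not from the original article) picks the device once and reuses it:


device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = x.to(device)  # falls back to the CPU automatically when no GPU is visible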
