GPU Optimization

device = "cpu"
if torch.cuda.is_available():
  device = "cuda"
elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
  device = "mps"

MPS

# macOS 12.3+ with an MPS-capable GPU
print(torch.backends.mps.is_available())
# True if this PyTorch build was compiled with MPS support
print(torch.backends.mps.is_built())

device = torch.device("mps")
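
A quick sanity check, as a sketch (the tensor values are arbitrary):

x = torch.ones(5, device=device)   # allocate the tensor directly on the MPS device
print(x)                           # the repr should show device='mps:0'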

CUDA

Check whether GPU acceleration with CUDA is available:
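
For example (device index 0 refers to the first visible GPU):

print(torch.cuda.is_available())          # True if a CUDA-capable GPU and driver are present
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first visible GPU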

Simply move the inputs, labels, and model to the GPU to take advantage of its parallelism and speed.

if torch.cuda.is_available():
    # Create a 10-element LongTensor, fill it with 3, and move it to the GPU
    a = torch.LongTensor(10).fill_(3).cuda()
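
An equivalent, more idiomatic sketch creates the tensor directly on the GPU through the device argument (assuming a CUDA device is present):

a = torch.full((10,), 3, dtype=torch.long, device="cuda")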

Usage

Model: move to the device once, e.g. model.to(device)

Inputs (and labels): at every step, e.g. inputs.to(device) and labels.to(device)

Tensors: same, move each new tensor with .to(device) (see the sketch below)