Basics

Packages

import torch  # core tensor library, similar to numpy
import torch.autograd  # automatic differentiation library
import torch.nn as nn  # neural network layers with autograd integration
import torch.nn.functional as F  # functional (stateless) versions of nn ops, lower level
import torch.optim as optim  # standard optimization methods like gradient descent

torch.manual_seed(123)  # make runs reproducible

Tensors

Tensors are the basic data structure: generalizations of vectors and matrices to an arbitrary number of dimensions.

Creation

torch.empty(5, 3)  # uninitialized 5x3 tensor
torch.rand(5, 3)  # uniform random values in [0, 1)
torch.zeros(5, 3, dtype=torch.long)  # 5x3 of zeros with integer (long) dtype

From Data

v = [1, 2, 3]
v_tensor = torch.tensor(v)  # infers dtype (int64 here); torch.Tensor(v) would always give float32

Size

  • .size()

  • .shape (alias of .size(); example below)
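Both return a torch.Size object (a tuple subclass). A minimal sketch:

x = torch.rand(5, 3)
print(x.size())  # torch.Size([5, 3])
print(x.shape)  # same result, attribute form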

Operations

Standard NumPy-style indexing works, e.g. x[:, 1] selects the second column.
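A quick sketch of a couple of basic operations (x and y here are just illustrative same-shaped random tensors):

x = torch.rand(5, 3)
y = torch.rand(5, 3)
print(x + y)  # elementwise addition; torch.add(x, y) is equivalent
print(x[:, 1])  # second column of x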

Cat
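torch.cat concatenates tensors along an existing dimension (dim picks the axis). A minimal sketch:

x = torch.rand(2, 3)
y = torch.rand(2, 3)
print(torch.cat([x, y], dim=0).shape)  # torch.Size([4, 3]), stacked along rows
print(torch.cat([x, y], dim=1).shape)  # torch.Size([2, 6]), stacked along columns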

Reshape
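.view() (or .reshape()) changes the shape without copying data when possible; -1 tells PyTorch to infer that dimension. A minimal sketch:

x = torch.rand(4, 4)
print(x.view(16).shape)  # torch.Size([16])
print(x.view(-1, 8).shape)  # torch.Size([2, 8]), the -1 is inferred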

Numpy

A tensor and the NumPy array obtained from it (via .numpy(), or torch.from_numpy() in the other direction) share the same underlying memory on CPU, so a change to one changes the other.
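A sketch of the bridge in both directions:

import numpy as np

a = torch.ones(3)
b = a.numpy()  # tensor -> array, shares memory
a.add_(1)  # in-place add on the tensor
print(b)  # [2. 2. 2.], the array changed too

c = np.ones(3)
d = torch.from_numpy(c)  # array -> tensor, also shares memory
np.add(c, 1, out=c)
print(d)  # tensor([2., 2., 2.], dtype=torch.float64)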


Computation Graphs and Automatic Differentiation

Computation graphs are not fixed ahead of time as in TensorFlow-style static graphs; autograd builds the graph dynamically, and every tensor created with requires_grad=True (historically a Variable) keeps track of how it was created.

  • You can access the raw values with the .data attribute (or .detach()); see the sketch below

  • All the usual tensor operations work on these autograd-tracked tensors

To see the computation graph, look at a result tensor's .grad_fn attribute.
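A minimal sketch (the tensor values are just for illustration):

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = torch.tensor([4., 5., 6.], requires_grad=True)
z = x + y
print(z.data)  # the raw values, without graph bookkeeping
print(z.grad_fn)  # <AddBackward0 ...>, the node that created z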

A result like s = z.sum() knows enough about how it was created to compute its own derivative: calling s.backward() runs backpropagation starting from s and fills in the gradient with respect to every tensor that requires grad, stored in .grad. Running .backward() multiple times accumulates into .grad rather than overwriting it, so gradients usually need to be zeroed between steps. The whole chain of operations must stay inside autograd (don't break it by computing on .data) or the gradient information is lost.
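A sketch continuing from the x, y, z above, showing gradient accumulation:

s = z.sum()
s.backward()  # run backprop starting from s
print(x.grad)  # ds/dx = tensor([1., 1., 1.])

s2 = (x + y).sum()
s2.backward()
print(x.grad)  # tensor([2., 2., 2.]), accumulated rather than overwritten
x.grad.zero_()  # reset before the next backward pass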
