
Backpropagation - PyTorch Beginner 04

26 Dec 2019

Learn all the basics you need to get started with this deep learning framework! In this part I explain the famous backpropagation algorithm. I cover the necessary concepts, walk you through a concrete example, and at the end we see how easy it is to use backpropagation in PyTorch.

All code from this course can be found on GitHub.

Backpropagation in PyTorch
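Before looking at the code, it helps to do the concrete example by hand. We have a single weight with input x = 1, target y = 2, and initial weight w = 1. The forward pass computes

y_predicted = w * x
loss = (y_predicted - y)^2

and the chain rule gives the gradient of the loss with respect to w:

dLoss/dw = dLoss/dy_predicted * dy_predicted/dw = 2 * (y_predicted - y) * x = 2 * (1 - 2) * 1 = -2

So the first forward pass yields a loss of 1, and calling backward() should store -2.0 in w.grad. That is exactly what the code below prints.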

import torch

x = torch.tensor(1.0)
y = torch.tensor(2.0)

# this is the parameter we want to optimize -> requires_grad=True
w = torch.tensor(1.0, requires_grad=True)

# forward pass to compute the loss
y_predicted = w * x
loss = (y_predicted - y)**2
print(loss)  # tensor(1., grad_fn=<PowBackward0>)

# backward pass to compute the gradient dLoss/dw
loss.backward()
print(w.grad)  # tensor(-2.)

# update the weight; this operation should not be part of the computational graph
with torch.no_grad():
    w -= 0.01 * w.grad

# don't forget to zero the gradients before the next iteration
w.grad.zero_()

# next forward and backward pass...
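The "next forward and backward pass" is just the same three steps repeated in a loop. Here is a minimal sketch of what continued optimization could look like; the 500 iterations are an arbitrary choice for this example, while the 0.01 learning rate is taken from the snippet above:

import torch

x = torch.tensor(1.0)
y = torch.tensor(2.0)
w = torch.tensor(1.0, requires_grad=True)

for epoch in range(500):
    # forward pass
    y_predicted = w * x
    loss = (y_predicted - y)**2

    # backward pass: computes dLoss/dw and stores it in w.grad
    loss.backward()

    # the weight update must happen outside the computational graph
    with torch.no_grad():
        w -= 0.01 * w.grad

    # zero the gradient, otherwise backward() keeps accumulating into w.grad
    w.grad.zero_()

print(w)  # tends toward 2.0, since w = 2 makes the loss zero

Zeroing the gradient is essential here: PyTorch accumulates gradients across repeated backward() calls, so without w.grad.zero_() each update would use the sum of all past gradients instead of the current one.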