Backpropagation - PyTorch Beginner 04

Learn all the basics you need to get started with this deep learning framework! In this part I explain the famous backpropagation algorithm: I cover the necessary concepts, walk you through a concrete example, and at the end show how easy it is to use backpropagation in PyTorch.

All code from this course can be found on GitHub.

Backpropagation in PyTorch
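
Backpropagation applies the chain rule to the computational graph that PyTorch records during the forward pass. For the example below, with y_predicted = w * x and loss = (y_predicted - y)**2, the gradient of the loss with respect to w factors into local derivatives:

dLoss/dw = dLoss/dy_predicted * dy_predicted/dw = 2 * (y_predicted - y) * x

With x = 1.0, y = 2.0, and the initial w = 1.0, this evaluates to 2 * (1 - 2) * 1 = -2, which is exactly the value that w.grad holds after loss.backward().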

import torch

x = torch.tensor(1.0)
y = torch.tensor(2.0)

# This is the parameter we want to optimize -> requires_grad=True
w = torch.tensor(1.0, requires_grad=True)

# forward pass to compute the loss
y_predicted = w * x
loss = (y_predicted - y)**2
print(loss)

# backward pass to compute the gradient dLoss/dw
loss.backward()
print(w.grad)

# continue optimizing:
# update the weight; this operation should not be part of the computational graph
with torch.no_grad():
    w -= 0.01 * w.grad

# don't forget to zero the gradient before the next iteration
w.grad.zero_()

# next forward and backward pass...
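
The snippet above performs a single optimization step and then stops. Below is a minimal sketch of how repeating these steps might look as a full loop; the learning rate of 0.01 is taken from the snippet above, while the iteration count of 100 is an arbitrary choice for illustration:

import torch

x = torch.tensor(1.0)
y = torch.tensor(2.0)
w = torch.tensor(1.0, requires_grad=True)

learning_rate = 0.01  # same step size as in the snippet above
n_iters = 100         # arbitrary choice for this sketch

for epoch in range(n_iters):
    # forward pass: compute prediction and loss
    y_predicted = w * x
    loss = (y_predicted - y)**2

    # backward pass: accumulates dLoss/dw into w.grad
    loss.backward()

    # the weight update must stay outside the computational graph
    with torch.no_grad():
        w -= learning_rate * w.grad

    # reset the gradient, otherwise it accumulates across iterations
    w.grad.zero_()

print(f'final w = {w.item():.3f}')

Each iteration moves w a little closer to 2.0, the value for which w * x exactly matches y.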

