Backpropagation - PyTorch Beginner 04

Learn all the basics you need to get started with this deep learning framework! In this part I will explain the famous backpropagation algorithm: we will cover all the necessary concepts, walk through a concrete example, and at the end see how easy it is to use backpropagation in PyTorch.

  • Chain Rule
  • Computational Graph and local gradients
  • Forward and backward pass
  • Concrete example with numbers (Linear Regression, worked out after this list)
  • How to use backpropagation in PyTorch
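
To see what loss.backward() computes for us, here is the concrete example worked out by hand, using the same numbers as the code below (x = 1, y = 2, initial w = 1). The forward pass evaluates the loss; the backward pass applies the chain rule through the computational graph, multiplying the local gradients along the way:

$$\hat{y} = w \cdot x = 1 \cdot 1 = 1, \qquad \text{loss} = (\hat{y} - y)^2 = (1 - 2)^2 = 1$$

$$\frac{\partial \text{loss}}{\partial w} = \frac{\partial \text{loss}}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial w} = 2(\hat{y} - y) \cdot x = 2 \cdot (1 - 2) \cdot 1 = -2$$

These are exactly the values the code below prints: a loss of 1 and a gradient w.grad of -2.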

All code from this course can be found on GitHub.

Backpropagation in PyTorch

import torch

x = torch.tensor(1.0)
y = torch.tensor(2.0)

# This is the parameter we want to optimize -> requires_grad=True
w = torch.tensor(1.0, requires_grad=True)

# forward pass to compute loss
y_predicted = w * x
loss = (y_predicted - y)**2
print(loss)
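# prints tensor(1., grad_fn=<PowBackward0>)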

# backward pass to compute gradient dLoss/dw
loss.backward()
print(w.grad)
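# prints tensor(-2.), matching the chain rule calculation above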

# continue optimizing:
# update weights; this operation should not be part of the computational graph
with torch.no_grad():
    w -= 0.01 * w.grad
# don't forget to zero the gradients: backward() accumulates gradients into w.grad
w.grad.zero_()

# next forward and backward pass...
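
Putting these steps together gives a complete training loop. The following is a minimal sketch (the learning rate of 0.01 and the 100 iterations are illustrative choices, not part of the original code): each iteration runs a forward pass, a backward pass, a weight update outside the graph, and a gradient reset.

import torch

x = torch.tensor(1.0)
y = torch.tensor(2.0)
w = torch.tensor(1.0, requires_grad=True)

learning_rate = 0.01

for epoch in range(100):
    # forward pass to compute loss
    y_predicted = w * x
    loss = (y_predicted - y)**2

    # backward pass to compute gradient dLoss/dw
    loss.backward()

    # update weights outside of the computational graph
    with torch.no_grad():
        w -= learning_rate * w.grad

    # zero the gradients before the next iteration
    w.grad.zero_()

print(w)  # w has moved from 1.0 toward the optimum 2.0

Since the loss is minimized at w = 2 (the target is y = 2 with x = 1), repeated gradient steps move w from its initial value of 1.0 toward 2.0.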
