# Backpropagation - PyTorch Beginner 04

Learn all the basics you need to get started with this deep learning framework! In this part I will explain the famous backpropagation algorithm. I will explain all the necessary concepts and walk you through a concrete example. At the end we will see how easy it is to use backpropagation in PyTorch.

- Chain Rule
- Computational Graph and local gradients
- Forward and backward pass
- Concrete example with numbers (Linear Regression)
- How to use backpropagation in PyTorch

All code from this course can be found on GitHub.
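To see what PyTorch computes for us, it helps to work the concrete example through by hand first. The sketch below uses the same numbers as the PyTorch snippet further down (x = 1, y = 2, w = 1) with the model y_hat = w * x and the squared-error loss, and applies the chain rule step by step:

```python
# manual forward and backward pass for y_hat = w * x, loss = (y_hat - y)**2
x, y, w = 1.0, 2.0, 1.0

# forward pass
y_hat = w * x              # 1.0
loss = (y_hat - y) ** 2    # (1 - 2)^2 = 1.0

# backward pass via the chain rule:
# dLoss/dw = dLoss/dy_hat * dy_hat/dw
dloss_dyhat = 2 * (y_hat - y)      # -2.0
dyhat_dw = x                       # local gradient: 1.0
dloss_dw = dloss_dyhat * dyhat_dw  # -2.0

print(loss)      # 1.0
print(dloss_dw)  # -2.0
```

This is exactly the gradient that `loss.backward()` will store in `w.grad` below.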

## Backpropagation in PyTorch

```python
import torch

x = torch.tensor(1.0)
y = torch.tensor(2.0)

# this is the parameter we want to optimize -> requires_grad=True
w = torch.tensor(1.0, requires_grad=True)

# forward pass to compute the loss
y_predicted = w * x
loss = (y_predicted - y)**2
print(loss)

# backward pass to compute the gradient dLoss/dw
loss.backward()
print(w.grad)

# update the weight; this operation should not be part of the computational graph
with torch.no_grad():
    w -= 0.01 * w.grad

# don't forget to zero the gradients before the next iteration
w.grad.zero_()

# next forward and backward pass...
```
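Putting the forward pass, backward pass, and weight update together gives a complete optimization loop. The following is a minimal sketch; the learning rate 0.01 and the 20 iterations are arbitrary choices for illustration:

```python
import torch

x = torch.tensor(1.0)
y = torch.tensor(2.0)
w = torch.tensor(1.0, requires_grad=True)

learning_rate = 0.01  # arbitrary choice for this sketch
for epoch in range(20):
    # forward pass
    y_predicted = w * x
    loss = (y_predicted - y) ** 2

    # backward pass accumulates dLoss/dw into w.grad
    loss.backward()

    # update the weight outside the computational graph
    with torch.no_grad():
        w -= learning_rate * w.grad

    # reset the gradient so it doesn't accumulate across iterations
    w.grad.zero_()

print(w.item())  # w moves from 1.0 toward the target value 2.0
```

Since the gradient is scaled by the small learning rate, w only approaches 2.0 gradually; more iterations or a larger learning rate would bring it closer.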

