Backpropagation Algorithm Demo

In this lecture, we demonstrate the backpropagation algorithm by training a simple feedforward neural network in PyTorch. Backpropagation is a method for computing the gradients of the loss function with respect to the weights of the network, which allows us to optimize the weights by gradient descent.
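Before the full network demo, here is a minimal sketch (not part of the lecture code) of the same idea on a single scalar weight: PyTorch's autograd records the forward computation and applies the chain rule when `backward()` is called.

```python
import torch

# Toy example: loss = (w * x)^2 with w = 2, x = 3.
# By the chain rule, d(loss)/dw = 2 * (w * x) * x = 2 * 6 * 3 = 36.
w = torch.tensor(2.0, requires_grad=True)  # trainable weight
x = torch.tensor(3.0)                      # fixed input

loss = (w * x) ** 2
loss.backward()       # backpropagation fills w.grad

print(w.grad)  # tensor(36.)
```

The same mechanism scales to the multi-layer network below: every weight tensor tracked by autograd receives its gradient in `.grad` after `backward()`.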

Backpropagation Demo
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([0.9, 0.5, 0.1])  # input vector
y = torch.tensor([1.0, 0.0])       # target output
Wa = nn.Linear(3, 4, bias=False)   # hidden layer weights
Wb = nn.Linear(4, 2, bias=False)   # output layer weights

# Initialize all weights to 0.1 so the demo is reproducible
Wa.weight.data.fill_(0.1)
Wb.weight.data.fill_(0.1)

def forward(X):
    H = Wa(X)                # hidden pre-activation
    L = F.relu(H)            # hidden activation
    O = Wb(L)                # output pre-activation
    Y = F.softmax(O, dim=0)  # output probabilities
    return Y


print("before", Wb.weight.grad)  # None: no gradients computed yet
y_ = forward(x)
loss = F.mse_loss(y_, y)  # mse_loss(input, target): the prediction comes first
loss.backward()           # backpropagation fills .grad on every weight tensor
print("after", Wb.weight.grad)
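Once the gradients are in place, a full training loop is a small extension of the code above. The following sketch (the learning rate and step count are illustrative choices, not from the lecture) repeats forward, backward, and a manual gradient-descent update; the loss should shrink and the output should move toward the target [1, 0].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([0.9, 0.5, 0.1])  # input vector
y = torch.tensor([1.0, 0.0])       # target output
Wa = nn.Linear(3, 4, bias=False)
Wb = nn.Linear(4, 2, bias=False)
Wa.weight.data.fill_(0.1)
Wb.weight.data.fill_(0.1)

def forward(X):
    return F.softmax(Wb(F.relu(Wa(X))), dim=0)

lr = 0.1  # learning rate (illustrative value)
for step in range(100):
    Wa.zero_grad()            # clear stale gradients
    Wb.zero_grad()
    loss = F.mse_loss(forward(x), y)
    loss.backward()           # backpropagation: compute fresh gradients
    with torch.no_grad():     # gradient-descent step on each weight matrix
        Wa.weight -= lr * Wa.weight.grad
        Wb.weight -= lr * Wb.weight.grad

print(forward(x))  # output should have moved toward the target [1, 0]
```

Note the `zero_grad()` calls: `backward()` accumulates into `.grad`, so gradients must be cleared before each step or updates from earlier iterations would be double-counted.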