Deep Learning with PyTorch

Complete your Hemilingo Python journey with this PyTorch Deep Learning course - the most challenging of all the Hemilingo Python courses.

Deep learning is a subset of machine learning built on multi-layer neural networks, which learn increasingly abstract representations of data layer by layer. In this article, we’ll explore how to implement deep learning models using PyTorch, focusing on simple feedforward neural networks and convolutional neural networks (CNNs).

1. Neural Networks Recap

A neural network consists of layers of neurons. The simplest type is a fully connected (dense) network, where each neuron in one layer is connected to every neuron in the next layer. The network learns by adjusting weights during training, using backpropagation to minimize the loss.
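
To make "adjusting weights via backpropagation" concrete, here is a minimal sketch using PyTorch's autograd on a single trainable weight; the values are arbitrary and chosen only for illustration:

import torch

# A single trainable weight and a toy data point (illustrative values)
w = torch.tensor(2.0, requires_grad=True)
x, y_true = torch.tensor(3.0), torch.tensor(9.0)

y_pred = w * x                 # Forward pass: prediction
loss = (y_pred - y_true) ** 2  # Squared-error loss
loss.backward()                # Backpropagation: compute dloss/dw

print(w.grad)                  # tensor(-18.), since d/dw (wx - y)^2 = 2x(wx - y)
with torch.no_grad():
    w -= 0.01 * w.grad         # One gradient-descent step on the weight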

2. Building a Simple Neural Network

In PyTorch, neural networks are created by subclassing torch.nn.Module. We define layers using modules like torch.nn.Linear for fully connected layers, and the forward method defines how data passes through these layers:

import torch
import torch.nn as nn

# Define a simple neural network
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(3, 5)  # Input layer (3 inputs) to hidden layer (5 neurons)
        self.fc2 = nn.Linear(5, 1)  # Hidden layer (5 neurons) to output layer (1 output)

    def forward(self, x):
        x = torch.relu(self.fc1(x))  # Activation function after the first layer
        x = self.fc2(x)  # Output layer
        return x

# Initialize the model
model = SimpleNN()
print(model)
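
As a quick sanity check, we can pass a batch of random inputs through the untrained model; the batch size of 4 here is arbitrary:

# Forward a random batch through the model to check shapes
x = torch.randn(4, 3)  # Batch of 4 samples, 3 features each
out = model(x)
print(out.shape)       # torch.Size([4, 1])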

3. Convolutional Neural Networks (CNNs)

CNNs are a specialized type of neural network designed for processing grid-like data, such as images. CNNs consist of layers like convolutional layers, pooling layers, and fully connected layers.

class SimpleCNN(nn.Module):
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)  # 1 input channel (grayscale), 32 output channels; padding=1 preserves spatial size
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)  # Halves height and width
        self.fc1 = nn.Linear(32 * 14 * 14, 10)  # Flattened conv output to 10 classes, assuming 28x28 inputs (e.g. MNIST): pooling reduces 28x28 to 14x14

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))  # Apply convolution, ReLU activation, and pooling
        x = x.view(-1, 32 * 14 * 14)  # Flatten the feature maps for the fully connected layer
        x = self.fc1(x)
        return x

# Initialize CNN
cnn_model = SimpleCNN()
print(cnn_model)
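
Again, a quick forward pass with fake data confirms the shapes line up; here we assume 28x28 grayscale images, matching the layer sizes above:

# Forward a batch of fake grayscale images through the CNN
images = torch.randn(8, 1, 28, 28)  # Batch of 8, 1 channel, 28x28 pixels
logits = cnn_model(images)
print(logits.shape)                 # torch.Size([8, 10])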

4. Training a Model

Training a model involves several steps:

  1. Defining a Loss Function: This evaluates how far off the model's predictions are from the actual values.
  2. Choosing an Optimizer: This adjusts the model's weights based on gradients computed during backpropagation.
  3. Forward Pass: The input is passed through the model to generate predictions.
  4. Loss Computation: The predictions are compared to the ground truth.
  5. Backward Pass (backpropagation): Gradients of the loss with respect to each weight are computed.
  6. Optimizer Step: The optimizer uses those gradients to update the weights and reduce the loss.

import torch.optim as optim

# Create random input and output
inputs = torch.randn(64, 3)  # Batch size of 64, 3 features
targets = torch.randn(64, 1)  # Batch size of 64, 1 target

# Initialize model and optimizer
model = SimpleNN()
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

# Training loop (1 epoch)
for epoch in range(1):
    optimizer.zero_grad()  # Clear gradients from previous step
    outputs = model(inputs)  # Forward pass
    loss = criterion(outputs, targets)  # Calculate loss
    loss.backward()  # Backward pass
    optimizer.step()  # Update weights

    print(f"Epoch {epoch+1}, Loss: {loss.item()}")

5. Evaluating a Model

After training, you need to evaluate how well the model performs on unseen data (a validation or test set). The evaluation process mirrors the forward pass of training, except you don’t perform backpropagation: gradient tracking is disabled with torch.no_grad(), and model.eval() switches layers like dropout and batch normalization to inference behavior.

# Assuming test_data and test_labels are prepared
test_data = torch.randn(10, 3)  # 10 test samples
test_labels = torch.randn(10, 1)

# Forward pass (no gradient computation needed)
model.eval()  # Switch to evaluation mode (matters for dropout/batchnorm layers)
with torch.no_grad():
    outputs = model(test_data)
    test_loss = criterion(outputs, test_labels)
    print(f"Test Loss: {test_loss.item()}")

Conclusion

By now, you should have a foundational understanding of how to build, train, and evaluate neural networks in PyTorch, including both simple feedforward networks and convolutional neural networks (CNNs). Mastering these techniques prepares you to dive deeper into more complex deep learning tasks like image classification, natural language processing, and reinforcement learning.