DPS921/PyTorch: Convolutional Neural Networks

= Neural Networks Using PyTorch =
The basic idea was to create a simple convolutional neural network using the Python machine learning framework PyTorch. The actual code is written in Jupyter Lab, both for demonstration and implementation purposes. Furthermore, using the torchvision MNIST dataset of handwritten digits, the goal is to train a network that can recognize which digit an image contains. For example, once the linear network built in this section has been trained, its prediction for the first image of a batch X can be printed with:

print(torch.argmax(net(X[0].view(-1, 784))[0]))
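
The objects net and X used in that line are defined in the full walkthrough of the linear network, which is not reproduced in this excerpt. A minimal sketch of what that setup might look like, assuming the MNIST data is loaded through torchvision and using a hypothetical class name Net, is:

import torch
from torch import nn
import torch.nn.functional as F
from torchvision import datasets, transforms

# Load the MNIST handwritten-digit dataset from torchvision (assumption: the
# article uses MNIST, since each image is flattened to 784 = 28*28 values).
train = datasets.MNIST("", train=True, download=True,
                       transform=transforms.Compose([transforms.ToTensor()]))
trainset = torch.utils.data.DataLoader(train, batch_size=10, shuffle=True)

# A simple fully connected (linear) network, so that net(X[0].view(-1, 784)) works.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 64)   # 28*28 pixels flattened into one row
        self.fc2 = nn.Linear(64, 64)
        self.fc3 = nn.Linear(64, 10)    # 10 output classes, one per digit

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return F.log_softmax(self.fc3(x), dim=1)

net = Net()
X, y = next(iter(trainset))             # X: one batch of images, y: their labels

torch.argmax then returns the index of the largest output value, i.e. the digit the network predicts for that image.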
 
= Implementation of a Convolutional Neural Network =
 
'''Essentially, we have taken the linear neural network defined above and transformed it into a CNN by converting our first two layers into convolutional layers. This was achieved by making use of the 'nn' module class 'Conv2d', with each convolution followed by a ReLU activation and 2-D max pooling ('max_pool2d').'''
 
import torch.nn as nn
import torch.nn.functional as F   # provides relu, max_pool2d and softmax

class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        # Two convolutional layers followed by three fully connected (linear) layers.
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
        # 28x28 input -> conv1 (24x24) -> pool (12x12) -> conv2 (8x8) -> pool (4x4),
        # so the flattened size entering fc1 is 12 channels * 4 * 4.
        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
        self.fc2 = nn.Linear(in_features=120, out_features=60)
        self.out = nn.Linear(in_features=60, out_features=10)

    def forward(self, t):
        # Implement the forward pass
        # (1) Hidden conv layer
        t = self.conv1(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)
        # (2) Hidden conv layer
        t = self.conv2(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)
        # (3) Hidden linear layer
        t = t.reshape(-1, 12 * 4 * 4)
        t = self.fc1(t)
        t = F.relu(t)
        # (4) Hidden linear layer
        t = self.fc2(t)
        t = F.relu(t)
        # (5) Output layer
        t = self.out(t)
        t = F.softmax(t, dim=1)
        return t
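
The class above only defines the architecture. As a quick sanity check (a sketch, not part of the original article), the network can be instantiated and run on a batch of 28x28 grayscale images to confirm that it produces one probability distribution over the 10 classes per image:

import torch

net = Network()
batch = torch.randn(4, 1, 28, 28)    # 4 fake grayscale 28x28 images (N, C, H, W)
probs = net(batch)                   # shape (4, 10); each row sums to 1 after softmax
print(probs.shape)
print(torch.argmax(probs, dim=1))    # predicted class index for each image

Note that when training with nn.CrossEntropyLoss, the softmax call in forward() is usually omitted, since that loss applies log-softmax to the raw scores internally.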
 
== Getting Started With Jupyter ==
