DPS921/PyTorch: Convolutional Neural Networks

== Parallelization Methods ==
This section describes ways to parallelize your neural network. Since image-recognition training consists largely of GPU-friendly tensor operations, multiple GPUs are the best way to parallelize training on a dataset.
=== Data Parallelism ===
 
Data parallelism splits each batch of training data across multiple GPUs. To start, you can put your model on a single GPU by writing:
 
device = torch.device("cuda:0")
model.to(device)
 
Then, you can copy all your tensors to the GPU:
 
mytensor = my_tensor.to(device)
 
However, PyTorch will only use one GPU by default. To run on multiple GPUs, wrap your model in <code>DataParallel</code>:
 
model = nn.DataParallel(model)
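 
As a sketch (this guard is not part of the snippet above), the wrapping step is often made conditional on the number of visible GPUs, so the same script also runs on a single GPU or on the CPU:
 
# Hypothetical guard: only wrap the model when more than one GPU is visible
if torch.cuda.device_count() > 1:
    print("Using", torch.cuda.device_count(), "GPUs")
    model = nn.DataParallel(model)
model.to(device)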
 
==== Imports and Parameters ====
 
Import the following modules and define your parameters:
 
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
 
# Parameters and DataLoaders
input_size = 5
output_size = 2
 
batch_size = 30
data_size = 100
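 
To exercise these parameters, the data can come from a dummy dataset of random tensors. The <code>RandomDataset</code> class below is an illustrative sketch, not part of PyTorch:
 
# Illustrative dummy dataset: data_size random samples, each of length input_size
class RandomDataset(Dataset):
 
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)
 
    def __getitem__(self, index):
        return self.data[index]
 
    def __len__(self):
        return self.len
 
rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)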
 
Then select the device, using the GPU if one is available:
 
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
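 
Putting these pieces together, a minimal sketch (the single-layer <code>Model</code> class and the loop below are illustrative, assuming the <code>rand_loader</code> defined above) runs one forward pass per batch:
 
# Illustrative model: one fully connected layer mapping input_size to output_size
class Model(nn.Module):
 
    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)
 
    def forward(self, input):
        return self.fc(input)
 
model = Model(input_size, output_size)
model = nn.DataParallel(model).to(device)  # wraps the model as described above
 
# DataParallel scatters each batch across the GPUs and gathers the outputs
for data in rand_loader:
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(), "output size", output.size())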
 
 
 
=== Single-Machine Model ===