PyTorch Tutorial for Beginners

What is PyTorch?

PyTorch is an open-source deep learning library released by Facebook. Before starting with PyTorch, we should know a little about deep learning. So, first, we have to understand what deep learning is.

Deep learning and machine learning are ways to achieve Artificial Intelligence. The idea of artificial intelligence dates back to 1956, and the goal was to create machines that have the power to think, analyze, and make decisions on their own. In earlier days, there was not enough data or computation power, but now, with big data coming into existence and the invention of GPUs, artificial intelligence has become possible.

Deep learning is a collection of statistical machine learning techniques, based on artificial neural networks, that are used to learn feature hierarchies.

Four popular Python deep learning libraries are PyTorch, TensorFlow, Keras, and Theano. In this tutorial, we will focus on PyTorch only.

PyTorch is an open-source, Python-based scientific computing package and one of the deep learning research platforms built to provide maximum flexibility and speed. It offers native support for Python and its libraries. It is used at Facebook and its subsidiary companies working on similar technologies. PyTorch provides an easy-to-use API and code that is easy to understand. It has dynamic computation graphs, so at every point of code execution we can build the graph as we go along, and the graph may be manipulated at runtime based on our needs. It also supports CUDA (Compute Unified Device Architecture), so code can run on the graphics processing unit of a graphics card, which decreases the run time and increases the overall performance.

Origin (History) of PyTorch

The initial release of PyTorch was in October 2016, and it was created by Facebook, although many other companies are also interested in it. Before PyTorch, there was already a framework called Torch. Torch is a machine learning framework based on the Lua programming language, and PyTorch is a cousin of this Lua-based Torch framework. Soumith Chintala, who worked at Facebook as an AI researcher, is credited with bootstrapping the PyTorch project, and his reason for developing PyTorch was simple: the Lua version of Torch was aging, so a newer version written in Python was needed. As a result, PyTorch came into existence. PyTorch is not simply a set of bindings for a popular language. It is a very young player in the field compared to its competitors; however, it is gaining momentum fast and is giving fierce competition to TensorFlow, especially in research work.

PyTorch is known for having three levels of abstraction:

  • Tensor - an imperative n-dimensional array running on a GPU.
  • Variable - a node in the computational graph; it stores data and gradients.
  • Module - a neural network layer; it stores state and learnable weights.

Difference between PyTorch and TensorFlow

PyTorch | TensorFlow
PyTorch is based on Torch and developed by Facebook. | TensorFlow is based on Theano and developed by Google.
PyTorch builds dynamic graphs. | TensorFlow creates static graphs.
PyTorch is easier to learn than TensorFlow. | TensorFlow is more challenging to learn.
PyTorch is more Pythonic, and building ML models feels more intuitive. | TensorFlow has a steeper learning curve than PyTorch.
PyTorch is a comparatively new framework, so it is harder to find resources to learn it. | TensorFlow has a more significant community behind it than PyTorch, so it is easier to find resources to learn it.

Features of PyTorch

Some of the notable features of PyTorch are given below:

PyTorch is based on Python: Python is the most popular language used by deep learning engineers and data scientists. PyTorch's creators wanted to bring the experience of the Lua-based Torch library to Python and create a tremendous deep learning experience for it. Hence, PyTorch is an open-source, Python-based deep learning and machine learning library.

The dynamic approach to graph computation: PyTorch builds deep learning on top of a dynamic graph that can be manipulated at runtime. Other popular deep learning frameworks work on a static graph, where the user cannot see what the GPU or CPU is doing while processing the graph, whereas in PyTorch each level of computation can be accessed. Jeremy Howard from fast.ai says, "An additional benefit of PyTorch is that it gives our students a much more in-depth knowledge of what was going on in each algorithm that was covered."
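
As a small illustrative sketch of what a dynamic graph means in practice, ordinary Python control flow can decide the structure of the graph separately for each input:

import torch

# the graph is built while the code runs, so normal Python control flow works
x = torch.randn(1, requires_grad=True)
if x.item() > 0:          # this branch is decided at runtime, per input
    y = x * 2
else:
    y = x ** 3
y.backward()              # gradients follow whichever branch was actually taken
print(x.grad)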

Increased developer productivity: PyTorch is simple to use and gives us the chance to manipulate the computational graph. Jeremy Howard from fast.ai, who teaches deep learning using PyTorch, said, "The key was to create an OO (object-oriented) class which encapsulated all of the important data choices along with the choice of the model architecture."

Easier to learn and simpler to code: PyTorch is easier to learn than many other deep learning libraries, and its documentation is brilliant and helpful for beginners.

Easy to debug: A considerable advantage of PyTorch is that Python debugging tools such as pdb, ipdb, and the PyCharm debugger can be used freely to debug PyTorch code.
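
For example, a minimal sketch (with a hypothetical forward function invented for illustration) of dropping into pdb in the middle of PyTorch code:

import pdb
import torch

def forward(x, w):
    z = x @ w
    pdb.set_trace()   # execution pauses here; x, w, and z can be inspected interactively
    return torch.relu(z)

out = forward(torch.randn(2, 3), torch.randn(3, 4))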

Simplicity and transparency: The dynamic graph brings clarity for developers and data scientists. Programming a deep neural network is more comfortable in PyTorch than in TensorFlow because of this straightforward, define-by-run style.

Data Parallelism: One of the essential features of PyTorch is declarative data parallelism. This feature allows you to use torch.nn.DataParallel to wrap any module, which helps you spread the workload over multiple GPUs quickly.
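
A minimal sketch of wrapping a module with torch.nn.DataParallel, assuming a machine with one or more CUDA GPUs (the layer sizes are made up for illustration):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                 # any module can be wrapped
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)       # replicates the module and splits each input batch across the GPUs
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)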

Imperative programming: PyTorch performs computations as it steps through each line of the written code, just like a normal Python program is executed. This concept is called imperative programming.
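
For instance, in the small made-up example below, each line runs immediately, and its result can be inspected right away without compiling a graph first:

import torch

a = torch.ones(3)    # executed immediately
b = a * 2            # also executed immediately
print(b)             # tensor([2., 2., 2.]) -- no separate graph compilation or session is needed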

Elements of PyTorch

The main elements of PyTorch are shown below:

  • PyTorch Tensors
  • PyTorch NumPy
  • Mathematical Operations
  • Autograd module
  • Optim module
  • nn module

PyTorch Tensors: A tensor is nothing but a multidimensional array. A tensor in PyTorch is similar to NumPy's ndarray, but tensors can also be used on a GPU. PyTorch supports tensors of many shapes, such as one-dimensional and two-dimensional arrays.

# import PyTorch
import torch
# define a tensor
torch.FloatTensor([2])
# output:
#  2
# [torch.FloatTensor of size 1]

PyTorch NumPy: A PyTorch tensor is very similar to a NumPy array, but PyTorch tensors can utilize GPUs to accelerate their numeric computations. Using tensors, the user can manually implement the forward and backward passes of a network.
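
As a small illustration (not part of the original example), a PyTorch tensor and a NumPy array can be converted back and forth:

import numpy as np
import torch

a = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(a)      # NumPy array -> PyTorch tensor (shares the same memory)
b = t.numpy()                # PyTorch tensor -> NumPy array
print(t, b)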

Mathematical operations: As with NumPy, it is very important for a scientific computing library to have efficient implementations of mathematical operations. PyTorch provides more than 200 such operations.

Below is an example of a simple addition operation:

a = torch.FloatTensor([2])
b = torch.FloatTensor([3])
a + b
# output:
#  5
# [torch.FloatTensor of size 1]

Autograd module: PyTorch uses a technique called automatic differentiation, which is especially powerful when we are building a neural network. A recorder records which operations we have performed, and then it replays them backward to compute our gradients.

from torch.autograd import Variable

x = Variable(train_x)
y = Variable(train_y, requires_grad=False)
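
Here train_x and train_y are assumed to be existing tensors of training data. As a small self-contained sketch of what the autograd recorder does (using the requires_grad flag that replaced Variable in PyTorch 0.4 and later):

import torch

x = torch.ones(2, 2, requires_grad=True)   # operations on x are recorded
y = (3 * x).sum()                          # y = 3 * (1 + 1 + 1 + 1) = 12
y.backward()                               # replay the recorded operations backward
print(x.grad)                              # dy/dx = 3 for every element of x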

Optim module: torch.optim is a module that implements various optimization algorithms used for building neural networks. PyTorch already supports most of the commonly used methods.

Below is the code for creating an Adam optimizer:

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
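
As a usage sketch, here is what a single training step with this optimizer typically looks like; the tiny model, the random data, and the choice of mse_loss are made up purely for illustration:

import torch

model = torch.nn.Linear(5, 1)                    # a tiny model, just for illustration
learning_rate = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

x, y = torch.randn(8, 5), torch.randn(8, 1)      # made-up data
pred = model(x)
loss = torch.nn.functional.mse_loss(pred, y)

optimizer.zero_grad()   # clear the gradients accumulated in the previous step
loss.backward()         # compute gradients of the loss w.r.t. the parameters
optimizer.step()        # update the parameters using the Adam rule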

nn module: The nn package defines a set of modules, which can be thought of as neural network layers that produce output from input and may hold some trainable weights.

Consider an example of the nn module:

import torch

# layer sizes (example values)
input_num_units, hidden_num_units, output_num_units = 10, 5, 2

# define the model
model = torch.nn.Sequential(
    torch.nn.Linear(input_num_units, hidden_num_units),
    torch.nn.ReLU(),
    torch.nn.Linear(hidden_num_units, output_num_units),
)
loss_fn = torch.nn.CrossEntropyLoss()
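
Continuing the snippet above with made-up random data, the model and the loss function are used like this:

x = torch.randn(4, input_num_units)                 # a batch of 4 made-up examples
target = torch.randint(0, output_num_units, (4,))   # 4 made-up class labels
pred = model(x)                                     # forward pass through the Sequential model
loss = loss_fn(pred, target)                        # cross-entropy loss between predictions and labels
print(loss.item())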

Deep learning with PyTorch

The table below lists the primary PyTorch packages and their descriptions.

Package | Description
torch | The top-level PyTorch package and tensor library.
torch.nn | A sub-package that contains modules and extensible classes for building neural networks.
torch.autograd | A sub-package that supports all the differentiable tensor operations in PyTorch.
torch.nn.functional | A functional interface that contains typical operations used for building neural networks, like loss functions, activation functions, and convolution functions.
torch.optim | A sub-package that contains standard optimization operations like SGD and Adam.
torch.utils | A sub-package that contains utility classes, like Dataset and DataLoader, that make data preprocessing easier.
torchvision | A package that provides access to popular datasets, model architectures, and image transformations used in computer vision.

The torchvision package is separate from the top-level torch package. However, this may change in the future if torchvision is pulled in as a sub-package of torch.
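
For reference, these packages are commonly imported as follows (a small illustrative snippet, not part of the original table):

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import torchvision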

Why use PyTorch for deep learning?

For beginners, deep learning and neural networks are the top reason for learning PyTorch. When we build a neural network with PyTorch, we stay very close to building one from scratch, so we can see what is really happening. PyTorch's excellent results often stem from its thoughtful technical design decisions. To optimize neural networks, we need to calculate derivatives, and to do this computationally, deep learning frameworks use what is called a computational graph. PyTorch is a great library for deep learning.

There are a few reasons you might prefer PyTorch to other deep learning libraries:

  1. Unlike other libraries such as TensorFlow, where you have to define the entire computational graph before you can run your model, PyTorch allows us to define our graph dynamically.
  2. PyTorch is well suited to deep learning research and provides maximum flexibility and speed.

Advantages of PyTorch

  • PyTorch provides a great platform for AI development and research, thanks in part to its tight coupling with Python.
  • It is easier to learn and simpler to code than other deep learning frameworks, such as TensorFlow.
  • It has a straightforward interface and an easy-to-use API.
  • PyTorch provides dynamic computational graphs, so we can change them at runtime; this is useful when we have no idea how much memory will be required for creating a neural network model.
  • It is Pythonic, so it can leverage all the functions and services offered by the Python environment.
  • It includes the same kinds of layers as Torch.

Installation of PyTorch through Anaconda

To install PyTorch on a system running the Windows operating system, follow the steps given below. Everything needed to set up PyTorch is listed here:

  1. Install Anaconda
  2. Open the Anaconda Prompt (NOT Anaconda Navigator)
  3. conda install pytorch -c pytorch
  4. pip install torchvision
  5. Add the environment to ipykernel

1. Install Anaconda:

On the Anaconda website, click on "Download the Anaconda installer"; the download page appears on the screen.

Download the Anaconda installer with the latest version of Python.

After downloading, run the Anaconda installer.

In the installer, click on "I Agree."

Then click on the "Just Me" radio button.

click on "Next."

Click on "Next."

click on "Install."

The installation has completed.

Click on "Next."

Anaconda has been successfully installed on our system.

Open "pytorch.org."

After opening the official PyTorch website, "pytorch.org," click on "Get Started."

We consider "Windows" as our Operating System. The steps for a successful environmental setup are as follows

Step 1

The following link contains a list of packages suitable for PyTorch.

https://drive.google.com/drive/folders/0B-X0-FlSGfCYdTNldW02UGl4MXM

You need to download the respective packages and install them.

Step 2

To verify the installation of the PyTorch framework using Anaconda, use the command given below:

conda list

"conda list" shows the list of packages installed on our system.

The pytorch entry in this list shows that PyTorch has been successfully installed on our system.

Verify the installation with the following commands:

In [1]: import torch
In [2]: torch.__version__
Out[2]: '0.4.1'
In [3]: torch.cuda.is_available()
Out[3]: True
In [4]: torch.version.cuda
Out[4]: '9.0'

If this code runs in the Anaconda command prompt without errors, then PyTorch is working successfully on your system.

Building a Neural Network using PyTorch

A PyTorch implementation of a neural network looks very much like a NumPy implementation; the two are equivalent in nature. For example, let's create a simple network with five units in the input layer, four units in the hidden layer, and one unit in the output layer. We have only one row of data, which has five features and one target. (A neural network can also be constructed using the torch.nn package, but here we build one manually from tensors.)

import torch

# network dimensions: 5 input features, 4 hidden units, 1 output unit
n_input, n_hidden, n_output = 5, 4, 1

The first step is parameter initialization. The weight and bias parameters for each layer are initialized as tensors. Tensors are the base data structure of PyTorch and are used for building many different types of neural networks. They are a generalization of arrays and matrices.

## initialize tensors for inputs and outputs
x = torch.randn((1, n_input))
y = torch.randn((1, n_output))
## initialize tensor variables for weights
w1 = torch.randn(n_input, n_hidden)    # weights for the hidden layer
w2 = torch.randn(n_hidden, n_output)   # weights for the output layer
## initialize tensors for biases
b1 = torch.randn(1, n_hidden)          # bias for the hidden layer
b2 = torch.randn(1, n_output)          # bias for the output layer

After the initialization, the neural network can be discussed in four key steps.

  • Forward Propagation
  • Loss computation
  • Backpropagation
  • Updating the parameters

Forward propagation: In this step, activations are calculated at every layer using the two steps given below. These activations flow in the forward direction from the input layer to the output layer to generate the final output.

z = weight * input + bias
a = activation_function(z)
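
A minimal sketch of this forward pass for our small network, assuming a sigmoid activation function (the activation is not specified in the text above):

def sigmoid(z):
    return 1 / (1 + torch.exp(-z))

# hidden layer: z1 = x * w1 + b1, then the activation
z1 = torch.mm(x, w1) + b1
a1 = sigmoid(z1)

# output layer: z2 = a1 * w2 + b2, then the activation (the final output)
z2 = torch.mm(a1, w2) + b2
output = sigmoid(z2)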

Loss computation: In this step, the error is calculated in the output layer. A simple loss function can be the difference between the actual value and the predicted value.

loss = y - output

Backpropagation: This step aims to minimize the error in the output layer by making changes in the biases and weights. These marginal changes are derived from the error term.

Based on the chain rule of calculus, the error changes are passed back into the hidden layers, where corresponding changes in weights and biases are made until the error is minimized.

# function to calculate the derivative of the (sigmoid) activation
def sigmoid_delta(x):
    return x * (1 - x)
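
A rough sketch of how the error terms used in the next step can be computed with this helper, assuming the sigmoid forward pass and loss = y - output from above (the names d_outp and d_hidn match the update code that follows):

delta_output = sigmoid_delta(output)   # derivative of the activation at the output layer
delta_hidden = sigmoid_delta(a1)       # derivative of the activation at the hidden layer

d_outp = loss * delta_output           # error term at the output layer
loss_h = torch.mm(d_outp, w2.t())      # error propagated back to the hidden layer
d_hidn = loss_h * delta_hidden         # error term at the hidden layer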

Updating the parameters: Finally, the weights and biases are updated using the delta changes received from the backpropagation step above.

learning_rate = 0.1

# update the biases using the propagated error terms
b2 += d_outp.sum() * learning_rate
b1 += d_hidn.sum() * learning_rate
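
The weights can be updated in the same way; here is a hedged sketch that reuses x, a1, d_outp, and d_hidn from the sketches above:

# update the weights using the corresponding error terms
w2 += torch.mm(a1.t(), d_outp) * learning_rate
w1 += torch.mm(x.t(), d_hidn) * learning_rate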

Future Scope of PyTorch

PyTorch represents a significant step forward in the evolution of machine learning development tools, and it should serve as an inspiration to other machine learning frameworks. PyTorch is a Python-based library built to provide flexibility as a deep learning development platform.

The workflow of PyTorch is as close as possible to that of Python's scientific computing library, NumPy. Compared to TensorFlow, PyTorch is more intuitive, so even without a solid background in mathematics and machine learning, you will be able to understand PyTorch models.

Although it is not as commonly used as a production framework, we can handle all sorts of deep learning challenges using PyTorch, such as images (detection and classification), text (NLP), and reinforcement learning. So we can say that PyTorch is one of the most popular and useful frameworks for machine learning.

PyTorch is also a promising framework for the future, because people are increasingly making their homes and offices smart, with robot-oriented machines that act automatically with the help of sensors and actuators. Machine learning tools such as PyTorch make this possible.