Bias and Variances in Machine Learning

Machine Learning plays an important role in many industries. A Machine Learning model predicts outcomes from given inputs after being trained on a sample data set collected from real-world sources. You have probably come across recommendation systems on YouTube or Google; service-providing companies rely heavily on such predictions to meet customer demand. However, the predictions do not always satisfy us. The error in a prediction may be large or small, but no Machine Learning model gives completely error-free predictions. The reducible part of this error is described in terms of bias and variance, and in this article we will discuss the bias and variance of Machine Learning models.

Let’s understand the concept of bias and variance in Machine Learning.

To understand bias and variance, we first have to understand the errors of Machine Learning models. An error is a measure of how wrong the model's predictions are; it tells you how far your Machine Learning model's predictions deviate from the true values. There are mainly two types of errors, as follows:

  • Reducible Errors: These errors can be reduced to improve the accuracy of the model. They can be further classified into bias and variance.
  • Irreducible Errors: These errors will always be present in the model. Regardless of the algorithm used, they are caused by unknown variables and noise in the data, and they cannot be reduced.
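For squared-error loss, this relationship is commonly summarised by the standard decomposition (a general result, not specific to this tutorial):

Expected total error = Bias² + Variance + Irreducible error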

What is Bias in Machine Learning?

To understand bias, you first have to understand how a Machine Learning model works. First, an algorithm is chosen for the given problem. Then the model is trained on the available data, during which it tries to learn the underlying pattern. Once the pattern has been recognised, that is, once training is complete, new inputs are given to the model, and it makes predictions based on the learned pattern. But, as we have discussed, there is always some difference between the actual values and the predicted values. The part of this difference that comes from the simplifying assumptions the model makes about the data is called bias. Every Machine Learning model has some bias, and we can classify it into two classes.

They are as follows:

  1. Low Bias: A low-bias model makes fewer assumptions about the form of the objective function.
  2. High Bias: A high-bias model makes more assumptions and therefore cannot capture the important features of the data set. A high-bias model also performs poorly on new data.

Some examples of low-bias Machine Learning algorithms are decision trees, k-nearest neighbours, and support vector machines, while algorithms with high bias include linear regression, linear discriminant analysis, and logistic regression. In general, linear algorithms have a high bias, which makes them fast to learn but unable to model complex relationships: the simpler the algorithm, the larger the bias tends to be. Nonlinear algorithms, on the other hand, usually have low bias.
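To make this concrete, here is a minimal Python sketch (assuming NumPy and scikit-learn are installed; the sine-shaped data set is invented purely for illustration) that fits a high-bias linear model and a low-bias decision tree to the same nonlinear data. The linear model's larger training error reflects its stronger assumptions about the shape of the relationship.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor

# Illustrative nonlinear data: y follows a sine curve plus a little noise
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 6, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# High-bias model: linear regression assumes a straight-line relationship
linear_model = LinearRegression().fit(X, y)

# Low-bias model: a decision tree makes far fewer assumptions about the shape
tree_model = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X, y)

print("Linear regression training MSE:", mean_squared_error(y, linear_model.predict(X)))
print("Decision tree training MSE:", mean_squared_error(y, tree_model.predict(X)))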

Possible Ways to Reduce High Bias

High bias occurs mainly when the model is too simple. Here are some ways to reduce high bias:

  1. Increase the number of input features, because a high-bias model is underfitting the data.
  2. Decrease the regularization term.
  3. Use more complex models, for example by including some polynomial features (see the sketch after this list).
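As an illustration of point 3 (a minimal sketch, again assuming scikit-learn and using an invented sine-shaped data set), the code below adds polynomial features to a plain linear regression; the extra capacity lets the model follow the curvature of the data and lowers its bias.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# The same kind of illustrative nonlinear data as before
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 6, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# Plain linear regression: too simple for this curved data (high bias)
plain = LinearRegression().fit(X, y)

# Adding polynomial features gives the model more capacity and lowers the bias
poly = make_pipeline(PolynomialFeatures(degree=5), LinearRegression()).fit(X, y)

print("Training MSE without polynomial features:", mean_squared_error(y, plain.predict(X)))
print("Training MSE with degree-5 polynomial features:", mean_squared_error(y, poly.predict(X)))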

What is the Variance Error in Machine Learning?

We know that a Machine Learning model is trained on a particular data set and then given new inputs. But it is often observed that if you change the training data set, the predictions change as well, sometimes in a positive direction and sometimes in a negative one. Ideally an algorithm should behave consistently across different training sets drawn from the same source, but in practice it does not. The amount by which the predictions change when the training data changes is called the variance of the Machine Learning model, and it may be high or low.

If a Machine Learning model has low variance, its predictions change little when the training data set is changed; if it has high variance, its predictions change a lot. Some examples of low-variance Machine Learning algorithms are linear regression, logistic regression, and linear discriminant analysis, while algorithms with high variance include decision trees, support vector machines, and k-nearest neighbours.
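The following sketch (assuming scikit-learn, with an invented noisy sine data set) makes the idea visible: the same high-variance model is trained on two different samples drawn from the same process, and the gap between its two sets of predictions on identical test points measures how sensitive it is to the training data.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def make_sample(seed, n=100):
    # Draw a fresh training set from the same underlying process
    r = np.random.RandomState(seed)
    X = np.sort(r.uniform(0, 6, n)).reshape(-1, 1)
    y = np.sin(X).ravel() + r.normal(scale=0.3, size=n)
    return X, y

X_test = np.linspace(0, 6, 20).reshape(-1, 1)

# Fit the same high-variance model (an unpruned decision tree) on two different samples
predictions = []
for seed in (1, 2):
    X_train, y_train = make_sample(seed)
    model = DecisionTreeRegressor(random_state=0).fit(X_train, y_train)
    predictions.append(model.predict(X_test))

# A large gap between the two prediction vectors signals high variance
print("Mean absolute difference between the two fits:",
      np.abs(predictions[0] - predictions[1]).mean())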

Because of high variance, the model learns too much from the training data set, including its noise, which leads to overfitting. A high-variance model has the following problems:

  1. A high-variance model leads to overfitting.
  2. It increases the complexity of the model.

Nonlinear algorithms usually have high flexibility in fitting the data and therefore tend to have high variance.

Possible Ways to Reduce High Variance

  1. Do not use an overly complicated model.
  2. Increase the regularization term (see the sketch after this list).
  3. Increase the amount of training data.
  4. Reduce the input features or the number of parameters when the model is overfitted.
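As an illustration of point 2 (a minimal sketch assuming scikit-learn; the degree-12 polynomial pipeline and the noisy sine data are invented for this example), the code below compares the cross-validated error of an over-flexible model with and without an L2 (ridge) penalty.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Small, noisy illustrative data set, so an over-flexible model can overfit
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 6, 60)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)

# Over-flexible model with no penalty (prone to high variance)
unregularized = make_pipeline(PolynomialFeatures(degree=12), LinearRegression())

# Ridge adds an L2 penalty (alpha) that shrinks the coefficients and damps the variance
regularized = make_pipeline(PolynomialFeatures(degree=12), Ridge(alpha=1.0))

for name, model in [("no regularization", unregularized), ("ridge (alpha=1.0)", regularized)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    print(name, "-> cross-validated MSE:", -scores.mean())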