Fundamentals of Algorithms
An algorithm is at the heart of any programming solution. Before we write code, we first need a clear idea of how the solution will work; that idea is the algorithm. A good algorithm makes the job much easier: once we have the right algorithm, implementing the solution is straightforward. One problem can often be solved by many different algorithms; multiplying two numbers, for example.
Example:
If we multiply 3 by 5, we get 15. In C++ it is trivial to multiply two numbers with the * operator, but the same problem can also be solved by repeated addition.
So, from the above example, we can see that there are many approaches to solving one problem. This raises a new question: when multiple solutions exist, how do we choose between them? One way to compare two solutions is by the time they take; another is by the space they use. Let us look at the analysis of algorithms in more depth.
Analysis of Algorithm
When we compare several solutions to a problem, the time or space a solution takes may differ from environment to environment. An algorithm is expected to work well for all input sizes, but while it may run smoothly for small inputs, its execution time can grow sharply for larger ones. By increasing the input size n, we can analyze how well our algorithm scales. Let us take an example to understand this:
Suppose the input size is 6 and we have to sort the array of elements 60, 24, 75, 26, 85, 1. Sorting is easy when n = 6, but when n becomes 10000 it is a very time-consuming process: the algorithm takes much longer to sort the elements and to return the result. How the behaviour of an algorithm changes with the number of inputs is called its order of growth.
Asymptotic notation
If we measured time in seconds, the result would differ from system to system, which creates a problem. So instead we use asymptotic notation to denote the time complexity of an algorithm: we express the running time as a function of the input size. Three cases may occur for a particular algorithm:
Best case: The best case occurs when the algorithm takes the least time, typically on the most favourable input. Suppose we have to sort an array consisting of the elements 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10. The array is already sorted, so there is almost nothing to do; some sorting approaches finish in O(n) time in this case.
Worst case: The worst case occurs when the algorithm takes the most time, typically on the least favourable input. Suppose we have to sort an array consisting of 10, 9, 8, 7, 6, 5, 4, 3, 2, 1. The array is sorted in reverse, so it must be rearranged completely; some sorting approaches take O(n^2) time in this case.
Average case: Anything between the best and the worst case. A partially sorted array, for example, typically gives the average case.
Notations for order of growth
To denote the time complexity in these different cases, we use three notations. They are as follows:
- Big 'O' (big oh): It denotes an upper bound and is mainly used for the worst case. For example, the worst-case time complexity of bubble sort is O(n^2).
- Big omega: It denotes a lower bound, the minimum time that the algorithm takes for execution.
- Big theta: It denotes a tight bound, often used to describe the average or exact time that an algorithm takes for its execution.
Searching and sorting
In DSA and problem-solving, searching and sorting are essential parts; we cannot solve most problems without them. First we will discuss searching, and then sorting.
Let us take an example:
We have a list of numbers like 25, 36, 1, 24, 85, 2, 3, 14, 49, 26, 9, 0. Now we have to check whether 3 is present in the list or not. This checking process is called searching. There are many methods of searching; let us look at them.
Linear search: This is a very simple approach for searching. In this method, we just traverse the list and try to find the given number.
Example:
Program to implement linear search in C
#include <stdio.h>
int linearSearch(int a[], int n, int val) {
for (int i = 0; i < n; i++)
{
if (a[i] == val)
return i+1;
}
return -1;
}
int main() {
int a[] = {70, 40, 30, 11, 57, 41, 25, 14, 52};
int val = 41;
int n = sizeof(a) / sizeof(a[0]);
int res = linearSearch(a, n, val);
printf("The elements of the array are - ");
for (int i = 0; i < n; i++)
printf("%d ", a[i]);
printf("\nElement to be searched is - %d", val);
if (res == -1)
printf("\nElement is not present in the array");
else
printf("\nElement is present at %d position of array", res);
return 0;
}
Program to implement linear search in C++
#include <iostream>
using namespace std;
int linearSearch(int a[], int n, int val) {
for (int i = 0; i < n; i++)
{
if (a[i] == val)
return i+1;
}
return -1;
}
int main() {
int a[] = {69, 39, 29, 10, 56, 40, 24, 13, 51};
int val = 56;
int n = sizeof(a) / sizeof(a[0]);
int res = linearSearch(a, n, val);
cout<<"The elements of the array are - ";
for (int i = 0; i < n; i++)
cout<<a[i]<<" ";
cout<<"\nElement to be searched is - "<<val;
if (res == -1)
cout<<"\nElement is not present in the array";
else
cout<<"\nElement is present at "<<res<<" position of array";
return 0;
}
Complexity Analysis:
Time complexity: The loop runs at most n times, so the complexity is O(n).
Space complexity: The solution uses only constant extra memory, so the space complexity is O(1).
Binary search
Where linear search can be very time-consuming, binary search takes far less time, provided the list is sorted. It applies the divide and conquer approach: compare the middle element with the target, discard the half that cannot contain it, and repeat on the remaining half. The process continues until the element is found or the search range becomes empty.
Example:
#include<stdio.h>
void sort(int n,int a[]);
void search(int low,int high,int a[],int s);
int main()
{
int n,s;
printf("enter the size:\n");
scanf("%d",&n);
int a[n];
printf("enter the elements:\n");
for(int i=0;i<n;i++)
{
scanf("%d",&a[i]);
}
sort(n,a);
printf("sorted elements:");
for(int i=0;i<n;i++)
{
printf("%d ",a[i]);
}
printf("\nenter the search element:\n");
scanf("%d",&s);
search(0,n-1,a,s);
}
void sort(int n,int a[])
{
int i,j,temp;
for(i=0;i<n;i++)
{
for(j=i+1;j<n;j++)
{
if(a[i]>a[j])
{
temp=a[i];
a[i]=a[j];
a[j]=temp;
}
}
}
}
void search(int low,int high,int a[],int s)
{
int mid;
if(low>high)
{
printf("search is not successful");
return;
}
mid=(low+high)/2;
if(a[mid]==s)
{
printf("search successful\nposition:%d",mid+1);
}
else if(s<a[mid])
{
search(low,mid-1,a,s);
}
else if(s>a[mid])
{
search(mid+1,high,a,s);
}
}
Complexity Analysis:
Time complexity: Each recursive call halves the search range, so the complexity is O(log n).
Space complexity: This recursive solution keeps O(log n) calls on the stack in the worst case, so the space complexity is O(log n); an iterative version would use O(1).
Applications of binary search
Binary search is used in many problems, but it has an important precondition: the array must be sorted. If the array is not sorted, we cannot apply binary search, so we first need to check whether the input is sorted. With that precondition met, binary search appears in many real-world problems.
Sorting
Now we move on to sorting, another essential concept in DSA. Let us take an example to understand the fundamentals of sorting.
For example:
We have an array like this: [2, 5, 9, 7, 6, 1, 3, 8, 0]. Sorting means arranging these numbers in order (ascending or descending). After sorting in ascending order, the array will be [0, 1, 2, 3, 5, 6, 7, 8, 9].
We can use different algorithms for sorting; the simplest one is bubble sort. Let us discuss a few sorting algorithms:
1. Bubble sort: It is one of the simplest sorting algorithms, but it is slow: it uses two nested loops, so the time complexity is O(n^2). We compare adjacent elements one by one and swap them when they are out of order; after each pass, the largest remaining element ends up at the end of the array. This algorithm is generally avoided in problem-solving. Here is the code for it:
#include<stdio.h>
int main()
{
int a[10],n;
printf("enter the number of elements\n");
scanf("%d",&n);
printf("enter the elements\n");
for(int i=0;i<n;i++)
{
scanf("%d",&a[i]);
}
for(int i=0;i<n-1;i++)
{
for(int j=0;j<n-i-1;j++)
{
if(a[j]>a[j+1])
{
int temp;
temp=a[j];
a[j]=a[j+1];
a[j+1]=temp;
}
}
}
printf("sorted elements\n");
for(int i=0;i<n;i++)
{
printf("%d\n",a[i]);
}
return 0;
}
2. Selection sort: It is similar to bubble sort, with the same O(n^2) time complexity, but it can be preferable in some cases. Where bubble sort accumulates sorted elements at the end of the array, selection sort places them at the front, by repeatedly selecting the minimum of the unsorted part and swapping it into position. Here is the code for it:
#include<stdio.h>
void sort(int a[],int n);
int main()
{
int a[100],n;
printf("enter the number of elements:\n");
scanf("%d",&n);
printf("enter the elements:\n");
for(int i=0;i<n;i++)
{
scanf("%d",&a[i]);
}
sort(a,n);
return 0;
}
void sort(int a[],int n)
{
for(int i=0;i<n-1;i++)
{
int min=i;
int j=i+1;
while(j<n)
{
if(a[j]<a[min])
{
min=j;
}
j++;
}
if(min!=i)
{
int temp;
temp=a[i];
a[i]=a[min];
a[min]=temp;
}
}
printf("sorted array:\n");
for(int i=0;i<n;i++)
{
printf(" %d",a[i]);
}
}
3. Quick sort: This algorithm is more efficient than bubble sort or selection sort; its average time complexity is O(n log n), though the worst case is O(n^2). It uses the divide and conquer approach: a pivot element is chosen, the array is partitioned so that elements smaller than the pivot come before it and larger elements after it, and each part is then sorted recursively. Here is the code to implement this approach:
#include <stdio.h>
void quicksort (int [], int, int);
int main()
{
int list[50];
int size, i;
printf("Enter the number of elements: ");
scanf("%d", &size);
printf("Enter the elements to be sorted:\n");
for (i = 0; i < size; i++)
{
scanf("%d", &list[i]);
}
quicksort(list, 0, size - 1);
printf("After applying quick sort\n");
for (i = 0; i < size; i++)
{
printf("%d ", list[i]);
}
printf("\n");
return 0;
}
void quicksort(int list[], int low, int high)
{
int pivot, i, j, temp;
if (low < high)
{
pivot = low;
i = low;
j = high;
while (i < j)
{
while (i <= high && list[i] <= list[pivot])
{
i++;
}
while (list[j] > list[pivot] && j >= low)
{
j--;
}
if (i < j)
{
temp = list[i];
list[i] = list[j];
list[j] = temp;
}
}
temp = list[j];
list[j] = list[pivot];
list[pivot] = temp;
quicksort(list, low, j - 1);
quicksort(list, j + 1, high);
}
}
4. Merge sort: Merge sort is one of the most used sorting algorithms and is very efficient, with a time complexity of O(n log n). It also uses the divide and conquer approach: the array is divided into two parts, each part is divided again, and this continues until every part holds a single element. The parts are then merged back together in sorted order. Here is the code for the implementation of this approach (note that in this listing merge() performs the recursive division and sort() merges the two sorted halves):
#include<stdio.h>
int merge(int a[], int p, int q);
int sort(int a[], int p, int q, int r);
int main()
{
int a[100],n;
printf("enter the number of elements\n");
scanf("%d",&n);
printf("enter the elements\n");
for(int i=0;i<n;i++)
{
scanf("%d",&a[i]);
}
merge(a,0,n-1);
printf("sorted array:\n");
for(int i=0;i<n;i++)
{
printf("%d ",a[i]);
}
return 0;
}
int merge(int a[],int p,int q)
{
if(p<q)
{
int r=(p+q)/2;
merge(a,p,r);
merge(a,r+1,q);
sort(a,p,q,r);
}
}
int sort(int a[],int p,int q,int r)
{
int b[100];
int i=p;   /* index into the left half a[p..r] */
int j=r+1; /* index into the right half a[r+1..q] */
int k=p;   /* write index into the temporary array */
while(i<=r&&j<=q)
{
if(a[i]<=a[j])
{
b[k]=a[i];
i++;
}
else
{
b[k]=a[j];
j++;
}
k++;
}
while(j<=q) /* copy any remaining elements of the right half */
{
b[k]=a[j];
j++;
k++;
}
while(i<=r) /* copy any remaining elements of the left half */
{
b[k]=a[i];
i++;
k++;
}
for(k=p;k<=q;k++)
{
a[k]=b[k];
}
return 0;
}
There are also many other sorting approaches, such as radix sort, counting sort, bucket sort, shell sort, comb sort, pigeonhole sort, cycle sort, and insertion sort.
Greedy Algorithm
The name 'greedy' hints at how this algorithm works: at each step we consider only the current situation and make the choice that is locally optimal. The resulting solution, however, may not be globally optimal, so this approach is helpful in some cases but not all. Some well-known examples of the greedy approach are Prim's algorithm, Kruskal's algorithm, and Dijkstra's algorithm.
Code for Dijkstra's algorithm:
#include <limits.h>
#include <stdio.h>
#include <stdbool.h>
#define V 9
int minDistance(int dist[], bool sptSet[])
{
// Initialize min value
int min = INT_MAX, min_index;
for (int v = 0; v < V; v++)
if (sptSet[v] == false && dist[v] <= min)
min = dist[v], min_index = v;
return min_index;
}
void printSolution(int dist[])
{
printf("Vertex \t\t Distance from Source\n");
for (int i = 0; i < V; i++)
printf("%d \t\t %d\n", i, dist[i]);
}
void dijkstra(int graph[V][V], int src)
{
int dist[V];
bool sptSet[V];
for (int i = 0; i < V; i++)
dist[i] = INT_MAX, sptSet[i] = false;
dist[src] = 0;
for (int count = 0; count < V - 1; count++) {
int u = minDistance(dist, sptSet);
sptSet[u] = true;
for (int v = 0; v < V; v++)
if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
&& dist[u] + graph[u][v] < dist[v])
dist[v] = dist[u] + graph[u][v];
}
printSolution(dist);
}
int main()
{
int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
{ 4, 0, 8, 0, 0, 0, 0, 11, 0 },
{ 0, 8, 0, 7, 0, 4, 0, 0, 2 },
{ 0, 0, 7, 0, 9, 14, 0, 0, 0 },
{ 0, 0, 0, 9, 0, 10, 0, 0, 0 },
{ 0, 0, 4, 14, 10, 0, 2, 0, 0 },
{ 0, 0, 0, 0, 0, 2, 0, 1, 6 },
{ 8, 11, 0, 0, 0, 0, 1, 0, 7 },
{ 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
dijkstra(graph, 0);
return 0;
}
Dynamic programming
Where a greedy algorithm considers only the current situation, dynamic programming aims at a globally optimal solution. It typically uses an array (a table) to store the results of subproblems and reuses them rather than recomputing. Dynamic programming is commonly used for the longest common subsequence, the longest increasing subsequence, and Fibonacci numbers.
Code for Fibonacci number:
#include<stdio.h>
int fib(int n)
{
int f[n+2];
int i;
f[0] = 0;
f[1] = 1;
for (i = 2; i <= n; i++)
{
f[i] = f[i-1] + f[i-2];
}
return f[n];
}
int main ()
{
int n = 9;
printf("%d", fib(n));
getchar();
return 0;
}
Divide and Conquer
It is a widely used technique. Many searching and sorting algorithms, such as quick sort, merge sort, and binary search, use divide and conquer. The approach is straightforward; the steps are as follows.
Step 1: First we take the big input.
Step 2: We divide this large input into small inputs.
Step 3: We solve this small problem.
Step 4: Lastly the answers are assembled.
Step 5: Output generated.
Backtracking
In this approach, we build a solution step by step and abandon a partial solution (backtrack) as soon as it cannot be completed, then try the next possibility. The famous backtracking problems are the N-queens problem and the knight's tour problem; to understand backtracking, you should go through these problems.
Question: The N-queens problem is the problem of placing N queens on an N*N chessboard so that no two queens attack each other. A queen can attack in all 8 directions (left, right, up, down, and the 4 diagonals). Given the value of N, you have to print all the valid chessboard configurations possible.
Example:
import java.io.*;
import java.util.*;
public class Main {
public static void main(String[] args) throws Exception {
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
int n = Integer.parseInt(br.readLine());
int[][] chess = new int[n][n];
printNQueens(chess, "", 0);
}
public static void printNQueens(int[][] chess, String qsf, int row) {
if (row == chess.length) {
System.out.println(qsf + ".");
return;
}
for (int col = 0; col < chess.length; col++) {
if (chess[row][col] == 0
&& isQueenSafe(chess, row, col) == true) {
chess[row][col] = 1;
printNQueens(chess,
qsf + row + "-" + col + ", ", row + 1);
chess[row][col] = 0;
}
}
}
public static boolean isQueenSafe(int[][] chess,
int row, int col) {
for (int i = row - 1, j = col - 1;
i >= 0 && j >= 0; i--, j--) {
if (chess[i][j] == 1) {
return false;
}
}
for (int i = row - 1, j = col; i >= 0; i--) {
if (chess[i][j] == 1) {
return false;
}
}
for (int i = row - 1, j = col + 1; i >= 0
&& j < chess.length; i--, j++) {
if (chess[i][j] == 1) {
return false;
}
}
for (int i = row, j = col - 1; j >= 0; j--) {
if (chess[i][j] == 1) {
return false;
}
}
return true;
}
}
Question: We are given a number n representing the size of a chessboard, and a row and a column as the starting cell of a knight. You are required to generate all the moves of a knight starting at (row, col) such that the knight visits every cell of the board exactly once.
The knight moves according to the rules of chess. Explore the next moves in clockwise order to get the same result as ours.
Example:
import java.io.*;
import java.util.*;
public class Main {
    public static void main(String[] args) throws Exception {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        int n = Integer.parseInt(br.readLine());
        int row = Integer.parseInt(br.readLine());
        int col = Integer.parseInt(br.readLine());
        int[][] chess = new int[n][n];
        printKnightsTour(chess, row, col, 1);
    }
    // place move number 'move' at (r, c), then try the 8 knight moves clockwise
    public static void printKnightsTour(int[][] chess, int r, int c, int move) {
        if (r < 0 || c < 0 || r >= chess.length || c >= chess.length
                || chess[r][c] > 0) {
            return; // off the board, or cell already visited
        }
        if (move == chess.length * chess.length) {
            chess[r][c] = move; // last cell of a complete tour
            displayBoard(chess);
            chess[r][c] = 0;    // backtrack
            return;
        }
        chess[r][c] = move;
        printKnightsTour(chess, r - 2, c + 1, move + 1);
        printKnightsTour(chess, r - 1, c + 2, move + 1);
        printKnightsTour(chess, r + 1, c + 2, move + 1);
        printKnightsTour(chess, r + 2, c + 1, move + 1);
        printKnightsTour(chess, r + 2, c - 1, move + 1);
        printKnightsTour(chess, r + 1, c - 2, move + 1);
        printKnightsTour(chess, r - 1, c - 2, move + 1);
        printKnightsTour(chess, r - 2, c - 1, move + 1);
        chess[r][c] = 0; // undo the move before returning
    }
    public static void displayBoard(int[][] chess) {
        for (int[] row : chess) {
            for (int cell : row) {
                System.out.print(cell + " ");
            }
            System.out.println();
        }
        System.out.println();
    }
}