
Fundamental of Algorithms

An algorithm is at the heart of any programming solution. Before we write a solution, we first have to form a clear, step-by-step idea of how to solve the problem; that idea is the algorithm. A good algorithm makes the rest of the job much easier: once we have the right algorithm, implementing the solution is straightforward. One problem can often be solved by many different algorithms, for example, multiplying two numbers.

Example:

If we multiply 3 by 5, we get 15. In C++ (or any language) it is trivial to multiply two numbers directly, but the same problem can also be solved by summation: adding 3 to itself five times also gives 15.
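
For instance, the repeated-addition idea can be written in a few lines of C. This is only a minimal sketch, and the function name multiplyBySum is purely illustrative:

#include <stdio.h>

/* Multiply a by b by adding a to itself b times (assumes b is non-negative). */
int multiplyBySum(int a, int b)
{
    int result = 0;
    for (int i = 0; i < b; i++)
    {
        result = result + a;
    }
    return result;
}

int main()
{
    printf("%d\n", multiplyBySum(3, 5));   /* prints 15 */
    return 0;
}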

So, from the above example, we can see that there are many approaches to solving one problem, which raises a new question: when several solutions exist, how do we choose between them? One way to compare two solutions is by the time they take; another is by the space (memory) they use. Let us look at the analysis of algorithms in more depth.

Analysis of Algorithms

When we compare several solutions to a problem, the time or space taken by a solution can differ from environment to environment. We expect an algorithm to work well for every input size, but an algorithm that runs smoothly on small inputs may take far longer on large ones. By increasing the input size n, we can analyse how well our algorithm scales. Let us take an example to understand this:

Suppose we have to sort an array of elements such as 60, 24, 75, 26, 85, 1. The problem is easy to solve when n is only 6, but when n becomes 10,000, sorting becomes a very time-consuming process and the algorithm takes much longer to produce the result. How the behaviour of an algorithm changes with the number of inputs is called its order of growth.

Asymptotic notation

If we measure running time in seconds, a problem arises: the time measured in seconds differs from system to system. So, we use asymptotic notation to describe the time complexity of an algorithm, expressing the running time in terms of the input size. Three cases may occur for a particular algorithm:

Best case: The best case occurs when the algorithm takes the lowest possible time, which happens for favourable inputs. Suppose we have to sort an array whose elements are 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10; the array is already sorted, so very little work is needed. With some sorting approaches (such as insertion sort), this takes only O(n) time.

Worst case: The worst case occurs when the algorithm takes the highest time, which happens for unfavourable inputs. Suppose we have to sort an array whose elements are 10, 9, 8, 7, 6, 5, 4, 3, 2, 1; the array is sorted in reverse order, so it has to be rearranged completely. With some sorting approaches, this takes O(n^2) time.

Average case: When the behaviour lies between the best and worst cases, we have the average case; for example, a partially sorted array often gives the average case.

Notations for the order of growth

To denote the time complexity in these different cases, we use three notations. They are as follows:

  1. Big O ('oh'): It is mainly used to describe the worst case, as an upper bound on the running time. For example, the worst-case time complexity of bubble sort is O(n^2).
  2. Big Omega: It gives a lower bound, representing the minimum time that the algorithm takes for execution; for example, the best case of linear search is Omega(1), when the element is at the first position.
  3. Big Theta: It gives a tight bound, used when the upper and lower bounds match, and is commonly used to describe the average time an algorithm takes.

Searching and sorting

In DSA and problem-solving, searching and sorting are essential parts; most problems cannot be solved without them. First, we will discuss searching and then sorting.

Let us take an example:

We have a list of numbers such as 25, 36, 1, 24, 85, 2, 3, 14, 49, 26, 9, 0. Now we have to check whether 3 is present in the list or not. This checking process is called searching. There are several methods of searching; let us look at them.

Linear search: This is a very simple approach to searching. In this method, we simply traverse the list from the beginning and compare each element with the given value.

Example:

Program to implement linear search in C:
#include <stdio.h>  
int linearSearch(int a[], int n, int val) {   /* returns the 1-based position of val, or -1 */
   for (int i = 0; i < n; i++)  
    {  
        if (a[i] == val)  
        return i+1;  
    }  
  return -1;  
}  
int main() {  
  int a[] = {70, 40, 30, 11, 57, 41, 25, 14, 52}; 
  int val = 41; 
  int n = sizeof(a) / sizeof(a[0]); 
  int res = linearSearch(a, n, val); 
  printf("The elements of the array are - ");  
  for (int i = 0; i < n; i++)  
  printf("%d ", a[i]);   
  printf("\nElement to be searched is - %d", val);  
  if (res == -1)  
  printf("\nElement is not present in the array");  
  else  
  printf("\nElement is present at %d position of array", res);  
  return 0;  
} 
Program to implement linear search in C++:
#include <iostream>  
using namespace std;  
int linearSearch(int a[], int n, int val) {  
  for (int i = 0; i < n; i++)  
    {  
        if (a[i] == val)  
        return i+1;  
    }  
  return -1;  
}  
int main() {  
  int a[] = {69, 39, 29, 10, 56, 40, 24, 13, 51}; 
  int val = 56; 
  int n = sizeof(a) / sizeof(a[0]); 
  int res = linearSearch(a, n, val); 
  cout<<"The elements of the array are - ";  
  for (int i = 0; i < n; i++)  
  cout<<a[i]<<" ";    
  cout<<"\nElement to be searched is - "<<val;    
  if (res == -1)  
  cout<<"\nElement is not present in the array";  
  else  
  cout<<"\nElement is present at "<<res<<" position of array";  
  return 0;  
}  

Complexity Analysis:

Time complexity: A single loop over the n elements is used, so the time complexity is O(n).

Space complexity: The solution uses only a constant amount of extra memory, so the space complexity is O(1).

Binary search

Where linear search can be very time-consuming, binary search takes far less time on a sorted list. It applies the divide and conquer approach: we compare the middle element with the target, discard the half that cannot contain it, and repeat the check on the remaining half. This process continues until the element is found or the range becomes empty.

Example:

#include<stdio.h>
void sort(int n,int a[]);
void search(int low,int high,int a[],int s);
int main()
{
	int n,s;
	printf("enter the size:\n");
	scanf("%d",&n);
	int a[n];
	printf("enter the elements:\n");
	for(int i=0;i<n;i++)
	{
		scanf("%d",&a[i]);
	}
	sort(n,a);
	printf("entered sorted elements:");
	for(int i=0;i<n;i++)
	{
		printf("%d ",a[i]);
	}
	printf("\nenter the search element:\n");
	scanf("%d",&s);
	search(0,n-1,a,s);
}
void sort(int n,int a[])
{
	int i,j,temp;
	for(i=0;i<n;i++)
	{
		for(j=i+1;j<n;j++)
		{
			if(a[i]>a[j])
			{
				temp=a[i];
				a[i]=a[j];
				a[j]=temp;
			}
		}
	}
}
void search(int low,int high,int a[],int s)
{
	int mid;
	if(low>high)
	{
		printf("search is not successful");
		return;
	}
	mid=(low+high)/2;
	if(a[mid]==s)
	{
		printf("search successful\nposition:%d",mid+1);
	}
	else if(s<a[mid])
	{
		search(low,mid-1,a,s);
		
	}
	else if(s>a[mid])
	{
		search(mid+1,high,a,s);
	}
}

Complexity Analysis:

Time complexity: The recursion continues until we find the element or the range becomes empty, and each call halves the search range. So, the complexity is O(log n).

Space complexity: Apart from the recursion stack, this solution uses constant extra memory, so the auxiliary space is O(1); the recursion itself uses O(log n) stack frames.
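
If we want to avoid the recursion stack entirely, binary search can also be written iteratively. The sketch below is one possible version in C (the function name binarySearch and the 0-based index it returns are choices made for this sketch):

#include <stdio.h>

/* Iterative binary search on a sorted array; returns the 0-based index of val, or -1. */
int binarySearch(int a[], int n, int val)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;   /* same as (low+high)/2 but avoids overflow */
        if (a[mid] == val)
            return mid;
        else if (a[mid] < val)
            low = mid + 1;                  /* discard the left half */
        else
            high = mid - 1;                 /* discard the right half */
    }
    return -1;
}

int main()
{
    int a[] = {1, 5, 9, 14, 20, 25, 36};
    int n = sizeof(a) / sizeof(a[0]);
    printf("%d\n", binarySearch(a, n, 20)); /* prints 4 */
    return 0;
}

Here the loop keeps shrinking the range [low, high] until the element is found or the range becomes empty, so only a constant amount of extra memory is used.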

Applications of binary search    

Binary search is used in many problems, but it has an important precondition: the array must be sorted. If the array is not sorted, we cannot apply binary search directly, so we first need to check whether the array is sorted (or sort it). Even with this restriction, binary search appears very often in real-world problems.
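
For example, a quick check that the array is already sorted in ascending order could look like the following sketch in C (the helper name isSorted is just illustrative):

#include <stdio.h>
#include <stdbool.h>

/* Returns true if a[0..n-1] is in ascending (non-decreasing) order. */
bool isSorted(int a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        if (a[i - 1] > a[i])
            return false;
    }
    return true;
}

int main()
{
    int a[] = {1, 5, 9, 14, 20, 25, 36};
    int n = sizeof(a) / sizeof(a[0]);
    if (isSorted(a, n))
        printf("The array is sorted, so binary search can be applied\n");
    else
        printf("The array must be sorted before applying binary search\n");
    return 0;
}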

Sorting

Now, let us move on to sorting. Sorting is also an essential concept to learn in DSA. Let us take an example to understand the fundamentals of sorting.

For example:  

We have an array such as [2, 5, 9, 7, 6, 1, 3, 8, 0]. Sorting means arranging these numbers in some order (ascending or descending). After sorting in ascending order, the array becomes [0, 1, 2, 3, 5, 6, 7, 8, 9].

We can use different algorithms for sorting; the simplest one is bubble sort. Let us discuss a few sorting algorithms:

1. Bubble sort: It is one of the simplest sorting algorithms, but it has a drawback: it requires two nested loops, so its time complexity is O(n^2). We repeatedly compare adjacent elements and swap them if they are out of order; after each pass, the largest remaining element settles at the end of the array. Because of its cost, this algorithm is generally avoided in problem-solving. Here is the code for it:

#include<stdio.h>
int main()
{
	int a[100],n;   /* allow up to 100 elements */
	printf("enter the number of elements\n");
	scanf("%d",&n);
	printf("enter the elements\n");
	for(int i=0;i<n;i++)
	{
		scanf("%d",&a[i]);
	}
	for(int i=0;i<n-1;i++)
	{
		for(int j=0;j<n-i-1;j++)
		{
			if(a[j]>a[j+1])
			{
				int temp;
				temp=a[j];
				a[j]=a[j+1];
				a[j+1]=temp;
			}
		}
	}
	printf("sorted elements\n");
	for(int i=0;i<n;i++)
	{
		printf("%d\n",a[i]);
	}
	return 0;
}

2. Selection sort: It is similar to bubble sort but can be preferable in some cases because it performs far fewer swaps. Whereas bubble sort builds the sorted portion at the end of the array, selection sort repeatedly picks the minimum of the unsorted portion and places it at the front. Its time complexity is also O(n^2). Here is the code for it:

#include<stdio.h>
void sort(int a[],int n);
int main()
{
	int a[100],n;
	printf("enter the number of elements:\n");
	scanf("%d",&n);
	printf("enter the elements:\n");
	for(int i=0;i<n;i++)
	{
		scanf("%d",&a[i]);
	}
	sort(a,n);
	
	return 0;
}
void sort(int a[],int n)
{
	for(int i=0;i<n-1;i++)
	{
		int min=i;
		int j=i+1;
		while(j<n)
		{
			if(a[j]<a[min])
			{
				min=j;
			}
			j++;
		}
		if(min!=i)
		{
			int temp;
			temp=a[i];
			a[i]=a[min];
			a[min]=temp;
		}
	}
		printf("sorted array:\n");
	for(int i=0;i<n;i++)
	{
		printf(" %d",a[i]);
		
	}
}

3. Quick sort: This algorithm is usually more efficient than bubble sort or selection sort; its average time complexity is O(n log n), although the worst case is O(n^2). It uses the divide and conquer approach: a pivot element is chosen, the array is partitioned so that smaller elements end up on one side of the pivot and larger elements on the other, and the two parts are then sorted recursively. Here is the code to implement this approach:

#include <stdio.h>
 
void quicksort (int [], int, int);
 
int main()
{
    int list[50];
    int size, i;
 
    printf("Enter the number of elements: ");
    scanf("%d", &size); 
    printf("Enter the elements to be sorted:\n");
    for (i = 0; i < size; i++)
    {
        scanf("%d", &list[i]);
    } 
    quicksort(list, 0, size - 1);
    printf("After applying quick sort\n");
    for (i = 0; i < size; i++)
    {
        printf("%d ", list[i]);
    }
    printf("\n");
 
    return 0;
}
void quicksort(int list[], int low, int high)
{
    int pivot, i, j, temp;
    if (low < high)
    {
        pivot = low;
        i = low;
        j = high;
        while (i < j) 
        {
            while (i <= high && list[i] <= list[pivot])
            {
                i++;
            }
            while (j >= low && list[j] > list[pivot])
            {
                j--;
            }
            if (i < j)
            {
                temp = list[i];
                list[i] = list[j];
                list[j] = temp;
            }
        }
        temp = list[j];
        list[j] = list[pivot];
        list[pivot] = temp;
        quicksort(list, low, j - 1);
        quicksort(list, j + 1, high);
    }
}

4. Merge sort: Merge sort is one of the most widely used sorting algorithms and is very efficient, with a time complexity of O(n log n). It also uses the divide and conquer approach: the array is split into two halves, each half is split again, and this continues until every part contains a single element; the parts are then merged back together in sorted order. Here is the code for the implementation of this approach:

#include<stdio.h>


void merge(int [],int ,int );
void sort(int [],int ,int ,int );
int main()
{
	int a[100],n;
	printf("enter the number of elements\n");
	scanf("%d",&n);
	printf("enter the elements\n");
	for(int i=0;i<n;i++)
	{
	scanf("%d",&a[i]);
	}
	merge(a,0,n-1);
	printf("sorted array:\n");
	for(int i=0;i<n;i++)
	{
	printf("%2d",a[i]);
	}
	return 0;
}
void merge(int a[],int p,int q)   /* recursively splits a[p..q] and merges the sorted halves */
{
	if(p<q)
	{
		int r=(p+q)/2;
		merge(a,p,r);
		merge(a,r+1,q);
		sort(a,p,q,r);
	}
}
void sort(int a[],int p,int q,int r)   /* merges the sorted halves a[p..r] and a[r+1..q] */
{
	int b[100];
	int i=p;
	int j=r+1;
	int k=p;
	while(i<=r&&j<=q)
	{
		if(a[i]<=a[j])
		{
			b[k]=a[i];
			i++;
		}
		else
		{
			b[k]=a[j];
			j++;
		}
		k++;
	}
	while(i<=r)      /* copy any remaining elements of the left half */
	{
		b[k]=a[i];
		i++;
		k++;
	}
	while(j<=q)      /* copy any remaining elements of the right half */
	{
		b[k]=a[j];
		j++;
		k++;
	}
	for(k=p;k<=q;k++)
	{
		a[k]=b[k];
	}
}

There are also many other sorting approaches, such as Radix Sort, Counting Sort, Bucket Sort, Shell Sort, Comb Sort, Pigeonhole Sort, Cycle Sort, Insertion Sort, etc.
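
Of these, insertion sort is usually the first one to learn. The following is a small sketch in C (sorting in ascending order; the function name insertionSort is only illustrative):

#include <stdio.h>

/* Insertion sort: grow a sorted prefix by inserting a[i] into its correct place. */
void insertionSort(int a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key)
        {
            a[j + 1] = a[j];   /* shift larger elements one position to the right */
            j--;
        }
        a[j + 1] = key;
    }
}

int main()
{
    int a[] = {2, 5, 9, 7, 6, 1, 3, 8, 0};
    int n = sizeof(a) / sizeof(a[0]);
    insertionSort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    return 0;
}

Like bubble sort and selection sort, its worst-case time complexity is O(n^2), but it is very fast on nearly sorted input.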

Greedy Algorithm

The name "greedy" gives a hint about how this algorithm works. In this approach, we consider only the current situation and pick the choice that looks best right now, that is, the locally optimal choice. The resulting solution is not always globally optimal, so this approach helps in some problems but not in all of them. It works in a top-down fashion: it makes a choice first and then solves the remaining subproblem. Well-known examples of the greedy approach are Prim's algorithm, Kruskal's algorithm, and Dijkstra's algorithm.

Code for Dijkstra's algorithm:

#include <limits.h>
#include <stdio.h>
#include <stdbool.h>
#define V 9
int minDistance(int dist[], bool sptSet[])
{
	// Initialize min value
	int min = INT_MAX, min_index;


	for (int v = 0; v < V; v++)
		if (sptSet[v] == false && dist[v] <= min)
			min = dist[v], min_index = v;


	return min_index;
}
void printSolution(int dist[])
{
	printf("Vertex \t\t Distance from Source\n");
	for (int i = 0; i < V; i++)
		printf("%d \t\t %d\n", i, dist[i]);
}
void dijkstra(int graph[V][V], int src)
{
	int dist[V]; 


	bool sptSet[V]; 
	for (int i = 0; i < V; i++)
		dist[i] = INT_MAX, sptSet[i] = false;
	dist[src] = 0;
	for (int count = 0; count < V - 1; count++) {
	
		int u = minDistance(dist, sptSet);
		sptSet[u] = true;
		for (int v = 0; v < V; v++)
			if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
				&& dist[u] + graph[u][v] < dist[v])
				dist[v] = dist[u] + graph[u][v];
	}
	printSolution(dist);
}
int main()
{
int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
			{ 4, 0, 8, 0, 0, 0, 0, 11, 0 },
			{ 0, 8, 0, 7, 0, 4, 0, 0, 2 },
			{ 0, 0, 7, 0, 9, 14, 0, 0, 0 },
			{ 0, 0, 0, 9, 0, 10, 0, 0, 0 },
			{ 0, 0, 4, 14, 10, 0, 2, 0, 0 },
			{ 0, 0, 0, 0, 0, 2, 0, 1, 6 },
			{ 8, 11, 0, 0, 0, 0, 1, 0, 7 },
			{ 0, 0, 2, 0, 0, 0, 6, 7, 0 } };


	dijkstra(graph, 0);


	return 0;
}

Dynamic programming

Whereas a greedy algorithm considers only the current situation, dynamic programming aims for a globally optimal solution. It typically uses an array (or table) to store the results of subproblems and builds the final answer from this stored data. Dynamic programming is commonly used for problems such as the longest common subsequence, the longest increasing subsequence, and computing Fibonacci numbers.

Code for Fibonacci number:

#include<stdio.h>


int fib(int n)
{
int f[n+2]; 
int i;
f[0] = 0;
f[1] = 1;


for (i = 2; i <= n; i++)
{
	f[i] = f[i-1] + f[i-2];
}


return f[n];
}


int main ()
{
int n = 9;
printf("%d", fib(n));
getchar();
return 0;
}

Divide and Conquer

Divide and conquer is a widely used technique. Many searching and sorting algorithms, such as quick sort, merge sort, and binary search, are based on it. The idea is straightforward; the steps are as follows, with a small sketch after them.

Step 1: Start with the full (large) input.

Step 2: Divide this large input into smaller subproblems.

Step 3: Solve each of these small subproblems.

Step 4: Assemble the answers of the subproblems.

Step 5: The assembled result is the final output.
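
As a small illustration of these steps, the sketch below finds the maximum element of an array using divide and conquer (the function name findMax is only illustrative):

#include <stdio.h>

/* Find the maximum of a[low..high] by splitting the range into two halves. */
int findMax(int a[], int low, int high)
{
    if (low == high)                /* a single element is its own maximum */
        return a[low];
    int mid = (low + high) / 2;     /* Step 2: divide */
    int leftMax = findMax(a, low, mid);         /* Step 3: solve the left half  */
    int rightMax = findMax(a, mid + 1, high);   /* Step 3: solve the right half */
    return (leftMax > rightMax) ? leftMax : rightMax;   /* Step 4: combine */
}

int main()
{
    int a[] = {25, 36, 1, 24, 85, 2, 3, 14};
    int n = sizeof(a) / sizeof(a[0]);
    printf("%d\n", findMax(a, 0, n - 1));   /* prints 85 */
    return 0;
}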

Backtracking

In this approach, we systematically try every possible way of building a solution, and as soon as a partial solution cannot lead to a valid result, we undo the last choice (backtrack) and try another. The famous backtracking problems are the N-Queens problem and the knight's tour problem; working through them is a good way to understand backtracking.

Question: The N-Queens problem asks us to place N queens on an N*N chessboard so that no two queens attack each other. A queen attacks along all 8 directions (left, right, up, down, and the 4 diagonals). Given the value of N, you have to print all the valid chessboard configurations possible.

Example:

import java.io.*;


import java.util.*;


public class Main {


  public static void main(String[] args) throws Exception {
    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
    int n = Integer.parseInt(br.readLine());
    int[][] chess = new int[n][n];
    printNQueens(chess, "", 0);
  }


  public static void printNQueens(int[][] chess, String qsf, int row) {
    if (row == chess.length) {
      System.out.println(qsf + ".");
      return;
    }
    for (int col = 0; col < chess.length; col++) {
      if (chess[row][col] == 0
          && isQueenSafe(chess, row, col) == true) {
        chess[row][col] = 1;
        printNQueens(chess,
                     qsf + row + "-" + col + ", ", row + 1);
        chess[row][col] = 0;
      }
    }
  }


  public static boolean isQueenSafe(int[][] chess,
                                    int row, int col) {
    for (int i = row - 1, j = col - 1;
         i >= 0 && j >= 0; i--, j--) {
      if (chess[i][j] == 1) {
        return false;
      }
    }


    for (int i = row - 1, j = col; i >= 0; i--) {
      if (chess[i][j] == 1) {
        return false;
      }
    }


    for (int i = row - 1, j = col + 1; i >= 0
         && j < chess.length; i--, j++) {
      if (chess[i][j] == 1) {
        return false;
      }
    }


    for (int i = row, j = col - 1; j >= 0; j--) {
      if (chess[i][j] == 1) {
        return false;
      }
    }


    return true;
  }
}

Question: We are given a number n that represents the size of a chessboard, together with a row and a column that give the starting square of a knight. We are required to generate all tours in which the knight, starting at (row, col), visits every cell of the board exactly once.

The knight moves according to the usual rules of chess. Explore the next moves in clockwise order to get the same output order as ours.

Example:

import java.io.*;
import java.util.*;

public class Main {

  public static void main(String[] args) throws Exception {
    // Input format assumed here: n, then the starting row, then the starting column.
    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
    int n = Integer.parseInt(br.readLine());
    int row = Integer.parseInt(br.readLine());
    int col = Integer.parseInt(br.readLine());
    int[][] chess = new int[n][n];
    printKnightsTour(chess, row, col, 1);
  }

  public static void printKnightsTour(int[][] chess, int r, int c, int move) {
    // Outside the board or already visited: this path cannot continue, so backtrack.
    if (r < 0 || c < 0 || r >= chess.length || c >= chess.length || chess[r][c] > 0) {
      return;
    }
    // Last move placed: every cell has been visited exactly once, so print this tour.
    if (move == chess.length * chess.length) {
      chess[r][c] = move;
      displayBoard(chess);
      chess[r][c] = 0;
      return;
    }
    chess[r][c] = move;   // mark the current cell with its move number
    // Try the 8 knight moves in clockwise order.
    printKnightsTour(chess, r - 2, c + 1, move + 1);
    printKnightsTour(chess, r - 1, c + 2, move + 1);
    printKnightsTour(chess, r + 1, c + 2, move + 1);
    printKnightsTour(chess, r + 2, c + 1, move + 1);
    printKnightsTour(chess, r + 2, c - 1, move + 1);
    printKnightsTour(chess, r + 1, c - 2, move + 1);
    printKnightsTour(chess, r - 1, c - 2, move + 1);
    printKnightsTour(chess, r - 2, c - 1, move + 1);
    chess[r][c] = 0;      // unmark the cell while backtracking
  }

  public static void displayBoard(int[][] chess) {
    for (int i = 0; i < chess.length; i++) {
      for (int j = 0; j < chess.length; j++) {
        System.out.print(chess[i][j] + " ");
      }
      System.out.println();
    }
    System.out.println();
  }
}