# How to Calculate Time Complexity in C?

## What is time complexity?

An algorithm's time complexity measures how the time it takes to complete a task grows in relation to the size of the input. Note that time complexity is expressed in terms of the length of the input rather than the machine's actual processing speed.

As in other areas of life, there are many approaches to solving a problem in computer programming. Because these approaches can differ in running time, memory use, or any other measure you choose, we need a way to compare their effectiveness in order to pick the best one.

As we may already know, computers solve problems using algorithms.

Algorithms can accomplish the same task in very different ways. In the extreme case, several algorithms, implemented in different programming languages and running on machines with different hardware and operating systems, may all perform the same job, each in its own way.

Fortunately, we don't need to watch an algorithm run to judge whether it can finish a task quickly or whether a large input will overwhelm it. When evaluating an algorithm's complexity, we are not concerned with the exact number of operations performed; rather, we care about how the number of operations grows with the size of the problem.

Time complexity is a measure of how many times a statement is executed. The actual wall-clock time needed to run a piece of code is not the algorithm's time complexity, because that depends on external factors such as the operating system, programming language, and processor speed.

The idea behind time complexity is to measure an algorithm's execution cost in a way that depends only on the algorithm itself and its input.

### Big O Notation

We use a language called "Big O notation" to describe how time-consuming an algorithm is. It is how we compare the efficiency of different solutions to a problem, and it helps us make better decisions.

Big O notation describes an algorithm's run time in terms of how quickly it grows as the input size (referred to as "n") grows.

Thus, if we say that an algorithm's runtime grows "on the order of the size of the input," we write it as O(n). If an algorithm's execution time grows "on the order of the square of the size of the input," we write O(n²).

Knowing the rates at which run times can grow is essential to understanding time complexity. The unit of measurement here is time taken per input size. There are several common classes of time complexity.

Let’s see them with some examples:

Example 1:

```c
#include <stdio.h>

int main(void)
{
    printf("Statement Executed\n");
    return 0;
}
```

Output:

`Statement Executed`

The string "Statement Executed" is printed only once in the code above.

Therefore, regardless of the operating system or machine configuration, the time complexity is constant: O(1). That is, a constant amount of time is needed to execute the code every time it runs.

Irrespective of the size of the input, an algorithm with constant time complexity always runs in the same amount of time. For instance, a single statement executes once no matter how large the input is.

Example 2:

```c
#include <stdio.h>

int main(void)
{
    int i, n = 10;
    for (i = 1; i <= n; i++) {
        printf("Statement Executed\n");
    }
    return 0;
}
```

Output:

```
Statement Executed
Statement Executed
Statement Executed
Statement Executed
Statement Executed
Statement Executed
Statement Executed
Statement Executed
Statement Executed
Statement Executed
```

Because n is a variable, "Statement Executed" is displayed n times (here, 10 times) in the code above.

Therefore, the time complexity is linear: O(n). The execution time grows in direct proportion to the input size.

If an algorithm's time complexity is linear, its runtime increases linearly with the input size. Iterating through an array is the typical example: the more elements the array contains, the longer the loop takes.

Example 3:

```c
#include <stdio.h>

int main(void)
{
    int i, n = 10;
    for (i = 1; i <= n; i = i * 2) {
        printf("Statement Executed\n");
    }
    return 0;
}
```

Output:

```
Statement Executed
Statement Executed
Statement Executed
Statement Executed
```

Here i doubles on every iteration (i = 1, 2, 4, 8), so the loop body runs only about log₂ n times. The time complexity is therefore logarithmic: O(log n). Doubling (or halving) the loop variable on each step is the classic signature of logarithmic growth.

The best time complexity for a given problem is the one that grows the slowest asymptotically as N (e.g., the number of elements, nodes, users, or transactions) becomes very large.

For example, if the problem is to search for an item in a sorted list, there are multiple ways to do it, and an approach with slower-growing time complexity is better than one whose complexity grows faster. Given 1 trillion elements in a sorted list with random access (i.e., an array), linear search is O(N), with a worst case of 1 trillion comparisons, while binary search is O(log₂ N), with a worst case of about 40 comparisons.

Binary search clearly has a better time complexity than linear search. For this searching problem, O(log₂ N) is the best achievable time complexity, assuming you choose the right algorithm.

Of course, O(1) is the best overall, because it doesn't depend on N at all, but few problems can be reduced to constant time. Searching for an item in a sorted list, for example, cannot be done in O(1): as N grows large, the time complexity of the search is that of the algorithm you use (e.g., O(N) for linear search, O(log₂ N) for binary search).

So the best achievable time complexity varies from one type of problem to another; there is no single best complexity that all algorithms can achieve.