Understanding Algorithms: Complexity and Performance

Big O Notation: Space and Time Complexity


The runtime of an algorithm is analyzed in three ways: Big O notation, Theta notation, and Omega notation. This article mainly explains Big O notation, which describes the worst-case time complexity of an algorithm, that is, its behavior when very large inputs are given to the program. Big O notation is expressed in terms of the size of the input to the algorithm, generally as O(N), where N is the size of the input.

The mathematical definition of Big O notation is as follows:

The function f is said to be O(g) (read "big-oh of g"), where f and g are functions from the set of natural numbers to itself, if there is a constant c > 0 and a natural number n0 such that f(n) ≤ c·g(n) for all n ≥ n0.

Big O notation gives the upper bound of the function as shown below:

f(n) = O(g(n)) if there exists a positive integer n0 and a positive constant c such that f(n) ≤ c·g(n) ∀ n ≥ n0.
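For example, f(n) = 3n + 2 is O(n): choosing c = 4 and n0 = 2 makes 3n + 2 ≤ 4n for all n ≥ 2. A minimal numerical check of these witnesses (the function names are illustrative, not from a standard library):

```python
# Check that f(n) = 3n + 2 satisfies f(n) <= c * g(n) for all n >= n0,
# with g(n) = n and the witnesses c = 4, n0 = 2.
def f(n):
    return 3 * n + 2

def g(n):
    return n

c, n0 = 4, 2
violations = [n for n in range(n0, 10_000) if f(n) > c * g(n)]
print(violations)  # [] -- the bound holds on this whole range
```

Of course, checking a finite range is only a sanity test; the definition requires the inequality for all n ≥ n0, which here follows algebraically since 3n + 2 ≤ 4n whenever n ≥ 2.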

Examples of Big O time complexities for common algorithms are as follows:


Linear Search

O(N): runtime grows linearly with N.


Binary Search

O(log N): runtime grows logarithmically with N.
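To make these first two entries concrete, the following sketch counts the comparisons each search performs on a sorted list (the step counters are illustrative additions, not part of the standard algorithms):

```python
# Count the comparisons made by linear search (O(N)) and binary
# search (O(log N)) when looking for the last element of a sorted
# list, to contrast the two growth rates.
def linear_search(items, target):
    steps = 0
    for i, value in enumerate(items):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

def binary_search(items, target):
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1       # discard the lower half
        else:
            hi = mid - 1       # discard the upper half
    return -1, steps

data = list(range(1_000_000))
_, linear_steps = linear_search(data, data[-1])
_, binary_steps = binary_search(data, data[-1])
print(linear_steps, binary_steps)  # 1000000 comparisons vs. about 20
```

Doubling the input doubles the work for linear search but adds only one extra comparison for binary search, which is exactly the difference between O(N) and O(log N).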


Selection Sort

O(N²): runtime grows quadratically, i.e., as a polynomial in N.
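The quadratic cost of selection sort comes directly from its two nested loops, as a short sketch shows:

```python
# Sketch of selection sort: the nested loops are the source of the
# O(N^2) behavior -- roughly N^2 / 2 comparisons in every case.
def selection_sort(items):
    items = list(items)            # work on a copy
    n = len(items)
    for i in range(n):             # outer loop: N passes
        smallest = i
        for j in range(i + 1, n):  # inner loop: scans the unsorted tail
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```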


Merge Sort

O(N log N): a superlinear algorithm; runtime grows slightly faster than N.


Heap Sort

O(N log N): a superlinear algorithm, like merge sort.
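The O(N log N) cost of merge sort can be read off a sketch of the algorithm: the list is halved about log N times, and each level of halving does O(N) work to merge the pieces back together.

```python
# Sketch of merge sort: about log N levels of splitting, with O(N)
# merging work per level, giving the O(N log N) total.
def merge_sort(items):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # sort each half recursively
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):  # O(N) merge per level
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([8, 3, 5, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```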


Tower of Hanoi

O(2^N): exponential runtime, which grows faster than any polynomial in N.
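The exponential bound follows from the recursive structure of the puzzle: moving N disks requires moving N − 1 disks aside, moving the largest disk, then moving the N − 1 disks back on top, for 2^N − 1 moves in total. A sketch that counts the moves:

```python
# Moving N disks in Tower of Hanoi takes 2^N - 1 moves, which is
# the source of the exponential O(2^N) runtime.
def hanoi_moves(n):
    if n == 0:
        return 0
    moves = hanoi_moves(n - 1)   # move n-1 disks to the spare peg
    moves += 1                   # move the largest disk to the target
    moves += hanoi_moves(n - 1)  # move n-1 disks back on top of it
    return moves

for n in (1, 5, 10, 20):
    print(n, hanoi_moves(n))     # each count equals 2**n - 1
```

Adding a single disk roughly doubles the total work, which is what distinguishes exponential growth from any polynomial.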


Factorial algorithm

O(N!): runtime grows faster than every other class listed here, and such algorithms are not considered efficient.

[Chart: comparison of the growth rates of these complexity classes. Source: GeeksforGeeks]


This article explained Big O notation analysis in detail. Big O notation gives an upper bound on the running time of a computer program, describes its behavior as the input grows large, and is therefore used as the worst-case time complexity of an algorithm.