## What is Complexity? Explain Complexity with C examples.

Asymptotic complexity is a way of expressing the cost of an algorithm using idealized units of computational work.

In order to choose the best algorithm for a task, many factors have to be considered, such as how long the algorithm will take to run and how much memory it will use while running.

Asymptotic notation is a way of estimating the cost of an algorithm. Its main goal is to abstract away machine- and implementation-specific details so that the algorithm can be analysed and compared easily.

The Big-O notation measures efficiency based on the time it takes for the algorithm to run as a function of the input size, i.e. the parameter passed to the function. It is an upper-bound function.

Big-O Notation (O) may be denoted by the following expression:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0 }

The Big-Omega notation is similar to Big-O notation except that it is a lower bound function. It describes the best that can happen for a given data size.

Omega Notation may be denoted by the following expression:

Omega(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0 }

The Theta notation denotes that the function f(n) is bounded by the function g(n) from both the top and bottom.

Theta Notation may be denoted by the following expression:

Theta(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }

The Little-o Notation represents a loose (non-tight) version of Big-O. The function g(n) bounds f(n) from above but not from below, and the bound is never tight: f(n) grows strictly more slowly than g(n).

Little oh Notation (o) may be denoted by the following expression:

o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 such that 0 <= f(n) < c*g(n) for all n >= n0 }

The Little-omega Notation represents a loose (non-tight) version of Big-Omega. The function g(n) bounds f(n) from below but not from above, and the bound is never tight: f(n) grows strictly faster than g(n).

Little omega Notation (w) may be denoted by the following expression:

w(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n0 such that 0 <= c*g(n) < f(n) for all n >= n0 }

Any problem in computer science generally has more than one solution, and these solutions come in the form of algorithms. It is necessary to measure the efficiency of these algorithms so that the best one may be adopted. The Big-O notation provides a basis for measuring that efficiency.


The utility of Big-O notation may be best explained by considering two different algorithms performing the same task. The task to be performed is to find the largest element in the array provided by the user.

The first algorithm assigns the first element of the array to a variable and then compares the remaining elements against that variable, updating it whenever a larger element is discovered.

The function following this algorithm is as follows:

```c
int findlarge1(int ar[10])
{
    /* Variable 'large' is initialized with the first value of the array. */
    int large = ar[0];
    for (int i = 0; i < 10; i++) {
        if (ar[i] > large)
            large = ar[i];
    }
    return large;
}
```

The second algorithm checks every element of the array against every other element. The largest element is found when all other elements of the array are smaller than it.

The function following this algorithm is as follows:

```c
#include <stdbool.h>

int findlarge2(int ar[10])
{
    bool large;
    int i;  /* declared outside the loop because ar[i] is used after it */
    for (i = 0; i < 10; i++) {
        /* 'large' is a boolean flag, reset to true before the comparisons
           for each element begin.  It is set to false if an element larger
           than ar[i] is detected in the array; it stays true only if every
           element of the array is smaller than or equal to ar[i]. */
        large = true;
        for (int j = 0; j < 10; j++) {
            if (ar[i] < ar[j])
                large = false;
        }
        if (large)
            break;
    }
    return ar[i];
}
```

Big-O analysis expresses an algorithm's efficiency by counting how many times the n input elements are processed. Processing may take different forms: elements may be compared with, added to, or multiplied by other elements.

Here, the array size has been fixed at 10, but Big-O analysis may be performed on an array of any size. So, for generality, we take the number of elements to be n.

An algorithm runs in linear time, written O(n), when the time it takes grows in direct proportion to the number of input elements. In other words, a function given 20 input elements will take roughly twice as long to complete as the same function given 10.

Considering the above two algorithms, a best case and a worst case may be estimated.

The best case is the scenario in which the function requires the shortest time to perform its task.

For the above two functions, this scenario occurs when the first element is the largest element of the array.

For findlarge1, the largest element is stored in 'large' as soon as the first element of the array is assigned to it. However, every element still has to be compared with 'large' to verify this. So the time complexity of the algorithm is O(n), since every element is processed once.

For findlarge2, the first element is compared with all other elements of the array to determine whether it is the largest. Since it is, the loop breaks and no further passes are made. So the time complexity of the algorithm is O(n), since every element is processed once.

Thus, it is seen that in the best case, both algorithms have the same time complexity, O(n).

In the worst-case scenario, the algorithm takes the longest possible time to achieve its task.

For the above two functions, this scenario occurs when the last element is the largest element of the array.

For findlarge1, every element of the array is compared with 'large', and 'large' is updated each time a bigger value is found, so the largest element ends up stored in 'large'. The time complexity of the algorithm is still O(n), since every element is processed once and only once.

For findlarge2, every element has to be compared individually with every other element in the array, and the largest element is detected only in the last pass. So the time complexity of the algorithm is O(n*n) = O(n^{2}), since each of the n elements is compared against all n elements.

Thus, for findlarge1 the complexity remains the same in the best and worst cases, but for findlarge2 the complexity changes from O(n) to O(n^{2}). This is because findlarge2 compares each element with every element of the array, and repeats this for all the elements. If the array contains 10 elements, then 10*10 = 100 comparisons are made before the result is obtained.

Note: For findlarge1, the assignment of the first element ar[0] to 'large' also counts as a processing step, so strictly the count is n+1 operations. However, Big-O is concerned with the running time of the function as the input size n tends to infinity, and as n approaches infinity the constant 1 becomes insignificant and is ignored. Similarly, in an expression such as n^{2}+n, the n term may be ignored because n^{2} dominates as n approaches infinity.

Comparing the two functions, findlarge1 is O(n) and findlarge2 is O(n^{2}). For an input of 1,000 elements (an assumed value), findlarge1 will process on the order of 1,000 elements, whereas findlarge2 will process on the order of 1,000*1,000 = 1,000,000. It is clear how much faster findlarge1 runs compared to findlarge2, especially when a huge number of input elements is involved.

Efficiency of an algorithm is very important as it makes a huge difference in the operation of the program and ultimately the entire system. It is important to know how to find the efficiency of an algorithm and how to create efficient algorithms.
