Tuesday, March 6, 2012

Running Time Analysis (Big O notation)


"Big O" notation is the language we use for articulating how long an algorithm takes to run. It is a way to compare the efficiency 
of different solutions/algorithms to a problem. However "Big O" represents the worst case analysis means "The supposed algorighm
can't be slower than this upper bound time".
This means: Algo_Running_Time <= c(f(n)). Where c is some variable for all input size "n".

There are other notations for asymptotic analysis too:
    Big-O :     upper bound of running time (Running_Time <= c*f(n)); commonly used for worst-case analysis. See proper definition below.
    Big-Theta : a tight bound (c1*f(n) <= Algo_Running_Time <= c2*f(n), where c1 and c2 are some constants, for all sufficiently large n).
                Big Theta says that the running time of our algorithm is bounded ABOVE AND BELOW by some multiple of f(n). See proper definition below.
    Big-Omega : lower bound of running time; commonly used for best-case analysis. See proper definition below.
Formal definitions of the above asymptotic notations:
    Big-O : The function g(n) is O(f(n)) if there exist a positive real constant c and a positive integer n0 such that g(n) ≤ cf(n) for all n > n0.
    Big-Ω : The function g(n) is Ω(f(n)) if there exist a positive real constant c and a positive integer n0 such that g(n) ≥ cf(n) for all n > n0.
    Big-Θ : The function g(n) is Θ(f(n)) if there exist two positive real constants c1 and c2 and a positive integer n0 such that c1f(n) ≤ g(n) ≤ c2f(n) for all n > n0.
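For example, g(n) = 3n + 5 is O(n): take c = 4 and n0 = 5, since 3n + 5 ≤ 4n whenever n ≥ 5. A minimal sketch checking this numerically (the class name, method name, and the bound of 1,000,000 are my own choices for illustration):

```java
public class BigODefinitionCheck {

    // Checks g(n) = 3n + 5 <= c * n for all n in (n0, limit]
    static boolean boundHolds(int c, int n0, int limit) {
        for (int n = n0 + 1; n <= limit; n++) {
            if (3L * n + 5 > (long) c * n) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // With c = 4 and n0 = 5, the bound holds, so g(n) = 3n + 5 is O(n)
        System.out.println(boundHolds(4, 5, 1_000_000));  // true
        // With c = 3 it fails: 3n + 5 > 3n for every n, so c = 3 is too small
        System.out.println(boundHolds(3, 5, 1_000_000));  // false
    }
}
```

Of course, a finite loop is not a proof; it just illustrates what the constants c and n0 in the definition mean.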

As programmers, we usually focus on the worst-case scenario. Big-O analysis is very useful as it helps a programmer predict an algorithm's
running time without having to implement it and run benchmarks.

1) O(1) : The function below runs in O(1) time (or "constant time") relative to its input. The input array could have 1 item or 1000 items,
          but this function requires just one "step."

          public void printFirst(int[] arrayOfItems) {
              System.out.println(arrayOfItems[0]);  // One step, regardless of array size
          }

2) O(n) : The function below runs in O(n) time (or "linear time"), where "n" is the number of items in the array. If the array has 10 items,
          we print 10 times. If it has 1000 items, we print 1000 times.

          public void printAll(int[] arrayOfItems) {
              for (int item : arrayOfItems) {  // This loop runs once per array element
                  System.out.println(item);
              }
          }

3) O(n^2) : The function below runs in O(n^2) time. Here we are nesting two loops. If our array has "n" items, our outer loop runs "n" times and
            our inner loop runs "n" times for each iteration of the outer loop, giving us n*n = n^2 steps.
            O(n^2) is also called "quadratic time". If the array has 10 items, we print 100 times. If it has 1000 items, we print 1000000 times.

            public void printAllOrderedPairs(int[] arrayOfItems) {
                for (int firstItem : arrayOfItems) {
                    for (int secondItem : arrayOfItems) {
                        int[] orderedPair = new int[]{firstItem, secondItem};
                        System.out.println(orderedPair[0] + ", " + orderedPair[1]);
                    }
                }
            }

4) NOTE: When computing run time in Big O notation, we ignore the constants. See below:
        public void printAllThenAllPairSums(int[] arrayOfNumbers) {
            System.out.println("These are the numbers:");
            for (int number : arrayOfNumbers) {   // O(n)
                System.out.println(number);
            }
            System.out.println("And these are their pairwise sums:");
            for (int firstNumber : arrayOfNumbers) {  // O(n)
                for (int secondNumber : arrayOfNumbers) {  // O(n)
                    System.out.println(firstNumber + secondNumber);  // This line executes O(n^2) times
                }
            }
        }
    Here our runtime is O(n + n^2), which we just call O(n^2).
    Even if it were O(n^2/2 + 100n), it would still be O(n^2). This is because as "n" grows towards infinity, constant factors and lower-order terms become insignificant.

    O(n^3 + 50n^2 + 10000)  ===>  O(n^3)
    O((n + 30) * (n + 5))   ===>  O(n^2)
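The drop-the-constants rule can also be checked empirically. Below is a minimal sketch (class and method names are my own) that counts the "steps" performed by a function shaped like printAllThenAllPairSums above, one O(n) loop followed by O(n^2) nested loops:

```java
public class StepCounter {

    // Counts the print "steps" of one linear loop plus one nested pair of loops
    static long countSteps(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++) {        // first loop: n steps
            steps++;
        }
        for (int i = 0; i < n; i++) {        // nested loops: n * n steps
            for (int j = 0; j < n; j++) {
                steps++;
            }
        }
        return steps;                        // n + n^2 in total
    }

    public static void main(String[] args) {
        // As n grows, the n^2 term dominates, which is why O(n + n^2) = O(n^2)
        System.out.println(countSteps(100));    // 10100  (10000 of which come from the nested loops)
        System.out.println(countSteps(1000));   // 1001000
    }
}
```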

5) O(2^n) : O(2^n) denotes an algorithm whose number of steps doubles with each addition to the input data set. The growth curve of an O(2^n) function is
            exponential: it starts very shallow, then rises meteorically.
            An example of an O(2^n) function is the naive recursive calculation of Fibonacci numbers:

        int fibonacci(int number) {
            if (number <= 1) return number;
            return fibonacci(number - 2) + fibonacci(number - 1);
        }
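The exponential blow-up comes from recomputing the same subproblems over and over. Caching each result (memoization) brings the running time down to O(n); a sketch, with class name and cache structure chosen by me:

```java
import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Each value 0..n is computed exactly once, so this runs in O(n)
    static long fibonacci(int number) {
        if (number <= 1) return number;
        Long cached = cache.get(number);
        if (cached != null) return cached;
        long result = fibonacci(number - 2) + fibonacci(number - 1);
        cache.put(number, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(50));  // returns instantly; the naive O(2^n) version would take minutes
    }
}
```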


Common Running Times and Scenarios:
O(1)        Accessing an array element by index (random access).
            Insertion and deletion of the head element in linked lists.
            Peeking at the root element (min/max) in heaps.
O(log n)    Binary search in a sorted array (it is actually log base 2 of n, as binary search halves the range at each step).
            Insertion, deletion, and lookup in balanced binary search trees (each step halves the search space).

O(n)        Linked list traversal.
            Linear search operations.
O(n log n)  Merge Sort; Quick Sort (best and average case).
O(n^2)      Shell Sort, Insertion/Selection Sort, Bubble Sort; Quick Sort (worst case).

O(2^n)      Very bad. Finding the exact solution to the travelling salesman problem using dynamic programming; naive recursive Fibonacci.
O(n!)       Solving the travelling salesman problem via brute-force search; generating all permutations.

Big-O follows a hierarchy that goes like this:
    O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n) < O(n!)   (In O(n^2) and O(2^n), the 2 can be replaced by any constant greater than 1 and the ordering still holds.)
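To illustrate the O(log n) entry in the table above, here is a minimal iterative binary search sketch (class and method names are my own). Each iteration halves the remaining range, so an array of n items needs about log base 2 of n iterations:

```java
public class BinarySearch {

    // Returns the index of target in a sorted array, or -1 if absent.
    // Each iteration halves the [lo, hi] range, giving O(log n) iterations.
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // written this way to avoid int overflow in lo + hi
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {2, 3, 5, 7, 11, 13, 17};
        System.out.println(binarySearch(data, 11));  // 4
        System.out.println(binarySearch(data, 6));   // -1
    }
}
```

A sorted array of 1000 items needs at most about 10 comparisons (2^10 = 1024), versus up to 1000 for the O(n) linear search in the table.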

Graph of running time for various common Big O algorithms: [images to be uploaded]
