
Asymptotic Analysis of Algorithms (Growth of Functions)

Asymptotic Notation

Asymptotic analysis of algorithms, often referred to as the study of the growth of functions, is a mathematical technique used to describe how the performance (time or space complexity) of an algorithm behaves as the input size approaches infinity. It lets computer scientists and programmers compare and analyze algorithms in a way that abstracts away constant factors and focuses on the most significant aspects of their efficiency. Asymptotic analysis is typically expressed using Big O notation, Big Theta notation, and Big Omega notation, together with their strict variants, little o and little omega. Here's an overview of these notations:

  1. Big O Notation (O):

    • Big O notation provides an upper bound on the growth rate of an algorithm's performance.
    • It is most often used to describe the worst-case scenario: the maximum amount of resources (time or space) an algorithm will use for any sufficiently large input.
    • Formally, an algorithm has a time or space complexity of O(g(n)) if, for sufficiently large input sizes, its performance f(n) is bounded above by a constant multiple of the function g(n):
f(n) ≤ c · g(n) for all n ≥ n0
    • Example: O(n^2) means the algorithm's performance grows quadratically with the input size.
    • { 100, log(2000), 10^4 } ⊆ O(1); { n/4, 2n+3, n/100 + log(n) } ⊆ O(n); { n^2+n, 2n^2, n^2+log(n) } ⊆ O(n^2). Note: because O gives an upper bound that need not be tight, these classes nest, O(1) ⊆ O(n) ⊆ O(n^2), so every function in O(1) also belongs to O(n) and O(n^2). A numeric check of the definition appears after the figure below.

[Figure: Big O bound graph, f(n) ≤ c·g(n) for n ≥ n0]
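As a quick sanity check of this definition, here is a minimal Python sketch; the witnesses c = 3 and n0 = 3 are hand-picked for illustration (they are not unique):

    # f(n) = 2n + 3 is O(n): f(n) <= c * g(n) for all n >= n0,
    # with the chosen witnesses c = 3 and n0 = 3.
    def f(n):
        return 2 * n + 3

    def g(n):
        return n

    c, n0 = 3, 3
    assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
    print("2n + 3 is O(n), witnessed by c = 3 and n0 = 3")
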
  2. Big Theta Notation (Θ):

    • Big Theta notation provides a tight bound on the growth rate of an algorithm's performance.
    • It describes both the upper and lower bounds on the algorithm's performance, providing a more precise characterization.
    • Formally, an algorithm has a time or space complexity of Θ(g(n)) if, for sufficiently large input sizes, its performance f(n) matches g(n) to within constant factors:
c1 · g(n) ≤ f(n) ≤ c2 · g(n) for all n ≥ n0
    • Example: Θ(n) means the algorithm's performance grows linearly with the input size.
    • { 100, log(2000), 10^4 } ⊆ Θ(1); { n/4, 2n+3, n/100 + log(n) } ⊆ Θ(n); { n^2+n, 2n^2, n^2+log(n) } ⊆ Θ(n^2). Note: Θ provides tight (exact) bounds, so unlike the O classes these do not nest. A numeric sandwich check appears after the figure below.
[Figure: Θ bound graph, c1·g(n) ≤ f(n) ≤ c2·g(n) for n ≥ n0]
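Here is the matching sandwich check in Python; the constants c1 = 1, c2 = 2, n0 = 1 are again hand-picked for illustration:

    # f(n) = n^2 + n is Θ(n^2): c1 * g(n) <= f(n) <= c2 * g(n)
    # for all n >= n0, with c1 = 1, c2 = 2, n0 = 1.
    def f(n):
        return n * n + n

    def g(n):
        return n * n

    c1, c2, n0 = 1, 2, 1
    assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
    print("n^2 + n is Θ(n^2), witnessed by c1 = 1, c2 = 2, n0 = 1")
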
  3. Big Omega Notation (Ω):

    • Big Omega notation provides a lower bound on the growth rate of an algorithm's performance.
    • It is most often used to describe the best-case scenario: the minimum amount of resources (time or space) an algorithm will use for any sufficiently large input.
    • Formally, an algorithm has a time or space complexity of Ω(g(n)) if, for sufficiently large input sizes, its performance f(n) is at least a constant multiple of the function g(n):
f(n) ≥ c · g(n) for all n ≥ n0
    • Example: Ω(n log n) means the algorithm's performance is at least linearithmic with the input size.
    • { n^2+n, 2n^2, n^2+log(n) } ⊆ Ω(n^2); { n/4, 2n+3, n/100 + log(n) } ⊆ Ω(n); { 100, log(2000), 10^4 } ⊆ Ω(1). Note: because Ω gives a lower bound that need not be tight, the nesting runs the other way: Ω(n^2) ⊆ Ω(n) ⊆ Ω(1). A short numeric check follows.
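And the corresponding lower-bound check in Python, using n/4 from the Ω(n) examples above; c = 1/4 and n0 = 1 are hand-picked witnesses:

    # f(n) = n/4 is Ω(n): f(n) >= c * g(n) for all n >= n0,
    # with c = 0.25 and n0 = 1.
    c, n0 = 0.25, 1
    assert all(n / 4 >= c * n for n in range(n0, 10_000))
    print("n/4 is Ω(n), witnessed by c = 1/4 and n0 = 1")
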
  4. Little O Notation (o):

    • Definition: Little O notation, denoted as o(f(n)), represents an upper bound on an algorithm's time or space complexity that is not tight. It indicates that the algorithm's resource usage grows strictly slower than the function f(n) for large input sizes.
    • Explanation: An algorithm has a time complexity of o(f(n)) if its resource usage grows strictly slower than the function f(n). It is a stricter bound than Big O notation, as it excludes the possibility of the algorithm reaching the same growth rate as f(n).
    • f(n) < c · g(n) for all n ≥ n0, and this must hold for every constant c > 0 (not just for some c, as with Big O)
  5. Little Omega Notation (ω):

    • Definition: Little Omega notation, denoted as ω(f(n)), represents a lower bound on an algorithm's time or space complexity that is not tight. It indicates that the algorithm's resource usage grows strictly faster than the function f(n) for large input sizes.
    • Explanation: If an algorithm has a time complexity of ω(f(n)), it means that the algorithm's resource usage grows strictly faster than the function f(n). It is a stricter bound than Big Omega notation, as it excludes the possibility of the algorithm growing at the same rate as f(n).
    • f(n) > c · g(n) for all n ≥ n0, and this must hold for every constant c > 0 (see the limit-based sketch below)
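
A convenient way to see both strict bounds is through limits: f(n) = o(g(n)) when f(n)/g(n) → 0, and f(n) = ω(g(n)) when f(n)/g(n) → ∞. A minimal Python sketch using n and n^2 as the example pair (an assumption for illustration):

    # n = o(n^2): the ratio n / n^2 shrinks toward 0.
    # n^2 = ω(n): the ratio n^2 / n grows without bound.
    for n in [10, 1_000, 100_000, 10_000_000]:
        print(n, n / n**2, n**2 / n)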

Key concepts and rules related to asymptotic analysis of algorithms:

  • When performing asymptotic analysis, constant factors and lower-order terms are typically ignored because they have a relatively small impact on the overall behavior of the algorithm as the input size becomes large.

  • Asymptotic analysis is concerned with the growth rates of functions and how they compare to each other. For example, an O(n^2) algorithm performs worse than an O(n) algorithm for all sufficiently large inputs.

  • Asymptotic analysis helps in making informed decisions when choosing algorithms, as it provides insight into which algorithm is more efficient at different input sizes (the cost-model sketch after this list illustrates a typical crossover).

  • It's important to remember that asymptotic analysis deals with worst-case, average-case, or best-case scenarios depending on the context, and it provides a high-level understanding of an algorithm's efficiency.
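
To make the trade-offs above concrete, here is a small Python sketch with hypothetical cost models; the constant factors 0.5 and 50 are assumptions chosen purely for illustration:

    import math

    # A quadratic algorithm with a small constant factor versus an
    # n log n algorithm with a large constant factor.
    def cost_quadratic(n):
        return 0.5 * n * n

    def cost_nlogn(n):
        return 50 * n * math.log2(n)

    for n in [10, 100, 1_000, 10_000, 100_000]:
        print(n, cost_quadratic(n), cost_nlogn(n))
    # The quadratic algorithm is cheaper for small n, but the
    # n log n algorithm wins for every sufficiently large n,
    # which is exactly what the asymptotic comparison predicts.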

Properties of Asymptotic Notation 
1. Reflexivity:
f(n) = O(f(n)), f(n) = Ω(f(n)), and f(n) = Θ(f(n)).
A related theorem: if f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n)).

2. Symmetry:
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
3. Transitivity:
f(n) = O(g(n)) and g(n) = O(h(n)) ⇒ f(n) = O(h(n))

4. Transpose Symmetry:
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))

f(n) = o(g(n)) if and only if g(n) = ω(f(n))

5. Let f(n) and g(n) be asymptotically non-negative functions. Then

max(f(n), g(n)) = Θ(f(n) + g(n))

6. If f(n) is O(g(n)) then h(n)*f(n) is O(h(n)*g(n)), for any asymptotically non-negative h(n)

7. If f(n) is O(g(n)) and g(n) is O(h(n)), then

    f(n) + g(n) = O(max(g(n), h(n)))

    f(n) * g(n) = O(g(n) * h(n))
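
Property 5 is easy to verify numerically: for non-negative f and g, (f(n) + g(n))/2 ≤ max(f(n), g(n)) ≤ f(n) + g(n). A minimal Python check (the two example functions are assumptions for illustration):

    import math

    f = lambda n: n * math.log2(n)
    g = lambda n: 10 * n

    # (f + g)/2 <= max(f, g) <= f + g, hence max(f, g) = Θ(f + g).
    for n in [2, 10, 100, 10_000]:
        lo, hi = (f(n) + g(n)) / 2, f(n) + g(n)
        assert lo <= max(f(n), g(n)) <= hi
    print("max(f, g) is sandwiched between (f+g)/2 and f+g")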

Overall, asymptotic analysis simplifies the comparison and classification of algorithms, making it a fundamental tool in algorithm design and analysis. It allows developers to focus on the fundamental growth rates of algorithms, which is especially valuable for making decisions in large-scale computing applications.



Q. Which is asymptotically larger: lg(lg*n) or lg*(lg n)? Explain.
log*n (read "log star of n") is called the iterated logarithm of n, defined as the number of times the logarithm function must be applied, starting from n, before the result is less than or equal to 1.
e.g., log*16 = 3 (using base-2 logarithms throughout)
The recurrence relation is as follows:
log*n = 0                 if n ≤ 1
      = 1 + log*(log n)   if n > 1
log*16 = 1 + log*(log 16) = 1 + log*4
       = 1 + 1 + log*(log 4) = 1 + 1 + log*2
       = 1 + 1 + 1 + log*(log 2) = 1 + 1 + 1 + log*1 = 1 + 1 + 1 + 0 = 3
Notice that the iterated logarithm function grows extremely slowly compared to the logarithm function.
For a very large value of n, log*n is tiny compared to log n.
e.g., for n = 1024,    log n = 10    and log*n = 4
      for n = 2^65536, log n = 65536 and log*n = 5
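
These values are easy to reproduce with a short Python sketch of the recurrence (base-2 logarithms assumed):

    import math

    def log_star(n):
        # Count how many times log2 must be applied before
        # the value drops to 1 or below.
        count = 0
        while n > 1:
            n = math.log2(n)
            count += 1
        return count

    print(log_star(16))        # 3
    print(log_star(1024))      # 4
    print(log_star(2**65536))  # 5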

Also, from the recurrence relation, applying log* to log n takes exactly one fewer iteration than applying it to n:
log*(log n) = log*n - 1
Let x = log*n. Then:
lg(lg*n) = log x
lg*(lg n) = log*n - 1 = x - 1
Since x - 1 grows asymptotically faster than log x, lg*(lg n) is asymptotically larger than lg(lg*n).
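
The final step can also be checked numerically: writing x = log*n, the two sides are log x and x - 1, and x - 1 dominates as x grows:

    import math

    # If lg* n = x, then lg(lg* n) = lg x while lg*(lg n) = x - 1.
    for x in [5, 10, 100, 1000, 10**6]:
        print(x, math.log2(x), x - 1)   # x - 1 quickly dominates lg x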
