**Asymptotic Notation**

- The running time of an algorithm depends on how long it takes a computer to execute the lines of code of the algorithm, and that depends on the speed of the computer, the programming language, and the compiler that translates the program from the programming language into code that runs directly on the computer, among other factors.
- We can't measure this in absolute terms such as time in seconds, because different computers have different hardware; hence we need a machine-independent, mathematical way.
- Asymptotic notation expresses the running time of an algorithm in mathematical units of computation.
- In asymptotic analysis, we evaluate the performance of an algorithm in terms of **input size** (we don't measure the actual running time). We calculate how the **time (or space)** taken by an algorithm increases with the **input size**.

*Usually the time required by an algorithm falls into three cases:*

- **Average case**
- **Best case**
- **Worst case**

**Average case**: Average time required for program execution. Overall, life remains balanced with a mixture of lucky and unlucky times.

**Best case**: Minimum time required for program execution. Sometimes we get lucky in life. Exams canceled when you were not prepared, a surprise test when you were prepared, etc.

**Worst case**: Maximum time required for program execution. Sometimes we get unlucky. Questions you never prepared for appearing in the exam, rain during the sports period, etc.
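The three cases can be made concrete with linear search, a standard illustration (the function and sample list below are our own, not from the notes): the best case finds the target at the first position, while the worst case scans the entire list.

```python
def linear_search(items, target):
    """Return (index of target, number of comparisons made), or (-1, n) if absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]

# Best case: target is the first element -> only 1 comparison.
print(linear_search(data, 7))   # (0, 1)

# Worst case: target is absent -> all n = 5 elements compared.
print(linear_search(data, 4))   # (-1, 5)
```

Averaged over all positions of a present target, roughly n/2 comparisons are made — the average case sits between the two extremes.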

*To represent the above cases for run time, there are four asymptotic notations:*

- **Big-oh notation**
- **Omega notation**
- **Theta notation**
- **Little-oh notation**

**Big-oh notation (O)**

- Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.
- Let f(n) and g(n) be two non-negative functions. f(n) is said to be O(g(n)) if and only if there exist positive constants c and n_{0} such that

f(n) ≤ c·g(n) for all non-negative values of n, where n ≥ n_{0}

- The above definition states that the function f is at most c times the function g when n is greater than n_{0}.
- The notation provides an upper bound for the function f, i.e., the function g(n) is an upper bound on the value of f(n) for all n, where n ≥ n_{0}.

*Example 1:*

f(n) = 3n + 8

Let g(n) = n

According to big-oh notation, we need f(n) ≤ c·g(n), i.e., 3n + 8 ≤ c·n

We know that 3n + 8 ≤ 3n + n = 4n for all n ≥ 8

Therefore, c = 4 and n_{0} = 8

Check at n = n_{0} = 8: 3(8) + 8 ≤ 4(8), i.e., 32 ≤ 32

Therefore, f(n) = 3n + 8 is O(n), with the upper bound 4n holding for all n ≥ 8.
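The derived constants can be checked numerically. The snippet below (a sanity check we add, with the sample range chosen arbitrarily) verifies that 3n + 8 ≤ 4n holds for every n ≥ 8, and that the bound fails just below n_{0}:

```python
def f(n):
    return 3 * n + 8

c, n0 = 4, 8

# f(n) <= c*n must hold for every n >= n0 (sampled up to 1000).
assert all(f(n) <= c * n for n in range(n0, 1000))

# The bound fails below n0, which is why n0 = 8 is needed.
assert f(7) > c * 7   # 29 > 28

print("3n + 8 <= 4n verified for 8 <= n < 1000")
```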

**Omega Notation (Ω-notation)**

- Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.
- Let f(n) and g(n) be two non-negative functions. f(n) is said to be **Ω**(g(n)) if and only if there exist positive constants c and n_{0} such that

f(n) ≥ c·g(n) for all non-negative values of n, where n ≥ n_{0}

- The above definition states that the function f is at least c times the function g when n is greater than n_{0}.
- This notation provides a lower bound for the function f, i.e., the function g(n) is a lower bound on the value of f(n) for all n, where n ≥ n_{0}.

**Terms in omega notation**

- Examples of omega notation can be given for different asymptotic functions, such as:
- 1 (constant function)
- n (linear function)
- n² (quadratic function)
- n³ (cubic function)

__Examples__

f(n) = 14

f(n) = **Ω**(1), since f(n) ≥ 13·1 for all n, where c = 13

f(n) = 11

f(n) = **Ω**(1), since f(n) ≥ 10·1 for all n, where c = 10

f(n) = 4n + 6

f(n) = **Ω**(n), since 4n + 6 ≥ 4n for all n, where c = 4

f(n) = 8n + 7

f(n) = **Ω**(n), since 8n + 7 ≥ 8n for all n, where c = 8

f(n) = 11n² + 8

f(n) = **Ω**(n²), since 11n² + 8 ≥ 11n² for all n, where c = 11

f(n) = 4n³ + 5n

f(n) = **Ω**(n³), since 4n³ + 5n ≥ 4n³ for all n, where c = 4
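Each lower bound above can be verified the same way. For instance, the claim 4n + 6 ≥ 4·n with c = 4 can be checked over a sample range (a sketch we add; the range limit is arbitrary):

```python
def f(n):
    return 4 * n + 6

c = 4

# f(n) >= c*g(n) with g(n) = n must hold; here it holds for every n >= 0.
assert all(f(n) >= c * n for n in range(0, 1000))

print("4n + 6 >= 4n verified for 0 <= n < 1000, so f(n) is Omega(n)")
```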

**Theta Notation (Θ)**

- Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
- Let f(n) and g(n) be two non-negative functions. f(n) is said to be Θ(g(n)) if and only if there exist positive constants c1, c2, and n_{0} such that

c1·g(n) ≤ f(n) ≤ c2·g(n) for all non-negative values of n, where n ≥ n_{0}

- The above definition states that the function f(n) lies between c1 times the function g(n) and c2 times the function g(n), where c1 and c2 are positive constants.
- This notation provides both lower and upper bounds for the function f(n), i.e., g(n) is both a lower and an upper bound on the value of f(n) for large n.
- In other words, theta notation says that f(n) is both O(g(n)) and Ω(g(n)) for all n, where n ≥ n_{0}.

**Example**

Consider f(n) = 4n + 3

4n ≤ 4n + 3 ≤ 5n for all n ≥ 3

c1 = 4, c2 = 5, n_{0} = 3

Check at n = n_{0} = 3: 4(3) ≤ 4(3) + 3 ≤ 5(3), i.e., 12 ≤ 15 ≤ 15

Therefore, 4n + 3 = Θ(n)
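The sandwich 4n ≤ 4n + 3 ≤ 5n for n ≥ 3 can likewise be confirmed numerically (a sketch we add; the sample range is arbitrary):

```python
def f(n):
    return 4 * n + 3

c1, c2, n0 = 4, 5, 3

# c1*n <= f(n) <= c2*n must hold for every n >= n0.
assert all(c1 * n <= f(n) <= c2 * n for n in range(n0, 1000))

# The upper bound fails below n0: at n = 2, 4*2 + 3 = 11 > 5*2 = 10.
assert f(2) > c2 * 2

print("4n <= 4n + 3 <= 5n verified for 3 <= n < 1000")
```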

**Little-oh notation (o)**

- Let f(n) and g(n) be two non-negative functions. f(n) is said to be o(g(n)) if and only if, for every positive constant c, there exists an n_{0} such that f(n) < c·g(n) for all non-negative values of n, where n ≥ n_{0}.

*Example*: The function f(n) = 3n³ + 4n² is o(n⁴).

- To prove the above statement using little-oh notation, two constants are involved: a real constant c (which must be greater than 0) and an integer constant n_{0} (which must be at least 1), such that 3n³ + 4n² < c·n⁴ for all n ≥ n_{0}. Because the ratio (3n³ + 4n²)/n⁴ tends to 0 as n grows, such an n_{0} can be found for every choice of c.
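A function f is o(g) exactly when the ratio f(n)/g(n) tends to 0 as n grows. The snippet below (our own illustration) computes this ratio for f(n) = 3n³ + 4n² against g(n) = n⁴, a g for which the limit is 0:

```python
def f(n):
    return 3 * n**3 + 4 * n**2

def g(n):
    return n**4

# f is o(g) exactly when f(n)/g(n) -> 0 as n -> infinity.
ratios = [f(n) / g(n) for n in (10, 100, 1000, 10000)]
print(ratios)   # shrinks toward 0: 0.34, 0.0304, ...

# The ratio is strictly decreasing toward 0 on this sample.
assert all(a > b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] < 0.01
```

Contrast with Big-O: f(n) = 3n³ + 4n² is O(n³) (the ratio to n³ stays bounded near 3) but not o(n³), since that ratio never falls below 3.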