
Chapter Two: Complexity Analysis

Data Structures and Algorithms


Prepared by:
Ataklti Nguse

07/07/2023 Ataklti N. 1
2.1. Introduction
A program is written in order to solve a problem.
A solution to a problem actually consists of two things:
 A way to organize the data
 A sequence of steps to solve the problem
There are some basic operations that can be performed on a data structure:
 Accessing/Traversing/Updating
 Searching
 Sorting, Merging
 Creating/Inserting
 Destroying/Deleting
Abstract Data Type(ADT)
An ADT consists of an abstract data structure and the operations defined on it. Equivalently, an ADT is a type (class) of objects whose behavior is defined by a set of values and a set of operations.
There are many formalized and standard abstract data types, such as Linked Lists, Stacks, Queues, Trees, etc.
Properties of an algorithm
Finiteness, definiteness, sequence, correctness, completeness, language independence, generality, …
Algorithms can be expressed in pseudocode, through flowcharts, or in program code.

2.2. Primitive and Non-primitive Data structures

• Primitive data types are the basic data types that are available in most programming languages.
• Primitive data types are used to represent single values.
• Examples: integer, real, Boolean and character.

• Data types that are derived from primitive data types are known as non-primitive data types. These data types are used to store groups of values.
• Examples: arrays, structures, linked lists, stacks, queues.

2.3. Computational and Asymptotic Complexity
The field of complexity analysis is concerned with the study of the
efficiency of algorithms.
• Time Complexity: Determine the approximate number of operations
required to solve a problem of size n.
• Space Complexity: Determine the approximate memory required to solve a
problem of size n.
• In practice, the time factor is usually considered more important than the space factor.
These differences may not be noticeable for small amounts of data, but as the size of the input becomes large, they become significant.
It should be clear that we cannot use real-time units such as microseconds to evaluate an algorithm's efficiency, because such measurements depend on the machine and compiler used.
A better measure is the number of operations required to perform the algorithm.
…continued
Definition 1: The computational complexity of an algorithm
is a measure of the cost incurred by applying the algorithm.
Definition 2: The asymptotic complexity of an algorithm is
an approximation of the computational complexity that
holds for large amounts of input data.
Objectives of computational complexity analysis:
 To determine the feasibility of an algorithm by estimating an
upper bound on the amount of work performed
 To compare different algorithms before deciding on which one to
implement

..continued
Complexity analysis involves two distinct phases:
I. Algorithm Analysis: Analysis of the algorithm to produce a
function T (n) that describes the algorithm in terms of the
operations performed in order to measure the complexity of the
algorithm.
II. Order of Magnitude Analysis: Analysis of the function T (n) to
determine the general complexity category to which it belongs.
There is no generally accepted set of rules for algorithm analysis.
However, an exact count of operations is commonly used.

2.3.1. Analysis Rules:

 We assume an arbitrary time unit.


Execution of one of the following operations takes time 1:
Assignment, Single Input/Output, Single Boolean, Single Arithmetic Operations and
Function Return
Running time of a selection statement (if, switch) is the time for the condition evaluation
+ the maximum of the running times for the individual clauses in the selection.
Loops: Running time for a loop is equal to the running time for the statements inside the
loop * number of iterations.
The total running time of a statement inside a group of nested loops is the running time of
the statements multiplied by the product of the sizes of all the loops.
Always assume that the loop executes the maximum number of iterations possible.
Running time of a function call is 1 for setup + the time for any parameter calculations +
the time required for the execution of the function body.
Running time calculation examples using the analysis rules
1. int count(){
       int k=0;
       cout<< “Enter an integer”;
       cin>>n;
       for (i=0;i<n;i++)
           k=k+1;
       return 0;}

Time units to compute:
 1 for the assignment statement int k=0
 1 for the output statement and 1 for the input statement
 In the for loop: 1 assignment, n+1 tests, and n increments
 n loops of 2 units each, for an assignment and an addition
 1 for the return statement

T(n) = 1+1+1+(1+n+1+n)+2n+1 = 4n+6 = O(n)
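The same count can be checked by instrumenting the code: the sketch below (the function name countOps is ours, added for illustration) charges one time unit per primitive operation, exactly as itemized above, and reproduces T(n) = 4n+6.

```cpp
#include <cassert>

// Instrumented re-count of example 1: tally one time unit per primitive
// operation, following the analysis rules on the previous slide.
long countOps(long n) {
    long ops = 0;
    ops += 1;                      // int k = 0
    ops += 2;                      // one output and one input statement
    ops += 1;                      // loop initialization i = 0
    for (long i = 0; i < n; i++) {
        ops += 1;                  // successful test i < n
        ops += 2;                  // k = k + 1: one addition, one assignment
        ops += 1;                  // increment i++
    }
    ops += 1;                      // final, failing test i < n
    ops += 1;                      // return statement
    return ops;
}
```

Evaluating countOps at a few sizes confirms the closed form 4n+6.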

2. int sum (int n)
{
    int partial_sum = 0;
    for (int i = 1; i <= n; i++)
        partial_sum = partial_sum + (i * i * i);
    return partial_sum;
}
T(n) = 1 + (1+(n+1)+n) + 4n + 1 = 6n+4 = O(n)
(each loop iteration costs 4 units: two multiplications, one addition and one assignment)

…Continued
• Calculate T(n) for the following
• 3. k=0;
     cout<<“enter an integer”;
     cin>>n;
     for (i=0;i<n;i++)
         k++;
• T(n) = 1+1+1+(1+n+1+n+n) = 3n+5

Cont. …
• 4. i=0;
     while (i<n)
     {
         x++;
         i++;
     }
     j=1;
     while (j<=10)
     {
         x++;
         j++;
     }
• T(n) = 1+(n+1)+n+n+1+11+10+10 = 3n+34

cont.…
5. for(i=1;i<=n;i++)
       for(j=1;j<=n;j++)
           k++;
T(n) = 1+n+1+n+n(1+n+1+n+n) = 3n²+4n+2
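The nested-loop count can be confirmed the same way: the instrumented sketch below (the function name nestedOps is ours) charges the units itemized by the analysis rules and reproduces T(n) = 3n²+4n+2.

```cpp
#include <cassert>

// Instrumented re-count of example 5: one unit per assignment, test
// and increment, for both the outer and the inner for loop.
long nestedOps(long n) {
    long ops = 0;
    ops += 1;                          // i = 1
    for (long i = 1; i <= n; i++) {
        ops += 1;                      // successful test i <= n
        ops += 1;                      // j = 1
        for (long j = 1; j <= n; j++) {
            ops += 1;                  // successful test j <= n
            ops += 1;                  // k++
            ops += 1;                  // j++
        }
        ops += 1;                      // final, failing test j <= n
        ops += 1;                      // i++
    }
    ops += 1;                          // final, failing test i <= n
    return ops;
}
```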

Cont …
6. sum=0;
   if(test==1)
   {
       for (i=1;i<=n;i++)
           sum=sum+i;
   }
   else
   {
       cout<<sum;
   }
• T(n) = 1+1+Max(1+n+1+n+n+n, 1) = 4n+4
Exercise
• Calculate T(n) for the following codes
A. sum=0;
   for(i=1;i<=n;i++)
       for(j=1;j<=m;j++)
           sum++;

B. sum=0;
   for(i=1;i<=n;i++)
       for(j=1;j<=i;j++)
           sum++;

Cont..
C. int total(int n){
       int sum=0;
       for (int i=1;i<=n;i++)
           sum=sum+1;
       return sum;}
   T(n) = 4n+4
D. void func(){
       int x=0; int i=0; int j=1;
       cout<< “Enter value”;
       cin>>n;
       while (i<n){
           x++;
           i++;}
       while (j<n){
           j++;}}
   T(n) = 5n+5

Formal Approach to Analysis

• The examples on the previous slides show that exact analysis can become tedious. It can be simplified by a more formal approach in which we ignore initializations, loop control and increments.
• for Loops: Formally
– In general, a for loop translates to a summation. The index and bounds of the summation are the same as the index and bounds of the for loop.
for (int i = 1; i <= N; i++) {
sum = sum+i; }

Suppose we count the number of operations that are done. There are 2 operations per iteration of the loop, 1 for the assignment and 1 for the addition, hence 2N in total.

• Nested Loops: Formally


– Nested for loops translate into multiple summations, one for each for loop.
for (int i = 1; i <= N; i++) {
for (int j = 1; j <= M; j++) {
sum = sum+i+j;}}
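Under this convention, the nested loop above becomes a double summation (a sketch: we count 3 units per inner iteration for sum = sum+i+j, two additions and one assignment, and ignore loop control as stated above):

```latex
\sum_{i=1}^{N}\sum_{j=1}^{M} 3 \;=\; \sum_{i=1}^{N} 3M \;=\; 3NM
```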

Cont..
• Consecutive Statements: Formally
– Add the running times of the separate blocks of your code
for (int i = 1; i <= N; i++) {
sum = sum+i;}
for (int i = 1; i <= N; i++) {
for (int j = 1; j <= N; j++) {
sum = sum+i+j;}}
• Conditionals: Formally
– If (test) s1 else s2: Compute the maximum of the running time for s1 and s2.
if (test == 1) {
for (int i = 1; i <= N; i++) {
sum = sum+i;}}
else
for (int i = 1; i <= N; i++) {
for (int j = 1; j <= N; j++) {
sum = sum+i+j; }}
2.3.2.Asymptotic Analysis
Asymptotic analysis is concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.
It is used to determine the growth rate of the complexity function.
There are five notations used to describe a running time function. These are:
I. Big-Oh Notation (O)
II. Big-Omega Notation (Ω)
III. Theta Notation (Θ)
IV. Little-o Notation (o)
V. Little-Omega Notation (ω)

Finding Asymptotic Complexity: Examples
• Rules to find Big-O from a given T(n)
– Take the highest-order term
– Drop lower-order terms and constant multipliers

T(n) = 4n⁴+3n³+2n+4 = O(n⁴)
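The rule can be sanity-checked numerically by exhibiting witness constants for the Big-O definition: with c = 13 and N = 1 (our choice of constants, for illustration), f(n) = 4n⁴+3n³+2n+4 satisfies f(n) ≤ c·n⁴ for all n ≥ 1, so the lower-order terms never change the growth class.

```cpp
#include <cassert>

// f(n) = 4n^4 + 3n^3 + 2n + 4 and the candidate bound g(n) = n^4.
long double f(long double n) { return 4*n*n*n*n + 3*n*n*n + 2*n + 4; }
long double g(long double n) { return n*n*n*n; }
```

Spot-checking a few values of n shows f(n) ≤ 13·g(n) throughout; at n = 1 the bound is tight (13 ≤ 13).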

Finding Big O of a given algorithm

1. for (i=1;i<=n;i++)
       cout<<i;
   T(n) = 1+n+1+n+n = 3n+2
   T(n) = O(n)

2. for (i=1;i<=n;i++)
       for (j=1;j<=n;j++)
           cout<<i;
   T(n) = 1+n+1+n+n(1+n+1+n+n) = 3n²+4n+2
   T(n) = O(n²)

Exercise

Find Big O of the following algorithms

1. for(i=1;i<=n;i++)
       sum=sum+i;
   for (i=1;i<=n;i++)
       for(j=1;j<=m;j++)
           sum++;

2. if(k==1)
   {
       for (i=1;i<=100;i++)
           for (j=1;j<=1000;j++)
               cout<<i;
   }
   else
   {
       for (i=1;i<=n;i++)
           sum++;
   }

Cont.….
3. for (i=1; i<=n;i++)
   {
       if (i<10)
           for(j=1;j<=n;j=j*2)
               cout<<j;
       else
           cout<<i;
   }

4. for (int i=1; i<=N; ++i)
       for (int j=i; j>0; --j)
           {} // do nothing

Cont.….

7. for (int i=1; i<N; ++i)
       for (int j=0; j<N; j+=i)
           {}

8. int i=0, sum=0;
   while (sum < N)
       sum = sum + i++;

9. for (int i=1; i<N; i=i*2)
       for (int j=0; j<i; ++j)
           {}

Big-O Notation

Big-O is the most commonly used notation for specifying asymptotic complexity, i.e., for estimating the rate of growth of complexity functions.
Definition 3: The function f(n) is O(g(n)) if there exist positive
numbers c and N such that f(n) ≤ c.g(n) for all n ≥ N.
 n is the size of the data set.
 g(n) is a function that is calculated using n as the parameter.
 O(g(n)) means that the curve described by g(n) is an upper
bound for the resource needs of a function.

The following points are facts that you can use for Big-Oh problems:

1 <= n for all n >= 1
n <= n² for all n >= 1
2ⁿ <= n! for all n >= 4
log₂n <= n for all n >= 2
n <= n·log₂n for all n >= 2

Theorem on Big-O notation
Theorem 1: k is O(1)
Theorem 2: A polynomial is O(the term containing the highest power of n).
If f(n) is a polynomial of degree d, then f(n) is O(nᵈ).
In general, f(n) is big-O of the dominant term of f(n).
Theorem 3: k*f(n) is O(f(n))
Constant factors may be ignored
E.g. f(n) = 7n⁴+3n²+5n+1000 is O(n⁴)
Theorem 4(Transitivity): If f(n) is O(g(n))and g(n) is O(h(n)), then f(n) is
O(h(n)).
Theorem 5: For any base b, log_b(n) is O(log n).
Logarithms of all bases grow at the same rate (they differ only by a constant factor).
Theorem 6: Each of the following functions is big-O of its successors:
k, log_b n, n, n·log_b n, n², n³, 2ⁿ, n!, nⁿ
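These growth rates can be spot-checked numerically. The sketch below (function name ours) verifies the ordering of Theorem 6 at n = 10, taking k = 1 and b = 2; the ordering is asymptotic, but it already holds at this modest n.

```cpp
#include <cassert>
#include <cmath>

// Spot-check of Theorem 6's chain at n = 10 with k = 1 and base b = 2:
// k < log2(n) < n < n*log2(n) < n^2 < n^3 < 2^n < n! < n^n.
bool orderingHoldsAt10() {
    double n = 10.0;
    double k = 1.0;
    double logn = std::log2(n);          // ~3.32
    double fact = 3628800.0;             // 10!
    double npow = std::pow(n, n);        // 10^10
    return k < logn && logn < n && n < n * logn
        && n * logn < n * n && n * n < n * n * n
        && n * n * n < std::pow(2.0, n)  // 1000 < 1024
        && std::pow(2.0, n) < fact && fact < npow;
}
```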
Example

1. f(n) = 10n+5 and g(n) = n. Show that f(n) is O(g(n)).
2. f(n) = 3n²+4n+1. Show that f(n) = O(n²).
Big-O expresses an upper bound on the growth rate of a function, for sufficiently large values of n.
For a problem, the best known upper bound corresponds to the best algorithmic solution that has been found so far.
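Both examples can be verified by exhibiting witness constants for Definition 3. In the sketch below the constants c = 15, N = 1 (example 1) and c = 8, N = 1 (example 2) are our choices; any larger constants would also do.

```cpp
#include <cassert>

// Example 1: 10n + 5 <= 15n for all n >= 1 (c = 15, N = 1).
bool ex1Holds(long n) { return 10*n + 5 <= 15*n; }

// Example 2: 3n^2 + 4n + 1 <= 8n^2 for all n >= 1 (c = 8, N = 1).
bool ex2Holds(long n) { return 3*n*n + 4*n + 1 <= 8*n*n; }
```

At n = 1 both bounds are tight (15 ≤ 15 and 8 ≤ 8), which is why N = 1 works.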

Ω, Θ and Little-o Notations
We have seen that big-O notation refers to an upper bound on the rate of growth of a function.
There is a similar definition for the lower bound, called big-omega (Ω) notation.
Definition 4: The function f(n) is Ω(g(n)) if there exist positive numbers c and N such that f(n) ≥ c.g(n) for all n ≥ N.
In practice, we choose the smallest upper bound (big-O) function and the largest lower bound (Ω) function.
Continued
For some algorithms (but not all), the lower and upper bounds on
the rate of growth will be the same.
In this case, a third notation exists for specifying asymptotic
complexity, called theta (Θ) notation.
Definition 5: The function f(n) is Θ(g(n)) if there exist positive
numbers c1, c2 and N such that c1.g(n) ≤ f(n) ≤ c2.g(n) for all n
≥ N.
This definition states that f(n) is Θ(g(n)) if f(n) is O(g(n)) and
f(n) is Ω(g(n)). In other words, the lower and upper bounds on
the rate of growth are the same.
For example, for f(n) = n² + 5n we can see that g(n) = n² satisfies Definition 5, so the function n² + 5n is Θ(n²).
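The witness constants can be checked numerically; here c1 = 1, c2 = 6 and N = 1 (our choice) satisfy Definition 5 for f(n) = n² + 5n:

```cpp
#include <cassert>

// Theta witness for f(n) = n^2 + 5n:
// c1*n^2 <= n^2 + 5n <= c2*n^2 with c1 = 1, c2 = 6, for all n >= 1.
bool thetaHolds(long n) {
    long f = n*n + 5*n;
    return 1*n*n <= f && f <= 6*n*n;
}
```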
Little-o Notation

Big-Oh notation may or may not be asymptotically tight, for example:
2n² = O(n²)
    = O(n³)
f(n) = o(g(n)) means that for all c > 0 there exists some k > 0 such that f(n) < c.g(n) for all n >= k. Informally, f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity.
Example: f(n) = 3n+4 is o(n²)

Little-Omega ( notation)
Little-omega () notation is to big-omega () notation as
little-o notation is to Big-Oh notation.
We use  notation to denote a lower bound that is not
asymptotically tight.
Formal Definition: f(n)=  (g(n)) if there exists a
constant no>0 such that c. g(n)<f(n) for all n>=k.
Example: 2n2=(n) but it’s not (n).
OO Notation
Definition 7: The function f (n) is OO (g (n)) if it is O
(g(n)) but the constant c is too large to be of practical
significance.
2.4. Complexity Classes

A number of complexity classes of algorithms exist,


and some of the more common ones are illustrated in
Figure 1 below.

07/07/2023 Ataklti N. 33
….Continued

Figure 1 – A comparison of various complexity


07/07/2023 Ataklti N. 34
classes
…Continued

07/07/2023 Ataklti N. 35
Best, average and worst case complexity

Worst Case Analysis
Worst case analysis is used to find an upper bound on algorithm performance for large problems (large n).
We must know the case that causes the maximum number of operations to be executed.
Average Case Analysis
In average case analysis, we take all possible inputs and calculate the average computing time over all of them.
Best Case Analysis
In best case analysis, we calculate a lower bound on the running time of an algorithm.
We must know the case that causes the minimum number of operations to be executed.
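Linear search is a standard illustration of the distinction (our example, not from the slides): instrumenting it to count key comparisons shows a best case of 1 comparison (key in the first slot) and a worst case of n comparisons (key absent).

```cpp
#include <cassert>

// Linear search instrumented to count key comparisons.
int comparisons(const int* a, int n, int key) {
    int cmp = 0;
    for (int i = 0; i < n; i++) {
        cmp++;                       // one comparison per probed slot
        if (a[i] == key) return cmp; // found: stop counting
    }
    return cmp;                      // not found: n comparisons
}

static const int sample[5] = {7, 3, 9, 1, 4};
```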

2.5. Relational Properties of the
Asymptotic Notations
• Transitivity:
– f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
– Same for O and Ω
• Reflexivity:
– f(n) = Θ(f(n))
– Same for O and Ω
• Symmetry:
– f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
• Transpose symmetry:
– f(n) = O(g(n)) if and only if g(n) = Ω(f(n))

2.6. Amortized Analysis
• Amortized analysis computes the average time required to
perform a sequence of n operations on a data structure.
• Example. Priority queue.
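As a concrete sketch of the idea (using a doubling array rather than the priority queue mentioned above), count every element write over a sequence of n appends, including the copies made each time the array doubles in capacity. The total stays under 3n, so each append costs O(1) amortized even though a single append can cost O(n).

```cpp
#include <cassert>

// Total element writes for n appends into a doubling array:
// n writes of new elements plus 1 + 2 + 4 + ... < 2n copy writes,
// hence fewer than 3n writes in total -> O(1) amortized per append.
long totalWrites(long n) {
    long cap = 1, size = 0, writes = 0;
    for (long i = 0; i < n; i++) {
        if (size == cap) {
            writes += size;   // copy every element into the bigger array
            cap *= 2;
        }
        size++;
        writes++;             // write the new element itself
    }
    return writes;
}
```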

MORE Exercises
1. An algorithm takes 0.5 ms for input size 100. How long will it take for input size 500 if the running time is the following (assume low-order terms are negligible)?
(a) linear
(b) O(N log N)
(c) quadratic
(d) cubic
2. An algorithm takes 0.5 ms for input size 100. How large a problem can be solved in 1 min if the running time is the following (assume low-order terms are negligible)?
(a) linear
(b) O(N log N)
(c) quadratic
(d) cubic
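One way to approach problem 1 (a sketch; the helper name and constants are ours): if T(n) = c·nᵏ with low-order terms ignored, then growing the input by a factor r multiplies the running time by rᵏ, and here r = 500/100 = 5.

```cpp
#include <cassert>
#include <cmath>

// Scaling rule for polynomial running times: T(r*n) = r^k * T(n).
// t is the measured time at the base size, ratio is r, k is the degree.
double scaledTime(double t, double ratio, int k) {
    return t * std::pow(ratio, k);
}
```

For the linear case this gives 0.5 ms × 5 = 2.5 ms; the quadratic and cubic cases scale by 25 and 125. The O(N log N) case needs the extra log factor and is left to the reader.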
Thank You!

