DAA Unit 1
Introduction: What is an Algorithm, Algorithm Specification, Pseudo code Conventions, Recursive Algorithm,
Performance Analysis, Space complexity, Time complexity, Amortized Complexity, Asymptotic Notation.
Practical Complexities, Performance Measurement.
ALGORITHM
An Algorithm is any well-defined computational procedure that takes some value or set of values as Input
and produces some value or a set of values as output. Thus, an algorithm is a sequence of computational steps
that transforms the input into the output.
Definition: An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In addition, every algorithm must satisfy the following criteria: input (zero or more quantities are externally supplied), output (at least one quantity is produced), definiteness (each instruction is clear and unambiguous), finiteness (the algorithm terminates after a finite number of steps), and effectiveness (every instruction is basic enough to be carried out exactly and in finite time).
ALGORITHM SPECIFICATION
An algorithm can be described in many ways.
Natural language such as English: When this way is chosen, we should ensure that the resulting
instructions are definite.
Graphic representation called a flowchart: This method works well when the algorithm is
small and simple.
Pseudo-code method: In this method, we describe the algorithm as a program that resembles a
language like C. The advantage of pseudo-code over a flowchart is that it is very similar to the
final program code. It requires less time and space to develop, and we can write it in our own
way, as there are no fixed rules.
Pseudo-Code Conventions:
1. Comments begin with // and continue until the end of line.
2. Blocks are indicated with matching braces { and }. A compound statement can be represented as a
block. The body of the procedure also forms a block. The statements are delimited by ; .
3. An identifier begins with a letter. Whether a variable is local or global, and what its data type is,
are evident from context. We assume simple data types such as integer, float, char, boolean, and so on.
Compound data types can be formed with records. Here is an example:
node = record
{ datatype_1 data1;
datatype_2 data2;
:
:
datatype_n datan;
node *link;
}
link is a pointer to the record type node. The data items of a record are accessed with -> and a
period. For example, if p points to a record of type node, p->data1 stands for the value of the first
field in the record. If q is a record of type node, q.data1 denotes its first field.
4. Assignment of values to variables is done using the assignment statement
<variable> := <expression>;
5. There are two Boolean values, true and false. To produce these values, the logical operators
and, or, and not and the relational operators <, ≤, =, ≠, ≥, and > are used.
6. Elements of the multidimensional arrays are accessed using [ and ]. For example, if A is a two
dimensional array, the (i, j)th element of the array is denoted as A[i, j]. Array indices start at zero.
7. The general forms of looping statements are for, while, and repeat-until.
The general form of a while loop is
while <condition> do
{
<statement 1>
:
<statement n>
}
As long as <condition> is true, the statements get executed. The loop is exited whenever the
condition becomes false. A repeat-until loop has the form repeat <statement 1> ... <statement n>
until <condition>; its body is executed as long as <condition> is false.
The general form of for loop is
for variable := value1 to value2 step st do
{
<statement 1>
:
<statement n>
}
Here value1, value2, and st are arithmetic expressions. The step value st can be either positive or
negative; its default value is +1.
8. A conditional statement has the following forms:
if <condition> then <statement>;
if <condition> then <statement 1> else <statement 2>;
9. Input and output are done using the instructions read and write. No format is used to specify the size
of input or output quantities.
10. There is only one type of procedure: Algorithm. An algorithm consists of a heading and body. The
heading takes the form
Algorithm Name ( <parameter list>)
Where Name is the name of the procedure and <parameter list> lists the procedure's parameters. The
body of the procedure consists of one or more statements enclosed within braces { and }. Simple
variables are passed to the procedure by value. Arrays and records are passed by reference.
Examples:
1. Algorithm for finding the maximum of n given numbers. In this algorithm (named Max), A and n are
procedure parameters; Result and i are local variables.
Algorithm Max(A, n)
// A is an array of size n.
{
    Result := A[1];
    for i := 2 to n do
        if A[i] > Result then Result := A[i];
    return Result;
}
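For comparison, here is a minimal runnable C rendering of Max (an illustration added here, not part of the original notes; the function name max_element is our own choice). Note that C arrays are 0-based while the pseudocode is 1-based.

#include <stdio.h>

/* C sketch of the Max pseudocode; arrays are 0-based here. */
int max_element(const int a[], int n)
{
    int result = a[0];               /* Result := A[1] */
    for (int i = 1; i < n; i++)      /* for i := 2 to n do */
        if (a[i] > result)
            result = a[i];
    return result;
}

int main(void)
{
    int a[] = {12, 5, 42, 7};
    printf("%d\n", max_element(a, 4));   /* prints 42 */
    return 0;
}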
Recursive Algorithms:
A Recursive function is a function that is defined in terms of itself.
Similarly, an algorithm is said to be recursive if the same algorithm is invoked in the body.
An algorithm that calls itself is Direct Recursive.
Algorithm A is said to be indirect recursive if it calls another algorithm which in turn calls A.
Recursive mechanisms are extremely powerful: they can express complex processes very clearly.
The following example (permutation generation) shows how to develop a recursive algorithm.
Algorithm perm(a, k, n)
// Generates all permutations of a[k:n].
{
    if (k = n) then write (a[1:n]); // output permutation
    else // a[k:n] has more than one permutation;
         // generate these recursively.
        for i := k to n do
        {
            t := a[k]; a[k] := a[i]; a[i] := t;
            perm(a, k+1, n); // all permutations of a[k+1:n]
            t := a[k]; a[k] := a[i]; a[i] := t;
        }
}
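A runnable C sketch of perm (our translation, not part of the original notes), using 0-based indices:

#include <stdio.h>

/* Prints all permutations of a[k..n-1] (0-based indices). */
void perm(int a[], int k, int n)
{
    if (k == n - 1) {                        /* only one permutation left */
        for (int i = 0; i < n; i++)
            printf("%d ", a[i]);
        printf("\n");
        return;
    }
    for (int i = k; i < n; i++) {
        int t = a[k]; a[k] = a[i]; a[i] = t; /* swap a[k] and a[i] */
        perm(a, k + 1, n);                   /* permutations of a[k+1..n-1] */
        t = a[k]; a[k] = a[i]; a[i] = t;     /* swap back */
    }
}

int main(void)
{
    int a[] = {1, 2, 3};
    perm(a, 0, 3);    /* prints the 6 permutations of {1, 2, 3} */
    return 0;
}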
EXERCISES
1. Define algorithm and list the criteria (features / properties) of an algorithm.
2. What are the four distinct areas of study of algorithm?
3. Define debugging and profiling.
4. Describe pseudo code conventions for specifying algorithms. **
5. The factorial function n! has value 1 when n ≤ 1 and value n * (n − 1)! when n > 1. Write
both recursive and iterative algorithms to compute n!.
6. What is pseudo-code? Explain with an example.
7. Distinguish between algorithm and pseudo code.
8. Devise an algorithm that inputs three integers and outputs them in nondecreasing order.
9. Present an algorithm that searches an unsorted array a[1:n] for the element x. If x occurs,
then return a position in the array; else return zero.
10. The Fibonacci numbers are defined as f0 = 0, f1 = 1, and fi = fi-1 + fi-2 for i > 1. Write both
recursive and iterative algorithms to compute fi.
11. Devise an algorithm that sorts a collection of n>=1 elements of arbitrary type.
12. Write a recursive algorithm to solve Towers of Hanoi problem with an example.
PERFORMANCE ANALYSIS
Computing time and storage requirements are the criteria for judging algorithms that have a direct
relationship to performance.
The space complexity of an algorithm is the amount of memory it needs to run to completion. The space
requirement S(P) of an algorithm P has a fixed part (space for the code, simple variables, and constants)
and a variable part that depends on the problem instance, so we can write
S(P) = c + S_P(instance characteristics), where c is a constant.
When analyzing the space complexity of an algorithm we concentrate on estimating
S_P(instance characteristics). For any given problem, we first need to determine which instance
characteristics to use to measure the space requirements.
Example 1:
Algorithm abc(a, b, c)
{
    return a + b + b*c + (a + b - c) / (a + b) + 4.0;
}
This algorithm is characterized by the values of a, b, and c. If we assume that one word is needed to store
each of the values a, b, c, and the result, then the space needed by abc is independent of the instance
characteristics, so S_abc(instance characteristics) = 0 and 4 words of space suffice for this algorithm.
Example 2:
Algorithm sum(a, n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
This algorithm is characterized by n, the number of elements to be summed. The space required for n is
one word, and the array a[] of float values requires at least n words, so we obtain
S_sum(n) ≥ n + 3 (n words for a, and one word each for n, i, and s).
Example 3:
Algorithm Rsum(a, n)
{
    if ( n ≤ 0 ) then return 0.0;
    else return Rsum(a, n-1) + a[n];
}
Instances of this algorithm are characterized by n. The recursion stack space includes space for the
formal parameters, the local variables, and the return address (one word). Each call to Rsum requires
at least 3 words (for n, the return address, and a pointer to a[]). Since the depth of recursion is n + 1,
the recursion stack space needed is ≥ 3(n + 1) words.
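A runnable C sketch of Rsum (our illustration; the name rsum and the 1-based array layout are our choices). Each call pushes one activation record, so the recursion depth for input n is n + 1, matching the analysis above.

#include <stdio.h>

/* Recursively sums a[1..n], mirroring the 1-based pseudocode.
   The recursion depth for input n is n + 1 calls. */
double rsum(const double a[], int n)
{
    if (n <= 0)
        return 0.0;
    return rsum(a, n - 1) + a[n];
}

int main(void)
{
    double a[] = {0.0, 1.5, 2.5, 3.0};   /* a[0] unused to keep 1-based indexing */
    printf("%.1f\n", rsum(a, 3));        /* prints 7.0 */
    return 0;
}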
Time Complexity :
The time complexity of an algorithm is the amount of computer time it needs to run to completion.
The time T(P) taken by a program P is the sum of its compile time and its run time. The compile time does
not depend on the instance characteristics, and a compiled program can be run several times without
recompilation, so we concern ourselves only with the run time of the program, denoted
T_P(instance characteristics).
The time complexity of an algorithm is given by the number of steps taken by the algorithm to
compute the function it was written for. The number of steps is computed as a function of some subset
of the number of inputs and outputs and the magnitudes of the inputs and outputs.
The number of steps needed by a program to solve a particular problem can be determined in two ways:
by instrumenting the program with a global count variable that is incremented each time a step is
executed, or by building a table that lists, for each statement, its steps per execution (s/e) and the
frequency with which it is executed. Both ways are illustrated in the examples that follow.
Example:
Algorithm Sum(a, n)
// count is a global variable, initially 0.
{
    s := 0.0;
    count := count + 1; // for the assignment to s
    for i := 1 to n do
    {
        count := count + 1; // for the for statement
        s := s + a[i]; count := count + 1; // for the assignment
    }
    count := count + 1; // for the last execution of the for statement
    count := count + 1; // for the return
    return s;
}
The change in the value of count by the time this program terminates is the number of steps executed by
the algorithm. The value of count is incremented by 2n in the for loop, and at termination the value of
count is 2n + 3. So each invocation of Sum executes a total of 2n + 3 steps.
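The same bookkeeping in runnable C (a sketch of ours; the names sum_counted and count are our choices). Running it prints count = 2n + 3 for any n:

#include <stdio.h>

long count = 0;    /* global step counter, initially 0 */

double sum_counted(const double a[], int n)
{
    double s = 0.0;
    count++;                        /* for the assignment to s */
    for (int i = 1; i <= n; i++) {
        count++;                    /* for the for-loop test */
        s += a[i]; count++;         /* for the assignment */
    }
    count++;                        /* for the last loop test */
    count++;                        /* for the return */
    return s;
}

int main(void)
{
    double a[11] = {0};             /* a[0] unused; n = 10 */
    sum_counted(a, 10);
    printf("count = %ld\n", count); /* prints count = 23, i.e. 2n + 3 for n = 10 */
    return 0;
}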
Example 2:
Algorithm RSum(a, n)
{
    count := count + 1; // for the if conditional
    if ( n ≤ 0 ) then
    {
        count := count + 1; // for the return
        return 0.0;
    }
    else
    {
        count := count + 1; // for the addition, function call, and return
        return RSum(a, n-1) + a[n];
    }
}
Let t_RSum(n) be the increase in the value of count when the algorithm terminates. When n = 0,
t_RSum(0) = 2. When n > 0, count increases by 2 plus whatever increase results from the recursive call,
t_RSum(n-1). When analyzing a recursive program for its step count, we often obtain a recursive formula
such as

t_RSum(n) = 2                   if n = 0
t_RSum(n) = 2 + t_RSum(n-1)     if n > 0
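Solving this recurrence by repeated substitution (a step worked out here for completeness):

t_RSum(n) = 2 + t_RSum(n-1)
          = 2 + 2 + t_RSum(n-2)
          = ...
          = 2n + t_RSum(0)
          = 2n + 2

So RSum executes 2n + 2 steps for an input of size n, which agrees with the step-count table for Example 2 below.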
Example 1:
Statement s/e Frequency Total Steps
1 Algorithm sum(a, n) 0 _ 0
2 { 0 _ 0
3 s := 0.0; 1 1 1
4 for i := 1 to n do 1 n+1 n+1
5 s := s+a[i]; 1 n n
6 return s; 1 1 1
7 } 0 - 0
Total: 2n + 3
Example 2:
Statement                            s/e    Frequency      Total Steps
                                            n=0    n>0     n=0    n>0
1 Algorithm Rsum(a, n)                0      -      -       0      0
2 {                                   0      -      -       0      0
3   if ( n ≤ 0 ) then                 1      1      1       1      1
4     return 0.0;                     1      1      0       1      0
5   else                              0      -      -       0      0
6     return Rsum(a, n-1) + a[n];    1+x     0      1       0     1+x
7 }                                   0      -      -       0      0
Total:                                                      2     2+x

x = t_RSum(n-1)
Example 3:
Statement s/e Frequency Total Steps
1 Algorithm add(a, b, c, m, n) 0 _ 0
2 { 0 _ 0
3 for i := 1 to m do 1 m+1 m+1
4 for j := 1 to n do 1 m(n + 1) m(n + 1)
5 c[i,j] := a[i,j]+b[i,j]; 1 mn mn
6 } 0 - 0
Total: 2mn+2m+1
Exercise :
1. Define time complexity and space complexity.
2. What is space complexity? Illustrate with an example for fixed and variable part in space complexity.
3. Explain the method of determining the complexity of procedure by the step count approach. Illustrate with
an example.
4. Implement an iterative function for the sum of array elements and find its time complexity using the
count-increment method.
5. Using the step-count method, analyze the time complexity of adding two m × n matrices.
6. Write an algorithm for linear search and analyze the algorithm for its time complexity.
7. Give the algorithm for matrix multiplication and find the time complexity of the algorithm using step-count
method.
8. Determine the frequency counts for all statements in the following algorithm segment.
i:=1
while ( i <= n) do
{
x := x + 1
i := i + 1
}
9. Write a recursive algorithm to find the sum of first n integers and Derive its time complexity.
10. Implement an algorithm to generate Fibonacci number sequence and determine the time complexity of the
algorithm using the frequency method.
11. What is the time complexity of the following function.
int fun(int n) {
    int sum = 0;
    for ( int i = 1; i <= n; i++ )
    {
        for ( int j = 1; j < n; j += i )
        {
            sum = sum + i * j;
        }
    }
    return sum;
}
12. Give the algorithm for transpose of a matrix m X n and determine the time complexity of the algorithm by
frequency-count method.
13. Give the algorithm for matrix addition and find the time complexity of the algorithm using frequency-count
method
14. Explain recursive function analysis with an example
Amortized Analysis
Amortized analysis considers not just one operation, but a sequence of operations on a given data
structure: it averages cost over the sequence of operations.
In an amortization scheme we charge some of the actual cost of an operation to other operations.
This reduces the charged cost of some operations and increases that of others. The amortized cost of
an operation is the total cost charged to it.
The cost transferring (amortization) scheme is required to be such that the sum of the amortized costs
of the operations is greater than or equal to the sum of their actual costs.
The only requirement of amortized complexity is that sum of the amortized complexities of all
operations in any sequence of operations be greater than or equal to their sum of the actual
complexities.
That is,

    Σ(1≤i≤n) amortized(i) ≥ Σ(1≤i≤n) actual(i)    -- (1)

where amortized(i) and actual(i), respectively, denote the amortized and actual complexities of the ith
operation in a sequence of n operations.
For this reason, we may use the sum of the amortized complexities as an upper bound on the
complexity of any sequence of operations.
Amortized cost of an operation is viewed as the amount you charge the operation rather than the
amount the operation costs. You can charge an operation any amount you wish so long as the amount
charged to all operations in the sequence is at least equal to the actual cost of the operation sequence.
Relative to the actual and amortized costs of each operation in a sequence of n operations, we define
a potential function P(i) as below
P(i) = amortized(i) – actual(i) + P(i-1) -- (2)
That is, the ith operation causes the potential function to change by the difference between the
amortized and actual costs of that operation.
Under the assumption P(0) = 0, the potential P(i) is the amount by which the first i operations have been
overcharged. Summing equation (2) over the whole sequence gives

    P(n) − P(0) = Σ(1≤i≤n) (amortized(i) − actual(i))

so requirement (1) is equivalent to

    P(n) − P(0) ≥ 0    -- (3)

Generally, when we analyze the complexity of a sequence of n operations, n can be any nonnegative
integer; therefore equation (3) must hold for all nonnegative integers n.
There are three popular methods of arriving at amortized costs for operations.
1. Aggregate method: Determine an upper bound UpperBoundOnSumOfActualCosts(n) on the sum of the
actual costs of the n operations. The amortized cost of each operation is set equal to
UpperBoundOnSumOfActualCosts(n) / n.
2. Accounting method: Assign amortized costs to the operations (by guessing), compute the P(i)s, and
show that P(n) − P(0) ≥ 0.
3. Potential method: Start with a potential function that satisfies P(n) − P(0) ≥ 0 and compute the
amortized complexities using
P(i) = amortized(i) − actual(i) + P(i − 1)
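As a worked example of the aggregate method (our example, not from the original notes), consider n increment operations on a binary counter that starts at 0. A single increment can flip many bits, but bit 0 flips on every increment, bit 1 on every second increment, and in general bit j on every 2^j-th increment, so the total number of bit flips over n increments is at most n + n/2 + n/4 + ... < 2n. The aggregate method therefore assigns each increment an amortized cost of 2n/n = 2 flips. The C sketch below counts the actual flips:

#include <stdio.h>

#define BITS 32

/* Increments a binary counter stored as an array of bits;
   returns the number of bit flips this increment performs. */
int increment(int bit[BITS])
{
    int flips = 0, i = 0;
    while (i < BITS && bit[i] == 1) {   /* clear the trailing run of 1s */
        bit[i++] = 0;
        flips++;
    }
    if (i < BITS) {                     /* set the first 0 bit */
        bit[i] = 1;
        flips++;
    }
    return flips;
}

int main(void)
{
    int bit[BITS] = {0};
    long total = 0;
    int n = 1000;
    for (int k = 0; k < n; k++)
        total += increment(bit);
    printf("n = %d, total flips = %ld\n", n, total);  /* prints 1994, which is < 2n */
    return 0;
}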
Asymptotic notations:
The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that doesn't
depend on machine-specific constants, and doesn't require algorithms to be implemented and the time
taken by programs to be compared.
Asymptotic notations are mathematical tools to represent time complexity of algorithms for
asymptotic analysis.
The following asymptotic notations are mostly used to represent time complexity of algorithms.
[Big-Oh] O-notation:
Definition: f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Big-Oh notation defines an upper bound of an algorithm; it bounds a function only from above.
Big-Oh notation is used widely to characterize running-time and space bounds in terms of some
parameter n, which varies from problem to problem.
Constant factors and lower-order terms are not included in the big-Oh notation.
For example, consider the case of insertion sort. It takes linear time in the best case and quadratic time
in the worst case. We can safely say that the time complexity of insertion sort is O(n²). Note that O(n²)
also covers linear time.
Example : 7n – 2 is O(n)
Proof: By the big-Oh definition, we need to find a real constant c>0 and an integer constant n0≥1 such that
7n – 2 ≤ cn for every integer n≥n0.
One possible choice is c = 7 and n0 = 1, since 7n − 2 ≤ 7n for all n ≥ 1.
Here is a list of functions that are commonly encountered when analyzing algorithms. The slower-growing
functions are listed first; k is some arbitrary constant.

Notation           Name
O(1)               Constant
O(log n)           Logarithmic
O(n)               Linear
O(n log n)         Linearithmic (log-linear)
O(n²)              Quadratic
O(n³)              Cubic
O(n^k) (k ≥ 1)     Polynomial
O(k^n) (k > 1)     Exponential
Instead of always applying the big-Oh definition directly to obtain a big-Oh characterization, we can use the
following rules to simplify notation.
Theorem: Let d(n), e(n), f(n) and g(n) be functions mapping nonnegative integers to nonnegative reals.
Then
1. If d(n) is O(f(n)), then kd(n) is O(f(n)), for any constant k > 0.
2. If d(n) is O(f(n)) and e(n) is O(g(n)), then d(n)+e(n) is O(f(n)+g(n)).
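As an illustration (with numbers of our own choosing): d(n) = 5n is O(n), so rule 1 gives 20·d(n) = 100n is O(n); and e(n) = 3n² is O(n²), so rule 2 gives d(n) + e(n) = 5n + 3n², which is O(n + n²) = O(n²).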
[Omega] Ω-notation:
Definition: f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
Ω-notation provides an asymptotic lower bound.
If the running time of an algorithm is Ω(g(n)), then the meaning is, "the running time on that input is
at least a constant times g(n), for sufficiently large n".
Thus Ω(g(n)) gives a lower bound on the best-case running time of an algorithm.
E.g.: the best-case running time of insertion sort is Ω(n).
[Theta] Θ-notation:
Definition: f(n) = Θ(g(n)) iff there exist positive constants c1, c2, and n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
For all values of n at and to the right of n0, the value of f(n) lies at or above c1·g(n) and at or below
c2·g(n).
For all n≥n0, the function f(n) is equal to g(n) to within constant factors.
We can say that g(n) is an asymptotically tight bound for f(n).
Example: Show that 3n + 2 = Θ(n).
We must find c1, c2, and n0 such that
c1·n ≤ 3n + 2 ≤ c2·n for all n ≥ n0.
The choice c1 = 3, c2 = 4, n0 = 2 satisfies these inequalities: 3n ≤ 3n + 2 for all n, and 3n + 2 ≤ 4n
whenever n ≥ 2.
Therefore 3n + 2 = Θ(n).
[Little-oh] o-notation:
O-notation provides an asymptotic upper bound that may or may not be asymptotically tight.
The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not. We use o-notation to
denote an upper bound that is not asymptotically tight.
Definition: f(n) = o(g(n)) iff for every positive constant c > 0 there exists a constant n0 > 0 such that
0 ≤ f(n) < c·g(n) for all n ≥ n0.
Equivalently, f(n) = o(g(n)) iff lim(n→∞) f(n)/g(n) = 0.
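For example, 2n = o(n²), since lim(n→∞) 2n/n² = lim(n→∞) 2/n = 0; but 2n² ≠ o(n²), since 2n²/n² tends to 2, not 0.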
[Little-omega] ω-notation:
ω-notation is used to denote a lower bound that is not asymptotically tight.
By analogy, ω-notation is to Ω-notation as o-notation is to O-notation.
Definition: f(n) = ω(g(n)) iff for every positive constant c > 0 there exists a constant n0 > 0 such that
0 ≤ c·g(n) < f(n) for all n ≥ n0.
Equivalently, f(n) = ω(g(n)) iff lim(n→∞) g(n)/f(n) = 0.
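For example, n²/2 = ω(n), since lim(n→∞) n/(n²/2) = lim(n→∞) 2/n = 0; but n²/2 ≠ ω(n²), since n²/(n²/2) tends to 2, not 0.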
Practical complexities
The time complexity of an algorithm is generally some function of the instance characteristics.
This function is useful
o in determining how the time requirements vary as the instance characteristics change;
o in comparing two algorithms P and Q that perform the same task.
Assume that algorithm P has complexity Θ(n) and algorithm Q has complexity Θ(n²). Then we can assert
that algorithm P is faster than algorithm Q for sufficiently large n.
The function 2^n grows very rapidly with n.
E.g.: suppose a computer can execute one billion (10^9) steps per second and n = 40. If an algorithm
needs 2^n steps, then the number of steps needed is 2^40 ≈ 1.1 × 10^12, which takes about 18.3
minutes to execute.
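A small C sketch (ours, not from the notes) that reproduces this kind of estimate for several common step-count functions, assuming a machine that executes 10^9 steps per second (compile with -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double steps_per_sec = 1e9;   /* assumed machine speed */
    const double n = 40.0;

    double steps[] = {
        n,                   /* n       */
        n * log2(n),         /* n log n */
        n * n,               /* n^2     */
        pow(2.0, n)          /* 2^n     */
    };
    const char *names[] = {"n", "n log n", "n^2", "2^n"};

    for (int i = 0; i < 4; i++)
        printf("%-8s : %14.0f steps = %g seconds\n",
               names[i], steps[i], steps[i] / steps_per_sec);
    /* 2^40 is about 1.1e12 steps, i.e. roughly 1100 seconds (18.3 minutes) */
    return 0;
}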
Important Questions :
1. Write about three popular methods to arrive at amortized costs for operations with example.
2. What is amortized analysis? Explain with an example. ****
3. What is amortized analysis of algorithms and how is it different from asymptotic analysis.
4. What are asymptotic notations? And give its properties.
5. Give the definition and graphical representation of asymptotic notations.
6. Describe and define any three asymptotic notations.
7. Give the Big – O notation definition and briefly discuss with suitable example.
8. Define Little Oh notation with example.
9. Write about big oh notation and also discuss its properties.
10. Define Omega notation
11. Explain Omega and Theta notations.
12. Compare Big-oh notation and Little-oh notation. Illustrate with an example.
13. Differentiate between Bigoh and omega notation with example.
14. Show that the following equalities are incorrect:
    10n² + 9 = O(n)
    n² log n = Θ(n²)
    n² / log n = Θ(n²)
    n³2^n + 5n²3^n = O(n³2^n)
15. Describe best case, average case and worst case efficiency of an algorithm.
16. Find big-oh and little-oh notation for f(n) = 7n³ + 50n² + 200.
17. What are different mathematical notations used for algorithm analysis
18. Prove the theorem: if f(n) = a_m·n^m + ... + a_1·n + a_0, then f(n) = O(n^m).
19. Prove the theorem: if f(n) = a_m·n^m + ... + a_1·n + a_0 and a_m > 0, then f(n) = Θ(n^m).