
Design and Analysis of

Algorithms
(CS6402)

Prof. Dr. P.Ramasubramanian


Department of Computer Science and Engineering,
Annai Vailankanni College of Engineering,
AVK Nagar, Azhagappapuram – 629401.
Kanyakumari District.

8/20/2019 DAA - Unit - I Presentation Slides 1 1


Design & Analysis of Algorithms
 Algorithm analysis
Analysis of resource usage of given algorithms
(time , space)
 Efficient algorithms
Algorithms that make an efficient usage of
resources
 Algorithm design
Methods for designing efficient algorithms

8/20/2019 DAA - Unit - I Presentation Slides 2 2


Design & Analysis of Algorithms
"algos" = Greek word for pain.
"algor" = Latin word for to be cold.
Why study this subject?
 Efficient algorithms lead to efficient programs.

 Efficient programs sell better.

 Efficient programs make better use of hardware.

 Programmers who write efficient programs are

preferred.
8/20/2019 DAA - Unit - I Presentation Slides 3
Objectives
 To gain experience in the fundamental
techniques used for algorithm analysis and
in the main methodologies used for the
design of efficient algorithms.
 To study the most important computer
algorithms of current practical use.

8/20/2019 DAA - Unit - I Presentation Slides 4 4


Contents
 Analysis of Iterative and Recursive Algorithms
1. Brute Force Algorithms
2. Recursive Algorithms
 Major Algorithm Design Methodologies
1. Transform & Conquer Algorithms
2. Divide & Conquer Algorithms
3. Greedy Algorithms
4. Intermezzo
5. Dynamic Programming
6. Backtracking Algorithms
7. Graph Algorithms
8. Branch & Bound
9. Other Strategies (Heuristics, String & Numerical Algorithms)
8/20/2019 DAA - Unit - I Presentation Slides 5 5
Course Outcomes
After completing the course, students should be able to:
1. Determine the time and space complexity of simple
algorithms.
2. Use big O, omega, and theta notation to give asymptotic
upper, lower, and tight bounds on time and space
complexity of algorithms.
3. Recognize the difference between mathematical modeling
and empirical analysis of algorithms, and the difference
between deterministic and randomized algorithms.
4. Deduce recurrence relations that describe the time
complexity of recursively defined algorithms and work out
their particular and general solutions.
8/20/2019 DAA - Unit - I Presentation Slides 6 6
Course Outcomes (Contd…)
5. Practice the main algorithm design strategies of Brute
Force, Divide & Conquer, Greedy methods, Dynamic
Programming, Backtracking and Branch & Bound and
implement examples of each.
6. Implement the most common sorting and searching
algorithms and perform their complexity analysis.
7. Solve problems using the fundamental graph algorithms
including DFS, BFS, SSSP and APSP, transitive closure,
topological sort, and the minimum spanning tree
algorithms.
8. Evaluate, select and implement algorithms in
programming context.
8/20/2019 DAA - Unit - I Presentation Slides 7 7
UNIT – I - Introduction
Notion of an Algorithm – Fundamentals of
Algorithmic Problem Solving – Important
Problem Types – Fundamentals of the Analysis
of Algorithm Efficiency – Analysis Framework –
Asymptotic Notations and their properties –
Mathematical analysis for Recursive and Non-
recursive algorithms.

8/20/2019 DAA - Unit - I Presentation Slides 8


What is an algorithm?
An algorithm is a list of steps (sequence of unambiguous
instructions ) for solving a problem that transforms the
input into the output.

[Diagram: a problem is solved by an algorithm; the input is processed by a
"computer" executing the algorithm to produce the output.]

8/20/2019 DAA - Unit - I Presentation Slides 9


Difference between Algorithm and Program

1. An algorithm is finite; a program need not be finite.
2. An algorithm is written using natural language or an algorithmic
language; a program is written using a specific programming language.

8/20/2019 DAA - Unit - I Presentation Slides 10


Fundamentals of Algorithm and Problem Solving

8/20/2019 DAA - Unit - I Presentation Slides 11


Problem Solving Techniques
1. Understand the problem, or review the specifications.
2. Plan the logic:
3. a) Informal design:
      i. List the major tasks.
      ii. List subtasks, sub-subtasks, and so on.
   b) Formal design:
      i. Create a formal design from the task lists.
      ii. Desk-check the design.
4. Write an algorithm.
5. Draw a flowchart.
6. Code the program.
7. Translate the program into machine language.
8. Test the program.
9. If necessary, debug the program.
10. Document the program.
11. Put the program into production; if necessary, maintain the
program.
8/20/2019 DAA - Unit - I Presentation Slides 12
Example of computational problem: sorting
• Arranging data in a specific order (increasing or
decreasing) is called sorting. The data may be
numerical data or alphabetical data.
A1  A2  A3  ……  An or
An ≥ An–1 ≥ An–2 ≥ …… ≥ A1 ≥ A0
• Internal Sorting
Here, all data are held in primary memory during the
sorting process.
• External Sorting
Here, primary memory is used for the data currently being
sorted and secondary storage for storing the remaining data.
8/20/2019 DAA - Unit - I Presentation Slides 13
Types of Sorting
• Internal Sorting
– Insertion (Insertion sort, Address Calculation sort, Shell sort)
– Selection (Selection sort, Heap sort)
– Exchange (Bubble sort, Quick sort, Radix sort)
• External Sorting
– Natural sort
– Merge sort
– Multi-way merge sort
– Balanced sort
– Polyphase sort

8/20/2019 DAA - Unit - I Presentation Slides 14
Selection Sort
Suppose A is an array which consists of n elements A[1], A[2], . . . , A[N]. The
selection sort algorithm works as follows.
Step 1:   a. Find the location LOC of the smallest element in the list A[1], A[2], . . . , A[N].
          b. Interchange A[LOC] and A[1].
          c. Now, A[1] is sorted.
Step 2:   a. Find the location LOC of the smallest element in the sublist A[2], . . . , A[N]
             (the second smallest element overall).
          b. Interchange A[LOC] and A[2].
          c. Now, A[1] and A[2] are sorted. Hence, A[1] ≤ A[2].
Step 3:   a. Find the location LOC of the smallest element in the sublist A[3], . . . , A[N].
          b. Interchange A[LOC] and A[3].
          c. Now, A[1], A[2] and A[3] are sorted. Hence, A[1] ≤ A[2] ≤ A[3].
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Step N–1: a. Find the location LOC of the smaller of the elements A[N–1] and A[N].
          b. Interchange A[LOC] and A[N–1].
          c. Now, A[1], A[2], . . . , A[N] are sorted. Hence, A[1] ≤ … ≤ A[N–1] ≤ A[N].
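A minimal Python sketch of the selection sort procedure described above (the function
and variable names are illustrative, not taken from the slides):

def selection_sort(a):
    """In-place selection sort following the steps above."""
    n = len(a)
    for i in range(n - 1):              # Step i+1: fill position i
        loc = i                         # location of the smallest element so far
        for j in range(i + 1, n):       # scan the unsorted sublist a[i+1 .. n-1]
            if a[j] < a[loc]:
                loc = j
        a[i], a[loc] = a[loc], a[i]     # interchange A[LOC] and A[i]
    return a

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]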
8/20/2019 DAA - Unit - I Presentation Slides 15
8/20/2019 DAA - Unit - I Presentation Slides 16
Some Well-known Computational Problems
 Sorting
 Searching
 Shortest paths in a graph
 Minimum spanning tree
 Primality testing
 Traveling salesman problem
 Knapsack problem
 Chess
 Towers of Hanoi
 Program termination
 Some of these problems don’t have efficient algorithms, or algorithms at all!

8/20/2019 DAA - Unit - I Presentation Slides 17


Basic Issues Related to Algorithms
 How to design algorithms
 How to express algorithms

 Proving correctness

 Efficiency (or complexity) analysis


• Theoretical analysis

• Empirical analysis

 Optimality

8/20/2019 DAA - Unit - I Presentation Slides 18


Algorithm design strategies
 Brute force
 Divide and conquer
 Decrease and conquer
 Transform and conquer
 Greedy approach
 Dynamic programming
 Backtracking and branch-and-bound
 Space and time tradeoffs

8/20/2019 DAA - Unit - I Presentation Slides 19
PROPERTIES OF AN ALGORITHM
1. An algorithm takes zero or more inputs
2. An algorithm results in one or more outputs
3. All operations can be carried out in a finite amount of
time
4. An algorithm should be efficient and flexible
5. It should use as little memory space as possible
6. An algorithm must terminate after a finite number of
steps.
7. Each step in the algorithm must be easily understood by
someone reading it
8. An algorithm should be concise and compact to
facilitate verification of its correctness.
8/20/2019 DAA - Unit - I Presentation Slides 20
STEPS FOR WRITING AN ALGORITHM
• An algorithm consists of two parts.
– The first part is a paragraph that states the
purpose of the algorithm, identifies the variables
that occur in the algorithm, and lists the
input data.
– The second part consists of the list of steps that is
to be executed.

8/20/2019 DAA - Unit - I Presentation Slides 21


STEPS FOR WRITING AN ALGORITHM (Contd…)
Step 1: Identifying Number
Each algorithm is assigned an identifying number.
Example: Algorithm 1, Algorithm 2, etc.

Step 2: Comment
Each step may contain a comment bracket, which identifies or
indicates the main purpose of the step.

The comment usually appears at the beginning or end of
the step and is enclosed in square brackets [ ].

• Example :
Step 1: [Initialize]
set K : = 1
8/20/2019 DAA - Unit - I Presentation Slides 22
STEPS FOR WRITING AN ALGORITHM (Contd…)
Step 3 : Variable Names
Variable names use capital letters, e.g., MAX, DATA. Single-letter
names are used for variables that act as counters or subscripts.

Step 4 : Assignment Statement


It uses the dot-equal notation (:=). Some texts use the ← or =
notation.
Step 5 : Input and Output
Data may be input and assigned to variables by means of a
Read statement. Its syntax is :
READ : variable names
Example:
READ: a,b,c
Similarly, messages placed in quotation marks, and data in
variables may be output by means of a write or print
statement. Its syntax is:
WRITE : Messages and / or variable names.
Example:
WRITE: a, b, c
8/20/2019 DAA - Unit - I Presentation Slides 23
STEPS FOR WRITING AN ALGORITHM (Contd…)
Step 7 : Controls:
It has three types
(i) : Sequential Logic :
It is executed by means of numbered steps or by the
order in which the modules are written
(ii) : Selection or Conditional Logic
It is used to select only one of several alternative
modules. The end of structure is usually indicated by the
statement.
[ End - of-IF structure]

The selection logic consists of three types

Single Alternative, Double Alternative and Multiple Alternative


8/20/2019 DAA - Unit - I Presentation Slides 24
STEPS FOR WRITING AN ALGORITHM (Contd…)
a) Single alternative. Its syntax is:
      IF condition, then:
         [Module A]
      [End-of-IF structure]
b) Double alternative. Its syntax is:
      IF condition, then:
         [Module A]
      Else:
         [Module B]
      [End-of-IF structure]
c) Multiple alternative. Its syntax is:
      IF condition(1), then:
         [Module A1]
      Else IF condition(2), then:
         [Module A2]
      . . . . . . . . . . . .
      Else IF condition(M), then:
         [Module AM]
      Else:
         [Module B]
      [End-of-IF structure]

8/20/2019 DAA - Unit - I Presentation Slides 25


STEPS FOR WRITING AN ALGORITHM (Contd…)
(iii) Iteration or Repetitive Logic
It has two types. Each type begins with a repeat statement and
is followed by the module, called body of the loop. The following
statement indicates the end of structure.
[End of loop ]
(a) Repeat for loop
It uses an index variable to control the loop.
Repeat for K = R to S by T:
[Module]
[End of loop]
(b) Repeat while loop
It uses a condition to control the loop.
Repeat while condition:
[Module]
[End of loop]
iv) EXIT :
The algorithm is completed when the statement EXIT is
encountered.
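As an illustration, the two loop forms above translate directly into a modern
language; a small Python sketch (the bounds R, S, T and the loop bodies are made up
for the example):

# Repeat for K = R to S by T:  [Module]  [End of loop]
R, S, T = 1, 10, 2
for k in range(R, S + 1, T):      # an index variable controls the loop
    print("module body, K =", k)

# Repeat while condition:  [Module]  [End of loop]
n = 10
while n > 1:                      # a condition controls the loop
    n = n // 2                    # module body
print("loop finished, n =", n)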
8/20/2019 DAA - Unit - I Presentation Slides 26
Important problem types
 sorting

 searching

 string processing
 graph problems
 combinatorial problems

 geometric problems

 numerical problems

8/20/2019 DAA - Unit - I Presentation Slides 27
Real-World Applications
 Hardware design: VLSI chips
 Compilers
 Computer graphics: movies, video games
 Routing messages in the Internet
 Searching the Web
 Distributed file sharing
 Computer-aided design and manufacturing
 Security: e-commerce, voting machines
 Multimedia: CD player, DVD, MP3, JPG, HDTV
 DNA sequencing, protein folding
 and many more!
8/20/2019 DAA - Unit - I Presentation Slides 28
Some Important Problem Types
 Sorting: a set of items
 Searching: among a set of items
 String processing: text, bit strings, gene sequences
 Graphs: model objects and their relationships
 Combinatorial: find a desired permutation, combination or subset
 Geometric: graphics, imaging, robotics
 Numerical: continuous mathematics; solving equations, evaluating functions

8/20/2019 DAA - Unit - I Presentation Slides 29
Algorithm Design Techniques
 Brute Force & Exhaustive Search: follow the definition / try all possibilities
 Divide & Conquer: break the problem into distinct subproblems
 Transformation: convert the problem to another one
 Dynamic Programming: break the problem into overlapping subproblems
 Greedy: repeatedly do what is best now
 Iterative Improvement: repeatedly improve the current solution
 Randomization: use random numbers

8/20/2019 DAA - Unit - I Presentation Slides 30
Searching

Find a given value, called a search


key, in a given set.
Examples of searching algorithms
• Sequential search
• Binary search
• Interpolation search
• Robust interpolation search
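A brief Python sketch of two of the searching algorithms listed above, sequential
search and binary search on a sorted list (the function names are illustrative):

def sequential_search(a, key):
    """Return the index of key in a, or -1 if absent."""
    for i, value in enumerate(a):
        if value == key:
            return i
    return -1

def binary_search(a, key):
    """Return the index of key in the sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2          # probe the middle element
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(sequential_search([7, 3, 9, 1], 9))        # 2
print(binary_search([1, 3, 7, 9, 12, 15], 12))   # 4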

8/20/2019 DAA - Unit - I Presentation Slides 31


String Processing
A string is a sequence of characters from an
alphabet.
 Text strings: letters, numbers, and special characters.
 String matching: searching for a given word/pattern
in a text.
Examples:
(i) searching for a word or phrase on WWW or in a
Word document
(ii) searching for a short read in the reference genomic
sequence

8/20/2019 DAA - Unit - I Presentation Slides 32


Graph Problems
 Informal definition
• A graph is a collection of points called vertices, some
of which are connected by line segments called
edges.
 Modeling real-life problems
• Modeling WWW
• Communication networks
• Project scheduling …
 Examples of graph algorithms
• Graph traversal algorithms
• Shortest-path algorithms
• Topological sorting

8/20/2019 DAA - Unit - I Presentation Slides 33


Analysis of Algorithms

 How good is the algorithm?


• Correctness
• Time efficiency
• Space efficiency

 Does there exist a better algorithm?


• Lower bounds
• Optimality

8/20/2019 DAA - Unit - I Presentation Slides 34


PERFORMANCE ANALYSIS OF AN ALGORITHM
Any given problem may be solved by a number of
algorithms. To judge an algorithm there are many
criteria. Some of them are:
1. It must work correctly under all possible conditions
2. It must solve the problem according to the given
specification
3. It must be clearly written following the top down strategy
4. It must make efficient use of time and resources
5. It must be sufficiently documented so that anybody can
understand it
6. It must be easy to modify, if required.
7. It should not be dependent on being run on a particular
computer.

8/20/2019 DAA - Unit - I Presentation Slides 35


Algorithm Classification
There are various ways to classify algorithms:
1. Classification by implementation :
Recursion or iteration:
Logical:
Serial or parallel or distributed:
Deterministic or non-deterministic:
Exact or approximate:
2.Classification by Design Paradigm :
Divide and conquer.
Dynamic programming.
The greedy method.
Linear programming.
Reduction.
Search and enumeration.
The probabilistic and heuristic paradigm.
Solution Methods
I. Try every possibility: there are (n–1)! possibilities, which grows
faster than exponentially. Calculating all possibilities when n = 100
is already impractical.
II. Optimising methods: obtain a guaranteed optimal solution, but can
take a very, very long time.
III. Heuristic methods: obtain ‘good’ solutions ‘quickly’ by intuitive
methods, with no guarantee of optimality.
Example: heuristic algorithms for the Traveling Salesman Problem (TSP).
Euclid’s Algorithm
Problem: Find gcd(m,n), the greatest common divisor of two
nonnegative, not both zero integers m and n

Examples: gcd(60,24) = 12, gcd(60,0) = 60, gcd(0,0) = ?


Euclid’s algorithm is based on repeated application of equality
(i) gcd(m,n) = gcd(n, m mod n) (OR)
(ii) gcd(m, n) = gcd(m − n, n) for m ≥ n > 0.

until the second number becomes 0, which makes the problem


trivial.

Example: (i)  gcd(60,24) = gcd(24,12) = gcd(12,0) = 12
         (ii) gcd(60,24) = gcd(36,24) = gcd(12,24)
8/20/2019 DAA - Unit - I Presentation Slides 38
Two descriptions of Euclid’s algorithm

Step 1 If n = 0, return m and stop; otherwise go to Step 2


Step 2 Divide m by n and assign the value of the remainder to r
Step 3 Assign the value of n to m and the value of r to n. Go to
Step 1.

while n ≠ 0 do
r ← m mod n
m← n
n←r
return m
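The pseudocode above translates directly to Python; a minimal sketch:

def gcd(m, n):
    """Euclid's algorithm: gcd(m, n) = gcd(n, m mod n) until n becomes 0."""
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))        # 12
print(gcd(31415, 14142))  # 1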
8/20/2019 DAA - Unit - I Presentation Slides 39
Other methods for computing gcd(m,n)
Consecutive integer checking algorithm
Step 1 Assign the value of min{m,n} to t
Step 2 Divide m by t. If the remainder is 0, go to
Step 3; otherwise, go to Step 4
Step 3 Divide n by t. If the remainder is 0, return
t and stop; otherwise, go to Step 4
Step 4 Decrease t by 1 and go to Step 2

Is this slower than Euclid’s algorithm? How much slower?
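A Python sketch of the consecutive integer checking algorithm, assuming m and n are
positive (as written, it does not handle the case where one of them is 0):

def gcd_consecutive(m, n):
    """Try t = min(m, n), min(m, n) - 1, ... until t divides both m and n."""
    t = min(m, n)
    while m % t != 0 or n % t != 0:   # Steps 2-3: check both remainders
        t -= 1                        # Step 4: decrease t by 1
    return t

print(gcd_consecutive(60, 24))        # 12
print(gcd_consecutive(31415, 14142))  # 1, reached only after thousands of decrements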

8/20/2019 DAA - Unit - I Presentation Slides


Other methods for gcd(m,n) [cont.]
Middle-school procedure
Step 1 Find the prime factorization of m
Step 2 Find the prime factorization of n
Step 3 Find all the common prime factors
Step 4 Compute the product of all the common
prime factors and return it as gcd(m,n)
Is this an algorithm?
How efficient is it? Time complexity:
O(sqrt(n))
8/20/2019 DAA - Unit - I Presentation Slides
Problem. Find gcd(31415, 14142) by applying Euclid’s algorithm

gcd(31415, 14142) = gcd(14142, 3131)


= gcd(3131, 1618)
= gcd(1618, 1513)
= gcd(1513, 105)
= gcd(105, 43)
= gcd(43, 19)
= gcd(19, 5)
= gcd(5, 4)
= gcd(4, 1) = gcd(1, 0) = 1.
8/20/2019 DAA - Unit - I Presentation Slides 42
Estimate how many times faster it will be to find gcd(31415, 14142)
by Euclid’s algorithm compared with the algorithm based on
checking consecutive integers from min{m, n} down to gcd(m, n).

• The number of divisions made by Euclid’s algorithm is 11 .


• The number of divisions made by the consecutive integer
checking algorithm on each of its 14142 iterations is
either 1 or 2; hence the total number of divisions
is between
1·14142 and 2·14142.
• Therefore, Euclid’s algorithm will be between
1·14142/11 ≈ 1300 and 2·14142/11 ≈ 2600 times faster.

8/20/2019 DAA - Unit - I Presentation Slides 43


Algorithm Efficiency
• The efficiency of an algorithm is usually
measured by its CPU time and Storage space.
• The time is measured by counting the number of
key operations, that is how much time does it
take to run the algorithm. For example, in sorting
and searching algorithms, number of
comparisons.
• The space is measured by counting the maximum amount
of memory needed by the algorithm, that is, the
amount of memory required by the algorithm to
run to completion.
8/20/2019 DAA - Unit - I Presentation Slides 44
Time and space complexity
 This is generally a function of the input size
 E.g., sorting, multiplication
 How we characterize input size depends:
 Sorting: number of input items
 Multiplication: total number of bits
 Graph algorithms: number of nodes & edges
 Etc

8/20/2019 DAA - Unit - I Presentation Slides 45


Algorithm Analysis
• We only analyze correct algorithms
– An algorithm is correct
– If, for every input instance, it halts with the correct output
• Incorrect algorithms
– Might not halt at all on some input instances
– Might halt with other than the desired answer
• Analyzing an algorithm
– Predicting the resources that the algorithm requires
– Resources include
• Memory, Communication bandwidth, Computational time (usually most important)
• Factors affecting the running time
– computer , compiler, algorithm used
– input to the algorithm
• The content of the input affects the running time
• typically, the input size (number of items in the input) is the main consideration
– E.g. sorting problem: the number of items to be sorted
– E.g. multiplying two matrices: the total number of elements in the two matrices
• Machine model assumed
– Instructions are executed one after another, with no concurrent operations (not
parallel computers)
8/20/2019 DAA - Unit - I Presentation Slides 46
Algorithm Analysis (Contd…)
Many criteria affect the running time of an
algorithm, including
 speed of CPU, bus and peripheral hardware

 design think time, programming time and


debugging time
 language used and coding efficiency of the
programmer
 quality of input (good, bad or average)

8/20/2019 DAA - Unit - I Presentation Slides 47


Algorithm Analysis (Contd…)
Programs derived from two algorithms for solving
the same problem should both be
 Machine independent

 Language independent

 Environment independent (load on the system,...)

 Amenable to mathematical study

 Realistic

8/20/2019 DAA - Unit - I Presentation Slides 48


Faster Algorithm vs. Faster CPU
 A faster algorithm running on a slower machine will
always win for large enough instances
 Suppose algorithm S1 sorts n keys in 2n^2 instructions
 Suppose computer C1 executes 1 billion instructions/sec
 When n = 1 million, S1 on C1 takes 2000 sec
 Suppose algorithm S2 sorts n keys in 50·n·log2 n instructions
 Suppose computer C2 executes 10 million instructions/sec
 When n = 1 million, S2 on C2 takes about 100 sec
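A quick back-of-the-envelope check of these figures in Python:

from math import log2

n = 1_000_000
t1 = 2 * n**2 / 1e9            # S1 on C1: 2n^2 instructions at 10^9 instr/sec
t2 = 50 * n * log2(n) / 1e7    # S2 on C2: 50 n log2 n instructions at 10^7 instr/sec
print(round(t1), "sec")        # 2000 sec
print(round(t2), "sec")        # about 100 sec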

8/20/2019 DAA - Unit - I Presentation Slides 49


Performance measures: worst case,
average case and Best case

8/20/2019 DAA - Unit - I Presentation Slides 50


LINEAR LOOPS
Example:
i = 1
Loop (i <= 1000)
    Application code
    i = i + 1
Assume that i is an integer. The loop body executes 1000 times.

Example:
i = 1
Loop (i <= 1000)
    Application code
    i = i + 2

Here, the loop body executes 500 times: the number of iterations is inversely
proportional to the increment, so the higher the increment, the lower the
number of iterations. In both cases the work grows linearly with the loop limit,
so

f(n) = n.
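A small Python check of the two loop counts (the counter variable is illustrative):

count = 0
i = 1
while i <= 1000:      # increment by 1
    count += 1
    i += 1
print(count)          # 1000

count = 0
i = 1
while i <= 1000:      # increment by 2
    count += 1
    i += 2
print(count)          # 500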
8/20/2019 DAA - Unit - I Presentation Slides 51
LOGARITHMIC LOOPS
Multiply loop:               Divide loop:
i = 1                        i = 1
Loop (i < 1000)              Loop (i < 1000)
    Application code             Application code
    i = i * 2                    i = i / 2

Multiply: the loop runs while 2^iterations < 1000.
Divide:   the loop runs while 1000 / 2^iterations >= 1.
In both cases f(n) = log2 n.

Iteration   Multiply: value of i   Divide: value of i
1           1                      1000
2           2                      500
3           4                      250
4           8                      125
5           16                     62
6           32                     31
7           64                     15
8           128                    7
9           256                    3
10          512                    1
(exit)      1024                   0

8/20/2019 DAA - Unit - I Presentation Slides 52
NESTED LOOPS
When we analyze loops, we must determine how many iterations each
loop completes. The total is then the product of the number of iterations
for the inner loop and the number of iterations in the outer loop.
Iterations = outer loop iterations x inner loop iterations
i=1
Loop (i < = 10)
j=1
Loop (j<=10)
Application code
j=j*2
i=i+1

• The number of iterations of the inner loop is log2 10. In the above program
code, the inner loop is controlled by an outer loop, so this count must be
multiplied by the number of times the outer loop executes, which is 10.
This gives us 10 x log2 10. In general,

f(n) = n x log2 n

8/20/2019 DAA - Unit - I Presentation Slides 53


DEPENDENT QUADRATIC
i=1
Loop (i < = 10)
j=1
Loop (j<=i)
Application code
j=j+1
i=i+1
Here, the outer loop is same as the previous loop. However, the inner
loop is dependent on the outer loop for one of its factors. It is
executed only once the first iteration, twice the second iteration, three
times the third iteration and so on. The number of iterations in the
body of the inner loop is 1+2+3+4+……+8+9+10 = 55.

If we compute the average number of inner-loop iterations, it is 5.5 (55/10),
which is the same as (the number of iterations (10) plus 1) divided by 2; this
can be written as (n+1)/2.
Multiplying by the number of times the outer loop is executed gives the
following formula:
f(n) = n (n+1)/2
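A short Python check of the dependent-loop count (here with n = 10, as on the slide):

n = 10
count = 0
for i in range(1, n + 1):        # outer loop
    for j in range(1, i + 1):    # inner loop depends on i
        count += 1
print(count)                     # 55
print(n * (n + 1) // 2)          # 55, i.e. n(n+1)/2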
8/20/2019 DAA - Unit - I Presentation Slides 54
QUADRATIC
i=1
Loop (i < = 10)
j=1
Loop (j<=10)
Application code
j=j+1
i=i+1

The outer loop executed 10 times. For each of its


iterations, the inner loop is also executed ten
times. The answer is 100. Thus, f(n) = n2.
8/20/2019 DAA - Unit - I Presentation Slides 55
What is the running time of this algorithm?

PUZZLE(x)
while x != 1
if x is even
then x = x / 2
else x = 3x + 1

Sample run: 7, 22, 11, 34, 17, 52, 26, 13, 40, 20,
10, 5, 16, 8, 4, 2, 1
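A direct Python transcription of PUZZLE (the 3x + 1 process); whether it halts for
every starting x is a famous open question, which is the point of the slide:

def puzzle(x):
    """Repeat the 3x+1 step until x reaches 1, returning the values visited."""
    steps = [x]
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        steps.append(x)
    return steps

print(puzzle(7))   # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]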

8/20/2019 DAA - Unit - I Presentation Slides 56


Write pseudocode for an algorithm
for finding real roots of equation
ax2 + bx + c = 0 for arbitrary real
coefficients a, b, and c. (You may
assume the availability of the square
root function sqrt (x).)
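One possible solution sketch in Python, handling the degenerate cases where a (and
possibly b) is zero; this is an illustration, not the only acceptable pseudocode:

from math import sqrt

def real_roots(a, b, c):
    """Return a list of the real roots of ax^2 + bx + c = 0 (may be empty)."""
    if a == 0:
        if b == 0:
            return []            # c = 0: every x is a root; c != 0: no root
        return [-c / b]          # linear equation bx + c = 0
    d = b * b - 4 * a * c        # discriminant
    if d < 0:
        return []                # no real roots
    if d == 0:
        return [-b / (2 * a)]    # one (double) real root
    return [(-b + sqrt(d)) / (2 * a), (-b - sqrt(d)) / (2 * a)]

print(real_roots(1, -3, 2))      # [2.0, 1.0]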

8/20/2019 DAA - Unit - I Presentation Slides 57


Growth of Functions
• The relative performance of an algorithm
depends on the input data size N. If there are
multiple input parameters, we try to reduce
them to a single parameter, expressing some
parameters in terms of the selected parameter.
• The performance of an algorithm on
an input of size N is generally expressed in
terms of 1, log N, N, N log N, N^2, N^3, and 2^N. The
performance depends heavily on loops, and can
be improved by minimizing the work in the inner loops.
8/20/2019 DAA - Unit - I Presentation Slides 58
Computing Big – O
• The big-O notation can be derived from f(n) using the
following steps.

• In each term, set the coefficient of the term to one.

• Keep the largest term in the function and discard the
others. Terms are ranked from lowest to highest as
shown below:

log n,  n,  n log n,  n^2,  n^3,  …,  n^k,  2^n,  n!

8/20/2019 DAA - Unit - I Presentation Slides 59


Rate of growth of function
Logarithmic   Linear   Linear logarithmic   Quadratic   Polynomial   Exponential
log2 n        n        n log2 n             n^2         n^3          2^n
0             1        0                    1           1            2
1             2        2                    4           8            4
2             4        8                    16          64           16
3             8        24                   64          512          256
4             16       64                   256         4096         65536
5             32       160                  1024        32768        4294967296
3.322         10       33.22                10^2        10^3         > 10^3
6.644         10^2     664.4                10^4        10^6         > 10^25
9.966         10^3     9966.0               10^6        10^9         > 10^250
8/20/2019 DAA - Unit - I Presentation Slides 60


Asymptotic Algorithm Analysis
• The asymptotic analysis of an algorithm
determines the running time in big-Oh
notation. To perform the asymptotic analysis

• We find the worst-case number of primitive


operations executed as a function of the input
size
• We express this function with big-Oh notation
– Since constant factors and lower-order terms are
eventually dropped , we can disregard them when
counting primitive operations.
8/20/2019 DAA - Unit - I Presentation Slides 61
Asymptotic Performance
• Running time
• Memory/storage requirements
• Bandwidth/power requirements/logic gates/etc.

Analysis
• Worst case
– Provides an upper bound on running time
– An absolute guarantee
• Average case
– Provides the expected running time
– Very useful, but treat with care: what is “average”?
• Random (equally likely) inputs
• Real-life inputs

Asymptotic Notation – Big – Oh
• The idea is to establish a relative order among
functions for large n
– Given function f(n) and g(n) , we say that f(n) is
O(g(n)) if there are positive constants c and n0
such that f(n) ≤ cg(n) for n ≥ n0.

•The growth rate of f(N)


is less than or equal to
the growth rate of g(N)

•g(N) is an upper bound


on f(N)
8/20/2019 DAA - Unit - I Presentation Slides 64
Big – Oh – Example
• Let f(N) = 2N^2. Then
– f(N) = O(N^4); f(N) = O(N^3); f(N) = O(N^2)
(the last is the best answer, asymptotically tight)
• O(N^2): reads “order N-squared” or “Big-Oh N-squared”
• N^2/2 – 3N = O(N^2); 1 + 4N = O(N)
• 7N^2 + 10N + 3 = O(N^2)
• log10 N = log2 N / log2 10 = O(log2 N) = O(log N)
• sin N = O(1); 10 = O(1); 10^10 = O(1)
• log N + N = O(N); log^k N = O(N) for any constant k
• N = O(2^N), but 2^N is not O(N); 2^(10N) is not O(2^N)
8/20/2019 DAA - Unit - I Presentation Slides 65
Rules for finding Big – Oh
• If f(n) is a polynomial of degree d, then f(n) is
O(n^d), i.e.,
– Drop lower-order terms
– Drop constant factors
• Use the simplest possible class of function
– Say “2n is O(n)” instead of “2n is O(n^2)”
• Use the simplest expression of the class
– Say “3n+5 is O(n)” instead of “3n+5 is O(3n)”
• If T1(N) = O(f(N)) and T2(N) = O(g(N)), then
– T1(N) + T2(N) = max(O(f(N)), O(g(N)))
– T1(N) * T2(N) = O(f(N) * g(N))
8/20/2019 DAA - Unit - I Presentation Slides 66
8/20/2019 DAA - Unit - I Presentation Slides 67
Math Review
Intuition for Asymptotic Notation
Recurrence Relation
A recurrence relation is an equation which
is defined in terms of itself. That is, the
nth term is expressed in terms of one or
more previous terms (a_(n-1), a_(n-2), etc.).

Example:
a_n = 2a_(n-1) + a_(n-2)
Two fundamental rules
–Must always have a base case
–Each recursive call must be a case
that eventually leads toward a base
case
Recurrence Relation of Fibonacci
Number fib(n): fib(n) = fib(n-1) + fib(n-2), with fib(0) = 0 and fib(1) = 1

{0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …}


Example: Fibonacci Numbers
Problem: What is fib(200)? What about
fib(n), where n is any positive integer?

Algorithm 1 fib(n)
if n = 0 then
return (0)
if n = 1 then
return (1)
return (fib(n − 1) + fib(n − 2))
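A Python sketch contrasting Algorithm 1 with an iterative version that avoids
recomputing subproblems; the second function is an illustration of why the naive
recurrence is far too slow for inputs such as fib(200):

def fib_naive(n):
    """Algorithm 1: exponential time, since subproblems are recomputed."""
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_iter(n):
    """Linear-time version: keep only the last two values."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_naive(10))   # 55
print(fib_iter(200))   # computed instantly, unlike fib_naive(200)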
Recurrences
• The expression:
 c n 1

T ( n)  
2T  n   cn n  1
  2 
is a recurrence.
– Recurrence: an equation that describes a function
in terms of its value on smaller functions
Recurrence Examples

 0 n0  0 n0
s ( n)   s ( n)  
c  s(n  1) n  0 n  s(n  1) n  0

n 1 
 c  c n 1
 
T ( n)   T ( n)  
2T  n   c n  1  n
  2  aT    cn n  1
 b
Recursion Methods
• Substitution Method
• Recursion Tree or Iteration Method
• Master Method
Substitution Method

• Guess the form of the solution


• Use mathematical induction to find the
constants and show that the solution
works.
Solving Technique 3 Approximate Form
and Calculate Exponent
Iteration method
• Expand the recurrence
– Work some algebra to express as a summation
– Evaluate the summation
• We will show several examples
 0 n0
s ( n)  
c  s(n  1) n  0
• s(n) = c + s(n-1)
= c + c + s(n-2)
= 2c + s(n-2)
= 2c + c + s(n-3)
= 3c + s(n-3)

= kc + s(n-k) = ck + s(n-k)
 0 n0
s ( n)  
c  s(n  1) n  0
• So far for n >= k we have
– s(n) = ck + s(n-k)
• What if k = n?
– s(n) = cn + s(0) = cn
 0 n0
s ( n)  
c  s(n  1) n  0
• So far for n >= k we have
– s(n) = ck + s(n-k)
• What if k = n?
– s(n) = cn + s(0) = cn
• So  0 n0
s ( n)  
c  s(n  1) n  0
• Thus in general
– s(n) = cn
 0 n0
s ( n)  
n  s(n  1) n  0
• s(n) = n + s(n-1)
= n + n-1 + s(n-2)
= n + n-1 + n-2 + s(n-3)
= n + n-1 + n-2 + n-3 + s(n-4)
= …
= n + n-1 + n-2 + n-3 + … + n-(k-1) + s(n-k)
 0 n0
s ( n)  
n  s(n  1) n  0
s(n)
= n + s(n-1)
= n + n-1 + s(n-2)
= n + n-1 + n-2 + s(n-3)
= n + n-1 + n-2 + n-3 + s(n-4)
= …
= n + n-1 + n-2 + n-3 + … + n-(k-1) + s(n-k)
= n

i
i  n  k 1
 s(n  k )
 0 n0
s ( n)  
n  s(n  1) n  0
• So far for n >= k we have
n

i
i  n  k 1
 s(n  k )
 0 n0
s ( n)  
n  s(n  1) n  0
• So far for n >= k we have
n

i
i  n  k 1
 s(n  k )

• What if k = n?
 0 n0
s ( n)  
n  s(n  1) n  0
• So far for n >= k we have
n

i
i  n  k 1
 s(n  k )

• What if k = n?
n
n 1
n


i 1
i  s(0)   i  0  n
i 1 2
 0 n0
s ( n)  
n  s(n  1) n  0
• So far for n >= k we have
n

i
i  n  k 1
 s(n  k )

• What if k = n?
n
n 1
n


i 1
i  s(0)   i  0  n
i 1 2
• Thus in general
n 1
s ( n)  n
2
 c n 1
 n
T (n)  2T
   c n 1
  2 
• T(n) =
2T(n/2) + c
2(2T(n/2/2) + c) + c
22T(n/22) + 2c + c
22(2T(n/22/2) + c) + 3c
23T(n/23) + 4c + 3c
23T(n/23) + 7c
23(2T(n/23/2) + c) + 7c
24T(n/24) + 15c

2kT(n/2k) + (2k - 1)c
 c n 1
 n
T (n)  2T
   c n 1
  2 
• So far for n > 2k we have
– T(n) = 2kT(n/2k) + (2k - 1)c
• What if k = lg n?
– T(n) = 2lg n T(n/2lg n) + (2lg n - 1)c
= n T(n/n) + (n - 1)c
= n T(1) + (n-1)c
= nc + (n-1)c = (2n - 1)c
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• T(n) =
aT(n/b) + cn
a(aT(n/b/b) + cn/b) + cn
a2T(n/b2) + cna/b + cn
a2T(n/b2) + cn(a/b + 1)
a2(aT(n/b2/b) + cn/b2) + cn(a/b + 1)
a3T(n/b3) + cn(a2/b2) + cn(a/b + 1)
a3T(n/b3) + cn(a2/b2 + a/b + 1)

akT(n/bk) + cn(ak-1/bk-1 + ak-2/bk-2 + … + a2/b2 + a/b + 1)
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So we have
– T(n) = akT(n/bk) + cn(ak-1/bk-1 + ... + a2/b2 + a/b + 1)
• For k = logb n
– n = bk
– T(n) = akT(1) + cn(ak-1/bk-1 + ... + a2/b2 + a/b + 1)
= akc + cn(ak-1/bk-1 + ... + a2/b2 + a/b + 1)
= cak + cn(ak-1/bk-1 + ... + a2/b2 + a/b + 1)
= cnak /bk + cn(ak-1/bk-1 + ... + a2/b2 + a/b + 1)
= cn(ak/bk + ... + a2/b2 + a/b + 1)
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a = b?
– T(n) = cn(k + 1)
= cn(logb n + 1)
= (n log n)
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a < b?
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a < b?
– Recall that (xk + xk-1 + … + x + 1) = (xk+1 -1)/(x-1)
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a < b?
– Recall that (xk + xk-1 + … + x + 1) = (xk+1 -1)/(x-1)
– So:

a k a k 1 a
 k 1     1 
a b k 1  1 
1  a b 
k 1

1
k
b b b a b   1 1  a b  1 a b
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a < b?
– Recall that (xk + xk-1 + … + x + 1) = (xk+1 -1)/(x-1)
– So:

a k a k 1 a
 k 1     1 
a b k 1  1 
1  a b 
k 1

1
k
b b b a b   1
– T(n) = cn ·(1) = (n)
1  a b  1 a b
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a > b?
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a > b?
a k a k 1 a
 k 1     1 
a b k 1  1 
  a b 
k

k
b b b a b   1
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a > b?
a k a k 1 a
 k 1     1 
a b k 1  1 
  a b 
k

k
b b b a b   1
– T(n) = cn · (ak / bk)
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a > b?
a k a k 1 a
 k 1     1 
a b k 1  1 
  a b 
k

k
b b b a b   1
– T(n) = cn · (ak / bk)
= cn · (alog n / blog n) = cn · (alog n / n)
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a > b?
a k a k 1 a
 k 1     1 
a b k 1  1 
  a b 
k

k
b b b a b   1
– T(n) = cn · (ak / bk)
= cn · (alog n / blog n) = cn · (alog n / n)
recall logarithm fact: alog n = nlog a
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a > b?
a k a k 1 a
 k 1     1 
a b k 1  1 
  a b 
k

k
b b b a b   1
– T(n) = cn · (ak / bk)
= cn · (alog n / blog n) = cn · (alog n / n)
recall logarithm fact: alog n = nlog a
= cn · (nlog a / n) = (cn · nlog a / n)
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So with k = logb n
– T(n) = cn(ak/bk + ... + a2/b2 + a/b + 1)
• What if a > b?
a k a k 1 a
 k 1     1 
a b k 1  1 
  a b 
k

k
b b b a b   1
– T(n) = cn · (ak / bk)
= cn · (alog n / blog n) = cn · (alog n / n)
recall logarithm fact: alog n = nlog a
= cn · (nlog a / n) = (cn · nlog a / n)
= (nlog a )
 c n 1
 n
T (n)  aT
   cn n  1
  b 
• So…

 n  ab

T (n)  n log b n  ab
 
  n logb a
 ab
Master Method

• The master method provides a “cookbook”


method for solving recurrences of the form
T(n) = aT(n/b)+f(n),
Where a ≥ 1, b > 1 and f(n) is a given function;
it requires memorization of three cases, but
once you do that, determining asymptotic
bounds for many simple recurrences is easy.
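For reference, the standard statement of the three cases (not spelled out on this
slide) compares f(n) with n^(log_b a):
• Case 1: if f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
• Case 2: if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
• Case 3: if f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a·f(n/b) ≤ c·f(n) for some
  constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
Example: T(n) = 2T(n/2) + n falls under Case 2, giving T(n) = Θ(n log n).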

8/20/2019 DAA - Unit - I Presentation Slides 122


Example:
• Many natural functions are easily expressed as
recurrences:
• (Polynomial)  a_n = a_(n-1) + 1,  a_1 = 1  ⟹  a_n = n
• (Exponential) a_n = 2a_(n-1),     a_1 = 1  ⟹  a_n = 2^(n-1)
• It is often easy to find a recurrence as the
solution of a counting problem. Solving the
recurrence can be done for many special
cases. Example Fibonacci and Factorial of a
number.

8/20/2019 DAA - Unit - I Presentation Slides 123


Recursion is Mathematical Induction

We have general and boundary conditions, with the general


condition breaking the problem into smaller and smaller pieces.
The initial or boundary condition terminate the recursion.

The induction provides a useful tool to solve recurrences -


guess a solution and prove it by induction.

Example:  T_n = 2T_(n-1) + 1,  T_0 = 0

n     0   1   2   3   4    5    6    7
T_n   0   1   3   7   15   31   63   127

Prove that T_n = 2^n - 1 by induction:

8/20/2019 DAA - Unit - I Presentation Slides 124


Tn  2Tn 1  1, T0  0
• Proof:
• T0  2 0  1 = 0 Tn  2  1
n

• Show that the basis is true


• Now assume true for Tn  1 .
• Using this assumption we show
n 1
Tn  2 Tn 1  1  2( 2  1)  1  2  1
n

8/20/2019 DAA - Unit - I Presentation Slides 125


Solving recursive equations by repeated substitution

T(n) = T(n/2) + c                  substitute for T(n/2)
     = T(n/4) + c + c              substitute for T(n/4)
     = T(n/8) + c + c + c
     = T(n/2^3) + 3c               in more compact form
     = …
     = T(n/2^k) + kc               “inductive leap”

T(n) = T(n/2^(log n)) + c·log n    “choose k = log n”
     = T(n/n) + c·log n
     = T(1) + c·log n
     = b + c·log n = Θ(log n)
8/20/2019 DAA - Unit - I Presentation Slides 126
Solving recursive equations by telescoping
• T(n) = T(n/2) + c initial equation
• T(n/2) = T(n/4) + c so this holds
• T(n/4) = T(n/8) + c and this …
• T(n/8) = T(n/16) + c and this …
• …
• T(4) = T(2) + c eventually …
• T(2) = T(1) + c and this …
• T(n) = T(1) + c·log n     sum the equations, canceling
                            the terms appearing on both sides
• T(n) = Θ(log n)

8/20/2019 DAA - Unit - I Presentation Slides 127
Mathematical Analysis of Non-Recursive Algorithms
• Finding the value of the largest element in a list of n numbers.
• General Plan for Analyzing the Time Efficiency of Non-recursive Algorithms
– 1. Decide on a parameter (or parameters) indicating an input’s size.
– 2. Identify the algorithm’s basic operation.
– 3. Check whether the number of times the basic operation is executed
depends only on the size of an input. If it also depends on some
additional property, the worst-case, average-case, and, if necessary,
best-case efficiencies have to be investigated separately.
– 4. Set up a sum expressing the number of times the algorithm’s
basic operation is executed.
– 5. Using standard formulas and rules of sum manipulation, either find a
closed-form formula for the count or, at the very least, establish its
order of growth.
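Applying this plan to the example mentioned above (finding the largest element), a
Python sketch with the basic operation marked; the comparison is executed n - 1
times, so the running time is Θ(n):

def max_element(a):
    """Return the largest value in the non-empty list a."""
    max_val = a[0]
    for i in range(1, len(a)):    # executed n - 1 times for input size n
        if a[i] > max_val:        # basic operation: the comparison
            max_val = a[i]
    return max_val

print(max_element([3, 9, 2, 7]))  # 9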
8/20/2019 DAA - Unit - I Presentation Slides 133
Analysis of Linear Search
• Another name - Sequential Search
• Definition
– Suppose A is a linear array with n elements and ITEM
is a given item of information. This algorithm finds the
location LOC of ITEM in A, or sets LOC=0 if the search
is unsuccessful.
– To do this, we compare ITEM with each element of A
one by one. That is, first we test whether A[1]=ITEM,
and then we test whether A[2]=ITEM and so on. This
method, which traverses A sequentially to locate ITEM, is
called linear search or sequential search.
– To simplify the algorithm, we first assign ITEM to
A[N+1]. Then the outcome LOC=N+1, signifies the
search is unsuccessful.
Linear Search : Algorithm
Algorithm:LINEAR (A, N, ITEM, LOC)
• Suppose A is a linear array with n elements and ITEM is a
given item of information. This algorithm finds the location LOC
of ITEM in A, or sets LOC=0 if the search is unsuccessful.
1. [Insert ITEM at the end of A]
2.    Set A[N+1] := ITEM
3. [Initialize Counter]
      Set LOC := 1
4. [Search for ITEM]
      Repeat while A[LOC] ≠ ITEM:
          Set LOC := LOC + 1
      [End of Loop]
5. [Successful?]
      If LOC = N+1, then:
          Set LOC := 0
6. Exit.
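A Python sketch of the sentinel version of linear search described above; list
indices are 0-based here, so positions are reported 1-based and 0 signals an
unsuccessful search, as in the slides:

def linear_search(a, item):
    """Sentinel linear search: 1-based position of item in a, or 0 if absent."""
    a = a + [item]               # steps 1-2: insert ITEM at the end as a sentinel
    loc = 0                      # step 3: 0-based index, corresponds to LOC = 1
    while a[loc] != item:        # step 4: repeat while A[LOC] != ITEM
        loc += 1
    return 0 if loc == len(a) - 1 else loc + 1   # step 5: LOC = N+1 means not found

print(linear_search([5, 3, 8, 1], 8))   # 3
print(linear_search([5, 3, 8, 1], 7))   # 0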
Analysis of Linear Search
• The number of comparisons depends on where the
target key appears in the list.
• Two important cases to consider are the average
case and worst case.
Location of the element      Number of comparisons required
0                            1
1                            2
2                            3
:                            :
N-1                          N
Not in the array             N
Average Case
• Suppose ITEM does appear in A. If the desired record is
the first one in the list, only one comparison is required.
If the desired record is the second one in the list, two
comparisons are required. If it is the last one in the list,
n comparisons are required. The performance of a
successful search therefore depends on where the
target is found.
Average Case (Contd …)
• Find the average number of key comparisons done in case of a
successful sequential search by adding the number of
comparisons needed for all the successful searches and divide
it by n, the number of entries in the list as
(1 + 2 + 3 + …… + n) / n  =  n(n + 1) / (2n)  =  (n + 1) / 2
Worst Case
• If the search is unsuccessful, the target is compared
with every entry in the list; with the sentinel in
A[N+1], the algorithm requires f(n) = n + 1
comparisons. Thus in the worst case, the running
time is proportional to n.
• Therefore, in both the cases, the number of
comparisons is of the order of n denoted as
O(n).
