
Design Automation

Lecture 2
Algorithms & Algorithm Complexity

References:
Sait & Youssef Appendix A

1
Part 1: Algorithms (Searching, Sorting) and Algorithm Complexity

An algorithm is a set of well-defined steps to accomplish a given task.

In computing we want algorithms that use minimal resources.

We generally want to minimize


time
space
power

used to accomplish a task. Often we can minimize one of these only at the expense of
the others. Usually time is the most important resource to minimize. We will focus on
time here.

We will consider the worst-case time for a given algorithm over all possible inputs (for
example, all possible sets of N integers). We will describe the worst-case time using
the "order of", or big-O, notation
2

Definition: big-O notation. A theoretical measure of the execution of
an algorithm, usually the time or memory needed, given the problem
size n, which is usually the number of items. Informally, saying some
equation f(n) = O(g(n)) means f(n) is less than some constant multiple of
g(n) for all sufficiently large n. https://xlinux.nist.gov/dads/HTML/bigOnotation.html

Example: the following expressions are all O(n^2):

10,000n^2;  80n^2 + 6n + 95;  27n log(n);  4n + 8;  9999
the following are NOT O(n^2): 2^n; n^3; n^2 log(n)
[aside: it is also true that 4n + 8 = O(n) and 9999 = O(1)]
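As a quick worked check against the definition (not from the original slide): for all n >= 1 we have 80n^2 + 6n + 95 <= 80n^2 + 6n^2 + 95n^2 = 181n^2, so taking the constant multiple c = 181 shows 80n^2 + 6n + 95 = O(n^2); similarly 27n log(n) <= 27n·n = 27n^2 for n >= 1.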
(note: in computing we usually use log_2(n); the base of the logarithm
does not really matter, since, as we remember from algebra, for any
positive numbers a and b (not equal to 1) and for any positive real number x,
log_b(x) = log_a(x) / log_a(b). Here we will assume we are talking about base-2 logs.)
3
Exercise 1:
for each of the following expressions, give the smallest a such that the expression is
O(n^a):

a. 17n + n^3

b. 56n^5 log(n)

c. 54n^(1/2)

d. 4n

e. n^3 + 3n^4 log_10(n)

NOTE: class exercises make good quiz questions
4


Algorithm complexity:

For a given algorithm, we want to find how many time units it takes for the
algorithm to execute on an input of a given size. This can depend on how
the data is stored. (Most of the pseudocode below is from Wikipedia, sometimes
slightly modified.)

5
Task 1. Searching. Given a set of N integers and an integer X, is X a member of the set?

Case 1: given an unsorted array of N integers, is X in the array?

Pseudocode (iterative version; if the item is not in the array, return -1):

For each item in the array:
    if that item has the desired value,
        stop the search and return the item's location.
Return -1.

Example: recall array A: items are stored in consecutive memory locations, the 1st array item is at A[0]
(hw OR sw)

POS  0  1  2   3  4  5  6   7   8   9  10  11  12  13  14
VAL  1  2  11  4  5  6  19  10  21  3  13  14  8   17  7
We can implement this as a for loop, with a few extra statements before and after the loop. In the worst
case, the for loop will execute N times. So this is an O(N) algorithm.
6
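For concreteness, here is a minimal Python sketch of this iterative linear search (the function name and example calls are my own, not from the slides):

def linear_search(array, value):
    """Return the index of value in array, or -1 if it is not present."""
    for index, item in enumerate(array):
        if item == value:          # this item has the desired value
            return index           # stop and return its location
    return -1                      # reached the end without a match

# Using the example array from the slide above
A = [1, 2, 11, 4, 5, 6, 19, 10, 21, 3, 13, 14, 8, 17, 7]
print(linear_search(A, 19))   # -> 6
print(linear_search(A, 9))    # -> -1 (worst case: all N items examined)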
Task 1. Searching. Given a set of N integers and an integer X, is X a
member of the set?

Recursive version:

LinearSearch(value, array)
    if the array size is 0, return -1;
    else if the first item of the array has the desired value, return its location;
    else return LinearSearch(value, remainder of the array)

Exercise 2. Explain why the recursive version is also an O(N) algorithm.

7
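A Python sketch of the recursive version (my own; it passes a start index rather than slicing off the remainder of the array, so no copy is made on each call):

def linear_search_recursive(value, array, start=0):
    """Return the location of value in array[start:], or -1 if it is absent."""
    if start == len(array):            # the (remaining) array is empty
        return -1
    if array[start] == value:          # first remaining item has the desired value
        return start
    return linear_search_recursive(value, array, start + 1)

# At most one recursive call per element, which is one way to answer Exercise 2
print(linear_search_recursive(21, [1, 2, 11, 4, 5, 6, 19, 10, 21]))   # -> 8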
Task 1. Searching. Given a set of N integers and an integer X, is X a member of the set?

Case 2. Given a SORTED array of integers, is there an algorithm to search for X that has complexity less than O(N)?
Answer: YES, binary search (an example of a divide-and-conquer algorithm):

Pseudocode from Wikipedia:


Given an array A of N elements with values or records A[0] ... A[N-1], sorted such that A[0] <= A[1] <= ... <= A[N-1], and target value X, the following subroutine uses
binary search to find the index of X in A.

Step 1: Set L to 0 and R to N - 1.
Step 2: If L > R, the search terminates as unsuccessful.
        Else set m (the position of the middle element) to the floor of (L + R)/2.
        If A[m] < X, set L to m + 1 and go to step 2.
        If A[m] > X, set R to m - 1 and go to step 2.
        Otherwise A[m] = X, the search is done; return m.

Example: let X = 9, searching the sorted version of the array from the earlier linear-search example:

POS  0  1  2  3  4  5  6  7  8   9   10  11  12  13  14
VAL  1  2  3  4  5  6  7  8  10  11  13  14  17  19  21

Step 1: L = 0, R = 14
Step 2: m = 7, A[7] = 8 < 9, set L = 8
Repeat step 2: m = (8 + 14)/2 = 11, A[11] = 14 > 9, set R = 10
Repeat step 2: m = (8 + 10)/2 = 9, A[9] = 11 > 9, set R = 8
Repeat step 2: m = (8 + 8)/2 = 8, A[8] = 10 > 9, set R = 7
Repeat step 2: L = 8 > R = 7: stop, X is not in this array
Exercise 3. How many steps will this algorithm take if N = 7? What if N = 31?

Exercise 4. How many steps will this algorithm take for N a power of 2? In general, what is f(N) so that binary search is O(f(N))?

(aside, if you like math: we say f(n) is o(g(n)) ("little-o") if for every c > 0 there exists some k > 0 such that 0 <= f(n) < c·g(n) for all n >= k.
https://xlinux.nist.gov/dads/HTML/littleOnotation.html)
8
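A runnable Python transcription of the binary search steps above (a sketch of my own; the slides give only pseudocode):

def binary_search(A, X):
    """Return the index of X in the sorted array A, or -1 if X is not present."""
    L, R = 0, len(A) - 1
    while L <= R:                 # step 2: if L > R the search is unsuccessful
        m = (L + R) // 2          # floor of (L + R) / 2
        if A[m] < X:
            L = m + 1
        elif A[m] > X:
            R = m - 1
        else:
            return m              # A[m] == X: done
    return -1

A = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 13, 14, 17, 19, 21]
print(binary_search(A, 9))    # -> -1, following the trace above
print(binary_search(A, 13))   # -> 10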
Task 2. Sorting. We assume we have an array A of N integers. We want to sort the array so that it is in
ascending order. (Complexity will be the same if we choose descending order.) We will look at 3
algorithms.

Algorithm 1. Insertion sort.


for i ← 1 to length(A) - 1
    j ← i
    while j > 0 and A[j-1] > A[j]
        swap A[j] and A[j-1]
        j ← j - 1
    end while
end for

Exercise 5. Show the steps if the original array looks like: 7,6,5,4,3.

Exercise 6. Based on the results of Exercise 5, explain why the (worst case) time complexity of this algorithm for an
array of N elements is O(N^2).

9
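A Python version of the insertion sort pseudocode above (a sketch of my own, kept in-place to mirror the slide):

def insertion_sort(A):
    """Sort the list A in place in ascending order and return it."""
    for i in range(1, len(A)):
        j = i
        while j > 0 and A[j - 1] > A[j]:
            A[j - 1], A[j] = A[j], A[j - 1]   # swap the out-of-order neighbours
            j -= 1
    return A

# Worst case from Exercise 5: a reversed array forces roughly N^2/2 swaps
print(insertion_sort([7, 6, 5, 4, 3]))   # -> [3, 4, 5, 6, 7]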
Task 2. Sorting. We assume we have an array A of N integers. We want to sort the array so that it is in ascending order.
(Complexity will be the same if we choose descending order.) We will look at 3 algorithms.
Algorithm 2. Bubble sort
procedure bubbleSort( A : list of sortable items )
N = length(A)
repeat
swapped = false
for i = 1 to N-1 inclusive do
/* if this pair is out of order */
if A[i-1] > A[i] then
/* swap them and remember something changed */
swap( A[i-1], A[i] )
swapped = true
end if
end for
until not swapped
end procedure

Exercise 7. Show the steps in this algorithm for A = 7,6,5,4,3.

Exercise 8. Based on the result of Exercise 7, explain why the (worst case) time complexity for this algorithm is O(N^2).

Exercise 9. How long does the algorithm take to run for the array 7,3,4,5,6?

10
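A Python version of the bubble sort pseudocode above (a sketch of my own; it keeps the early-exit "swapped" flag from the slide):

def bubble_sort(A):
    """Sort the list A in place; stop as soon as a full pass makes no swaps."""
    n = len(A)
    swapped = True
    while swapped:
        swapped = False
        for i in range(1, n):
            if A[i - 1] > A[i]:                    # this pair is out of order
                A[i - 1], A[i] = A[i], A[i - 1]    # swap them and remember something changed
                swapped = True
    return A

print(bubble_sort([7, 6, 5, 4, 3]))   # worst case (Exercises 7 and 8)
print(bubble_sort([7, 3, 4, 5, 6]))   # mostly in order (Exercise 9)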
Remark. Bubble sort is actually a very efficient algorithm if an array is already mostly in order. In that case, it will take time O(N).
Task 2. Sorting. We assume we have an array A of N integers. We want to sort the array so that it is in ascending order.
(Complexity will be the same if we choose descending order.) We will look at 3 algorithms.
Algorithm 3. Quicksort.
Quicksort is a good general-purpose sorting algorithm for an array. It is usually described recursively. It has worst case time complexity O(N^2) but
AVERAGE time complexity O(N log N). So in general it will be quicker than the other sorts we have looked at here.
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)

algorithm partition(A, lo, hi) is
    pivot := A[hi]
    i := lo                     // place for swapping
    for j := lo to hi - 1 do
        if A[j] <= pivot then
            swap A[i] with A[j]
            i := i + 1
    swap A[i] with A[hi]
    return i

Exercise 10. Demonstrate the steps in quicksort for the array 7,3,6,4,8,5,9,1
Exercise 11. To get a feel for the advantage of using quicksort over insertion sort or bubble sort, make a table of the values of N and N log N for N =
1, 2, ..., 10.
Remark. It can be proved mathematically that for an array sorting algorithm that uses comparisons to sort, the best (worst case) time complexity
we can achieve is O(N log N). There are sorting algorithms, such as heapsort and mergesort, which achieve this time, but the constant for heapsort is
bigger than that for quicksort, and mergesort takes more memory, so quicksort is generally the preferred sort for an array that is not known to be
mostly sorted already.
11
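A runnable Python version of the quicksort/partition pseudocode above (a sketch of my own using the same Lomuto-style partition):

def quicksort(A, lo=0, hi=None):
    """Sort the list A in place between indices lo and hi (inclusive)."""
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        p = partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)
    return A

def partition(A, lo, hi):
    pivot = A[hi]                      # last element is the pivot
    i = lo                             # place for swapping
    for j in range(lo, hi):
        if A[j] <= pivot:
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[hi] = A[hi], A[i]          # put the pivot in its final position
    return i

print(quicksort([7, 3, 6, 4, 8, 5, 9, 1]))   # the array from Exercise 10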
NOTE: this slide is corrected
Part 2: Computational complexity
Classes of problems:
P: Polynomial time: problems that can be solved in time polynomial in the problem size on a
(deterministic) Turing machine
examples: search an array of size N: O(N)
sort an array of size N: O(N^2)

NP: Nondeterministic polynomial problems: given a proposed solution, we can check if it is correct in
polynomial time; note that problems in the class P are also in the class NP
example: given a Boolean expression written as a product of sums of Boolean variables
X1, X2, ..., Xn, decide whether there is an assignment of values {0,1} to X1, X2, ..., Xn such
that the expression evaluates to 1.
this is the satisfiability problem
NP-hard: problems at least as hard as every problem in NP; no algorithm is known that solves such a problem in polynomial time
examples: most of the problems we encounter in VLSI design automation

12
Why is this distinction important?
Example: comparison of values for a problem of size N (source: http://bigocheatsheet.com/)

N     N log2(N)   N^2      2^N                                         N!
4     8           16       16                                          24
8     24          64       256                                         40320
16    64          256      65536                                       20922789888000
32    160         1024     4294967296                                  ???
64    384         4096     18446744073709551616                        ???
128   896         16384    340282366920938463463374607431768211456     ???
13
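A short Python sketch (my own) that regenerates the values in this table; the factorials for N >= 32 are left as "???", as on the slide:

import math

print("N", "Nlog2N", "N^2", "2^N", "N!")
for n in (4, 8, 16, 32, 64, 128):
    print(n, int(n * math.log2(n)), n ** 2, 2 ** n,
          math.factorial(n) if n <= 16 else "???")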
Most problems we encounter in VLSI Design Automation are constrained optimization problems which are in the NP-hard
class.

We can use an approximation from https://en.wikipedia.org/wiki/Binomial_coefficient: for the central binomial coefficient, C(2N, N) is approximately 4^N / sqrt(pi·N).

Example: how many ways are there to partition a set of 2N vertices into 2 subsets of equal size?
Choose N vertices from 2N vertices: a special case of "choose k elements from a set of n elements", C(n, k) = n! / (k!(n-k)!).

So, for example, if n = 2^20 and k = n/2 = 2^19, then each subset contains 524,288 vertices, and the number of possible choices for the two sets is astronomically large.

If we add the condition that the n points are vertices in a graph and we want the number of edges between the two sets to be minimal, we have an
example of a partitioning problem of the type we need our design automation tools to solve.

14
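To get a feel for how fast this count grows, here is a small Python check (my own sketch) comparing the exact value of C(2N, N) with the approximation above:

from math import comb, pi, sqrt

for N in (5, 10, 20, 50):
    exact = comb(2 * N, N)               # exact number of ways to choose N of 2N vertices
    approx = 4 ** N / sqrt(pi * N)       # central binomial coefficient approximation
    print(N, exact, round(approx))
# Already at N = 50 there are about 10^29 ways to split 100 vertices into two halves.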
In practical terms, we cannot find a polynomial-time algorithm to solve such a
problem.

So we are reduced to using nondeterministic techniques or heuristics, i.e., we
must use some technique that will allow us to

find a solution that is acceptably good

take a reasonable amount of time to find such a solution

NOTE: please read SY appendix A to find out more about NP-hard and NP-
complete problems.

Problem ?: to partition a graph with 2N nodes into 2 sets of N nodes, we
have to look at ?? cases?

Problem ??: what is a heuristic?
a strategy that works pretty well most of the time
the solution found is not guaranteed to be best possible
15
Example: Satisfiability. This is a standard NP-complete (non-deterministic polynomial time complete)
problem. This means it cannot be solved in polynomial time (as far as we know), but if we are given a
possible solution, then we can check if it is a solution in polynomial time.

Problem statement: Given a Boolean circuit written as a product of sums of Boolean variables X1, X2, ..., Xn,
decide whether there is an assignment of values {0,1} to X1, X2, ..., Xn such that the output from the
circuit is 1.

There is no known polynomial-time algorithm that will solve this problem (i.e., give an answer in time
proportional to n^k for a fixed k for ANY possible input).

What we can do in polynomial time: if we are given specific values for X1, X2, ..., Xn, we can determine in
polynomial time IF these specific values give an output of 1.

Satisfiability is also NP-complete, i.e., if we could find a polynomial-time algorithm to solve the
satisfiability problem, we would be able to find a polynomial-time algorithm for any other NP-complete
problem.

NP-hard: These are at least as hard as NP problems. They do NOT have to be in NP.
Since the problems we will look at are NP-hard, we will need benchmark sets to test the quality of the
solutions we come up with.
16
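To illustrate the "easy to check" half of the definition, here is a small Python sketch (my own; the clause encoding is an assumption, not from the slides) that evaluates a product-of-sums expression for one specific assignment in time proportional to the number of literals:

def evaluate_product_of_sums(clauses, assignment):
    """Each clause is a list of (variable index, negated?) literals; the whole
    expression is satisfied if every clause contains at least one true literal."""
    return all(
        any(assignment[var] != negated for (var, negated) in clause)
        for clause in clauses
    )

# (X1 + X2')(X1' + X3) with X1 = 1, X2 = 0, X3 = 1
clauses = [[(1, False), (2, True)], [(1, True), (3, False)]]
print(evaluate_product_of_sums(clauses, {1: True, 2: False, 3: True}))   # -> True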
