Algorithm Analysis

The document discusses algorithm analysis and different approaches to analyzing algorithms including simulation timing, modeling/counting, and asymptotic analysis. It explains big-O, big-Omega, and how to determine the time complexity of algorithms using three rules: adding terms for non-nested loops, dropping constant multipliers, and dropping lower order terms.

Uploaded by

Jenesis Escobar

Algorithm Analysis

REVIEW
Key Terms:

Algorithm – step-by-step procedure for solving a problem (like a recipe)


Program – the code that embodies an algorithm (like a chef that cooks a recipe)

What is algorithm analysis and why do we use it?


Many problems in computer science (and in the real world) have multiple correct solutions. Correct solutions are
good, but in this class we are looking for more than just correctness. It is our job to figure out what the BEST correct
solution/algorithm is: the one that is most efficient in terms of time and space. Algorithm analysis lets us
compare the performance of different algorithms to determine the BEST one for a problem.

Algorithm Analysis Approaches:

Approach 1: Simulation Timing – Physically timing your algorithm using a clock

auto t1 = clock::now();
// code for algorithm here …
auto t2 = clock::now();
print("Time it takes algorithm to execute is…");
print(t2 – t1);

Pros: Easy to measure and interpret
Cons: Results vary across machines; not predictable for small inputs
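In real C++ the clock shorthand above comes from <chrono>. A minimal runnable sketch of this approach (the timed "algorithm" here is just a placeholder sum loop, and steady_clock is one reasonable clock choice):

```cpp
#include <chrono>
#include <cassert>

// Time a placeholder algorithm (summing 0..n-1) with clock readings
// taken before and after it runs. Returns elapsed nanoseconds.
long long time_sum_loop(int n) {
    auto t1 = std::chrono::steady_clock::now();
    volatile long long sum = 0;                 // volatile: keep the loop from being optimized away
    for (int i = 0; i < n; i++) sum = sum + i;
    auto t2 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t2 - t1).count();
}
```

As the cons above warn, the number you get back varies across machines and runs, which is exactly why timing alone is unreliable, especially for small inputs.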

Approach 2: Modeling/Counting – Physically adding up how many times each expression will execute

int sum = 0;
for(int i = 0; i < n; i++)
{
    sum += i;
}
print(sum);

Pros: Independent of computer
Cons: Very tedious to compute

Total: 3n + 4 (2 initializations + (n + 1) loop comparisons + n increments + n additions + 1 print)
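One way to double-check the 3n + 4 total is to instrument the same loop and count every executed expression. count_ops below is a hypothetical helper written just for this check, not part of the notes:

```cpp
#include <cassert>

// Count executed expressions for the sum loop:
// 2 initializations + (n+1) comparisons + n additions + n increments + 1 print = 3n + 4
long long count_ops(int n) {
    long long ops = 0;
    int sum = 0; ops++;              // int sum = 0
    int i = 0;   ops++;              // i = 0
    while (true) {
        ops++;                       // i < n comparison (runs n+1 times)
        if (!(i < n)) break;
        sum += i; ops++;             // sum += i
        i++;      ops++;             // i++
    }
    (void)sum;
    ops++;                           // print(sum)
    return ops;
}
```

For n = 10 this returns 34 = 3(10) + 4, matching the hand count, and you can already see why this approach is tedious for anything bigger.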

Approach 3: Asymptotic Analysis

[Figure: growth-rate curves plotted as time (y-axis) vs. input size (x-axis) for log(n), n, n·log(n), n^2, n^3, and 2^n]

Growth rates: MEMORIZE THIS

log(n) < n < n·log(n) < n^2 < n^3 < 2^n < n!
Asymptotic Analysis: Some things to know

• All algorithms have a growth rate function T(n) that represents the relationship between their input size
(x) and their execution time (y).

Ex] Linear search algorithm: T(n) = n, so x = 100 elements → y = 100 picoseconds

• We say that T(n) ∈ O(f(n)) (an algorithm with growth rate T(n) is Big-O of f(n)) IF…
f(n) is an UPPER BOUND on the function T(n). In other words, T(n) grows SLOWER THAN or at the
SAME RATE as f(n):

T(n) <= c·f(n) (for some constant c, once n is large enough)

Ex] n is O(n^2) because n grows SLOWER THAN n^2

• We say that T(n) ∈ Ω(f(n)) (an algorithm with growth rate T(n) is Big-Ω of f(n)) IF…
f(n) is a LOWER BOUND on the function T(n). In other words, T(n) grows FASTER THAN or at the
SAME RATE as f(n):

T(n) >= c·f(n) (for some constant c, once n is large enough)

Ex] n^3 is Ω(n^2) because n^3 grows FASTER THAN n^2

CHECK FOR UNDERSTANDING:

Q: Which of these functions is Ω(n^5·log2(n))?

(a) 2n^6
(b) n^5
(c) n^5·log4(n)

A: (a) and (c), because (a) grows faster and (c) grows at the same rate (changing a log's base only multiplies it by a constant, which gets dropped)
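A quick numeric check of that base-change fact, using the change-of-base formula (log_base and log_ratio are illustrative helpers, not standard functions):

```cpp
#include <cmath>
#include <cassert>

// log base b of x via change of base: log_b(x) = ln(x) / ln(b)
double log_base(double b, double x) { return std::log(x) / std::log(b); }

// Ratio log2(n) / log4(n): equals the constant 2 for every n > 1,
// which is why n^5·log2(n) and n^5·log4(n) belong to the same class.
double log_ratio(double n) { return log_base(2.0, n) / log_base(4.0, n); }
```

Since the ratio is always exactly 2, log4(n) = log2(n) / 2, and the 1/2 is dropped as a constant multiplier.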

Q: So how do we determine an algorithm's time complexity in terms of Big-O?


3 RULES TO REMEMBER

Rule 1: Non-nested loops? ADD TERMS

for(int i = 0; i < n; i++)
{
}
for(int j = 0; j < m; j++)
{
}

O(n + m)
Notice: this CANNOT be simplified to O(n) or O(m) because we don't know if n and m grow at the same rate or not.

Rule 2: Drop constant multipliers

for(int i = 0; i < n; i++)
{
}
for(int j = 0; j < n; j++)
{
}

O(n + n) = O(2n) = O(n)

Rule 3: Drop lower order terms

for(int i = 0; i < n; i++)
{
}
for(int j = 1; j < n; j *= 2)
{
}

O(n + log2(n)) = O(n)
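A quick sanity check of the ADD TERMS rule: counting body executions of two non-nested loops gives exactly n + m (body_count is just an illustrative helper for this check):

```cpp
#include <cassert>

// Body executions for two sequential (non-nested) loops: n + m total,
// since the loops run one after the other rather than inside each other.
long long body_count(int n, int m) {
    long long count = 0;
    for (int i = 0; i < n; i++) count++;   // first loop: n executions
    for (int j = 0; j < m; j++) count++;   // second loop: m executions
    return count;
}
```

Compare this with nesting: if the second loop sat inside the first, the total would be n * m instead, which is why nested loops multiply while sequential loops add.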

Try out some problems!

Problem 1:

for(int i = 0; i < n; i++)
{
    for(int j = 0; j < m; j++)
    {
        print("Hi");
    }
}

Explanation:
Start from inside and work your way out. The print line only executes 1 time so it is O(1). In the inner loop, j starts at 0 and then keeps incrementing by a value of 1 (j++) until it reaches m. This means the whole inner loop will run a total of m times. The inner loop is thus O(m). Similarly, the outer loop will run n times so it is O(n). The loops are nested, so we multiply and get our final result: O(n * m)

Problem 2:

for(int i = 0; i < n; i++)
{
    for(int j = n; j > 0; j /= 2)
    {
        print("I like pie");
    }
}

Explanation:
Start inside and work your way out. The print statement is O(1). In the inner loop, j starts at n, and then it halves itself at every iteration (j /= 2) until it reaches 0. So first j = n, then j = n/2, then j = n/4 … etc. This pattern is characteristic of logarithmic growth, so the inner loop is O(log2(n)). The outer loop is again O(n). So we multiply (b/c of nesting) and get O(n·log(n))

Problem 3:

for(int i = 100; i > -1; i--)
{
    for(int j = i; j > 1; j /= 2)
    {
        print("Apple pie");
    }
}

Explanation:
Start by examining the innermost loop. Notice that the "step" expression is j /= 2. This indicates that we are dealing with logs. But log of what exactly…? Notice that the start expression in the inner loop is j = i. What is i? Well, we have to look at the outer loop. i = 100 initially, so the innermost loop will initially run about log2(100) times. log2(100) ≈ 6.64 = constant = O(1). What about in the second iteration of the outer loop, when i decrements to 99? Well then again, in the inner loop j starts at 99, then goes to 99/2, then to (99/2)/2 = 99/4 … etc. This pattern tells us that on the second iteration of the outer loop, the inner loop will run about log2(99) times. log2(99) ≈ 6.63 = constant = O(1). So basically, for each iteration of the outer loop, the inner loop runs a constant number of times. The outer loop runs 101 times (also O(1)). So overall we end up with the result O(1)

IMPORTANT NOTE: So far, we have been looking only at WORST CASE time complexities. What about best and average?
Best case: minimum time required for algorithm execution
Average case: average time required for algorithm execution
Worst case: maximum time required for algorithm execution

When looking at best vs. average vs. worst case, input size needs to stay CONSTANT. Meaning, you shouldn't say "Oh, best case is when the input is really small, and worst case is when the input is really big."
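You can also verify the halving pattern from the j /= 2 loops above by counting iterations directly: for j = n; j > 0; j /= 2 (integer division), the body runs floor(log2(n)) + 1 times for n >= 1, which is O(log n). A small sketch:

```cpp
#include <cassert>

// Iterations of a halving loop: j = n, n/2, n/4, ... until j reaches 0.
// For n >= 1 this runs floor(log2(n)) + 1 times.
int halving_iterations(int n) {
    int count = 0;
    for (int j = n; j > 0; j /= 2) count++;
    return count;
}
```

For example, n = 100 gives the sequence 100, 50, 25, 12, 6, 3, 1 — seven iterations, matching floor(log2(100)) + 1 = 6 + 1.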
