Algorithm Complexity
The Problem-Solving Process (continued)
What is an algorithm?
Problem: Sorting
Input: A sequence of n keys a_1, …, a_n.
Output: The permutation (reordering) a'_1, …, a'_n of the input sequence
such that
a'_1 ≤ a'_2 ≤ · · · ≤ a'_(n-1) ≤ a'_n.
An instance of sorting might be an array of names, like
{Mike, Bob, Sally, Jill, Jan}, or a list of numbers like
{154, 245, 568, 324, 654, 324}. Determining that you are
dealing with a general problem is your first step towards
solving it. An algorithm is a procedure that takes any of
the possible input instances and transforms it to the
desired output.
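As a concrete illustration (not part of the slides), insertion sort is one such procedure: it accepts any input instance and transforms it into the sorted permutation. A minimal sketch in Java, using the numeric instance from above:

```java
import java.util.Arrays;

// Insertion sort: one concrete algorithm for the general sorting problem.
// It accepts any input instance and produces the sorted permutation.
public class InsertionSortDemo {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            // Shift larger elements one position right to make room for key
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] instance = {154, 245, 568, 324, 654, 324};
        insertionSort(instance);
        System.out.println(Arrays.toString(instance)); // → [154, 245, 324, 324, 568, 654]
    }
}
```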
Algorithm properties
Algorithm Characteristics
Expressing Algorithms
The heart of an algorithm
ExhaustiveScheduling(I)
    j = 0
    S_max = ∅
    For each of the 2^n subsets S_i of intervals I
        If (S_i is mutually non-overlapping) and (size(S_i) > j)
            then j = size(S_i) and S_max = S_i
    Return S_max
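A runnable sketch of this exhaustive search: each subset of the n intervals is encoded as a bitmask, tested for pairwise overlap, and the largest compatible subset wins. The interval endpoints and the closed-open overlap test are illustrative assumptions, not from the slides.

```java
// Exhaustive interval scheduling: try all 2^n subsets of intervals,
// keep the size of the largest mutually non-overlapping subset.
public class ExhaustiveScheduling {
    static int largestNonOverlapping(int[][] intervals) {
        int n = intervals.length;
        int best = 0;
        for (int mask = 0; mask < (1 << n); mask++) { // all 2^n subsets
            if (isCompatible(intervals, mask)) {
                best = Math.max(best, Integer.bitCount(mask));
            }
        }
        return best;
    }

    static boolean isCompatible(int[][] iv, int mask) {
        for (int i = 0; i < iv.length; i++)
            for (int j = i + 1; j < iv.length; j++)
                if ((mask & (1 << i)) != 0 && (mask & (1 << j)) != 0
                        && iv[i][0] < iv[j][1] && iv[j][0] < iv[i][1])
                    return false; // intervals i and j overlap
        return true;
    }

    public static void main(String[] args) {
        // Intervals are [start, end): {1,3} and {3,5} are compatible.
        int[][] intervals = {{1, 3}, {3, 5}, {5, 7}, {2, 6}};
        System.out.println(largestNonOverlapping(intervals)); // → 3
    }
}
```

Note the cost: checking all 2^n subsets makes this exponential, which is exactly why it is useful as a complexity example rather than a practical scheduler.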
Problems and Properties
Algorithm vs. program
A computer program is an algorithm written
in a specific programming language.
– The advantage is that it can be “understood” and
executed by a computer.
– The disadvantage is that it is harder for
humans to follow.
Algorithms are therefore usually written in pseudocode or
technical English, which is easier to understand.
– Translating an algorithm into a program is called
“implementation” of the algorithm in the
corresponding programming language.
Algorithm Analysis
Why should we analyze algorithms?
• Predict the resources that the algorithm requires
– Computational time (CPU consumption)
– Memory space (RAM consumption)
– Communication bandwidth consumption
• The running time of an algorithm is:
– The total number of primitive operations
executed (machine independent steps)
– Also known as algorithm complexity
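To make "counting primitive operations" concrete, here is a small sketch (the instrumented loop is an illustration, not from the slides): instead of timing the code, we count the machine-independent steps it performs.

```java
// Counting primitive operations: estimate running time machine-independently
// by counting the steps a loop executes rather than measuring CPU time.
public class OperationCount {
    static long sumWithCount(int[] a) {
        long ops = 0;
        long sum = 0;
        for (int i = 0; i < a.length; i++) { // n iterations
            sum += a[i]; // 1 array access + 1 addition per iteration
            ops += 2;
        }
        return ops; // grows linearly with a.length
    }

    public static void main(String[] args) {
        System.out.println(sumWithCount(new int[1000])); // → 2000
    }
}
```

The exact constant (2 per iteration here) depends on what one counts as a "step"; asymptotic notation, introduced below, deliberately ignores such constants.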
Time Complexity
Worst-case
• An upper bound on the running time for
any input of given size
Average-case
• Assume all inputs of a given size are equally
likely
Best-case
• The lower bound on the running time
Time Complexity – Example
Sequential search in a list of size n
• Worst-case:
– n comparisons
• Best-case:
– 1 comparison
• Average-case:
– n/2 comparisons
The algorithm runs in linear time
• Linear number of operations
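A minimal sketch of sequential search, showing where the best and worst cases come from (array contents are arbitrary illustration values):

```java
// Sequential search: scan the list front to back until the key is found.
// Best case: 1 comparison (key is first). Worst case: n comparisons
// (key is last or absent). On average about n/2 for a present key.
public class SequentialSearch {
    static int indexOf(int[] a, int key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) return i; // found after i+1 comparisons
        }
        return -1; // not found: all n comparisons were made
    }

    public static void main(String[] args) {
        int[] a = {154, 245, 568, 324, 654};
        System.out.println(indexOf(a, 568)); // → 2
        System.out.println(indexOf(a, 999)); // → -1 (worst case)
    }
}
```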
Algorithm Complexity
Algorithm complexity is a rough estimation of the
number of steps performed by a given computation,
depending on the size of the input data
• Measured through asymptotic notation
– O(g), where g is a function of the input data size
Big-O notation
Examples:
• 3·n² + n/2 + 12 ∈ O(n²)
• 4·n·log₂(3·n + 1) + 2·n − 1 ∈ O(n·log n)
The Big Oh notation groups functions
Constant functions, f(n) = 1 – Such functions might measure
the cost of adding two numbers, printing out “The Star
Spangled Banner,” or the growth realized by functions such as
f(n) = min(n, 100). In the big picture, there is no dependence
on the parameter n.
Logarithmic functions, f(n) = log n – Logarithmic time-complexity
shows up in algorithms such as binary search.
Such functions grow quite slowly as n gets big, but faster than
the constant function (which is standing still, after all).
Linear functions, f(n) = n – Such functions measure the cost of
looking at each item once (or twice, or ten times) in an
n-element array, say to identify the biggest item, the smallest
item, or compute the average value.
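Binary search, named above as the classic logarithmic-time algorithm, can be sketched as follows (the array contents are illustrative):

```java
// Binary search on a sorted array: each step halves the remaining range,
// so at most about log2(n) comparisons are needed.
public class BinarySearch {
    static int search(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key) lo = mid + 1; // discard left half
            else hi = mid - 1;                   // discard right half
        }
        return -1; // key not present
    }

    public static void main(String[] args) {
        int[] a = {2, 3, 5, 8, 13, 21, 34};
        System.out.println(search(a, 13)); // → 4
    }
}
```

Contrast with sequential search: for n = 1 000 000 000 this makes about 30 comparisons rather than up to a billion.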
Superlinear functions, f(n) = n log n – This important class of
functions arises in such algorithms as Quicksort and
Mergesort. They grow just a little faster than linear, just
enough to be a different dominance class.
Quadratic functions, f(n) = n² – Such functions measure the
cost of looking at most or all pairs of items in an n-element
universe. This arises in algorithms such as insertion sort and
selection sort.
Cubic functions, f(n) = n³ – Such functions enumerate through
all triples of items in an n-element universe.
Exponential functions, f(n) = c^n for a given constant c > 1 –
Functions like 2^n arise when enumerating all subsets of n
items. As we have seen, exponential algorithms become useless
fast, but not as fast as. . .
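A small sketch of the quadratic class: examining all pairs of items. The "smallest gap" task and its input are illustration choices, not from the slides.

```java
// Quadratic work: examining all pairs of items in an n-element array.
// The nested loops execute n*(n-1)/2 comparisons, which is Θ(n²).
public class ClosestPair1D {
    static int smallestGap(int[] a) {
        int best = Integer.MAX_VALUE;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)            // every pair once
                best = Math.min(best, Math.abs(a[i] - a[j])); // gap of this pair
        return best;
    }

    public static void main(String[] args) {
        System.out.println(smallestGap(new int[]{10, 4, 25, 7})); // → 3
    }
}
```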
Typical Complexities
Complexity | Notation | Description
constant | O(1) | Constant number of operations, not depending on the input data size; e.g. n = 1 000 000 → 1–2 operations
logarithmic | O(log n) | Number of operations proportional to log2(n), where n is the size of the input data; e.g. n = 1 000 000 000 → 30 operations
linear | O(n) | Number of operations proportional to the input data size; e.g. n = 10 000 → 5 000 operations
Typical Complexities (2)
Complexity | Notation | Description
quadratic | O(n²) | Number of operations proportional to the square of the size of the input data; e.g. n = 500 → 250 000 operations
cubic | O(n³) | Number of operations proportional to the cube of the size of the input data; e.g. n = 200 → 8 000 000 operations
exponential | O(2^n), O(k^n) | Exponential number of operations, fast growing; e.g. n = 20 → 1 048 576 operations
Time and Memory Complexity
Complexity can be expressed as a formula over
multiple variables, e.g.
• Algorithm filling a matrix of size n * m with
natural numbers 1, 2, … will run in O(n*m)
• DFS traversal of graph with n vertices and m edges
will run in O(n + m)
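The first bullet can be sketched as follows (the matrix dimensions are arbitrary illustration values):

```java
// Filling an n×m matrix with the natural numbers 1, 2, 3, …:
// one write per cell, so the running time is proportional to n*m, i.e. O(n·m).
public class MatrixFill {
    static int[][] fill(int n, int m) {
        int[][] matrix = new int[n][m];
        int next = 1;
        for (int r = 0; r < n; r++)
            for (int c = 0; c < m; c++)
                matrix[r][c] = next++; // each cell touched exactly once
        return matrix;
    }

    public static void main(String[] args) {
        int[][] t = fill(2, 3);
        System.out.println(t[1][2]); // last cell holds n*m → 6
    }
}
```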
Memory consumption should also be considered,
for example:
• Running time O(n), memory requirement O(n²)
• n = 50 000 → OutOfMemoryException
Complexity Examples
int FindMaxElement(int[] array)
{
    int max = array[0];
    for (int i = 1; i < array.Length; i++) // n-1 comparisons → O(n)
    {
        if (array[i] > max)
        {
            max = array[i];
        }
    }
    return max;
}

decimal Sum3(int n)
{
    decimal sum = 0;
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++)
            for (int c = 0; c < n; c++)
                sum += a * b * c; // body executed n³ times → O(n³)
    return sum;
}

decimal Factorial(int n)
{
    if (n == 0)
        return 1;
    else
        return n * Factorial(n - 1); // n recursive calls → O(n)
}
Why Are Data Structures Important?