Algorithm Complexity

The Problem-Solving Process

1. Identify the problem.
2. Generate possible solutions.
3. By applying constraints, eliminate proposals which do not solve the problem.
4. Evaluate the expected performance or behavior of each proposed solution.
The Problem-Solving Process (continued)

5. Using the criteria, compare the alternatives to select the best solution.
6. Plan how to implement the selected solution.
7. Implement the solution.
8. Evaluate the performance of the solution after its implementation.
What is an algorithm?

An algorithm is a procedure to accomplish a specific task.

An algorithm is the idea behind any reasonable computer program. To be interesting, an algorithm must solve a general, well-specified problem. An algorithmic problem is specified by describing the complete set of instances it must work on and the required form of its output after running on one of these instances. This distinction between a problem and an instance of a problem is fundamental. For example, the algorithmic problem known as sorting is defined as follows:
What is an algorithm?

Problem: Sorting
Input: A sequence of n keys a₁, …, aₙ.
Output: The permutation (reordering) of the input sequence such that a′₁ ≤ a′₂ ≤ ⋯ ≤ a′ₙ₋₁ ≤ a′ₙ.

An instance of sorting might be an array of names, like {Mike, Bob, Sally, Jill, Jan}, or a list of numbers like {154, 245, 568, 324, 654, 324}. Determining that you are dealing with a general problem is your first step towards solving it. An algorithm is a procedure that takes any of the possible input instances and transforms it to the desired output.
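To make the problem/instance distinction concrete, here is a minimal C# sketch (not from the slides) that applies one particular sorting algorithm, the library routine Array.Sort, to both instances above:

using System;

string[] names = { "Mike", "Bob", "Sally", "Jill", "Jan" };
int[] numbers = { 154, 245, 568, 324, 654, 324 };

// A correct sorting algorithm must handle every instance of the
// general problem; Array.Sort is one such algorithm.
Array.Sort(names);
Array.Sort(numbers);

Console.WriteLine(string.Join(", ", names));   // Bob, Jan, Jill, Mike, Sally
Console.WriteLine(string.Join(", ", numbers)); // 154, 245, 324, 324, 568, 654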
Algorithm properties

There are three desirable properties for a good algorithm. We seek algorithms that are correct and efficient, while being easy to implement. These goals may not be simultaneously achievable. In industrial settings, any program that seems to give good enough answers without slowing the application down is often acceptable, regardless of whether a better algorithm exists. The issue of finding the best possible answer or achieving maximum efficiency usually arises in industry only after serious performance or legal troubles.
Algorithm Characteristics

Every algorithm should have the following five characteristics:
1. Input: The algorithm should take zero or more inputs.
2. Output: The algorithm should produce one or more outputs.
3. Definiteness: Each and every step of the algorithm should be defined unambiguously.
4. Effectiveness: A human should be able to calculate the values involved in the procedure of the algorithm using paper and pencil.
5. Termination: An algorithm must terminate after a finite number of steps.
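As a quick illustration (not from the slides), Euclid's greatest-common-divisor algorithm exhibits all five characteristics, sketched here in C#:

// Two inputs, one output; every step is unambiguous (definiteness),
// computable by hand (effectiveness), and the algorithm terminates
// because b strictly decreases toward zero on each iteration.
static int Gcd(int a, int b)
{
    while (b != 0)
    {
        int r = a % b;
        a = b;
        b = r;
    }
    return a;
}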
Expressing Algorithms

Reasoning about an algorithm is impossible without a careful description of the sequence of steps to be performed. The three most common forms of algorithmic notation are:
(1) English,
(2) pseudocode, or
(3) a real programming language.
Expressing Algorithms

Pseudocode is perhaps the most mysterious of the bunch, but it is best defined as a programming language that never complains about syntax errors. All three methods are useful because there is a natural tradeoff between greater ease of expression and precision. English is the most natural but least precise programming language, while Java and C/C++ are precise but difficult to write and understand. Pseudocode is generally useful because it represents a happy medium.
The heart of an algorithm

A common mistake many students make is to use pseudocode to dress up an ill-defined idea so that it looks more formal. Clarity should be the goal. For example, the ExhaustiveScheduling algorithm:

ExhaustiveScheduling(I)
    j = 0
    Smax = ∅
    For each of the 2ⁿ subsets Sᵢ of the intervals I
        If (Sᵢ is mutually non-overlapping) and (size(Sᵢ) > j)
            then j = size(Sᵢ) and Smax = Sᵢ
    Return Smax
The heart of an algorithm

could have been better written in English as:

ExhaustiveScheduling(I)
    Test all 2ⁿ subsets of intervals from I, and return the largest subset consisting of mutually non-overlapping intervals.

The heart of any algorithm is an idea. If your idea is not clearly revealed when you express an algorithm, then you are using too low-level a notation to describe it. A concrete rendering of ExhaustiveScheduling follows below.
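The pseudocode above translates directly into C#. This is a minimal sketch (not from the slides), assuming intervals are given as (start, end) pairs and encoding each of the 2ⁿ subsets as a bitmask:

using System.Collections.Generic;

static List<(int Start, int End)> ExhaustiveScheduling((int Start, int End)[] intervals)
{
    int n = intervals.Length;
    var best = new List<(int Start, int End)>();
    // Try all 2^n subsets, encoded as bitmasks 0 .. 2^n - 1.
    for (int mask = 0; mask < (1 << n); mask++)
    {
        var subset = new List<(int Start, int End)>();
        for (int i = 0; i < n; i++)
            if ((mask & (1 << i)) != 0)
                subset.Add(intervals[i]);

        // Keep the subset only if no two of its intervals overlap.
        bool overlapping = false;
        for (int i = 0; i < subset.Count && !overlapping; i++)
            for (int j = i + 1; j < subset.Count; j++)
                if (subset[i].Start < subset[j].End && subset[j].Start < subset[i].End)
                {
                    overlapping = true;
                    break;
                }

        if (!overlapping && subset.Count > best.Count)
            best = subset;
    }
    return best;
}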
Problems and Properties

We need more than just an algorithm description in order to demonstrate correctness. We also need a careful description of the problem that it is intended to solve. Problem specifications have two parts:
(1) the set of allowed input instances, and
(2) the required properties of the algorithm’s output.
Algorithm vs. program
 A computer program is actually an algorithm written
in a specific programming language.
– The advantage is that it can be “understood” and
executed by a computer.
– The disadvantage is that it is hard for human
beings to understand the algorithm.
 So usually algorithms are written in pseudo-code or
technical English, which is easier to understand.
– Translating an algorithm to a program is called
“implementation” of the algorithm in the
corresponding programming language.
Algorithm Analysis
 Why should we analyze algorithms?
• Predict the resources that the algorithm requires
– Computational time (CPU consumption)
– Memory space (RAM consumption)
– Communication bandwidth consumption
• The running time of an algorithm is:
– The total number of primitive operations
executed (machine independent steps)
– Also known as algorithm complexity

Time Complexity
 Worst-case
• An upper bound on the running time for
any input of given size
 Average-case
• Assume all inputs of a given size are equally
likely
 Best-case
• The lower bound on the running time

Time Complexity – Example
 Sequential search in a list of size n
• Worst-case:
– n comparisons
• Best-case:
– 1 comparison
• Average-case:
– n/2 comparisons
 The algorithm runs in linear time
• Linear number of operations
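A sequential search in C# might look like the following minimal sketch (hypothetical code, not from the slides); it returns the index of the key, or -1 if the key is absent:

static int SequentialSearch(int[] list, int key)
{
    // Best case: key found at index 0 (1 comparison).
    // Worst case: key is last or absent (n comparisons).
    // Average case (key present, uniformly placed): ~ n/2 comparisons.
    for (int i = 0; i < list.Length; i++)
        if (list[i] == key)
            return i;
    return -1;
}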

Algorithm Complexity
 Algorithm complexity is a rough estimation of the number of steps performed by a given computation, depending on the size of the input data
• Measured through asymptotic notation
– O(g) where g is a function of the input data size
• Examples:
– Linear complexity O(n) – all elements are processed once (or a constant number of times)
– Quadratic complexity O(n²) – each of the elements is processed n times
Big-O notation

Expressing running time in terms of basic computer steps is already a simplification. After all, the time taken by one such step depends crucially on the particular processor and even on details such as caching strategy (as a result of which the running time can differ subtly from one execution to the next). Accounting for these architecture-specific minutiae is a nightmarishly complex task and yields a result that does not generalize from one computer to the next. It therefore makes more sense to seek an uncluttered, machine-independent characterization of an algorithm's efficiency. To this end, we will always express running time by counting the number of basic computer steps, as a function of the size of the input.
Big-O notation
 And this simplification leads to another. Instead of reporting that an algorithm takes, say, 5n³ + 4n + 3 steps on an input of size n, it is much simpler to leave out lower-order terms such as 4n and 3 (which become insignificant as n grows), and even the detail of the coefficient 5 in the leading term (computers will be five times faster in a few years anyway), and just say that the algorithm takes time O(n³) (pronounced “big oh of n³”).
 It is time to define this notation precisely. In what follows, think of f(n) and g(n) as the running times of two algorithms on inputs of size n. Let f(n) and g(n) be functions from positive integers to positive reals. We say f = O(g) (which means that “f grows no faster than g”) if there is a constant c > 0 such that f(n) ≤ c·g(n).
Asymptotic Notation: Definition
 Asymptotic upper bound
• O-notation (Big O notation)
 For a given function g(n), we denote by O(g(n)) the set of functions that grow no faster than g(n), up to a constant factor:

O(g(n)) = { f(n) : there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀ }

 Examples:
• 3n² + n/2 + 12 ∈ O(n²)
• 4n·log₂(3n + 1) + 2n − 1 ∈ O(n log n)
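As a worked check of the first example: for every n ≥ 4 we have n/2 + 12 ≤ n², so 3n² + n/2 + 12 ≤ 3n² + n² = 4n². The constants c = 4 and n₀ = 4 therefore witness that 3n² + n/2 + 12 ∈ O(n²).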
The Big Oh notation groups functions
 Constant functions, f(n) = 1 – Such functions might measure the cost of adding two numbers, printing out “The Star Spangled Banner,” or the growth realized by functions such as f(n) = min(n, 100). In the big picture, there is no dependence on the parameter n.
 Logarithmic functions, f(n) = log n – Logarithmic time complexity shows up in algorithms such as binary search. Such functions grow quite slowly as n gets big, but faster than the constant function (which is standing still, after all).
 Linear functions, f(n) = n – Such functions measure the cost of looking at each item once (or twice, or ten times) in an n-element array, say to identify the biggest item, the smallest item, or compute the average value.
The Big Oh notation groups functions
 Superlinear functions, f(n) = n log n – This important class of functions arises in such algorithms as Quicksort and Mergesort. They grow just a little faster than linear, just enough to be a different dominance class.
 Quadratic functions, f(n) = n² – Such functions measure the cost of looking at most or all pairs of items in an n-element universe. This arises in algorithms such as insertion sort and selection sort.
 Cubic functions, f(n) = n³ – Such functions enumerate through all triples of items in an n-element universe.
 Exponential functions, f(n) = cⁿ for a given constant c > 1 – Functions like 2ⁿ arise when enumerating all subsets of n items. As we have seen, exponential algorithms become useless fast, but not as fast as…
Typical Complexities
 constant, O(1): constant number of operations, not depending on the input data size; e.g. n = 1 000 000 → 1-2 operations
 logarithmic, O(log n): number of operations proportional to log₂(n), where n is the size of the input data; e.g. n = 1 000 000 000 → 30 operations
 linear, O(n): number of operations proportional to the input data size; e.g. n = 10 000 → 5 000 operations
Typical Complexities (2)
 quadratic, O(n²): number of operations proportional to the square of the size of the input data; e.g. n = 500 → 250 000 operations
 cubic, O(n³): number of operations proportional to the cube of the size of the input data; e.g. n = 200 → 8 000 000 operations
 exponential, O(2ⁿ) or O(kⁿ): exponential number of operations, fast growing; e.g. n = 20 → 1 048 576 operations
Time and Memory Complexity
 Complexity can be expressed as a formula over multiple variables, e.g.
• An algorithm filling a matrix of size n * m with natural numbers 1, 2, … will run in O(n*m)
• DFS traversal of a graph with n vertices and m edges will run in O(n + m)
 Memory consumption should also be considered, for example:
• Running time O(n), memory requirement O(n²)
• n = 50 000 → OutOfMemoryException
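As a rough illustration (hypothetical numbers, not from the slides), an algorithm that allocates an n × n matrix needs about n² × 4 bytes for int entries; at n = 50 000 that is roughly 10 GB:

int n = 50000;
// n * n * 4 bytes ≈ 10^10 bytes ≈ 10 GB; the allocation fails with
// OutOfMemoryException on typical machines.
int[,] matrix = new int[n, n];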
Complexity Examples
int FindMaxElement(int[] array)
{
    int max = array[0];
    // Look at each element once, keeping the largest seen so far.
    for (int i = 1; i < array.Length; i++)
    {
        if (array[i] > max)
        {
            max = array[i];
        }
    }
    return max;
}

 Runs in O(n) where n is the size of the array
 The number of elementary steps is ~ n
Complexity Examples (2)

 Runs in O(log n) where n is the size of the array
 The number of elementary steps is ~ log₂(n)
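The code for this slide did not survive extraction; binary search over a sorted array, which the earlier discussion names as the canonical O(log n) algorithm, is a likely candidate. A minimal C# sketch:

static int BinarySearch(int[] array, int key)
{
    int low = 0, high = array.Length - 1;
    // Each iteration halves the remaining search range,
    // so the loop runs ~ log2(n) times.
    while (low <= high)
    {
        int mid = low + (high - low) / 2;
        if (array[mid] == key)
            return mid;
        else if (array[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1; // key not present
}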
Complexity Examples (3)
long FindInversions(int[] array)
{
    long inversions = 0;
    // Examine every pair (i, j) with i < j.
    for (int i = 0; i < array.Length; i++)
        for (int j = i + 1; j < array.Length; j++)
            if (array[i] > array[j])
                inversions++;
    return inversions;
}

 Runs in O(n²) where n is the size of the array
 The number of elementary steps is ~ n*(n-1)/2
Complexity Examples (4)

decimal Sum3(int n)
{
    decimal sum = 0;
    // Three nested loops of n iterations each → ~ n³ additions.
    for (int a = 0; a < n; a++)
        for (int b = 0; b < n; b++)
            for (int c = 0; c < n; c++)
                sum += a * b * c;
    return sum;
}

 Runs in cubic time O(n³)
 The number of elementary steps is ~ n³
Complexity Examples (5)

long SumMN(int n, int m)
{
    long sum = 0;
    // The inner statement executes n * m times.
    for (int x = 0; x < n; x++)
        for (int y = 0; y < m; y++)
            sum += x * y;
    return sum;
}

 Runs in quadratic time O(n*m)
 The number of elementary steps is ~ n*m
Complexity Examples (6)
long SumMN(int n, int m)
{
    long sum = 0;
    for (int x = 0; x < n; x++)
        for (int y = 0; y < m; y++)
            if (x == y)  // true only min(n, m) times
                for (int i = 0; i < n; i++)
                    sum += i * x * y;
    return sum;
}

 Runs in quadratic time O(n*m)
 The number of elementary steps is ~ n*m + min(m,n)*n
Complexity Examples (7)

decimal Factorial(int n)
{
    // One recursive call per value from n down to 0 → ~ n steps.
    if (n == 0)
        return 1;
    else
        return n * Factorial(n - 1);
}

 Runs in linear time O(n)
 The number of elementary steps is ~ n
Complexity Examples (8)
decimal Fibonacci(int n)
{
    // Two recursive calls per invocation → an exponential call tree.
    if (n == 0)
        return 1;
    else if (n == 1)
        return 1;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}

 Runs in exponential time O(2ⁿ)
 The number of elementary steps is ~ Fib(n+1), where Fib(k) is the k-th Fibonacci number
Values of some important functions as n → ∞
Why Are Data Structures Important?

 Data structures and algorithms are the foundation of computer programming
 Algorithmic thinking, problem solving and data structures are vital for software engineers
 Computational complexity is important for algorithm design and efficient programming
