Module 1 ADA
Prepared by
Dr. A. Kannagi
Introduction
What is an Algorithm?
An algorithm is a step-by-step procedure for solving logical and
mathematical problems.
Notion of Algorithms
Disadvantages of Algorithm
Developing an algorithm is time-consuming and cumbersome: the algorithm is
written first, then converted into a flowchart, and finally into a computer program.
Fundamentals of Algorithmic Problem Solving
Understanding the Problem
Decision making
Methods of Specifying an Algorithm
Proving an Algorithm’s Correctness
Analyzing an Algorithm
Coding an Algorithm
Role of algorithms in computing
Sorting
Searching
String processing
Graph problems
Combinatorial problems
Geometric problems
Numerical problems
Algorithms as a technology
The Internet
Bio informatics
Electronic commerce
Linear programming is one such widely used technique:
In manufacturing and other commercial enterprises, scarce resources need to be allocated in the
most beneficial way.
An institution may want to determine where to spend its advertising budget in order to maximize its
chances of growth.
Shortest-path algorithms also have extensive uses:
A transportation firm, such as a trucking or railroad company, has a financial interest in finding
shortest paths through a road or rail network, because taking a shortest path lowers labour and fuel
costs.
A routing node on the Internet may need to find the shortest path through the network in order to route a
message quickly.
Even an application that does not require algorithmic content at the application level relies heavily
on algorithms, since it depends on hardware, GUIs, networking, or object orientation, all of which make
extensive use of algorithms.
Fundamentals of the Analysis of Algorithm Efficiency
Average case
Provides a prediction about the running time
Assumes that the input is random
How do we compare algorithms?
We need to define a number of objective measures.
Time Complexity
It’s a function describing the amount of time required to run an algorithm in
terms of the size of the input. "Time" can mean the number of memory accesses
performed, the number of comparisons between integers, the number of times
some inner loop is executed, or some other natural unit related to the amount of
real time the algorithm will take.
Space Complexity
It’s a function describing the amount of memory an algorithm takes in terms of
the size of input to the algorithm. We often speak of "extra" memory needed, not
counting the memory needed to store the input itself. Again, we use natural (but
fixed-length) units to measure this.
The algorithm analysis framework begins with measuring an input's size, using a
parameter natural to the problem, e.g.:
the degree of a polynomial
the number of elements in a matrix
Algorithm 1                     Cost
arr[0] = 0;                     c1
arr[1] = 0;                     c1
arr[2] = 0;                     c1
...
arr[N-1] = 0;                   c1
-----------
c1 + c1 + ... + c1 = c1 x N

Algorithm 2                     Cost
for(i=0; i<N; i++)              c2   (the test executes N+1 times)
    arr[i] = 0;                 c1   (the body executes N times)
-------------
(N+1) x c2 + N x c1 = (c2 + c1) x N + c2
Example
Algorithm 3                     Cost
sum = 0;                        c1
for(i=0; i<N; i++)              c2   (executed N+1 times)
    for(j=0; j<N; j++)          c2   (executed N x (N+1) times)
        sum += arr[i][j];       c3   (executed N² times)
------------
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N²
Orders of Growth
A difference in running times on small inputs is not what really distinguishes
efficient algorithms from inefficient ones.
Consider the example of buying elephants and goldfish:
Cost: cost_of_elephants + cost_of_goldfish
Cost ~ cost_of_elephants (approximation)
The low-order terms in a function are relatively insignificant for large n:
n⁴ + 100n² + 10n + 50 ~ n⁴
i.e., we say that n⁴ + 100n² + 10n + 50 and n⁴ have the same rate of growth
Worst-Case, Best-Case, and Average-Case
Efficiencies
To compare two algorithms with running times f(n) and g(n), we need a rough
measure that characterizes how fast each function grows.
Let t(n) and g(n) be any nonnegative functions defined on the set of natural
numbers. The algorithm's running time t(n) is usually indicated by its basic
operation count C(n), and g(n) is some simple function to compare the count
with.
Algorithm 3                     Cost
sum = 0;                        c1
for(i=0; i<N; i++)              c2
    for(j=0; j<N; j++)          c2
        sum += arr[i][j];       c3
------------
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N² = O(N²)
Big-O Visualization
O(g(n)) is the set of functions with smaller or same order of growth as g(n)
Examples
2n² = O(n³): 2n² ≤ cn³ holds for c = 1 and n0 = 2
n² = O(n²): n² ≤ cn² holds for any c ≥ 1, e.g. c = 1 and n0 = 1
1000n² + 1000n = O(n²): 1000n² + 1000n ≤ 1000n² + n² = 1001n² for n ≥ 1000,
so c = 1001 and n0 = 1000
100n + 5 ≠ Ω(n²):
Suppose there exist c and n0 such that 0 ≤ cn² ≤ 100n + 5 for all n ≥ n0.
Since 100n + 5 ≤ 100n + 5n = 105n (for n ≥ 1), we would have cn² ≤ 105n.
Since n is positive, cn − 105 ≤ 0, i.e., n ≤ 105/c.
Contradiction: n cannot be smaller than a constant.
ALGORITHM Factorial(n)
//Computes n! Recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return Factorial(n − 1) * n
Algorithm analysis
• For simplicity, we consider n itself as an indicator of this algorithm’s input size.
The basic operation of the algorithm is multiplication, whose number of executions we
denote M(n). The function F(n) is computed according to the formula F(n) = F(n − 1) * n
for n > 0.
• The number of multiplications M(n) needed to compute it must satisfy the equality
M(n) = M(n − 1) + 1 for n > 0:
• M(n − 1) multiplications are spent to compute F(n − 1), and one more multiplication is needed to
multiply the result by n.
Algorithm design