Algorithms 1

The document discusses algorithms and their analysis. Chapter 1 discusses analyzing algorithm complexity. Chapters 2-4 cover various sorting, graph, and spanning tree algorithms. Chapters 5-6 discuss storing data in binary search trees and multiway search trees. Chapter 7 covers hashing algorithms. The document then provides definitions and examples related to analyzing algorithms, including analyzing worst-case complexity and comparing the efficiency of algorithms.


Algorithms

Contents

CH1 : Analyzing the complexity of Algorithms

CH2 : Sorting Algorithms


A - Internal Sorting Algorithms
1- Bubble Sort
2- Insertion Sort
3- Selection Sort
4- Quick Sort
5- Merge Sort
6- Heap Sort
B- External Sorting Algorithms
1- Balanced Merge Sort
2- PolyPhase Sort

CH3 : Graph Algorithms ( Shortest Path Algorithms )


1- Dijkstra's Algorithm
2- Greedy's Algorithm
3- Floyd's Algorithm
4- Modified Floyd's Algorithm

CH4 : Spanning Tree Algorithms


1- Boruvka's Algorithm
2- Kruskal's Algorithm
3- Prim's Algorithm

CH5 : Storing in Binary Search Tree ( AVL Tree )

CH6 : Storing in Multiway Search Tree


1- B-Tree Algorithms
2- B*-Tree Algorithms

CH7 : Hashing Algorithms


Algorithm: A finite sequence of instructions for solving a problem.

** Two reasons to choose an algorithm :


1. Human reason :
To understand and implement the algorithm.

2. Machine reason :
Time , Space.

Cost of an algorithm :

Cost = (cost of understanding and programming) + (cost of running the program)


= Software cost + Hardware cost

Hardware cost = (cost of running the program once) × (number of executions)

( The choice of algorithm depends on the cost of the algorithm )

** How do you compare the efficiency of two algorithms ( for one problem ) ?
1. Compare the execution times.
2. Compare the sizes of the programs (algorithms).
The size of a program or algorithm depends on :
- the number of lines;
- the number of instructions.

** It is better to measure the complexity of the algorithm, i.e. to count the number
of basic operations.

Types of Basic Operations :

1. Search operations ( comparisons or logical operations ) …. [ < , <= , ≠ , > , >= , == ]


2. Arithmetic operations …. [ + , - , * , / , % , ++ , -- , ^ , *= , …. ]

Three ways to determine the cost of an algorithm :-


1. Worst case complexity
2. Best case complexity
3. Average case complexity

Suppose n ≥ 0 (the size of the input) :

1. Best case complexity :


∀ n ≥ 0 , ∀ I (instances of the problem of size n) define :
B(n) = the minimum value of T(I) ,
where T(I) is the number of basic operations for instance I.

( B(n) is the best case complexity of the algorithm )


2. Worst case complexity :
∀ n ≥ 0 , ∀ I (instances of the problem of size n) define :
W(n) = the maximum value of T(I) ,
where T(I) is the number of basic operations for instance I.

( W(n) is the worst case complexity of the algorithm )

3. Average complexity :
A(n) = ∑ P(I)·T(I)  ( sum over all instances I of size n )
where P(I) is the probability that instance I occurs and
T(I) is the number of basic operations for instance I.

There are two ways to define the complexity notation of an algorithm for a class of
problems (worst or average case complexity); both are given below.

Examples :

Example1 - Meaning of Instance I :

Suppose we have a one-dimensional array of length 10 containing distinct int keys :

Index :   1   2   3   4   5   6    7   8   9   10
Key   :  19  22  13  45  34  31  100  90  75   60

Problem : Searching a given key


Algorithm : Sequential Searching

There are 11 instances of this problem :

I1 : Find the first ⇒ T(I1) = 1 B.O.

I2 : Find the second ⇒ T(I2) = 2 B.O.
I3 : Find the third ⇒ T(I3) = 3 B.O.

….
I10 : Find the last ⇒ T(I10) = 10 B.O.
I11 : Not found ⇒ T(I11) = 10 B.O.

Best case complexity = B(n) = B(10) = 1 B.O.


Worst case complexity = W(n) = W(10) = 10 B.O.

Average case complexity =


A(n) = A(10) = ∑ P(I)·T(I) = (1/11)·1 + (1/11)·2 + ... + (1/11)·10 +
(1/11)·10 = 65/11 ≈ 5.91 B.O.
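The eleven instances and their operation counts can be reproduced with a short sketch (Python; `sequential_ops` is a hypothetical helper, not part of the course material):

```python
def sequential_ops(keys, target):
    """Return the number of key comparisons (basic operations)
    sequential search performs before stopping."""
    ops = 0
    for k in keys:
        ops += 1              # one comparison per visited element
        if k == target:
            break
    return ops

keys = [19, 22, 13, 45, 34, 31, 100, 90, 75, 60]

# T(I) for the ten "found" instances I1..I10, plus "not found" (I11)
counts = [sequential_ops(keys, k) for k in keys] + [sequential_ops(keys, -1)]

best = min(counts)                           # B(10)
worst = max(counts)                          # W(10)
average = sum(c * (1 / 11) for c in counts)  # A(10), uniform P(I) = 1/11
```

Running it confirms B(10) = 1, W(10) = 10 and A(10) = 65/11.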
Example2 :
Analyze and find the worst case complexity of the following algorithm :

t = 1;
while ( t <= n )   // n is the input size
{
    for ( int i = 1 ; i < n ; ++i )
    {
        Add += i % t ;
        for ( int j = n ; j > 1 ; --j )
            P = P * 3 ;
    }
    if ( X > 2 )
        S.O.P(Y - 1);
    t = t + 1;
}

Basic Op   Count
<=         n
<          n²
++         n²
+=         n²
%          n²
>          n³
--         n³
*          n³
>          n
-          n
+          n

⇒ W(n) = 4n + 4n² + 3n³
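The counts in the table keep only the leading power of n: for example, the `%` operation really executes n·(n−1) times, which the table rounds up to n². A sketch (a hypothetical Python translation of the pseudocode, with dummy values for the undeclared `X`, `Y`, `Add`, `P`) that counts the `%` operations directly:

```python
def count_mod_ops(n):
    """Execute the loop nest from Example 2 and count how many
    times the basic operation % is actually performed."""
    mod_ops = 0
    add, p = 0, 1
    x, y = 3, 5                        # dummy values for X and Y
    t = 1
    while t <= n:                      # runs n times
        for i in range(1, n):          # i = 1 .. n-1
            add += i % t
            mod_ops += 1
            for j in range(n, 1, -1):  # j = n .. 2
                p = p * 3
        if x > 2:
            _ = y - 1                  # stands in for S.O.P(Y - 1)
        t = t + 1
    return mod_ops
```

The exact count is n·(n−1); the table's n² is the usual leading-power approximation.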


Example3 :


Two algorithms P1, P2 solve the same problem, with W1 , W2 as the worst case
complexities of the two algorithms :
P1 : W1(n) = 100n
P2 : W2(n) = 4n²
1. Suppose the input size : n < 25

n = 1 ⇒ W1(n) = 100n = 100 B.O. , W2(n) = 4n² = 4 B.O.


⇒ In this case it is better to use P2

n = 2 ⇒ W1(n) = 100n = 200 B.O. , W2(n) = 4n² = 16 B.O.


⇒ In this case it is better to use P2 than P1

n = 3 ⇒ W1(n) = 100n = 300 B.O. , W2(n) = 4n² = 36 B.O.


⇒ In this case it is better to use P2 than P1

……..

n = 24 ⇒ W1(n) = 100n = 2400 B.O. , W2(n) = 4n² = 2304 B.O.


⇒ In this case it is better to use P2 than P1

2. Suppose the input size : n = 25 ⇒

W1(n) = 100n = 2500 B.O. , W2(n) = 4n² = 2500 B.O.


⇒ In this case P1 and P2 are the same

3. Suppose the input size : n > 25

n = 26 ⇒ W1(n) = 100n = 2600 B.O. , W2(n) = 4n² = 2704 B.O.


⇒ W2(n) > W1(n)
⇒ In this case it is better to use P1 than P2

In general :
⇒ ∀ n > 25 : P1 is better than P2 , ( using the same computer )
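The crossover point at n = 25 can be found directly from the two worst-case formulas (a minimal sketch; `crossover` is a hypothetical helper name):

```python
def crossover(limit=10_000):
    """Smallest n at which P1 (cost 100n) becomes strictly
    cheaper than P2 (cost 4n^2)."""
    for n in range(1, limit):
        if 100 * n < 4 * n * n:
            return n
    return None

n_star = crossover()   # first input size where P1 wins
```

Below 25 basic-operation counts favour P2, at 25 they tie, and from 26 on P1 wins.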
Definitions :

f , g : N → R⁺ ( two positive real-valued functions ). Then :

1. g(n) is O(f(n)) ( read: g(n) is big O of f(n) )


⟺ ∃ k ∈ R⁺ , ∃ n0 ∈ N such that
g(n) ≤ k·f(n) ∀ n ≥ n0

2. g(n) is Ω(f(n)) ( read: g(n) is big Omega of f(n) )


⟺ ∃ k ∈ R⁺ , ∃ n0 ∈ N such that
g(n) ≥ k·f(n) ∀ n ≥ n0

3. g(n) is Θ(f(n)) ( read: g(n) is big Theta of f(n) )


⟺ ∃ k1 , k2 ∈ R⁺ , ∃ n0 ∈ N such that
k1·f(n) ≤ g(n) ≤ k2·f(n) ∀ n ≥ n0
** If g(n) is O(f(n)) but f(n) is not O(g(n)) ⇒ O(g(n)) is better than O(f(n)).

That means an algorithm with worst case complexity g(n) runs (asymptotically) faster
than one with worst case complexity f(n).
⇒ An algorithm is efficient ⟺ W(n) is O(n^k) , where k ∈ N\{0}.

Examples :

Ex1 :
Given two positive real functions W1(n) and W2(n) , where

W1(n) = 100n and W2(n) = 4n²

Question : Is W1(n) O(W2(n)) ? Or


Is W2(n) O(W1(n)) ?

W1(n) is O(W2(n)) ?
Solution :
Suppose k = 1 and n0 = 25 , g(n) = W1(n) and f(n) = W2(n) ⇒
g(n) ≤ k·f(n) ∀ n ≥ n0 ?
W1(n) ≤ 1·W2(n) ∀ n ≥ 25 ?
100n ≤ 1·4n² ∀ n ≥ 25 ?
25 ≤ n ? ⇒ Yes ∀ n ≥ 25
⇒ W1(n) is O(W2(n))

W2(n) is O(W1(n)) ?

Solution :
Try k = 1 and n0 = 25 , g(n) = W2(n) and f(n) = W1(n) ⇒
g(n) ≤ k·f(n) ∀ n ≥ n0 ?
W2(n) ≤ 1·W1(n) ∀ n ≥ 25 ?
4n² ≤ 100n ∀ n ≥ 25 ?
n ≤ 25 ? ⇒ NO for n > 25
( in fact no constant k works , since 4n²/(100n) = n/25 → ∞ )
⇒ W2(n) is NOT O(W1(n)) ⇒ W2(n) is Ω(W1(n))
Ex2 :
W2(n) is Ω(W1(n)) ?

Solution :
Suppose k = 1 and n0 = 25 , g(n) = W2(n) and f(n) = W1(n) ⇒
g(n) ≥ k·f(n) ∀ n ≥ n0 ?
W2(n) ≥ 1·W1(n) ∀ n ≥ 25 ?
4n² ≥ 100n ∀ n ≥ 25 ?
n ≥ 25 ? ⇒ YES ∀ n ≥ 25 ⇒ W2(n) is Ω(W1(n))

Try with other values for k and n0 , e.g. k = 1/2 and n0 = 50 ….
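The suggested alternative witnesses k = 1/2 and n0 = 50 can be spot-checked over a finite range (a numeric check, not a proof for all n; the helper name is hypothetical):

```python
def omega_witness_holds(g, f, k, n0, n_max=10_000):
    """Check g(n) >= k * f(n) for every n0 <= n <= n_max."""
    return all(g(n) >= k * f(n) for n in range(n0, n_max + 1))

w1 = lambda n: 100 * n      # W1(n)
w2 = lambda n: 4 * n * n    # W2(n)

ok_original = omega_witness_holds(w2, w1, k=1, n0=25)
ok_alternative = omega_witness_holds(w2, w1, k=0.5, n0=50)
```

With k = 1/2 the inequality 4n² ≥ 50n already holds from n = 13 on, so n0 = 50 certainly works.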

Other Definitions ( using limits ) :

f , g : N → R⁺ ( two positive real-valued functions ). Then :

1. g(n) is O(f(n)) ⟺ lim (n→∞) g(n)/f(n) = c ,


where c ≥ 0 , c a finite nonnegative real number

2. g(n) is Ω(f(n)) ⟺ lim (n→∞) g(n)/f(n) = c ,


where c > 0 , a strictly positive real number ,
OR lim (n→∞) g(n)/f(n) = ∞

3. g(n) is Θ(f(n)) ⟺ lim (n→∞) g(n)/f(n) = c ,


where 0 < c < ∞ , c a positive real number

** If lim g(n)/f(n) = c , c a positive real number :


⇒ lim f(n)/g(n) = 1/c , 1/c > 0
⇒ g(n) is O(f(n)) and f(n) is O(g(n))

** If lim g(n)/f(n) = 0


⇒ g(n) is O(f(n)) but f(n) is not O(g(n))
⇒ g(n) is better than f(n)

Examples : ( classes of positive real functions )

1. Infinitely many constant functions ( like g(n) = 1/2 , 1/5 , 7.5 , 10000 , 10²⁰ , …. )
2. Infinitely many log functions ( like g(n) = log₂n , 3.5·log₂n , 10000·log₂n , 5·log₂n + 1000 − 9.5 , … )
3. Infinitely many linear functions ( like g(n) = 100·n , ….. )
4. Infinitely many linear log functions ( like g(n) = 7·n·log₂n , …. )
5. Infinitely many polynomial functions ( like g(n) = n² , n³ , n⁵ , 15·n⁴ − n·log₂n , ….. )
6. Infinitely many exponential functions ( like g(n) = 2^n , 3^n , …. )

⇒ 1. lim ( any constant function / log₂n ) = 0 , e.g. g(n) = 1/2 , g(n) = 1000 , g(n) = 10²⁰⁰
2. lim ( log₂n / n ) = 0
3. lim ( n / (n·log₂n) ) = 0
4. lim ( (n·log₂n) / n² ) = 0
5. lim ( n^p / n^q ) = 0 …. if p < q
6. lim ( n^p / 2^n ) = 0 …. ∀ positive integer p
Efficiency of algorithms :
1. O(1) [ constant functions ] is better than O(log₂n)
2. O(log₂n) [ log functions ] is better than O(n)
3. O(n) [ linear functions ] is better than O(n·log₂n)
4. O(n·log₂n) [ linear log functions ] is better than O(n²)
5. O(n^p) [ polynomial functions ] is better than O(n^q) …. if 2 ≤ p < q
6. O(n^p) [ polynomial functions ] is better than O(2^n) [ exponential functions ] …. ∀ positive integer p

OR :
O(1) < O(log₂n) < O(n) < O(n·log₂n) < O(n²) < O(n^p) (p > 2) < O(2^n) .
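The chain above can be spot-checked numerically: the ratio of each slower-growing function over the next faster-growing one should be small for large n (a sketch under the assumption that one large n stands in for n → ∞; it is evidence, not a proof):

```python
import math

n = 2 ** 20  # a large input size standing in for n -> infinity

ratios = [
    1 / math.log2(n),               # constant / log
    math.log2(n) / n,               # log / linear
    n / (n * math.log2(n)),         # linear / linear-log
    (n * math.log2(n)) / (n * n),   # linear-log / quadratic
]

m = 64
poly_over_exp = m ** 3 / 2 ** m     # n^3 / 2^n, already tiny at n = 64
```

Each ratio is well below 1, matching the ordering O(1) < O(log₂n) < O(n) < O(n·log₂n) < O(n²) < O(2^n).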

Example : (Calculation of complexity of algorithms)


Two algorithms P1, P2 solve the same problem, with W1 , W2 as the worst case
complexities of the two algorithms :
P1 : W1(n) = 100n
P2 : W2(n) = 4n²
Which algorithm is better to use than the other ?

P1 : W1(n) = 100n
⇒ lim W1(n)/n = lim 100n/n = 100 ≠ ∞ ⇒ W1(n) is O(n)

P2 : W2(n) = 4n²
⇒ lim W2(n)/n² = lim 4n²/n² = 4 ≠ ∞ ⇒ W2(n) is O(n²)

W1(n) is O(n) and W2(n) is O(n²) , and O(n) < O(n²)

⇒ Asymptotically it is better to use P1 than P2
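The two limits can be evaluated numerically (a sketch; for these particular ratios the value is already exact at any finite n, since the lower-order terms are zero):

```python
def limit_ratio(w, f, n=10 ** 6):
    """Approximate lim (n -> infinity) w(n)/f(n) by
    evaluating the ratio at one large n."""
    return w(n) / f(n)

c1 = limit_ratio(lambda n: 100 * n, lambda n: n)         # W1(n)/n
c2 = limit_ratio(lambda n: 4 * n * n, lambda n: n * n)   # W2(n)/n^2
```

Both limits are finite constants, so W1(n) is O(n) and W2(n) is O(n²).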


Example :
Suppose we have an algorithm P with worst case complexity W(n) ,
every basic operation costs τ ,
and T is the time used to run the algorithm on an input of size n :

T = W(n)·τ

Solving this equation for n gives the maximum input size that can be handled
in time T .

Examples :

Ex1 :
Suppose τ = 1 ms ,
W(n) = n² ,
T = 1 hour

⇒ T = W(n)·τ
⇒ 60·60 = n² · 10⁻³
⇒ n² = 3.6 · 10⁶
⇒ n = 600·√10 ≈ 1897 maximum input size
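For W(n) = n², solving T = W(n)·τ gives n = √(T/τ); a sketch reproducing the numbers above (the function name is hypothetical):

```python
import math

def max_input_size(T_seconds, tau_seconds):
    """Largest integer n with n^2 * tau <= T, for W(n) = n^2."""
    return math.isqrt(round(T_seconds / tau_seconds))

n_max = max_input_size(T_seconds=3600, tau_seconds=1e-3)  # 1 hour, tau = 1 ms
```

`math.isqrt` rounds down, so the result is the largest n that still fits in the hour.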

Ex2 :
Given an algorithm with W(n) = 2^n that runs on two different machines, such that
the time to execute a basic operation on the first machine equals τ and on the
other one equals τ/k , where k ≥ 2 .
Calculate n1 , n2 , the maximum input sizes for the two machines that can be
handled in the same time T .

Solution :
Using the equation T = W(n)·τ :

T = W(n1)·τ
T = (τ/k)·W(n2)

⇒ W(n1)·τ = (τ/k)·W(n2)
⇒ W(n2) = k·W(n1)

Now we have the complexity W(n) = 2^n :


W(n2) = k·W(n1)
⇒ 2^n2 = k·2^n1        | apply log₂
⇒ log₂(2^n2) = log₂(k·2^n1)
⇒ n2 = log₂k + n1
⇒ n2 > n1 .
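The result n2 = n1 + log₂k means a machine k times faster only buys log₂k extra units of input size when W(n) = 2^n. A sketch with hypothetical numbers (n1 = 20, k = 8):

```python
import math

def faster_machine_size(n1, k):
    """For W(n) = 2^n, a machine k times faster handles
    n2 = n1 + log2(k) in the same time T."""
    return n1 + math.log2(k)

n2 = faster_machine_size(n1=20, k=8)
```

An 8x faster machine gains only 3 extra units of input size, and the same time budget T is respected: 2^n2 ops at speed τ/k equals 2^n1 ops at speed τ.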
