DAA All 5 Units - DAA Notes (All Units)
Syllabus
1. Introduction: Algorithms, Analyzing Algorithms, Complexity of Algorithms, Growth of Functions.
• Shell Sort
• Quick Sort
• Merge Sort
• Heap Sort
4. Comparison of Sorting Algorithms.
5. Sorting in Linear Time.
Growth of Functions describes how the time or space requirements of an algorithm grow with the size of the input. The growth rate helps in understanding the efficiency of an algorithm.

Example: Analyzing the time complexity of the Binary Search algorithm.
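To make the growth-rate idea concrete, here is a minimal Python sketch (an illustration, not part of the original notes) of binary search on a sorted list; the search interval halves on every iteration, so the loop body runs O(log n) times.

def binary_search(arr, target):
    # Assumes arr is sorted; the interval [lo, hi] halves each iteration.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 3, 12, 34, 54], 34))  # prints 3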
Question 1.6
Solve the recurrence relation:
T(n) = 7T(n/2) + n^2
Now, consider another algorithm with the recurrence:
T'(n) = aT'(n/4) + n^2
Find the largest integer a such that the algorithm T' runs faster than the first algorithm.
Solution:
T(n) = 7T(n/2) + n^2 and T'(n) = aT'(n/4) + n^2
By comparing, we have:
a = 7, b = 2, f(n) = n^2
Thus,
T(n) = Θ(n^(log_2 7)) = Θ(n^2.81)
Now for the second recurrence:
For T' to run faster than T, we need n^(log_4 a) < n^(log_2 7), i.e.
log a / log 4 < log 7 / log 2  ⇒  log a < (log 7 / log 2) × log 4 = 1.6902  ⇒  a < 49
The largest such integer is a = 48.
Question 1.7
Solve the recurrence relation:
T(n) = 7T(n/3) + n^2
Now, consider another algorithm with the recurrence:
S(n) = aS(n/9) + n^2
Find the largest integer a such that the algorithm S runs faster than the first algorithm.
Solution:
T(n) = 7T(n/3) + n^2 and S(n) = aS(n/9) + n^2
Comparing the equations:
a = 7, b = 3, f(n) = n^2
Using Master's theorem:
n^(log_b a) = n^(log_3 7) = n^1.771
Case 3 of Master's theorem applies, since
f(n) = n^2 = Ω(n^(1.771 + ε)) for any 0 < ε ≤ 0.229.
Thus, the complexity is:
T(n) = Θ(n^2)
For algorithm S, the same analysis uses n^(log_9 a). As long as log_9 a < 2, i.e. a < 81, case 3 applies and
S(n) = Θ(n^2),
which matches T(n). For a = 81, case 2 applies and
S(n) = Θ(n^2 log n) > T(n),
and for a > 81 the complexity is higher still. Therefore, algorithm S can never be asymptotically faster than T; the largest a for which it is not slower is a = 80.
Question 1.8
Solve the recurrence relation:
T(n) = T(√n) + O(log n)
Solution:
Let:
n = 2^m, so that n^(1/2) = 2^(m/2)
Then:
T(2^m) = T(2^(m/2)) + O(log 2^m). Let x(m) = T(2^m).
Substituting into the equation:
x(m) = x(m/2) + O(m)
Expanding gives the geometric series m + m/2 + m/4 + ··· = O(m), so the solution is:
x(m) = Θ(m)  ⇒  T(n) = Θ(log n)
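As a sanity check (a sketch added for illustration), the recurrence can be evaluated numerically; the computed values stay within a small constant factor of log n, consistent with the Θ(log n) bound.

import math

def T(n):
    # T(n) = T(sqrt(n)) + log2(n), with a small base case
    if n <= 2:
        return 1.0
    return T(math.isqrt(n)) + math.log2(n)

for n in [2**8, 2**16, 2**32]:
    print(n, round(T(n), 1), math.log2(n))
    # T(n) comes out at roughly 2 * log2(n) for these inputs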
Recursion:
Recursion is a process where a function calls itself either directly or indirectly to solve a problem. In recursion, a problem is divided into smaller instances of the same problem, and solutions to these smaller instances are combined to solve the original problem. Recursion typically involves two main parts:
• Base Case: A condition under which the recursion stops.
• Recursive Case: The part where the function calls itself to break the problem into smaller instances.
Example of Recursion: Let's take the example of calculating the factorial of a number n, which is defined as:
n! = n × (n − 1) × (n − 2) × ··· × 1
The recursive definition of factorial is:
factorial(n) = 1 if n = 0; n × factorial(n − 1) if n > 0
In this example, the base case is factorial(0) = 1, and the recursive case is n × factorial(n − 1).
Recursion Tree:
A recursion tree is a tree representation of the recursive calls made by a recursive algorithm. Each node represents a function call, and its children represent the subsequent recursive calls. The depth of the tree represents the depth of recursion.

3.7 Sorting Algorithms

3.7.1 Shell Sort
Shell sort is an in-place comparison sort algorithm that extends the basic insertion sort algorithm by allowing exchanges of elements that are far apart. The main idea is to rearrange the elements so that elements that are far apart are sorted before doing a finer sort using insertion sort.
Shell sort improves the performance of insertion sort by breaking the original list into sublists based on a gap sequence and then sorting each sublist using insertion sort. This allows the algorithm to move elements more efficiently, especially when they are far apart from their correct positions.
Algorithm:
1. Start with a large gap between elements. A commonly used gap sequence is to divide the length of the list by 2 repeatedly until the gap is 1.
2. For each gap size, go through the list and compare elements that are that gap distance apart.
3. Use insertion sort to sort the sublists created by these gaps.
4. Continue reducing the gap until it becomes 1. When the gap is 1, the list is fully sorted by insertion sort.
Shell Sort Algorithm (Pseudocode):
shellSort(arr, n):
    gap = n // 2          # Initialize the gap size
    while gap > 0:
        for i = gap to n-1:
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j = j - gap
            arr[j] = temp
        gap = gap // 2
Example: Let’s sort the array [12, 34, 54, 2, 3] using Shell sort.
Step 1: Initial array
[12, 34, 54, 2, 3]
1. Start with gap = 5//2 = 2, meaning the array will be divided into sublists based on the gap 2.
• Compare elements at index 0 and 2: [12, 54]. No change since 12 < 54.
• Compare elements at index 1 and 3: [34, 2]. Swap since 34 > 2, resulting in: [12, 2, 54, 34, 3]
• Compare elements at index 2 and 4: [54, 3]. Swap since 54 > 3, resulting in: [12, 2, 3, 34, 54]
2. Reduce gap to 1: gap = 2//2 = 1. Now we perform insertion sort on the whole array:
• Compare index 0 and 1: [12, 2]. Swap since 12 > 2, resulting in: [2, 12, 3, 34, 54]
• Compare index 1 and 2: [12, 3]. Swap since 12 > 3, resulting in: [2, 3, 12, 34, 54]
The array is now sorted: [2, 3, 12, 34, 54]
• Shell sort is more efficient than insertion sort for large lists, especially when elements are far from their final positions.
• The efficiency depends on the choice of the gap sequence. A commonly used sequence is gap = n//2, reducing until gap equals 1.
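The pseudocode above maps directly to Python; a minimal runnable sketch using the gap = n//2 sequence:

def shell_sort(arr):
    n = len(arr)
    gap = n // 2                     # initial gap size
    while gap > 0:
        for i in range(gap, n):      # gapped insertion sort
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = temp
        gap //= 2                    # reduce the gap
    return arr

print(shell_sort([12, 34, 54, 2, 3]))  # [2, 3, 12, 34, 54]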
3.7.2 Quick Sort
Quick Sort is a divide-and-conquer sorting algorithm that partitions the array around a pivot and recursively sorts the two sides.
Algorithm:
1. Choose a Pivot: Select an element from the array to act as the pivot.
2. Partition: Rearrange the array such that elements less than the pivot come before it, and elements greater come after it.
3. Recur: Apply the same process to the sub-arrays on either side of the pivot.
Advantages of Quick Sort:
• Efficient Average Case: Quick Sort has an average-case time complexity of O(n log n).
Disadvantages of Quick Sort:
• Worst-Case Performance: The worst-case time complexity is O(n^2), typically occurring with poor pivot choices.
• Not Stable: Quick Sort is not a stable sort.
Pseudocode:
QuickSort(arr, low, high):
    if low < high:
        pivotIndex = Partition(arr, low, high)
        QuickSort(arr, low, pivotIndex - 1)
        QuickSort(arr, pivotIndex + 1, high)

Partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j = low to high - 1:
        if arr[j] < pivot:
            i = i + 1
            swap arr[i] with arr[j]
    swap arr[i + 1] with arr[high]
    return i + 1

Example: Consider the array:
[10, 7, 8, 9, 1, 5]
• Choose pivot: 5
• Partition around pivot 5: [1, 5, 8, 9, 7, 10]
• Recursively apply Quick Sort to [1] and [8, 9, 7, 10]
• Continue until the entire array is sorted: [1, 5, 7, 8, 9, 10]
Visualization:
Initial Array: [10, 7, 8, 9, 1, 5]
After Partitioning: [1, 5, 8, 9, 7, 10]
Final Sorted Array: [1, 5, 7, 8, 9, 10]
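A runnable Python version of the pseudocode above (Lomuto partition with the last element as pivot; a sketch for experimentation). Note that the exact order of elements on either side after the first partition may differ from the visualization, since any valid partition is acceptable.

def partition(arr, low, high):
    pivot = arr[high]                # last element as pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)

data = [10, 7, 8, 9, 1, 5]
quick_sort(data, 0, len(data) - 1)
print(data)  # [1, 5, 7, 8, 9, 10]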
3.7.3 Merge Sort
Merge Sort is a stable, comparison-based divide-and-conquer algorithm that divides the array into smaller sub-arrays, sorts them, and then merges them back together.
Algorithm:
1. Divide: Recursively divide the array into two halves until each sub-array contains a single element.
2. Merge: Merge the sorted sub-arrays to produce sorted arrays until the entire array is merged.
Pseudocode:
MergeSort(arr, left, right):
    if left < right:
        mid = (left + right) // 2
        MergeSort(arr, left, mid)
        MergeSort(arr, mid + 1, right)
        Merge(arr, left, mid, right)

Merge(arr, left, mid, right):
    n1 = mid - left + 1
    n2 = right - mid
    L = arr[left:left + n1]
    R = arr[mid + 1:mid + 1 + n2]
    i = 0
    j = 0
    k = left
    while i < n1 and j < n2:
        if L[i] <= R[j]:
            arr[k] = L[i]
            i = i + 1
        else:
            arr[k] = R[j]
            j = j + 1
        k = k + 1
    while i < n1:
        arr[k] = L[i]
        i = i + 1
        k = k + 1
    while j < n2:
        arr[k] = R[j]
        j = j + 1
        k = k + 1

Example: Consider the array:
[38, 27, 43, 3, 9, 82, 10]
Visualization:
Initial Array: [38, 27, 43, 3, 9, 82, 10]
Final Sorted Array: [3, 9, 10, 27, 38, 43, 82]
• Stable Sort: Merge Sort maintains the relative order of equal elements.
• Slower for Small Lists: It may be slower compared to algorithms like Quick Sort for smaller lists.
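The Merge Sort pseudocode is likewise directly runnable as Python (sketch):

def merge(arr, left, mid, right):
    L = arr[left:mid + 1]            # copies of the two sorted halves
    R = arr[mid + 1:right + 1]
    i = j = 0
    k = left
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:             # '<=' keeps equal elements in order (stable)
            arr[k] = L[i]; i += 1
        else:
            arr[k] = R[j]; j += 1
        k += 1
    while i < len(L):
        arr[k] = L[i]; i += 1; k += 1
    while j < len(R):
        arr[k] = R[j]; j += 1; k += 1

def merge_sort(arr, left, right):
    if left < right:
        mid = (left + right) // 2
        merge_sort(arr, left, mid)
        merge_sort(arr, mid + 1, right)
        merge(arr, left, mid, right)

data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data, 0, len(data) - 1)
print(data)  # [3, 9, 10, 27, 38, 43, 82]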
3.7.4 Heap Sort
Heap Sort is a comparison-based sorting algorithm that utilizes a binary heap data structure. It works by building a max heap and then repeatedly extracting the maximum element to build the sorted array.
Disadvantages:
• Not Stable: Heap Sort is not a stable sort, meaning equal elements may not retain their original order.
• Performance: It can be slower compared to algorithms like Quick Sort due to the overhead of heap operations.
Algorithm:
1. Build a Max Heap: Convert the input array into a max heap where the largest element is at the root.
2. Extract Max: Swap the root of the heap (maximum element) with the last element of the heap and then reduce the heap size by one. Heapify the root to maintain the max heap property.
3. Repeat: Continue the extraction and heapify process until the heap is empty.
Pseudocode:
HeapSort(arr):
    n = length(arr)
    BuildMaxHeap(arr)
    for i = n - 1 down to 1:
        swap arr[0] with arr[i]
        Heapify(arr, 0, i)

BuildMaxHeap(arr):
    n = length(arr)
    for i = n // 2 - 1 down to 0:
        Heapify(arr, i, n)

Heapify(arr, i, n):
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        swap arr[i] with arr[largest]
        Heapify(arr, largest, n)

4 Comparison of Sorting Algorithms

Comparison Table:

Algorithm | Time Complexity (Best) | Time Complexity (Worst) | Space Complexity
Shell Sort | O(n log n) | O(n^2) | O(1)
Quick Sort | O(n log n) | O(n^2) | O(log n)
Merge Sort | O(n log n) | O(n log n) | O(n)
Heap Sort | O(n log n) | O(n log n) | O(1)

5 Sorting in Linear Time

5.1 Introduction to Linear Time Sorting
Linear time sorting algorithms such as Counting Sort, Radix Sort, and Bucket Sort are designed to sort data in linear time O(n).
Examples:
• Counting Sort: Efficient for small range of integers.

5.1.1 Bucket Sort
Bucket Sort is a distribution-based sorting algorithm that divides the input into several buckets and then sorts each bucket individually. It is particularly useful when the input is uniformly distributed over a range; a sketch follows the algorithm outline below.
Algorithm:
1. Create Buckets: Create an empty bucket for each possible range.
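The remaining steps of Bucket Sort are cut off in these notes; a common formulation (a sketch, assuming inputs uniformly distributed in [0, 1)) distributes elements into buckets, sorts each bucket, and concatenates the results:

def bucket_sort(arr):
    n = len(arr)
    buckets = [[] for _ in range(n)]        # 1. create empty buckets
    for x in arr:                           # 2. distribute (assumes 0 <= x < 1)
        buckets[int(x * n)].append(x)
    for b in buckets:                       # 3. sort each bucket individually
        b.sort()
    return [x for b in buckets for x in b]  # 4. concatenate the buckets

print(bucket_sort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47]))
# [0.23, 0.25, 0.32, 0.42, 0.47, 0.52]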
• Insertion Sort: Maintain the order of elements with equal keys by inserting each
element into its correct position relative to previously sorted elements.
Pseudocode:
InsertionSort(arr):
for i from 1 to length(arr):
key = arr[i]
j = i - 1
while j >= 0 and arr[j] > key:
arr[j + 1] = arr[j]
j = j - 1
arr[j + 1] = key
5.1.3 Radix Sort
Radix Sort is a non-comparative integer sorting algorithm that processes digits of numbers. It works by sorting numbers digit by digit, starting from the least significant digit to the most significant digit.
Algorithm:
1. Determine Maximum Digits: Find the maximum number of digits in the array.
2. Sort by Digit: Sort the array by each digit using a stable sort (e.g., Counting Sort).
3. Repeat: Continue until all digits are processed.
Pseudocode:
RadixSort(arr):
    maxValue = max(arr)
    exp = 1
    while maxValue // exp > 0:
        CountingSort(arr, exp)
        exp = exp * 10

CountingSort(arr, exp):
    n = length(arr)
    output = [0] * n
    count = [0] * 10
    for i in range(n):
        index = (arr[i] // exp) % 10
        count[index] += 1
    for i in range(1, 10):
        count[i] += count[i - 1]
    for i in range(n - 1, -1, -1):
        index = (arr[i] // exp) % 10
        output[count[index] - 1] = arr[i]
        count[index] -= 1
    for i in range(n):
        arr[i] = output[i]

Example: Consider the array:
[170, 45, 75, 90, 802, 24, 2, 66]
• Sort by least significant digit: [170, 90, 802, 2, 24, 45, 75, 66]
• Sort by tens digit: [802, 2, 24, 45, 66, 170, 75, 90]
• Sort by hundreds digit: [2, 24, 45, 66, 75, 90, 170, 802] (sorted)
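The pseudocode above is essentially Python already; a self-contained runnable sketch:

def counting_sort_by_digit(arr, exp):
    n = len(arr)
    output = [0] * n
    count = [0] * 10
    for x in arr:
        count[(x // exp) % 10] += 1
    for i in range(1, 10):
        count[i] += count[i - 1]        # prefix sums give final positions
    for x in reversed(arr):             # reverse scan keeps the sort stable
        d = (x // exp) % 10
        output[count[d] - 1] = x
        count[d] -= 1
    arr[:] = output

def radix_sort(arr):
    exp = 1
    while max(arr) // exp > 0:
        counting_sort_by_digit(arr, exp)
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]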
Question: Among Merge Sort, Insertion Sort, and Quick Sort, which algorithm performs the best in the worst case? Apply the best algorithm to sort the list
E, X, A, M, P, L, E
in alphabetical order.
Answer: Merge Sort has a worst-case time complexity of O(n log n). Insertion Sort has a worst-case time complexity of O(n^2). Quick Sort has a worst-case time complexity of O(n^2), though its average-case complexity is O(n log n). In the worst case, Merge Sort performs the best among these algorithms.
Sorted List using Merge Sort. Step-by-Step Solution:
1. Initial List:
• Given List: E, X, A, M, P, L, E
2. Divide the List:
• Divide the list into two halves: E, X, A and M, P, L, E
3. Recursive Division:
• For the first half E, X, A:
  – Divide further into: E and X, A
  – For X, A:
    ∗ Divide into: X and A
• For the second half M, P, L, E:
  – Divide further into: M, P and L, E
  – For M, P:
    ∗ Divide into: M and P
  – For L, E:
    ∗ Divide into: L and E
4. Merge the Sorted Sublists:
• Merge E and X, A:
  – Merge X and A to get: A, X
  – Merge E and A, X to get: A, E, X
• Merge M and P to get: M, P
• Merge L and E to get: E, L
• Merge M, P and E, L:
  – Merging the two sorted halves gives: E, L, M, P
• Merge A, E, X and E, L, M, P:
  – Final merge results in: A, E, E, L, M, P, X
5. Sorted List:
• A, E, E, L, M, P, X
Syllabus
• Red-Black Trees
• B-Trees
• Binomial Heaps
• Fibonacci Heaps
• Tries
• Skip List
Binary Search Trees
Each node of a binary search tree (BST) contains the following fields:
• LEFT: A pointer to the left child node, which contains only nodes with keys less than the current node's key.
• KEY: The value stored in the current node. This value determines the order within the tree.
• PARENT: A pointer to the parent node. The root node's parent pointer is NULL.
• RIGHT: A pointer to the right child node, which contains only nodes with keys greater than the current node's key.
Properties of a BST:
• The left subtree of a node contains only nodes with keys less than the node’s key.
• The right subtree of a node contains only nodes with keys greater than the node’s
key.
• Both the left and right subtrees must also be binary search trees.
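A minimal Python sketch of a BST node with the four fields listed above, plus a search that follows one root-to-leaf path (so its cost is proportional to the tree height):

class Node:
    def __init__(self, key):
        self.key = key        # KEY
        self.left = None      # LEFT child (keys < key)
        self.right = None     # RIGHT child (keys > key)
        self.parent = None    # PARENT (None for the root)

def bst_search(node, key):
    # Each step descends one level, so the cost is O(height).
    while node is not None and node.key != key:
        node = node.left if key < node.key else node.right
    return node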
• Worst-Case Time Complexity: In the worst case, such as when the tree becomes
unbalanced (e.g., inserting sorted data), the height of the BST can reach O(n),
resulting in search, insertion, and deletion operations having a time complexity of
O(n).
• Space Complexity: Each node requires extra memory for storing pointers to its children, which can lead to higher space complexity compared to array-based structures, especially in unbalanced trees.
• Poor Performance with Sorted Data: If input data is already sorted, the BST
will degenerate into a linked list, causing all operations to degrade to O(n) time
complexity.
• Balancing Overhead: Self-balancing BSTs (like AVL or Red-Black Trees) re-
quire additional operations (rotations and recoloring), which add extra overhead to
insertion and deletion operations.
• Cache Inefficiency: Due to pointer-based navigation, BSTs exhibit poor cache
locality, leading to slower performance compared to structures like arrays.
Red-Black Trees
A Red-Black Tree is a type of self-balancing binary search tree in which each node contains
an extra bit for denoting the color of the node, either red or black. The tree maintains
its balance by following a set of rules during insertion and deletion operations.
4. The value of bh when a leaf (NIL node) is reached is the black height of the tree.
• The height of the tree, h, is at most 2 × bh, where bh is the black height of the tree.
This is because at most every alternate node along a path from the root to a leaf
can be red.
• Thus, if the black height of a Red-Black Tree is bh, the maximum height of the tree
is 2 × bh.
Time Complexity:
Calculating the black height of a Red-Black Tree requires traversing from the root to any
leaf, resulting in a time complexity of O(log n), where n is the number of nodes in the
tree.
RB-DELETE-FIXUP(T, x)
    while x != T.root and x.color == BLACK
        if x == x.p.left
            w = x.p.right
            if w.color == RED                      // Case 1: sibling w is RED
                w.color = BLACK
                x.p.color = RED
                LEFT-ROTATE(T, x.p)
                w = x.p.right
            if w.left.color == BLACK and w.right.color == BLACK
                w.color = RED                      // Case 2: both of w's children are BLACK
                x = x.p
            else
                if w.right.color == BLACK          // Case 3: w's left child RED, right child BLACK
                    w.left.color = BLACK
                    w.color = RED
                    RIGHT-ROTATE(T, w)
                    w = x.p.right
                w.color = x.p.color                // Case 4: w's right child is RED
                x.p.color = BLACK
                w.right.color = BLACK
                LEFT-ROTATE(T, x.p)
                x = T.root
        else
            (mirror image of the above, with "left" and "right" exchanged)
    x.color = BLACK
• Case 3: Sibling is BLACK, sibling’s left child is RED, and sibling’s right child is
BLACK.
B-Trees
A B-Tree is a self-balancing search tree in which nodes can have more than two children.
It is commonly used in databases and file systems to maintain sorted data and allow
searches, sequential access, insertions, and deletions in logarithmic time.
Properties of B-Trees:
• All leaves are at the same level.
Binomial Heaps
Binomial Heaps are a type of heap data structure that supports efficient merging of two
heaps. It is composed of a collection of binomial trees that satisfy the heap property.
• The root has the smallest key and the heap is represented as a linked list of binomial
trees.
1 Union of Binomial Heap
A binomial heap is a collection of binomial trees. The union of two binomial heaps involves merging two heaps into one, preserving the properties of binomial heaps.

2 Binomial Heap Merge Algorithm
The merge algorithm combines two binomial heaps into one by merging their binomial trees of the same degree, similar to the binary addition process.
BINOMIAL-HEAP-MERGE(H1, H2)
1. Create a new binomial heap H
2. Set H.head to the root of the merged list of H1 and H2
3. Return H
4 BINOMIAL-HEAP-DECREASE-KEY Algorithm
The decrease-key operation reduces the key of a given node to a smaller value and then
adjusts the heap to maintain the binomial heap property.
BINOMIAL-HEAP-DECREASE-KEY(H, x, k)
1. If k > x.key then Error: New key is larger than current key
2. Set x.key ← k
3. Set y ← x, z ← y.p
4. While z ≠ NIL and y.key < z.key:
5. Exchange y and z
6. Set y ← z, z ← y.p
Tries
• Each node has a boolean flag indicating whether it marks the end of a key.
• The time complexity of search, insert, and delete operations is proportional to the
length of the key.
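A minimal trie sketch matching these properties (hypothetical helper names; search and insert touch one node per character of the key):

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False       # boolean flag: end of a stored key

def trie_insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_end = True

def trie_search(root, word):
    node = root
    for ch in word:               # O(length of key) steps
        if ch not in node.children:
            return False
        node = node.children[ch]
    return node.is_end

root = TrieNode()
trie_insert(root, "tree")
print(trie_search(root, "tree"), trie_search(root, "tr"))  # True False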
Syllabus
• Divide and Conquer with examples such as Sorting, Matrix Multiplication, Convex Hull, and Searching.
• Greedy Methods with examples such as Optimal Reliability Allocation, Knapsack, Minimum Spanning Trees (Prim's and Kruskal's algorithms), Single Source Shortest Paths (Dijkstra's and Bellman-Ford algorithms).
The Divide and Conquer technique involves solving problems by breaking them into smaller sub-problems, solving each sub-problem independently, and then combining their results. It consists of three parts:
• Divide: The problem is divided into smaller sub-parts.
• Conquer: The sub-problems are solved recursively. If the sub-problem is small enough,
solve it directly.
• Combine: The solutions of the sub-problems are combined to get the final solution to
the original problem.
Example 1: Merge Sort
Merge Sort is a sorting algorithm that follows the Divide and Conquer paradigm. The array is divided into two halves, sorted independently, and then merged.
Algorithm:
• Divide the array into two halves.
• Recursively sort each half.
• Merge the two sorted halves to get the sorted array.
Example: For an array [38, 27, 43, 3, 9, 82, 10], the array is divided and merged in steps.
Matrix Multiplication
Matrix multiplication is a fundamental operation in many areas of computer science and mathematics. There are two main methods for matrix multiplication:
1. Conventional Matrix Multiplication (Naive Method)
The conventional method of multiplying two matrices A and B follows the standard O(n^3) approach. If A and B are n × n matrices, the product matrix C = AB is calculated as:
C[i][j] = Σ_{k=1}^{n} A[i][k] · B[k][j]
The time complexity of this method is O(n^3), since each element of the resulting matrix C is computed by multiplying n pairs of elements from A and B.
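The O(n^3) formula translates to a triple loop (sketch):

def matmul_naive(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):               # n multiplications per entry C[i][j]
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]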
2. Divide and Conquer Approach to Matrix Multiplication
In the divide and conquer approach, matrices A and B are divided into smaller sub-matrices. This method recursively multiplies the sub-matrices and combines the results to obtain the final product matrix. The key idea is to reduce the matrix multiplication problem size by breaking down large matrices into smaller parts.

Strassen's Matrix Multiplication
1. Divide: Split each n × n matrix A and B into four sub-matrices of size (n/2) × (n/2). Let:
A = [A11 A12; A21 A22],  B = [B11 B12; B21 B22]
2. Conquer: Perform seven recursive multiplications:
M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22)B11
M3 = A11(B12 − B22)
M4 = A22(B21 − B11)
M5 = (A11 + A12)B22
M6 = (A21 − A11)(B11 + B12)
M7 = (A12 − A22)(B21 + B22)
3. Combine: Combine the seven products to get the final sub-matrices of C:
C11 = M1 + M4 − M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 − M2 + M3 + M6
Thus, the matrix C is:
C = [C11 C12; C21 C22]
Recurrence Relation: The time complexity of Strassen's Algorithm can be expressed by the recurrence relation:
T(n) = 7T(n/2) + O(n^2)
Here, T(n) represents the time complexity for multiplying two n × n matrices. Solving this recurrence using the master theorem gives T(n) = O(n^(log2 7)) ≈ O(n^2.81).

Advantages and Disadvantages
Strassen's method performs seven recursive multiplications instead of eight, which lowers the asymptotic complexity; the trade-off is a larger number of matrix additions and a more involved implementation.
Example: Let's multiply two 2 × 2 matrices using Strassen's method. Let A and B be:
A = [1 2; 3 4],  B = [5 6; 7 8]
Using Strassen's method, we compute the seven products M1, M2, ..., M7 and then combine them to get the resulting matrix C. The resulting product matrix C = AB is:
C = [19 22; 43 50]
This example demonstrates the power of Strassen's method in reducing the number of scalar multiplications.
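The seven products for this example can be verified in a few lines (a sketch; for 2 × 2 matrices the sub-matrices are scalars):

a11, a12, a21, a22 = 1, 2, 3, 4
b11, b12, b21, b22 = 5, 6, 7, 8
M1 = (a11 + a22) * (b11 + b22)
M2 = (a21 + a22) * b11
M3 = a11 * (b12 - b22)
M4 = a22 * (b21 - b11)
M5 = (a11 + a12) * b22
M6 = (a21 - a11) * (b11 + b12)
M7 = (a12 - a22) * (b21 + b22)
C = [[M1 + M4 - M5 + M7, M3 + M5],
     [M2 + M4, M1 - M2 + M3 + M6]]
print(C)  # [[19, 22], [43, 50]]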
The Convex Hull of a set of points is the smallest convex polygon that contains all the points.
The problem can be solved using various algorithms, and one of the most efficient is the Graham
Scan Algorithm.
The Graham Scan algorithm computes the convex hull of a set of points in the plane. The main idea is to sort the points based on their polar angle with respect to a reference point and then process them to form the convex hull.
1. Choose the point P0 with the lowest y-coordinate (breaking ties by the lowest x-coordinate) as the reference point.
2. Sort the remaining points based on the polar angle they make with P0. If two points have the same polar angle, keep the one that is closer to P0.
3. Initialize the convex hull with the first three points from the sorted list.
4. For each remaining point:
(a) While the angle formed by the last two points in the hull and the current point makes a non-left turn (i.e., the turn is clockwise or collinear), remove the second-to-last point from the hull.
(b) Add the current point to the hull.
5. After processing all points, the points remaining in the hull list form the convex hull.
Time Complexity:
• Sorting the points based on the polar angle takes O(n log n).
• Processing each point and constructing the convex hull takes O(n).
Thus, the overall time complexity of the Graham Scan algorithm is O(n log n). A compact sketch follows.
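A compact Python sketch of the Graham Scan, using a cross product as the "non-left turn" test:

import math

def cross(o, a, b):
    # > 0 for a left (counter-clockwise) turn; <= 0 for clockwise/collinear
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    p0 = min(points, key=lambda p: (p[1], p[0]))           # reference point
    pts = sorted(points, key=lambda p: (math.atan2(p[1] - p0[1], p[0] - p0[0]),
                                        (p[0] - p0[0]) ** 2 + (p[1] - p0[1]) ** 2))
    hull = []
    for p in pts:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                                     # drop non-left turns
        hull.append(p)
    return hull

print(graham_scan([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]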
Binary Search
Binary Search is used to find an element in a sorted array. The array is divided into two halves, and the search is performed in the half where the element may exist.
Algorithm:
• Compare the middle element with the target value.
• If equal, return the position.
• If the target is smaller, search in the left half; otherwise, search in the right half.

Greedy Methods
The Greedy method constructs a solution step by step by selecting the best possible option at each stage, without revisiting or considering the consequences of previous choices. It works under the assumption that by choosing a local optimum at every step, the overall solution will be globally optimal.

0.1 Key Characteristics of Greedy Algorithms
1. Greedy Choice Property: At every step, choose the best option available without worrying about the future implications. The choice must be feasible and should follow the rules of the problem.
2. Optimal Substructure: A problem has an optimal substructure if the optimal solution to the problem contains optimal solutions to its subproblems.

0.2.1 Greedy Algorithm for Activity Selection
The Greedy Activity Selector algorithm selects activities based on their finish times. The idea is to always choose the next activity that finishes first and is compatible with the previously selected activities.
Pseudocode for Greedy Activity Selector (S, F):
GreedyActivitySelector(S, F):
    n = length(S)
    A = {1}   // The first activity is always selected
    k = 1
    for m = 2 to n:
        if S[m] >= F[k]:
            A = A ∪ {m}
            k = m
    return A

0.3 Example: Activity Selection Problem
Given the starting and finishing times of 11 activities:
• (2, 3)
• (8, 12)
• (12, 14)
• (3, 5)
• (0, 6)
• (1, 4)
• (6, 10)
• (5, 7)
• (3, 8)
• (5, 9)
• (8, 11)
Sorted by finish time, the activities are:
1 (2, 3)
2 (1, 4)
3 (3, 5)
4 (0, 6)
5 (5, 7)
6 (3, 8)
7 (5, 9)
8 (6, 10)
9 (8, 11)
10 (8, 12)
11 (12, 14)
Now we will select activities using the greedy approach: start with (2, 3); the next compatible activity is (3, 5); continue this process (see the sketch below). The selected activities are:
• (2, 3)
• (3, 5)
• (5, 7)
• (8, 11)
• (12, 14)
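Running the greedy selector on these 11 activities (a sketch; activities are sorted by finish time first) reproduces the selection:

activities = [(2, 3), (8, 12), (12, 14), (3, 5), (0, 6), (1, 4),
              (6, 10), (5, 7), (3, 8), (5, 9), (8, 11)]
activities.sort(key=lambda a: a[1])       # sort by finish time

selected = [activities[0]]
for start, finish in activities[1:]:
    if start >= selected[-1][1]:          # compatible with the last chosen activity
        selected.append((start, finish))
print(selected)  # [(2, 3), (3, 5), (5, 7), (8, 11), (12, 14)]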
0.4 Pseudocode for Recursive and Iterative Approaches
Recursive Approach:
RecursiveActivitySelector(S, F, k, n):
    if k >= n:
        return []
    m = k + 1
    while m <= n and S[m] < F[k]:
        m = m + 1
    if m <= n:
        return [m] + RecursiveActivitySelector(S, F, m, n)
    return []
Iterative Approach:
IterativeActivitySelector(S, F):
    n = length(S)
    A = {1}   // The first activity is always selected
    k = 1
    for m = 2 to n:
        if S[m] >= F[k]:
            A = A ∪ {m}
            k = m
    return A

Example 2: Knapsack Problem
The Knapsack Problem involves selecting items with given weights and values to maximize the total value without exceeding the weight limit.
Greedy Approach
The greedy approach for the Knapsack Problem follows these steps:
• Sort items by their value-to-weight ratio.
• Pick items with the highest ratio until the weight limit is reached.
Branch and Bound Approach
The Branch and Bound method is another approach to solve the Knapsack Problem efficiently by exploring the solution space using an implicit tree structure:
• Implicit Tree: Each node represents a state of including or excluding an item.
• Upper Bound of Node: Calculate the maximum possible value that can be obtained from the current node to prune the tree.
Greedy Algorithm for Discrete Knapsack Problem
The greedy method can be effective for the fractional knapsack problem but not for the 0/1 knapsack problem.
0/1 Knapsack Problem
In the 0/1 Knapsack Problem, each item can either be included (1) or excluded (0) from the knapsack. The greedy method is not effective for solving the 0/1 Knapsack Problem because it may lead to suboptimal solutions.
0.5 Optimization Problem
An optimization problem is a problem in which we seek to find the best solution from a set of feasible solutions. It involves maximizing or minimizing a particular objective function subject to constraints.
0.5.1 Using Greedy Method for Optimization Problems
The greedy method can be applied to solve optimization problems by:
• Breaking the problem into smaller sub-problems.
• Making the locally optimal choice at each step, hoping it leads to a globally optimal solution.
• Ensuring that the greedy choice property and optimal substructure hold for the specific problem.
Common examples of optimization problems solved using greedy algorithms include the Knapsack Problem, Minimum Spanning Tree, and Huffman Coding.

Simple Knapsack Problem using Greedy Method
Consider the following instance of the simple knapsack problem. Find the solution using the greedy method:
• N = 8
• P = {11, 21, 31, 33, 43, 53, 55, 65}
• W = {1, 11, 21, 23, 33, 43, 45, 55}
• M = 100
IITT
Solution
Common examples of optimization problems solved using greedy algorithms include the
Knapsack
To Problem,
solve the problemMinimum
using theSpanning Tree, and
greedy method, we Huffman
calculate Coding.
the value-to-weight ratio for each
item, sort them, and fill the knapsack until we reach the maximum weight M .
The total value obtained is 152.6.
Example 2: Knapsack Problem
Items in Knapsack
The Knapsack Problem involves selecting items with given weights and values to maximize the
totalitems
The value included
without exceeding the weight
in the knapsack are I1limit.
, I2 , I3 , I4 , I5 , and a fraction of I6 .
Final Answer: Maximum value = 152.6.
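A sketch reproducing the computation (note that the worked answer 152.6 corresponds to capacity M = 100):

P = [11, 21, 31, 33, 43, 53, 55, 65]
W = [1, 11, 21, 23, 33, 43, 45, 55]
M = 100

items = sorted(zip(P, W), key=lambda pw: pw[0] / pw[1], reverse=True)
value = 0.0
for p, w in items:
    if M >= w:                  # take the whole item
        M -= w
        value += p
    else:                       # take a fraction of the next item and stop
        value += p * (M / w)
        break
print(round(value, 1))  # 152.6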
0/1 Knapsack using Dynamic Programming
• N = 4
Solution
We will solve this problem using the dynamic programming approach by creating a table to keep track of the maximum value at each capacity.

Item | Weight (W) | Value (P) | Set
0 | 0 | 0 | {(0, 0)}
1 | 2 | 11 | {(11, 2)}
2 | 11 | 21 | {(21, 11)}
3 | 22 | 31 | {(31, 22)}
4 | 15 | 33 | {(33, 15)}

Starting with the initial set S0 = {(0, 0)}, we add items according to their weight and value. A sketch of the tabular method appears below.
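A standard 0/1 knapsack DP sketch for the four items in the table (the capacity is not stated in the surviving notes, so C = 40 below is purely illustrative):

weights = [2, 11, 22, 15]
values = [11, 21, 31, 33]
C = 40                              # hypothetical capacity

dp = [0] * (C + 1)                  # dp[c] = best value achievable with capacity c
for w, v in zip(weights, values):
    for c in range(C, w - 1, -1):   # reverse scan: each item used at most once
        dp[c] = max(dp[c], dp[c - w] + v)
print(dp[C])  # 75 (items with weights 2, 22, 15)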
Conclusion
The greedy method provides an efficient way to solve the fractional knapsack problem, but it may not yield an optimal solution for the 0/1 knapsack problem. For 0/1 knapsack, dynamic programming or branch and bound methods are preferred.
Point | Kruskal's Algorithm | Prim's Algorithm
1 | Works on edges | Works on vertices
2 | Greedily adds edges | Greedily adds vertices
3 | Suitable for sparse graphs | Suitable for dense graphs
4 | Requires sorting of edges | Uses a priority queue for edge selection
5 | Can be used on disconnected graphs | Works only on connected graphs
6 | Forms forests before the MST is complete | Grows a single tree
7 | Easier to implement with disjoint set | Easier to visualize tree growth

Table 3: Comparison of Kruskal's and Prim's Algorithms

Unique Minimum Spanning Tree
Theorem: If the weights on the edges of a connected undirected graph are distinct, then there exists a unique minimum spanning tree.
Example: Consider a graph with vertices A, B, C, D and edges (A, B, 1), (A, C, 3), (B, C, 2), (B, D, 4).
Since the weights are distinct, following either Kruskal's or Prim's algorithm will lead to the same unique MST: 1. Add (A, B) (weight 1). 2. Add (B, C) (weight 2). 3. Add (B, D) (weight 4).
Prim's Minimum Spanning Tree Algorithm in Detail
Initialization: Start with any vertex. Growth: Always pick the least weight edge that expands the tree until all vertices are included. Termination: When all vertices are included in the MST.
• Initialize the tree with an arbitrary vertex.
• Mark the vertex as included in the MST.
• While there are vertices not in the MST:
  – Select the edge with the minimum weight that connects a vertex in the MST to a vertex outside it.
  – Add the selected edge and vertex to the MST.
Example: Consider the following graph:
Vertices: A, B, C, D
Edges: (A, B, 1), (A, C, 4), (B, C, 2), (B, D, 5), (C, D, 3)
Starting from vertex A: 1. Add edge (A, B) (weight 1). 2. Add edge (B, C) (weight 2). 3. Add edge (C, D) (weight 3).
The minimum spanning tree consists of edges (A, B), (B, C), and (C, D) with total weight 6.
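A heap-based Prim sketch run on this example graph confirms the total weight:

import heapq

def prim_total_weight(adj, start):
    visited = {start}
    total = 0
    heap = [(w, v) for v, w in adj[start]]
    heapq.heapify(heap)
    while heap:
        w, v = heapq.heappop(heap)      # cheapest edge leaving the current tree
        if v in visited:
            continue
        visited.add(v)
        total += w
        for u, wu in adj[v]:
            if u not in visited:
                heapq.heappush(heap, (wu, u))
    return total

adj = {'A': [('B', 1), ('C', 4)],
       'B': [('A', 1), ('C', 2), ('D', 5)],
       'C': [('A', 4), ('B', 2), ('D', 3)],
       'D': [('B', 5), ('C', 3)]}
print(prim_total_weight(adj, 'A'))  # 6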
Kruskal's Algorithm
Algorithm:
• Sort all edges in non-decreasing order of their weights.
• For each edge, in sorted order: add the edge to the MST if it does not form a cycle with the edges already chosen (checked using a disjoint-set structure); otherwise, discard it.

Greedy Single Source Shortest Path Algorithm (Dijkstra's Algorithm)
Algorithm:
• Initialize distances from the source vertex to all others as infinity, except the source itself (0).
• Create a priority queue and insert the source vertex.
• While the queue is not empty:
  – Extract the vertex with the smallest distance.
  – For each neighbor, calculate the potential distance through the current vertex.
  – If the calculated distance is smaller, update it and add the neighbor to the queue.
Example: Find the shortest path from A to H.
Step 1: Initialization

Node | Distance from A | Previous Node
A | 0 | -
B | ∞ | -
C | ∞ | -
D | ∞ | -
E | ∞ | -
F | ∞ | -
G | ∞ | -
H | ∞ | -
H ∞ -
Repeat the process of extracting the minimum node and updating the distances until all
AAKK
nodes have been visited. Table
After 5: Distance the
completing Table After Extracting
process, the shortestCpath from A to H will be
found.
Step 4: Extract Minimum
Final Distance Table: (B = 3)
Conclusion: The shortest path from A to H is:
• Extract B: Update the distances of its adjacent nodes.
A→C→D→G→H
Step 5: Relax Edges Leaving B
DD
with a total distance of 12.
Node | Distance from A | Previous Node
A | 0 | -
B | 3 | A
C | 2 | A
D | 7 | C
E | 7 | C
F | 9 | B
G | ∞ | -
H | ∞ | -

Table 6: Distance Table After Extracting B
Step 6: Extract Minimum (D = 7)
• Extract D: Update the distances of its adjacent nodes.
Step 7: Relax Edges Leaving D
Node | Distance from A | Previous Node
A | 0 | -
B | 3 | A
C | 2 | A
D | 7 | C
E | 7 | C
F | 9 | B
G | 10 | D
H | ∞ | -

Table 7: Distance Table After Extracting D

Step 8: Continue Extracting Nodes (E, G, F, H)
Repeat the process of extracting the minimum node and updating the distances until all nodes have been visited. After completing the process, the shortest path from A to H is found (Table 8: Final Distance Table).
Conclusion: The shortest path from A to H is:
A → C → D → G → H
with a total distance of 12. A heap-based sketch consistent with these tables follows.
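The edge list of the example graph is not fully reproduced in these notes; the sketch below uses a hypothetical edge list chosen to be consistent with Tables 6-7, together with a standard heap-based Dijkstra:

import heapq

def dijkstra(adj, src):
    dist = {v: float('inf') for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                  # skip stale queue entries
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:          # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Hypothetical edges, consistent with the tables and the final path A-C-D-G-H
adj = {'A': [('B', 3), ('C', 2)], 'B': [('F', 6)],
       'C': [('D', 5), ('E', 5)], 'D': [('G', 3)],
       'E': [], 'F': [], 'G': [('H', 2)], 'H': []}
print(dijkstra(adj, 'A'))  # H ends at distance 12 via A -> C -> D -> G -> H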
Bellman-Ford Algorithm
Algorithm:
• Initialize the distance of the source to 0 and all others to infinity.
• Repeat |V| − 1 times:
  – For each edge (u, v, w): if distance[u] + w < distance[v], update distance[v].
• Check for negative weight cycles by iterating through all edges again.
Example: Consider a graph with vertices A, B, C and edges (A, B, 1), (B, C, −2), (C, A, 1).
Starting from A: 1. Initialize distances: A = 0, B = ∞, C = ∞. 2. After one iteration: B = 1, C = −1. 3. After two iterations: A = 0, B = 1, C = −1 (no updates).
Final distances: A = 0, B = 1, C = −1.

Bellman-Ford Algorithm Analysis

1 When Dijkstra and Bellman-Ford Fail to Find the Shortest Path
Dijkstra Algorithm Failure: Dijkstra's algorithm fails when there are negative weight edges in the graph. It assumes that once a node's shortest path is found, it doesn't need to be updated again. However, in the presence of negative weight edges, shorter paths may be found after visiting a node, leading to incorrect results.
Bellman-Ford Algorithm Failure: The Bellman-Ford algorithm can handle negative weight edges and will find the shortest path as long as there is no negative weight cycle.
However, it will fail to find a shortest path if the graph contains a negative weight cycle that
is reachable from the source node.
To detect such a cycle, perform one additional pass over all edges after the |V| − 1 iterations and check whether further relaxation is possible. If a distance is updated in this extra iteration, a negative weight cycle exists in the graph.
The graph contains both positive and negative weights. We will perform multiple relax-
ations, updating the shortest path estimates.
Vertex | Distance from Source (vertex 0)
0 | 0
1 | 2
2 | 7
3 | 4
4 | −2

Table 9: Distance Table for Bellman-Ford Algorithm

The algorithm terminates after the last iteration when no more updates occur.

class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.graph = []

    def add_edge(self, u, v, w):
        self.graph.append([u, v, w])

    def bellman_ford(self, src):
        dist = [float("Inf")] * self.V
        dist[src] = 0

        # Relax every edge |V| - 1 times
        for _ in range(self.V - 1):
            for u, v, w in self.graph:
                if dist[u] != float("Inf") and dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w

        # One extra pass: any further improvement means a negative cycle
        for u, v, w in self.graph:
            if dist[u] != float("Inf") and dist[u] + w < dist[v]:
                print("Graph contains negative weight cycle")
                return

        for i in range(self.V):
            print("{0}\t\t{1}".format(i, dist[i]))

g = Graph(5)
g.add_edge(0, 1, 6)
g.add_edge(0, 2, 7)
g.add_edge(1, 2, 8)
g.add_edge(1, 3, 5)
g.add_edge(1, 4, -4)
g.add_edge(2, 3, -3)
g.add_edge(2, 4, 9)
g.add_edge(3, 1, -2)
g.add_edge(4, 0, 2)
g.add_edge(4, 3, 7)
g.bellman_ford(0)
UNIT 4:
Syllabus
• Dynamic Programming with Examples Such as Knapsack.
Dynamic Programming
Dynamic Programming (DP) is a technique for solving complex problems by breaking
them down into simpler overlapping subproblems. It applies to problems exhibiting two
main properties:
1. Optimal Substructure: The optimal solution to the problem can be constructed from optimal solutions to its subproblems.
2. Overlapping Subproblems: Other approaches like divide and conquer might solve the same subproblem multiple times; DP computes each subproblem only once.
Example
The Fibonacci sequence is a classic example, where each number is the sum of the two
preceding ones. DP avoids redundant calculations by storing already computed values.
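A sketch of the memoized Fibonacci mentioned above:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and then reused (overlapping subproblems).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155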