ITECH WORLD AKTU

SUBJECT NAME: DESIGN AND ANALYSIS OF ALGORITHM (DAA)
SUBJECT CODE: BCS503

UNIT 1: INTRODUCTION
Syllabus

1. Introduction: Algorithms, Analyzing Algorithms, Complexity of Algorithms, Growth of Functions.

2. Performance Measurements.

3. Sorting and Order Statistics:
   • Shell Sort
   • Quick Sort
   • Merge Sort
   • Heap Sort

4. Comparison of Sorting Algorithms.

5. Sorting in Linear Time.
1 Introduction to Algorithms

1.1 Definition of an Algorithm

An algorithm is a clear and precise sequence of instructions designed to solve a specific problem or perform a computation. It provides a step-by-step method to achieve a desired result.

1.2 Difference Between Algorithm and Pseudocode

Algorithm to find the GCD of two numbers:
• Start with two numbers, say a and b.
• If b = 0, the GCD is a. Stop.
• Otherwise, replace (a, b) with (b, a mod b).
• Repeat the above steps until b = 0.

Pseudocode for GCD:

    function GCD(a, b)
        while b ≠ 0
            temp := b
            b := a mod b
            a := temp
        return a
    end function
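
For reference, a minimal runnable Python version of the same Euclidean algorithm (a sketch mirroring the pseudocode above):

    # Euclidean algorithm for the GCD, matching the pseudocode above.
    def gcd(a, b):
        while b != 0:
            a, b = b, a % b   # replace (a, b) with (b, a mod b)
        return a

    print(gcd(48, 18))  # 6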

1.3 Characteristics of an Algorithm


An effective algorithm must have the following characteristics:

1. Finiteness: The algorithm must terminate after a finite number of steps.

2. Definiteness: Each step must be precisely defined; the actions to be executed


should be clear and unambiguous.

3. Input: The algorithm should have zero or more well-defined inputs.

4. Output: The algorithm should produce one or more outputs.

5. Effectiveness: Each step of the algorithm must be simple enough to be carried


out exactly in a finite amount of time.


1.4 Difference Between Algorithm and Pseudocode

    Algorithm | Pseudocode
    A step-by-step procedure to solve a problem, expressed in plain language or mathematical form. | A representation of an algorithm using structured, human-readable code-like syntax.
    Focuses on the logical sequence of steps to solve a problem. | Focuses on illustrating the algorithm using a syntax closer to a programming language.
    Language independent; can be written in natural language or mathematical notation. | Language dependent; mimics the structure and syntax of programming languages.
    More abstract and high-level. | More concrete and closer to actual code implementation.
    No need for specific formatting rules. | Requires a consistent syntax, but not as strict as actual programming languages.

1.5 Analyzing Algorithms

Analyzing an algorithm involves understanding its time complexity and space complexity. This analysis helps determine how efficiently an algorithm performs, especially in terms of execution time and memory usage.

What is Analysis of Algorithms?

• Definition: The process of determining the computational complexity of algorithms, including both the time complexity (how the runtime of the algorithm scales with the size of the input) and the space complexity (how the memory requirement grows with input size).

• Purpose: To evaluate the efficiency of an algorithm to ensure optimal performance in terms of time and space.

• Types: Analyzing algorithms typically involves two main types of complexities:
    – Time Complexity: Measures the total time required by the algorithm to complete as a function of the input size.
    – Space Complexity: Measures the total amount of memory space required by the algorithm during its execution.

Example:
• Analyzing the time complexity of the Binary Search algorithm.

2 Complexity of Algorithms

2.1 Time Complexity

Time Complexity is the computational complexity that describes the amount of time it takes to run an algorithm as a function of the length of the input.
Cases of Time Complexity:

• Best Case: The minimum time required for the algorithm to complete, given the most favorable input. Example: In Binary Search, the best-case time complexity is O(1), when the target element is the middle element.

• Average Case: The expected time required for the algorithm to complete, averaged over all possible inputs. Example: For Quick Sort, the average-case time complexity is O(n log n).

• Worst Case: The maximum time required for the algorithm to complete, given the least favorable input. Example: In Linear Search, the worst-case time complexity is O(n), when the element is not present in the array.

Example:
• Time complexity of the Merge Sort algorithm is O(n log n).

2.2 Space Complexity

Space Complexity refers to the total amount of memory space required by an algorithm to complete its execution.
Cases of Space Complexity:

• Auxiliary Space: Extra space or temporary space used by an algorithm.

• Total Space: The total space used by the algorithm, including both the input and auxiliary space.

Example:
• Space complexity of the Quick Sort algorithm is O(n).

3 Growth of Functions

Growth of Functions describes how the time or space requirements of an algorithm grow with the size of the input. The growth rate helps in understanding the efficiency of an algorithm.
Examples:

• Polynomial Growth: O(n^2). Example: Bubble Sort algorithm.

• Exponential Growth: O(2^n). Example: Recursive Fibonacci algorithm.
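
To make these growth rates concrete, a small illustrative Python snippet (not part of the original notes) tabulating both functions:

    # Compare polynomial vs exponential growth at a few input sizes.
    for n in (5, 10, 20, 30):
        print(n, n**2, 2**n)
    # At n = 20, n^2 = 400 while 2^n = 1,048,576: exponential growth
    # quickly dwarfs polynomial growth.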

ITECH WORLD AKTU

3.1 Big-O Notation

Big-O notation, denoted as O(f(n)), describes the upper bound of an algorithm's time or space complexity. It gives the worst-case scenario of the growth rate of a function.

3.2 Theta Notation

Theta notation, denoted as Θ(f(n)), describes the tight bound of an algorithm's time or space complexity. It represents both the upper and lower bounds, capturing the exact growth rate.

3.3 Omega Notation

Omega notation, denoted as Ω(f(n)), describes the lower bound of an algorithm's time or space complexity. It gives the best-case scenario of the growth rate of a function.
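
A quick worked example of these definitions: for f(n) = 3n^2 + 2n we have 3n^2 ≤ 3n^2 + 2n ≤ 5n^2 for all n ≥ 1, so f(n) = O(n^2) and f(n) = Ω(n^2), and therefore f(n) = Θ(n^2) (witness constants c1 = 3, c2 = 5, n0 = 1).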


3.4 Numerical Example

Problem: If f(n) = 100 · 2^n + n^5 + n, show that f(n) = O(2^n).
Solution:

• The term 100 · 2^n dominates as n → ∞.

• n^5 and n grow much slower compared to 2^n.

• Therefore, f(n) = 100 · 2^n + n^5 + n = O(2^n).

3.5 Recurrences

Recurrence relations are equations that express a sequence in terms of its preceding terms. In the context of Design and Analysis of Algorithms, a recurrence relation often represents the time complexity of a recursive algorithm. For example, the time complexity T(n) of a recursive function can be expressed as:

    T(n) = a · T(n/b) + f(n)

where:
• a is the number of subproblems in the recursion,
• b is the factor by which the subproblem size is reduced in each recursive call,
• f(n) represents the cost of the work done outside the recursive calls.

There are several methods to solve recurrence relations:

1. Substitution Method: Guess the form of the solution and use mathematical induction to prove it.

2. Recursion Tree Method: Visualize the recurrence as a tree where each node represents the cost of a recursive call and its children represent the costs of the subsequent subproblems.

3. Master Theorem: Provides a direct way to find the time complexity of recurrences of the form T(n) = a · T(n/b) + f(n) by comparing f(n) to n^(log_b a).

3.6 Master Theorem

The Master Theorem provides a solution for the time complexity of divide-and-conquer algorithms. It applies to recurrence relations of the form:

    T(n) = a · T(n/b) + f(n)

where:
• a ≥ 1 and b > 1 are constants.
• f(n) is an asymptotically positive function.

Cases of the Master Theorem:

1. Case 1: If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).

2. Case 2: If f(n) = Θ(n^(log_b a) · log^k n) for some k ≥ 0, then T(n) = Θ(n^(log_b a) · log^(k+1) n).

3. Case 3: If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and if a · f(n/b) ≤ c · f(n) for some c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Example:

• Consider the recurrence T(n) = 2T(n/2) + n.

• Here, a = 2, b = 2, and f(n) = n.

• log_b a = log_2 2 = 1.

• f(n) = n = Θ(n^(log_2 2)) = Θ(n^1), so it matches Case 2 (with k = 0).

• Therefore, T(n) = Θ(n log n).
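
As a sanity check, here is a small illustrative Python helper (not from the notes) that classifies a recurrence T(n) = a·T(n/b) + Θ(n^d); it assumes f(n) is a plain polynomial, so the Case 3 regularity condition holds automatically:

    import math

    def master_case(a, b, d):
        """Classify T(n) = a*T(n/b) + Theta(n^d) by the Master Theorem."""
        crit = math.log(a, b)              # critical exponent log_b(a)
        if d < crit:
            return f"Case 1: Theta(n^{crit:.2f})"
        if d == crit:
            return f"Case 2: Theta(n^{d} log n)"
        return f"Case 3: Theta(n^{d})"

    print(master_case(2, 2, 1))  # merge-sort recurrence: Theta(n^1 log n)
    print(master_case(7, 2, 2))  # T(n) = 7T(n/2) + n^2: Theta(n^2.81)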

Question 1.6

Solve the recurrence relation:

    T(n) = 7T(n/2) + n^2

Now, consider another algorithm with the recurrence:

    T'(n) = aT'(n/4) + n^2

Find the largest integer a such that the algorithm T' runs faster than the first algorithm.

Solution:
For the first recurrence we have a = 7, b = 2, f(n) = n^2. Using the Master Theorem:

    n^(log_b a) = n^(log_2 7) ≈ n^2.81

Since f(n) = n^2 = O(n^(2.81 − ε)) for any 0 < ε ≤ 0.81, Case 1 of the Master Theorem applies. Thus,

    T(n) = Θ(n^(log_2 7)) ≈ Θ(n^2.81)

For the second recurrence the critical exponent is log_4 a, so T' is asymptotically faster than T exactly when

    log_4 a < log_2 7  ⇔  a < 4^(log_2 7) = (2^(log_2 7))^2 = 7^2 = 49

For a = 49, algorithm T' has the same complexity Θ(n^(log_2 7)) as T. The largest integer a such that T' is faster than T is therefore:

    a = 48

Question 1.7

Solve the recurrence relation:

    T(n) = 7T(n/3) + n^2

Now, consider another algorithm with the recurrence:

    S(n) = aS(n/9) + n^2

Find the largest integer a such that the algorithm S runs faster than the first algorithm.

Solution:
Comparing the equations: a = 7, b = 3, f(n) = n^2. Using the Master Theorem:

    n^(log_b a) = n^(log_3 7) ≈ n^1.771

Since f(n) = n^2 = Ω(n^(1.771 + ε)) for some ε > 0, Case 3 of the Master Theorem applies. Thus, the complexity is:

    T(n) = Θ(n^2)

For algorithm S, the critical exponent is log_9 a, and n^(log_9 81) = n^2. Hence:
• For a < 81, log_9 a < 2, so Case 3 gives S(n) = Θ(n^2), the same complexity as T(n).
• For a = 81, f(n) = Θ(n^(log_9 81)), so Case 2 gives S(n) = Θ(n^2 log n), which is higher than T(n).
• For a > 81, S(n) = Θ(n^(log_9 a)) with log_9 a > 2, again higher than T(n).
Therefore, algorithm S can never be asymptotically faster than T; at best it matches T's complexity Θ(n^2), which it does for every a ≤ 80.

Question 1.8

Solve the recurrence relation:

    T(n) = T(√n) + O(log n)

Solution:
Let m = log n, i.e. n = 2^m, so that √n = 2^(m/2). Then:

    T(2^m) = T(2^(m/2)) + O(log 2^m) = T(2^(m/2)) + O(m)

Let x(m) = T(2^m). Substituting into the equation:

    x(m) = x(m/2) + O(m)

The per-level costs form a geometric series m + m/2 + m/4 + · · · ≤ 2m, so the solution is:

    x(m) = Θ(m)  ⇒  T(n) = Θ(log n)

Recursion:

Recursion is a process where a function calls itself either directly or indirectly to solve a problem. In recursion, a problem is divided into smaller instances of the same problem, and solutions to these smaller instances are combined to solve the original problem. Recursion typically involves two main parts:

• Base Case: A condition under which the recursion stops.

• Recursive Case: The part where the function calls itself to break the problem into smaller instances.

Example of Recursion: Let's take the example of calculating the factorial of a number n, which is defined as:

    n! = n × (n − 1) × (n − 2) × · · · × 1

The recursive definition of factorial is:

    factorial(n) = 1                        if n = 0
    factorial(n) = n × factorial(n − 1)     if n > 0

In this example, the base case is factorial(0) = 1, and the recursive case is n × factorial(n − 1).

Recursion Tree:
A recursion tree is a tree representation of the recursive calls made by a recursive algorithm. Each node represents a function call, and its children represent the subsequent recursive calls. The depth of the tree represents the depth of recursion.

3.7 Sorting Algorithms

3.7.1 Shell Sort

Shell sort is an in-place comparison sort algorithm that extends the basic insertion sort algorithm by allowing exchanges of elements that are far apart. The main idea is to rearrange the elements so that elements that are far apart are sorted before doing a finer sort using insertion sort.
Shell sort improves the performance of insertion sort by breaking the original list into sublists based on a gap sequence and then sorting each sublist using insertion sort. This allows the algorithm to move elements more efficiently, especially when they are far apart from their correct positions.
Algorithm:

1. Start with a large gap between elements. A commonly used gap sequence is to divide the length of the list by 2 repeatedly until the gap is 1.

2. For each gap size, go through the list and compare elements that are that gap distance apart.

3. Use insertion sort to sort the sublists created by these gaps.

4. Continue reducing the gap until it becomes 1. When the gap is 1, the list is fully sorted by insertion sort.

Shell Sort Algorithm (Pseudocode):

    shellSort(arr, n):
        gap = n // 2                  # initialize the gap size
        while gap > 0:
            for i = gap to n - 1:
                temp = arr[i]
                j = i
                while j >= gap and arr[j - gap] > temp:
                    arr[j] = arr[j - gap]
                    j = j - gap
                arr[j] = temp
            gap = gap // 2

Example: Let’s sort the array [12, 34, 54, 2, 3] using Shell sort.
Step 1: Initial array
[12, 34, 54, 2, 3]
1. Start with gap = 5//2 = 2, meaning the array will be divided into sublists based
on the gap 2.


• Compare elements at index 0 and 2: [12, 54]. No change since 12 < 54.

• Compare elements at index 1 and 3: [34, 2]. Swap since 34 > 2, resulting in:

[12, 2, 54, 34, 3]

• Compare elements at index 2 and 4: [54, 3]. Swap since 54 > 3, resulting in:

[12, 2, 3, 34, 54]

Step 2: After first pass with gap 2

[12, 2, 3, 34, 54]

2. Reduce gap to 1: gap = 2//2 = 1. Now we perform insertion sort on the whole
array:

• Compare index 0 and 1: [12, 2]. Swap since 12 > 2, resulting in:

[2, 12, 3, 34, 54]

• Compare index 1 and 2: [12, 3]. Swap since 12 > 3, resulting in:

[2, 3, 12, 34, 54]

• Compare index 2 and 3: [12, 34]. No change.

• Compare index 3 and 4: [34, 54]. No change.

Step 3: After final pass with gap 1

[2, 3, 12, 34, 54]

At this point, the array is sorted.


Key Insights:

• Shell sort is more efficient than insertion sort for large lists, especially when elements
are far from their final positions.

• The efficiency depends on the choice of the gap sequence. A commonly used se-
quence is gap = n//2, reducing until gap equals 1.

3.7.2 Quick Sort

Quick Sort is a divide-and-conquer algorithm that sorts an array by partitioning it into two sub-arrays around a pivot element. The sub-arrays are then sorted recursively.
Algorithm:

1. Choose a Pivot: Select an element from the array to act as the pivot.

2. Partition: Rearrange the array such that elements less than the pivot come before it, and elements greater come after it.

3. Recursively Apply: Apply the same process to the sub-arrays formed by the partition.

Pseudocode:

    QuickSort(arr, low, high):
        if low < high:
            pivotIndex = Partition(arr, low, high)
            QuickSort(arr, low, pivotIndex - 1)
            QuickSort(arr, pivotIndex + 1, high)

    Partition(arr, low, high):
        pivot = arr[high]
        i = low - 1
        for j = low to high - 1:
            if arr[j] < pivot:
                i = i + 1
                swap arr[i] with arr[j]
        swap arr[i + 1] with arr[high]
        return i + 1

Example: Consider the array:

    [10, 7, 8, 9, 1, 5]

• Choose pivot: 5

• Partition around pivot 5: [1, 5, 8, 9, 7, 10]

• Recursively apply Quick Sort to [1] and [8, 9, 7, 10]

• Continue until the entire array is sorted: [1, 5, 7, 8, 9, 10]

Visualization:

    Initial Array:       [10, 7, 8, 9, 1, 5]
    After Partitioning:  [1, 5, 8, 9, 7, 10]
    Final Sorted Array:  [1, 5, 7, 8, 9, 10]

Advantages of Quick Sort:

• Efficient Average Case: Quick Sort has an average-case time complexity of O(n log n).

• In-Place Sorting: It requires minimal additional space.

Disadvantages of Quick Sort:

• Worst-Case Performance: The worst-case time complexity is O(n^2), typically occurring with poor pivot choices.

• Not Stable: Quick Sort is not a stable sort.

3.7.3 Merge Sort

Merge Sort is a stable, comparison-based divide-and-conquer algorithm that divides the array into smaller sub-arrays, sorts them, and then merges them back together.
Algorithm:

1. Divide: Recursively divide the array into two halves until each sub-array contains a single element.

2. Merge: Merge the sorted sub-arrays to produce sorted arrays until the entire array is merged.

Pseudocode:

    MergeSort(arr, left, right):
        if left < right:
            mid = (left + right) // 2
            MergeSort(arr, left, mid)
            MergeSort(arr, mid + 1, right)
            Merge(arr, left, mid, right)

    Merge(arr, left, mid, right):
        n1 = mid - left + 1
        n2 = right - mid
        L = arr[left : left + n1]
        R = arr[mid + 1 : mid + 1 + n2]
        i = 0
        j = 0
        k = left
        while i < n1 and j < n2:
            if L[i] <= R[j]:
                arr[k] = L[i]
                i = i + 1
            else:
                arr[k] = R[j]
                j = j + 1
            k = k + 1
        while i < n1:
            arr[k] = L[i]
            i = i + 1
            k = k + 1
        while j < n2:
            arr[k] = R[j]
            j = j + 1
            k = k + 1
Example: Consider the array:

[38, 27, 43, 3, 9, 82, 10]

• Divide the array into:


[38, 27, 43, 3]
and
[9, 82, 10]

• Recursively divide these sub-arrays until single elements are obtained.

• Merge the single elements to produce sorted arrays:

[3, 27, 38, 43]

and
[9, 10, 82]

• Continue merging until the entire array is sorted:

[3, 9, 10, 27, 38, 43, 82]

Visualization:

Initial Array:
[38, 27, 43, 3, 9, 82, 10]

After Dividing and Merging:

[3, 9, 10, 27, 38, 43, 82]

Advantages of Merge Sort:

• Stable Sort: Merge Sort maintains the relative order of equal elements.

• Predictable Performance: It has a time complexity of O(n log n) in the worst,


average, and best cases.

Disadvantages of Merge Sort:

• Space Complexity: It requires additional space for merging.

• Slower for Small Lists: It may be slower compared to algorithms like Quick Sort
for smaller lists.

3.7.4 Heap Sort

Heap Sort is a comparison-based sorting algorithm that utilizes a binary heap data structure. It works by building a max heap and then repeatedly extracting the maximum element to build the sorted array.
Algorithm:

1. Build a Max Heap: Convert the input array into a max heap where the largest element is at the root.

2. Extract Max: Swap the root of the heap (maximum element) with the last element of the heap and then reduce the heap size by one. Heapify the root to maintain the max heap property.

3. Repeat: Continue the extraction and heapify process until the heap is empty.

Pseudocode:

    HeapSort(arr):
        n = length(arr)
        BuildMaxHeap(arr)
        for i = n - 1 downto 1:
            swap arr[0] with arr[i]
            Heapify(arr, 0, i)

    BuildMaxHeap(arr):
        n = length(arr)
        for i = n // 2 - 1 downto 0:
            Heapify(arr, i, n)

    Heapify(arr, i, n):
        largest = i
        left = 2 * i + 1
        right = 2 * i + 2
        if left < n and arr[left] > arr[largest]:
            largest = left
        if right < n and arr[right] > arr[largest]:
            largest = right
        if largest != i:
            swap arr[i] with arr[largest]
            Heapify(arr, largest, n)

Advantages of Heap Sort:

• In-Place Sorting: Heap Sort does not require additional space beyond the input array.

• Time Complexity: It has a time complexity of O(n log n) for both average and worst cases.

Disadvantages of Heap Sort:

• Not Stable: Heap Sort is not a stable sort, meaning equal elements may not retain their original order.

• Performance: It can be slower compared to algorithms like Quick Sort due to the overhead of heap operations.

4 Comparison of Sorting Algorithms

Comparison Table:

    Algorithm  | Time Complexity (Best) | Time Complexity (Worst) | Space Complexity
    Shell Sort | O(n log n)             | O(n^2)                  | O(1)
    Quick Sort | O(n log n)             | O(n^2)                  | O(log n)
    Merge Sort | O(n log n)             | O(n log n)              | O(n)
    Heap Sort  | O(n log n)             | O(n log n)              | O(1)

5 Sorting in Linear Time

5.1 Introduction to Linear Time Sorting

Linear time sorting algorithms such as Counting Sort, Radix Sort, and Bucket Sort are designed to sort data in linear time O(n).
Example:

• Counting Sort: Efficient for a small range of integers.

5.1.1 Bucket Sort

Bucket Sort is a distribution-based sorting algorithm that divides the input into several buckets and then sorts each bucket individually. It is particularly useful when the input is uniformly distributed over a range.
Algorithm:

1. Create Buckets: Create an empty bucket for each possible range.

2. Distribute Elements: Place each element into the appropriate bucket based on its value.

3. Sort Buckets: Sort each bucket individually using another sorting algorithm (e.g., Insertion Sort).

4. Concatenate Buckets: Combine the sorted buckets into a single sorted array.

Pseudocode:

    BucketSort(arr):
        minValue = min(arr)
        maxValue = max(arr)
        bucketCount = number of buckets
        bucketWidth = (maxValue - minValue) / bucketCount
        buckets = [[] for _ in range(bucketCount)]
        for num in arr:
            index = min((num - minValue) // bucketWidth, bucketCount - 1)
            buckets[index].append(num)
        sortedArray = []
        for bucket in buckets:
            InsertionSort(bucket)
            sortedArray.extend(bucket)
        return sortedArray

InsertionSort(arr):
for i from 1 to length(arr):
key = arr[i]
j = i - 1
while j >= 0 and arr[j] > key:
arr[j + 1] = arr[j]
j = j - 1
arr[j + 1] = key

Example: Consider the array:

[0.78, 0.17, 0.39, 0.26, 0.72]

• Buckets Creation: Create 5 buckets.

• Distribute Elements: Place elements into buckets.

• Sort Buckets: Sort each bucket using Insertion Sort.

• Concatenate Buckets: Combine the sorted buckets:

[0.17, 0.26, 0.39, 0.72, 0.78]

5.1.2 Stable Sort


Stable Sort maintains the relative order of equal elements. Examples include Insertion
Sort and Merge Sort.
Algorithm for Stable Sort:

• Insertion Sort: Maintain the order of elements with equal keys by inserting each
element into its correct position relative to previously sorted elements.

Pseudocode:

InsertionSort(arr):
for i from 1 to length(arr):
key = arr[i]
j = i - 1
while j >= 0 and arr[j] > key:
arr[j + 1] = arr[j]
j = j - 1
arr[j + 1] = key

Example: Consider the array:

    [4, 3, 2, 1]

• Sort the array: [1, 2, 3, 4]

5.1.3 Radix Sort

Radix Sort is a non-comparative integer sorting algorithm that processes digits of numbers. It works by sorting numbers digit by digit, starting from the least significant digit to the most significant digit.
Algorithm:

1. Determine Maximum Digits: Find the maximum number of digits in the array.

2. Sort by Digit: Sort the array by each digit using a stable sort (e.g., Counting Sort).

3. Repeat: Continue until all digits are processed.

Pseudocode:

    RadixSort(arr):
        maxValue = max(arr)
        exp = 1
        while maxValue // exp > 0:
            CountingSort(arr, exp)
            exp = exp * 10

    CountingSort(arr, exp):
        n = length(arr)
        output = [0] * n
        count = [0] * 10
        for i in range(n):
            index = (arr[i] // exp) % 10
            count[index] += 1
        for i in range(1, 10):
            count[i] += count[i - 1]
        for i in range(n - 1, -1, -1):
            index = (arr[i] // exp) % 10
            output[count[index] - 1] = arr[i]
            count[index] -= 1
        for i in range(n):
            arr[i] = output[i]

Example: Consider the array:

    [170, 45, 75, 90, 802, 24, 2, 66]

• Sort by least significant digit: [170, 90, 802, 2, 24, 45, 75, 66]

• Sort by next digit: [802, 2, 24, 45, 66, 170, 75, 90]

• Continue until all digits are processed: [2, 24, 45, 66, 75, 90, 170, 802]

Question: Among Merge Sort, Insertion Sort, and Quick Sort, which algorithm performs the best in the worst case? Apply the best algorithm to sort the list

    E, X, A, M, P, L, E

in alphabetical order.

Answer: Merge Sort has a worst-case time complexity of O(n log n). Insertion Sort has a worst-case time complexity of O(n^2). Quick Sort has a worst-case time complexity of O(n^2), though its average-case complexity is O(n log n).
In the worst case, Merge Sort performs the best among these algorithms.

Sorted List using Merge Sort — Step-by-Step Solution:

1. Initial List:
   • Given List: E, X, A, M, P, L, E

2. Divide the List:
   • Divide the list into two halves: E, X, A and M, P, L, E

3. Recursive Division:
   • For the first half E, X, A:
     – Divide further into: E and X, A
     – For X, A:
       ∗ Divide into: X and A
   • For the second half M, P, L, E:
     – Divide further into: M, P and L, E
     – For M, P:
       ∗ Divide into: M and P
     – For L, E:
       ∗ Divide into: L and E

4. Merge the Sorted Sublists:
   • Merge X and A to get: A, X
   • Merge E and A, X to get: A, E, X
   • Merge M and P to get: M, P
   • Merge L and E to get: E, L
   • Merge M, P and E, L to get: E, L, M, P

5. Merge Final Sublists:
   • Merge A, E, X and E, L, M, P:
     – Final merge results in: A, E, E, L, M, P, X

6. Sorted List: A, E, E, L, M, P, X

Subject Name: Design and Analysis of Algorithm (BCS503)

UNIT 2: Advanced Data Structures

Syllabus
• Red-Black Trees

• B-Trees

• Binomial Heaps

• Fibonacci Heaps

• Tries

• Skip List

Binary Search Tree (BST)


A Binary Search Tree (BST) is a node-based binary tree data structure where each node
contains the following components:

• LEFT: A pointer to the left child node, which contains only nodes with keys less
than the current node’s key.

• KEY: The value stored in the current node. This value determines the order within
the tree.

• PARENT: A pointer to the parent node. The root node’s parent pointer is NULL.

• RIGHT: A pointer to the right child node, which contains only nodes with keys
greater than the current node’s key.

Additionally, a BST must satisfy the following properties:

• The left subtree of a node contains only nodes with keys less than the node’s key.

• The right subtree of a node contains only nodes with keys greater than the node’s
key.

• Both the left and right subtrees must also be binary search trees.
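
A minimal Python sketch of this node structure and the standard BST search (illustrative; the class and function names are not from the notes):

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None      # subtree with keys < self.key
            self.right = None     # subtree with keys > self.key
            self.parent = None    # None plays the role of NULL for the root

    def bst_search(node, key):
        # Walk down the tree, going left or right by the BST property.
        while node is not None and node.key != key:
            node = node.left if key < node.key else node.right
        return node  # the matching node, or None if the key is absent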


Limitations of Binary Search Tree (BST)


Binary Search Trees (BSTs) have several limitations related to their complexity:

• Worst-Case Time Complexity: In the worst case, such as when the tree becomes
unbalanced (e.g., inserting sorted data), the height of the BST can reach O(n),
resulting in search, insertion, and deletion operations having a time complexity of
O(n).
• Space Complexity: Each node requires extra memory for storing pointers to
its children, which can lead to higher space complexity compared to array-based
structures, especially in unbalanced trees.
• Poor Performance with Sorted Data: If input data is already sorted, the BST
will degenerate into a linked list, causing all operations to degrade to O(n) time
complexity.
• Balancing Overhead: Self-balancing BSTs (like AVL or Red-Black Trees) re-
quire additional operations (rotations and recoloring), which add extra overhead to
insertion and deletion operations.
• Cache Inefficiency: Due to pointer-based navigation, BSTs exhibit poor cache
locality, leading to slower performance compared to structures like arrays.

Red-Black Trees
A Red-Black Tree is a type of self-balancing binary search tree in which each node contains
an extra bit for denoting the color of the node, either red or black. The tree maintains
its balance by following a set of rules during insertion and deletion operations.

Properties of Red-Black Trees:


• Every node is either red or black.
• The root is always black.
• All leaves (NIL nodes) are black.
• If a node is red, then both its children are black.
• Every path from a given node to its descendant NIL nodes has the same number of black nodes.

Node Structure in Red-Black Trees:

Each node in a Red-Black Tree consists of the following components:

• COLOUR: Indicates the color of the node, either red or black.

• KEY: The value stored in the node, used to maintain the binary search tree property.

• LEFT: A pointer to the left child node.

• PARENT: A pointer to the parent node; the parent of the root node is NIL.

• RIGHT: A pointer to the right child node.

Finding the Height of a Red-Black Tree Using Black Height

In a Red-Black Tree, the black height of a node is defined as the number of black nodes on the path from that node to any leaf, not including the node itself. The black height is an important property that helps in maintaining the balance of the tree.

Definition of Black Height:

• The black height of a node x, denoted as bh(x), is the number of black nodes from x to any leaf, including the leaf itself.

• The black height of a Red-Black Tree is the black height of its root node.


Calculating Black Height:


1. Start at the root node.

2. Initialize the black height, bh, to 0.

3. Traverse any path from the root to a leaf:

• Whenever a black node is encountered, increment bh by 1.


• Ignore the red nodes, as they do not contribute to the black height.

4. The value of bh when a leaf (NIL node) is reached is the black height of the tree.

Relation Between Black Height and Tree Height:


In a Red-Black Tree:

• The height of the tree, h, is at most 2 × bh, where bh is the black height of the tree.
This is because at most every alternate node along a path from the root to a leaf
can be red.

• Thus, if the black height of a Red-Black Tree is bh, the maximum height of the tree
is 2 × bh.

Time Complexity:
Calculating the black height of a Red-Black Tree requires traversing from the root to any
leaf, resulting in a time complexity of O(log n), where n is the number of nodes in the
tree.
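
A small illustrative Python sketch of this traversal, assuming nodes carry a color field and None plays the role of the black NIL leaves (names are hypothetical):

    def black_height(node):
        # bh(x): black nodes from x down to a leaf, excluding x itself;
        # the NIL leaf at the bottom counts as black.
        bh = 1
        child = node.left if node is not None else None
        while child is not None:
            if child.color == "BLACK":
                bh += 1
            child = child.left   # any path gives the same count in a valid RB tree
        return bh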

Red-Black Tree Insert Pseudocode


INSERT(T, z)
1. y = NIL
2. x = T.root
3. while x ≠ NIL
4. y = x
5. if z.key < x.key
6. x = x.left
7. else
8. x = x.right
9. z.p = y
10. if y == NIL
11. T.root = z
12. else if z.key < y.key
13. y.left = z
14. else
15. y.right = z
16. z.left = NIL
17. z.right = NIL

18. z.color = RED


19. INSERT-FIXUP(T, z)
Insert Fixup Pseudocode

INSERT-FIXUP(T, z)
while z.p.color == RED
    if z.p == z.p.p.left
        y = z.p.p.right
        if y.color == RED
            z.p.color = BLACK
            y.color = BLACK
            z.p.p.color = RED
            z = z.p.p
        else
            if z == z.p.right
                z = z.p
                LEFT-ROTATE(T, z)
            z.p.color = BLACK
            z.p.p.color = RED
            RIGHT-ROTATE(T, z.p.p)
    else
        (mirror image of the above)
T.root.color = BLACK

Cases of RB Tree Insertion

• Case 1: Uncle is RED.

• Case 2: Uncle is BLACK and node is a right child.

• Case 3: Uncle is BLACK and node is a left child.

Advantages of Red-Black Tree over BST

• Ensures balanced height.

• Provides better worst-case time complexity for insertion, deletion, and search operations.

• Helps maintain order in a dynamic data set.

Red-Black Tree Deletion

DELETE(T, z)
1. if z.left == NIL or z.right == NIL
2.     y = z
3. else
4.     y = TREE-SUCCESSOR(z)
5. if y.left ≠ NIL
6.     x = y.left
7. else
8.     x = y.right
9. x.p = y.p
10. if y.p == NIL
11.     T.root = x
12. else if y == y.p.left
13.     y.p.left = x
14. else
15.     y.p.right = x
16. if y ≠ z
17.     z.key = y.key
18. if y.color == BLACK
19.     DELETE-FIXUP(T, x)

RB Tree Deletion Fixup

DELETE-FIXUP(T, x)
while x ≠ T.root and x.color == BLACK
    if x == x.p.left
        w = x.p.right
        if w.color == RED
            w.color = BLACK
            x.p.color = RED
            LEFT-ROTATE(T, x.p)
            w = x.p.right
        if w.left.color == BLACK and w.right.color == BLACK
            w.color = RED
            x = x.p
        else
            if w.right.color == BLACK
                w.left.color = BLACK
                w.color = RED
                RIGHT-ROTATE(T, w)
                w = x.p.right
            w.color = x.p.color
            x.p.color = BLACK
            w.right.color = BLACK
            LEFT-ROTATE(T, x.p)
            x = T.root
    else
        (mirror image of the above)
x.color = BLACK

Cases of RB Tree for Deletion


• Case 1: Sibling is RED.

• Case 2: Sibling is BLACK and both of sibling’s children are BLACK.

• Case 3: Sibling is BLACK, sibling’s left child is RED, and sibling’s right child is
BLACK.

• Case 4: Sibling is BLACK and sibling’s right child is RED.

B-Trees
A B-Tree is a self-balancing search tree in which nodes can have more than two children.
It is commonly used in databases and file systems to maintain sorted data and allow
searches, sequential access, insertions, and deletions in logarithmic time.

Properties of B-Trees:
• All leaves are at the same level.

• A B-Tree of order m can have at most m children and at least ⌈m/2⌉ children.

• Each node can contain at most m − 1 keys.

• Nodes are partially filled and data is sorted in increasing order.

Pseudocode for B-Tree Operations:


Searching in B-Tree: B-Tree-Search(x, k)
1. i = 1
2. while i ≤ n[x] and k > key_i[x]
3.     i = i + 1
4. if i ≤ n[x] and k == key_i[x]
5.     return (x, i)
6. if leaf[x]
7.     return NULL
8. else
9.     Disk-Read(c_i[x])
10.    return B-Tree-Search(c_i[x], k)

Insertion in B-Tree: B-Tree-Insert(T, k)
1. r = root[T]
2. if n[r] == 2t − 1
3.     s = Allocate-Node()
4.     root[T] = s
5.     leaf[s] = FALSE
6.     n[s] = 0
7.     c_1[s] = r
8.     B-Tree-Split-Child(s, 1, r)
9.     B-Tree-Insert-Nonfull(s, k)
10. else
11.    B-Tree-Insert-Nonfull(r, k)

Insertion in Non-Full Node: B-Tree-Insert-Nonfull(x, k)
1. i = n[x]
2. if leaf[x]
3.     while i ≥ 1 and k < key_i[x]
4.         key_(i+1)[x] = key_i[x]
5.         i = i − 1
6.     key_(i+1)[x] = k
7.     n[x] = n[x] + 1
8. else
9.     while i ≥ 1 and k < key_i[x]
10.        i = i − 1
11.    i = i + 1
12.    if n[c_i[x]] == 2t − 1
13.        B-Tree-Split-Child(x, i, c_i[x])
14.        if k > key_i[x]
15.            i = i + 1
16.    B-Tree-Insert-Nonfull(c_i[x], k)

Deletion in B-Tree: B-Tree-Delete(T, k)
1. Find the node x that contains the key k.
2. If x is a leaf, delete k from x.
3. If x is an internal node:
   a. If the predecessor y has at least t keys, replace k with the predecessor.
   b. If the successor z has at least t keys, replace k with the successor.
   c. If both y and z have t − 1 keys, merge k, y, and z.
4. Recursively delete the key from the appropriate node.

Splitting a Child in B-Tree: B-Tree-Split-Child(x, i, y)
1. z = Allocate-Node()
2. leaf[z] = leaf[y]
3. n[z] = t − 1
4. for j = 1 to t − 1
5.     key_j[z] = key_(j+t)[y]
6. if not leaf[y]
7.     for j = 1 to t
8.         c_j[z] = c_(j+t)[y]
9. n[y] = t − 1
10. for j = n[x] + 1 downto i + 1
11.     c_(j+1)[x] = c_j[x]
12. c_(i+1)[x] = z
13. for j = n[x] downto i
14.     key_(j+1)[x] = key_j[x]
15. key_i[x] = key_t[y]
16. n[x] = n[x] + 1

Characteristics of B-Trees:

• B-Trees are height-balanced.

• Each node has a variable number of keys.

• Insertion and deletion are done in such a way that the tree remains balanced.

• Searching, insertion, and deletion have time complexity of O(log n).

Binomial Heaps
Binomial Heaps are a type of heap data structure that supports efficient merging of two
heaps. It is composed of a collection of binomial trees that satisfy the heap property.


Properties of Binomial Heaps:


• Each binomial heap is a collection of binomial trees.

• A binomial tree of order k has exactly 2^k nodes.

• The root has the smallest key and the heap is represented as a linked list of binomial
trees.
1 Union of Binomial Heap

A binomial heap is a collection of binomial trees. The union of two binomial heaps involves merging two heaps into one, preserving the properties of binomial heaps.

1.1 Conditions for Union of Two Existing Binomial Heaps

The union of two binomial heaps H1 and H2 is performed by merging their binomial trees. The following conditions are important for the union:

1. Both heaps are merged into one by merging their respective binomial trees.

2. The resulting heap is adjusted to maintain the binomial heap properties.

3. If two trees of the same degree appear, they are merged into one tree by linking.

1.2 Algorithm for Union of Two Binomial Heaps

BINOMIAL-HEAP-UNION(H1, H2)
1. H ← MERGE(H1, H2)
2. If H = NULL then return NULL
3. Initialize pointers: x, prev_x, and next_x
4. x ← head of H
5. While next_x ≠ NULL:
6.     If Degree(x) ≠ Degree(next_x) or Degree(next_x) == Degree(next_next_x)
7.         Move to the next tree
8.     Else if Key(x) ≤ Key(next_x)
9.         Link next_x as a child of x
10.        next_x ← next_next_x
11.    Else
12.        Link x as a child of next_x
13.        x ← next_x
14. Return H

1.3 Time Complexity

The time complexity of binomial heap union is O(log n), where n is the total number of elements in the two heaps.

2 Binomial Heap Merge Algorithm

The merge algorithm combines two binomial heaps into one by merging their binomial trees of the same degree, similar to the binary addition process.

BINOMIAL-HEAP-MERGE(H1, H2)
1. Create a new binomial heap H
2. Set H.head to the root of the merged list of H1 and H2
3. Return H

2.1 Four Cases for Union

1. Both trees have different degrees: No merging is required.

2. Both trees have the same degree: The two trees are merged.

3. Three trees of the same degree appear consecutively: The middle tree is merged with one of its neighbors.

4. Two consecutive trees have the same degree: The tree with the smaller root is made the parent.

BINOMIAL-HEAP-EXTRACT-MIN(H)
1. Find the root x with the minimum key in the root list of H
2. Remove x from the root list of H
3. Create a new binomial heap H′
4. Make the children of x a separate binomial heap by reversing the order of the linked list of x's children
5. Union H and H′
6. Return x

3 Deleting a Given Element in a Binomial Heap


Deleting a specific element in a binomial heap is done by reducing its key to −∞ and
then extracting the minimum element. BINOMIAL-HEAP-DELETE(H, x)
1. Call BINOMIAL-HEAP-DECREASE-KEY(H, x, −∞)
2. Call BINOMIAL-HEAP-EXTRACT-MIN(H)

4 BINOMIAL-HEAP-DECREASE-KEY Algorithm
The decrease-key operation reduces the key of a given node to a smaller value and then
adjusts the heap to maintain the binomial heap property.
BINOMIAL-HEAP-DECREASE-KEY(H, x, k)
1. If k > x.key then Error: New key is larger than current key
2. Set x.key ← k
3. Set y ← x, z ← y.p
4. While z ≠ NIL and y.key < z.key:
5. Exchange y and z
6. Set y ← z, z ← y.p


4.1 Time Complexity


The time complexity for BINOMIAL-HEAP-DECREASE-KEY is O(log n).

5 Fibonacci Heaps and Its Applications


5.1 Structure of Fibonacci Heap
• Node: A node in a Fibonacci heap contains a key, pointers to its parent, children, and a sibling. It also keeps track of the degree (number of children) and a mark indicating whether it has lost a child since it was made a child.

• Heap: A Fibonacci heap consists of a collection of heap-ordered trees. The trees are rooted, and the heap maintains a pointer to the minimum node in the heap.

5.2 Algorithm for Consolidate Operation

FIB-HEAP-CONSOLIDATE(H)
1. Let H be a Fibonacci heap with n nodes
2. Initialize an empty array A of size ⌊log2(n)⌋ + 1
3. For each node w in the root list of H:
4.     Set x ← w and d ← x.degree
5.     While A[d] ≠ NIL:
6.         Set y ← A[d]
7.         If Key(x) > Key(y): exchange x and y
8.         FIB-HEAP-LINK(H, y, x)
9.         Set A[d] ← NIL
10.        Increment d by 1
11.    Set A[d] ← x
12. Set H.min to the minimum of the roots of H

5.3 Algorithm for Fib-Heap-Link

FIB-HEAP-LINK(H, y, x)
1. Remove y from the root list of H
2. Make y a child of x
3. Increase the degree of x by 1
4. Set the mark of y to false

5.4 Function for Uniting Two Fibonacci Heaps

FIB-HEAP-UNION(H1, H2)
1. Create a new Fibonacci heap H
2. Combine the root lists of H1 and H2 into H
3. Set H.min to the minimum of H1.min and H2.min
4. Return H

5.5 Algorithm for Make Heap

MAKE-HEAP
1. Create an empty Fibonacci heap H
2. Return H

5.6 Algorithm for Insert

FIB-HEAP-INSERT(H, x)
1. Insert x into the root list of H
2. If H.min = NULL then set H.min ← x
3. Else if Key(x) < Key(H.min) then set H.min ← x

5.7 Algorithm for Minimum

FIB-HEAP-MINIMUM(H)
1. Return H.min

5.8 Algorithm for Extract Min

FIB-HEAP-EXTRACT-MIN(H)
1. Let z ← H.min
2. If z ≠ NIL then
3.     For each child x of z:
4.         Remove x from the child list of z
5.         Insert x into the root list of H
6.     Remove z from the root list of H
7.     If H.min = z then
8.         If H has no more nodes then set H.min ← NIL
9.         Else set H.min ← minimum of the root list
10.    Call FIB-HEAP-CONSOLIDATE(H)
11. Return z

6 Trie and Skip List


6.1 Trie and Its Properties
Trie: A trie, also known as a prefix tree or digital tree, is a tree data structure used to
store a dynamic set of strings, where the keys are usually strings. It provides a way to
efficiently search, insert, and delete keys.
Properties:

• Each node represents a single character of the keys.

• The root represents an empty string.

• A path from the root to a node represents a prefix of some keys.

• Each node has a boolean flag indicating whether it marks the end of a key.

• The time complexity of search, insert, and delete operations is proportional to the
length of the key.

6.2 Algorithm to Search and Insert a Key in Trie


TRIE-INSERT(TRIE, key)
1. Set node ← root of TRIE
2. For each character c in key :
3. If c is not a child of node:
4. Create a new node for c
5. End if
6. Set node ← child of node corresponding to c
7. Set node.isEnd ← true
TRIE-SEARCH(TRIE, key)
1. Set node ← root of TRIE
2. For each character c in key:
3.     If c is not a child of node:
4.         Return false
5.     Else
6.         Set node ← child of node corresponding to c
7. Return node.isEnd

6.3 Skip List and Its Properties

Skip List: A skip list is a data structure that allows fast search within an ordered sequence of elements. It uses multiple layers of linked lists to skip over elements, making search operations faster.
Properties:

• Skip lists have multiple levels of linked lists.

• Each element in a higher level list represents a shortcut to the lower level lists.

• The time complexity of search, insertion, and deletion operations is O(log n) on average.

• The space complexity is O(n log n).

6.4 Insertion, Searching, and Deletion Operations

INSERT(list, searchKey)
1. Set node ← head of list
2. While node has a next node:
3.     If node.next.key > searchKey:
4.         Create a new node with searchKey
5.         Insert the new node between node and node.next
6.         Return
7.     Else set node ← node.next
8. Insert searchKey at the end of the list

SEARCH(list, searchKey)
1. Set node ← head of list
2. While node has a next node:
3.     If node.next.key = searchKey:
4.         Return true
5.     Else if node.next.key > searchKey:
6.         Return false
7.     Else set node ← node.next
8. Return false

DELETE(list, searchKey)
1. Set node ← head of list
2. While node has a next node:
3.     If node.next.key = searchKey:
4.         Set node.next ← node.next.next
5.         Return
6.     Else set node ← node.next
7. Return false

6.5 Divide and Conquer Approach to Compute x^n

POWER(x, n)
1. If n = 0 then Return 1
2. If n is even:
3.     Set y ← POWER(x, n/2)
4.     Return y × y
5. Else
6.     Return x × POWER(x, n − 1)
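
A runnable Python version of this divide-and-conquer exponentiation (a sketch mirroring the pseudocode; it uses O(log n) multiplications):

    def power(x, n):
        # Divide and conquer: halve the exponent when it is even.
        if n == 0:
            return 1
        if n % 2 == 0:
            y = power(x, n // 2)
            return y * y
        return x * power(x, n - 1)

    print(power(2, 10))  # 1024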

Design and Analysis of Algorithms (BCS503)

UNIT 3: Divide and Conquer & Greedy Methods

Syllabus

• Divide and Conquer with examples such as Sorting, Matrix Multiplication, Convex Hull, and Searching.

• Greedy Methods with examples such as Optimal Reliability Allocation, Knapsack, Minimum Spanning Trees (Prim's and Kruskal's algorithms), Single Source Shortest Paths (Dijkstra's and Bellman-Ford algorithms).

Divide and Conquer

The Divide and Conquer technique involves solving problems by breaking them into smaller sub-problems, solving each sub-problem independently, and then combining their results. It follows three key steps:

• Divide: The problem is divided into smaller sub-problems, typically into two or more parts.

• Conquer: The sub-problems are solved recursively. If the sub-problem is small enough, solve it directly.

• Combine: The solutions of the sub-problems are combined to get the final solution to the original problem.

Divide and Conquer Algorithm

Generic Divide and Conquer Algorithm:

    DivideAndConquer(problem):
        If the problem is small enough:
            Solve the problem directly.
        Else:
            Divide the problem into smaller sub-problems.
            Recursively solve each sub-problem using DivideAndConquer.
            Combine the solutions of the sub-problems to get the final solution.

Example 1: Merge Sort

Merge Sort is a sorting algorithm that follows the Divide and Conquer paradigm. The array is divided into two halves, sorted independently, and then merged.
Algorithm:

• Divide the array into two halves.

• Recursively sort each half.

• Merge the two sorted halves to get the sorted array.

Example: For an array [38, 27, 43, 3, 9, 82, 10], the array is divided and merged in steps.

Matrix Multiplication

Matrix multiplication is a fundamental operation in many areas of computer science and mathematics. There are two main methods for matrix multiplication:

1. Conventional Matrix Multiplication (Naive Method)

The conventional method of multiplying two matrices A and B follows the standard O(n^3) approach. If A and B are n × n matrices, the product matrix C = AB is calculated as:

    C[i][j] = Σ (k = 1 to n) A[i][k] · B[k][j]

The time complexity of this method is O(n^3), since each element of the resulting matrix C is computed by multiplying n pairs of elements from A and B.

2. Divide and Conquer Approach to Matrix Multiplication

In the divide and conquer approach, matrices A and B are divided into smaller sub-matrices. This method recursively multiplies the sub-matrices and combines the results to obtain the final product matrix. The key idea is to reduce the matrix multiplication problem size by breaking down large matrices into smaller parts.

3. Strassen's Matrix Multiplication

Strassen's Algorithm is an optimized version of the divide and conquer method. It reduces the time complexity of matrix multiplication from O(n^3) to approximately O(n^2.81).
Key Idea: Strassen's method reduces the number of recursive multiplications by cleverly reorganizing matrix products. Instead of 8 recursive multiplications (as in the naive divide-and-conquer method), Strassen's algorithm performs 7 multiplications and 18 additions/subtractions.

Steps of Strassen's Algorithm

Suppose we wish to compute the product C = AB, where A, B, and C are n × n matrices. The algorithm proceeds as follows:

1. Divide: Split each n × n matrix A and B into four sub-matrices of size n/2 × n/2. Let:

    A = [ A11  A12 ]      B = [ B11  B12 ]
        [ A21  A22 ],         [ B21  B22 ]

2. Conquer: Perform seven recursive multiplications:

    M1 = (A11 + A22)(B11 + B22)
    M2 = (A21 + A22) B11
    M3 = A11 (B12 − B22)
    M4 = A22 (B21 − B11)
    M5 = (A11 + A12) B22
    M6 = (A21 − A11)(B11 + B12)
    M7 = (A12 − A22)(B21 + B22)

3. Combine: Combine the seven products to get the final sub-matrices of C:

    C11 = M1 + M4 − M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 − M2 + M3 + M6

Thus, the matrix C is:

    C = [ C11  C12 ]
        [ C21  C22 ]

Recurrence Relation: The time complexity of Strassen's Algorithm can be expressed by the recurrence relation:

    T(n) = 7T(n/2) + O(n^2)

Here, T(n) represents the time complexity for multiplying two n × n matrices. Solving this recurrence using the master theorem gives T(n) = O(n^(log_2 7)) ≈ O(n^2.81).

Advantages and Disadvantages

Advantages: Reduced time complexity compared to the conventional method, especially for large matrices.
Disadvantages: The algorithm involves more additions and subtractions, which increases constant factors. Implementation is more complex, and the recursive approach can lead to overhead for small matrices.

Example:
Let's multiply two 2 × 2 matrices using Strassen's method. Let A and B be:

    A = [ 1  2 ]      B = [ 5  6 ]
        [ 3  4 ],         [ 7  8 ]

Using Strassen's method, we compute the seven products M1, M2, ..., M7 and then combine them to get the resulting matrix C. The resulting product matrix C = AB is:

    C = [ 19  22 ]
        [ 43  50 ]

This example demonstrates the power of Strassen's method in reducing the number of multiplications and solving matrix multiplication more efficiently.
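
A small illustrative Python check of the seven products for this 2 × 2 example (scalars stand in for the n/2 × n/2 blocks; not part of the original notes):

    # Blocks of A and B (plain numbers, since the matrices are 2x2).
    a11, a12, a21, a22 = 1, 2, 3, 4
    b11, b12, b21, b22 = 5, 6, 7, 8

    m1 = (a11 + a22) * (b11 + b22)   # 65
    m2 = (a21 + a22) * b11           # 35
    m3 = a11 * (b12 - b22)           # -2
    m4 = a22 * (b21 - b11)           # 8
    m5 = (a11 + a12) * b22           # 24
    m6 = (a21 - a11) * (b11 + b12)   # 22
    m7 = (a12 - a22) * (b21 + b22)   # -30

    c11 = m1 + m4 - m5 + m7          # 19
    c12 = m3 + m5                    # 22
    c21 = m2 + m4                    # 43
    c22 = m1 - m2 + m3 + m6          # 50
    print([[c11, c12], [c21, c22]])  # [[19, 22], [43, 50]]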

Example 3: Convex Hull Problem



The Convex Hull of a set of points is the smallest convex polygon that contains all the points.
The problem can be solved using various algorithms, and one of the most efficient is the Graham
Scan Algorithm.

Graham Scan Algorithm


The Graham Scan algorithm is an efficient method to compute the convex hull of a set of points

in the plane. The main idea is to sort the points based on their polar angle with respect to a
reference point and then process them to form the convex hull.

Steps of the Algorithm:


1. Find the point with the lowest y-coordinate (in case of a tie, the leftmost point). This

point is the starting point P0 .

2. Sort the remaining points based on the polar angle they make with P0. If two points have
the same polar angle, keep only the one that is farther from P0.

3. Initialize the convex hull with the first three points from the sorted list.

4. Process each of the remaining points:

(a) While the angle formed by the last two points in the hull and the current point

makes a non-left turn (i.e., the turn is clockwise or collinear), remove the second-to-
last point from the hull.
(b) Add the current point to the hull.

5. After processing all points, the points remaining in the hull list form the convex hull.

Graham Scan Algorithm in Pseudocode:


    GrahamScan(points):
        Find the point P0 with the lowest y-coordinate (in case of a tie, the leftmost point).
        Sort the points based on the polar angle with respect to P0.
        Initialize the convex hull with the first three points from the sorted list.
        For each remaining point pi:
            While the turn formed by the last two points of the hull and pi is not a left turn:
                Remove the second-to-last point from the hull.
            Add pi to the hull.
        Return the points in the hull.

Time Complexity:

• Sorting the points based on the polar angle takes O(n log n).

• Processing each point and constructing the convex hull takes O(n).

Thus, the overall time complexity of the Graham Scan algorithm is O(n log n), where n is the number of points.

Example 4: Binary Search

Binary Search is used to find an element in a sorted array. The array is divided into two halves, and the search is performed in the half where the element may exist.
Algorithm:

• Compare the middle element with the target value.

• If equal, return the position.

• If the target is smaller, search in the left half; otherwise, search in the right half.
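
A minimal iterative Python version of this search (illustrative; returns -1 when the target is absent):

    def binary_search(arr, target):
        low, high = 0, len(arr) - 1
        while low <= high:
            mid = (low + high) // 2      # middle element
            if arr[mid] == target:
                return mid               # found: return the position
            if target < arr[mid]:
                high = mid - 1           # search the left half
            else:
                low = mid + 1            # search the right half
        return -1

    print(binary_search([2, 3, 12, 34, 54], 34))  # 3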
Greedy Methods

The Greedy method constructs a solution step by step by selecting the best possible option at each stage, without revisiting or considering the consequences of previous choices. It works under the assumption that by choosing a local optimum at every step, the overall solution will be globally optimal.

0.1 Key Characteristics of Greedy Algorithms

1. Greedy Choice Property: At every step, choose the best option available without worrying about the future implications. The choice must be feasible and should follow the rules of the problem.

2. Optimal Substructure: A problem has an optimal substructure if the optimal solution to the problem contains the optimal solution to its sub-problems.

3. No Backtracking: Unlike other methods such as dynamic programming, greedy algorithms do not reconsider the choices made previously.

4. Efficiency: Greedy algorithms are typically more efficient in terms of time complexity because they solve sub-problems once and only make one pass over the input data.

0.2 Activity Selection Problem

The Activity Selection Problem involves scheduling resources among several competing activities. The goal is to select the maximum number of activities that do not overlap in time.

0.2.1 Greedy Algorithm for Activity Selection

The Greedy Activity Selector algorithm selects activities based on their finish times. The idea is to always choose the next activity that finishes first and is compatible with the previously selected activities.
Pseudocode for Greedy Activity Selector (S = start times, F = finish times, with activities sorted by finish time):

    GreedyActivitySelector(S, F):
        n = length(S)
        A = {1}          # the first activity is always selected
        k = 1
        for m = 2 to n:
            if S[m] >= F[k]:
                A = A ∪ {m}
                k = m
        return A

0.3 Example: Activity Selection Problem

Given the starting and finishing times of 11 activities:
• (2, 3)
• (8, 12)
• (12, 14)
• (3, 5)
• (0, 6)
• (1, 4)
• (6, 10)
• (5, 7)
• (3, 8)
• (5, 9)
• (8, 11)

0.3.1 Step 1: Sorting Activities

First, we sort the activities based on their finish times:

    Activity | (Start, Finish)
    1        | (2, 3)
    2        | (1, 4)
    3        | (3, 5)
    4        | (0, 6)
    5        | (5, 7)
    6        | (3, 8)
    7        | (5, 9)
    8        | (6, 10)
    9        | (8, 11)
    10       | (8, 12)
    11       | (12, 14)

0.3.2 Step 2: Selecting Activities

Now we select activities using the greedy approach: start with Activity 1 (2, 3); the next compatible activity (start time ≥ 3) is Activity 3 (3, 5); continue this process.
The selected activities are:

• Activity 1: (2, 3)

• Activity 3: (3, 5)

• Activity 5: (5, 7)

• Activity 9: (8, 11)

• Activity 11: (12, 14)

0.3.3 Final Selected Activities

The selected activities are (2, 3), (3, 5), (5, 7), (8, 11), and (12, 14).
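
For reference, a short runnable Python sketch (not from the notes) that reproduces this selection from the sorted activity list:

    # Activities sorted by finish time: (start, finish).
    acts = [(2, 3), (1, 4), (3, 5), (0, 6), (5, 7), (3, 8),
            (5, 9), (6, 10), (8, 11), (8, 12), (12, 14)]

    selected = [acts[0]]
    last_finish = acts[0][1]
    for start, finish in acts[1:]:
        if start >= last_finish:      # compatible with the last chosen activity
            selected.append((start, finish))
            last_finish = finish
    print(selected)  # [(2, 3), (3, 5), (5, 7), (8, 11), (12, 14)]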

0.4 Pseudocode for Recursive and Iterative Approaches

Recursive Approach:

    RecursiveActivitySelector(S, F, k, n):
        m = k + 1
        while m <= n and S[m] < F[k]:
            m = m + 1
        if m <= n:
            return [m] + RecursiveActivitySelector(S, F, m, n)
        return []

Iterative Approach:

    IterativeActivitySelector(S, F):
        n = length(S)
        A = {1}          # the first activity is always selected
        k = 1
        for m = 2 to n:
            if S[m] >= F[k]:
                A = A ∪ {m}
                k = m
        return A

0.5 Optimization Problem

An optimization problem is a problem in which we seek to find the best solution from a set of feasible solutions. It involves maximizing or minimizing a particular objective function subject to constraints.

0.5.1 Using Greedy Method for Optimization Problems

The greedy method can be applied to solve optimization problems by:

• Breaking the problem into smaller sub-problems.

• Making the locally optimal choice at each step, hoping it leads to a globally optimal solution.

• Ensuring that the greedy choice property and optimal substructure hold for the specific problem.

Common examples of optimization problems solved using greedy algorithms include the Knapsack Problem, Minimum Spanning Tree, and Huffman Coding.

Example 2: Knapsack Problem

The Knapsack Problem involves selecting items with given weights and values to maximize the total value without exceeding the weight limit.

Greedy Approach

The greedy approach for the Knapsack Problem follows these steps:

• Sort items by their value-to-weight ratio.

• Pick items with the highest ratio until the weight limit is reached.

Branch and Bound Approach

The Branch and Bound method is another approach to solve the Knapsack Problem efficiently by exploring the solution space using an implicit tree structure:

• Implicit Tree: Each node represents a state of including or excluding an item.

• Upper Bound of Node: Calculate the maximum possible value that can be obtained from the current node to prune the tree.

Greedy Algorithm for Discrete Knapsack Problem

The greedy method can be effective for the fractional knapsack problem but not for the 0/1 knapsack problem.

0/1 Knapsack Problem

In the 0/1 Knapsack Problem, each item can either be included (1) or excluded (0) from the knapsack. The greedy method is not effective for solving the 0/1 Knapsack Problem because it may lead to suboptimal solutions.

Simple Knapsack Problem using Greedy Method

Consider the following instance for the simple knapsack problem. Find the solution using the greedy method:

• N = 8

• P = {11, 21, 31, 33, 43, 53, 55, 65}

• W = {1, 11, 21, 23, 33, 43, 45, 55}

• M = 100

Solution

To solve the problem using the greedy method, we calculate the value-to-weight ratio for each item, sort the items by this ratio, and fill the knapsack until we reach the maximum weight M. The total value obtained is 152.6.

Items in Knapsack

The items included in the knapsack are I1, I2, I3, I4, I5, and a fraction of I6.
Final Answer: Maximum value = 152.6.

    Item | Weight (W) | Value (P) | Remaining Capacity | Cumulative Value
    1    | 1          | 11        | 99                 | 11
    2    | 11         | 21        | 88                 | 32
    3    | 21         | 31        | 67                 | 63
    4    | 23         | 33        | 44                 | 96
    5    | 33         | 43        | 11                 | 139
    6    | 43         | 53        | −32 (only the fraction 11/43 fits) | 152.6

Table 1: Knapsack Item Selection


0/1 Knapsack Problem using Dynamic Programming

Solve the following 0/1 knapsack problem using dynamic programming:

• P = {11, 21, 31, 33}

• W = {2, 11, 22, 15}

• C = 40

• N = 4

Solution

We solve this problem using the set-based dynamic programming approach, building sets Si of (value, weight) pairs reachable using the first i items, and purging pairs that are dominated (lower value at a higher weight) or whose weight exceeds the capacity C:

Step  Set of (Value, Weight) Pairs
S0    {(0, 0)}
S1    {(0, 0), (11, 2)}
S2    {(0, 0), (11, 2), (21, 11), (32, 13)}
S3    {(0, 0), (11, 2), (21, 11), (32, 13), (42, 24), (52, 33), (63, 35)}
S4    {(0, 0), (11, 2), (21, 11), (32, 13), (33, 15), (44, 17), (54, 26), (65, 28), (75, 39)}

Table 2: Dynamic Programming Sets for the 0/1 Knapsack

Starting with the initial set S0 = {(0, 0)}, each item is added in turn and dominated pairs are discarded. The answer is the highest-value pair in S4 whose weight does not exceed C = 40, namely (75, 39). Tracing back shows that this pair uses items 1, 3, and 4 (weight 2 + 22 + 15 = 39).

Final Answer: Maximum value = 75 (items 1, 3, and 4).
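The result can be cross-checked with the classical tabular formulation of the 0/1 knapsack recurrence; the following is a minimal Python sketch (names are illustrative):

# Python sketch of the tabular 0/1 knapsack (illustrative)
def knapsack_01(values, weights, capacity):
    # dp[c] holds the best value achievable with capacity c.
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_01([11, 21, 31, 33], [2, 11, 22, 15], 40))   # 75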

Conclusion

The greedy method provides an efficient way to solve the fractional knapsack problem, but it
may not yield an optimal solution for the 0/1 knapsack problem. For 0/1 knapsack, dynamic
programming or branch and bound methods are preferred.
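To make the branch and bound idea described earlier concrete, here is a minimal Python sketch for the 0/1 knapsack that uses the greedy fractional value as the upper bound of each node (function names are illustrative):

# Python sketch of branch and bound for the 0/1 knapsack (illustrative)
def knapsack_branch_and_bound(values, weights, capacity):
    # Sort items by value-to-weight ratio, best first.
    items = sorted(zip(values, weights),
                   key=lambda item: item[0] / item[1], reverse=True)
    n = len(items)
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room fractionally.
        for v, w in items[i:]:
            if w <= room:
                room -= w
                value += v
            else:
                return value + v * room / w
        return value

    def dfs(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == n or bound(i, value, room) <= best:
            return                    # prune: this subtree cannot improve on best
        v, w = items[i]
        if w <= room:
            dfs(i + 1, value + v, room - w)   # branch: include item i
        dfs(i + 1, value, room)               # branch: exclude item i

    dfs(0, 0, capacity)
    return best

print(knapsack_branch_and_bound([11, 21, 31, 33], [2, 11, 22, 15], 40))  # 75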

Comparison of Kruskal’s and Prim’s Algorithms

Point  Kruskal’s Algorithm                       Prim’s Algorithm
1      Works on edges                            Works on vertices
2      Greedily adds edges                       Greedily adds vertices
3      Suitable for sparse graphs                Suitable for dense graphs
4      Requires sorting of edges                 Uses a priority queue for edge selection
5      Can be used on disconnected graphs        Works only on connected graphs
6      Forms forests before the MST is complete  Grows a single tree
7      Easier to implement with disjoint sets    Easier to visualize tree growth

Table 3: Comparison of Kruskal’s and Prim’s Algorithms

Unique Minimum Spanning Tree

Theorem: If the weights on the edges of a connected undirected graph are distinct, then there exists a unique minimum spanning tree.

Example: Consider a graph with vertices A, B, C, D and edges: (A, B, 1), (A, C, 3), (B, C, 2), (B, D, 4).

Since the weights are distinct, following either Kruskal’s or Prim’s algorithm will lead to the same unique MST: 1. Add (A, B) (weight 1). 2. Add (B, C) (weight 2). 3. Add (B, D) (weight 4).
Prim’s Minimum Spanning Tree Algorithm in Detail

- Initialization: Start with any vertex. - Growth: Always pick the least weight edge that expands the tree. - Termination: Stop when all vertices are included in the MST.

Algorithm:

• Initialize the tree with an arbitrary vertex.

• Mark the vertex as included in the MST.

• While there are vertices not in the MST:

  – Select the edge with the minimum weight that connects a vertex in the MST to a vertex outside it.

  – Add the selected edge and vertex to the MST.

Example: Consider the following graph:

Vertices: A, B, C, D
Edges: (A, B, 1), (A, C, 4), (B, C, 2), (B, D, 5), (C, D, 3)

Starting from vertex A: 1. Add edge (A, B) (weight 1). 2. Add edge (B, C) (weight 2). 3. Add edge (C, D) (weight 3).

The minimum spanning tree consists of edges (A, B), (B, C), and (C, D) with total weight 6.
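As a quick illustration, here is a minimal Python sketch of Prim’s algorithm using a priority queue (heapq); the adjacency-list format and names are illustrative:

# Python sketch of Prim's algorithm (illustrative)
import heapq

def prim_mst(adj, start):
    # adj: dict mapping vertex -> list of (weight, neighbor) pairs.
    # Returns the MST edges and their total weight.
    visited = {start}
    # heap of candidate edges (weight, u, v) leaving the current tree
    heap = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue                  # edge would form a cycle
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for w2, x in adj[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst, total

# The example graph above
adj = {
    'A': [(1, 'B'), (4, 'C')],
    'B': [(1, 'A'), (2, 'C'), (5, 'D')],
    'C': [(4, 'A'), (2, 'B'), (3, 'D')],
    'D': [(5, 'B'), (3, 'C')],
}
print(prim_mst(adj, 'A'))   # ([('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 3)], 6)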
Greedy Single Source Shortest Path Algorithm (Dijkstra’s Algorithm)

Algorithm:

• Initialize distances from the source vertex to all others as infinity, except the source itself (0).

• Create a priority queue and insert the source vertex.

• While the queue is not empty:

  – Extract the vertex with the smallest distance.

  – For each neighbor, calculate the potential distance through the current vertex.

  – If the calculated distance is smaller, update it and add the neighbor to the queue.

Kruskal’s Algorithm

Algorithm:

• Sort all edges in non-decreasing order of their weights.

• Initialize the MST as an empty set.

• For each edge, in sorted order:

  – Check if adding the edge forms a cycle.

  – If it doesn’t, add it to the MST.

Example: Using the same graph:

1. Sorted edges: (A, B, 1), (B, C, 2), (C, D, 3), (A, C, 4), (B, D, 5). 2. Add edge (A, B). 3. Add edge (B, C). 4. Add edge (C, D).

The minimum spanning tree consists of edges (A, B), (B, C), and (C, D) with total weight 6.
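The cycle check is usually implemented with a disjoint set (union-find) structure, as noted in Table 3. Here is a minimal Python sketch (names are illustrative):

# Python sketch of Kruskal's algorithm with union-find (illustrative)
def kruskal_mst(vertices, edges):
    # edges: list of (weight, u, v) triples.
    parent = {v: v for v in vertices}

    def find(x):                      # find the root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                  # no cycle: accept the edge
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 'A', 'B'), (4, 'A', 'C'), (2, 'B', 'C'), (5, 'B', 'D'), (3, 'C', 'D')]
print(kruskal_mst('ABCD', edges))   # ([('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 3)], 6)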

Dijkstra’s Algorithm: Worked Example

(Figure: weighted graph with vertices A–H used for the following walkthrough; the figure is not reproduced here.)

Step 1: Initialization

Node  Distance from A  Previous Node
A     0                -
B     ∞                -
C     ∞                -
D     ∞                -
E     ∞                -
F     ∞                -
G     ∞                -
H     ∞                -

Table 4: Initial Distance Table

Step 2: Extract Minimum (A = 0)

The node with the smallest tentative distance is extracted first; initially this is the source A itself, with distance 0. After relaxing the edges leaving A, the closest remaining node is C with a distance of 2.

Step 3: Relax Edges Leaving A

• Extract C: Update the distances of its adjacent nodes.

Node  Distance from A  Previous Node
A     0                -
B     3                A
C     2                A
D     7                C
E     7                C
F     ∞                -
G     ∞                -
H     ∞                -

Table 5: Distance Table After Extracting C

Step 4: Extract Minimum (B = 3)

• Extract B: Update the distances of its adjacent nodes.

Step 5: Relax Edges Leaving B

Node  Distance from A  Previous Node
A     0                -
B     3                A
C     2                A
D     7                C
E     7                C
F     9                B
G     ∞                -
H     ∞                -

Table 6: Distance Table After Extracting B
Step 6: Extract Minimum (D = 7)

• Extract D: Update the distances of its adjacent nodes.

Step 7: Relax Edges Leaving D

Node  Distance from A  Previous Node
A     0                -
B     3                A
C     2                A
D     7                C
E     7                C
F     9                B
G     10               D
H     ∞                -

Table 7: Distance Table After Extracting D

Step 8: Continue Extracting Nodes (E, G, F, H)

Repeat the process of extracting the minimum node and updating the distances until all nodes have been visited. After completing the process, the shortest path from A to H is found.

Final Distance Table:

Node  Distance from A  Previous Node
A     0                -
B     3                A
C     2                A
D     7                C
E     7                C
F     9                B
G     10               D
H     12               G

Table 8: Final Distance Table

Conclusion: The shortest path from A to H is:

A → C → D → G → H

with a total distance of 12.
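The walkthrough can be checked in code. Since the original graph figure is not reproduced here, the edge list below is an assumption reconstructed to be consistent with Tables 4–8; a minimal Python sketch using heapq:

# Python sketch of Dijkstra's algorithm (edge list is an assumption)
import heapq

def dijkstra(adj, source):
    # adj: dict mapping vertex -> list of (neighbor, weight) pairs.
    dist = {v: float("inf") for v in adj}
    prev = {v: None for v in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                  # skip stale queue entries
        for v, w in adj[u]:
            if d + w < dist[v]:       # relax edge (u, v)
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

# Assumed undirected edges, consistent with the distance tables above
edges = [('A', 'B', 3), ('A', 'C', 2), ('C', 'D', 5), ('C', 'E', 5),
         ('B', 'F', 6), ('D', 'G', 3), ('G', 'H', 2)]
adj = {v: [] for v in 'ABCDEFGH'}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

dist, prev = dijkstra(adj, 'A')
print(dist['H'])                      # 12 (via A -> C -> D -> G -> H)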
Bellman-Ford Algorithm

Algorithm:

• Initialize the distance of the source to 0 and all others to infinity.

• Repeat |V| − 1 times: for each edge (u, v, w), if distance[u] + w < distance[v], update distance[v].

• Check for negative weight cycles by iterating through all edges once more.

Example: Consider a graph with vertices A, B, C and edges: (A, B, 1), (B, C, −2), (C, A, 2).

Starting from A: 1. Initialize distances: A = 0, B = ∞, C = ∞. 2. After one iteration: B = 1, C = −1. 3. After two iterations: A = 0, B = 1, C = −1 (no updates).

Final distances: A = 0, B = 1, C = −1.

Bellman-Ford Algorithm Analysis

1 When Dijkstra and Bellman-Ford Fail to Find the Shortest Path

Dijkstra Algorithm Failure: Dijkstra’s algorithm fails when there are negative weight edges in the graph. It assumes that once a node’s shortest path is found, it does not need to be updated again. However, in the presence of negative weight edges, shorter paths may be found after visiting a node, leading to incorrect results.

Bellman-Ford Algorithm Failure: The Bellman-Ford algorithm can handle negative weight edges and will find the shortest path as long as there is no negative weight cycle.


However, it will fail to find a shortest path if the graph contains a negative weight cycle that
is reachable from the source node.

2 Can Bellman-Ford Detect All Negative Weight Cycles?

Yes, Bellman-Ford can detect negative weight cycles that are reachable from the source vertex. After completing the main relaxation process (updating distances), the algorithm performs one more pass over all edges to check if any further relaxation is possible. If a distance is updated in this extra pass, a negative weight cycle exists in the graph. Note that a negative cycle not reachable from the source will not be detected by a single-source run.

3 Applying Bellman-Ford Algorithm on the Given Graph

The graph provided can be analyzed using the Bellman-Ford algorithm. The algorithm iteratively updates the shortest distances from the source node to all other nodes. Here is how to apply the Bellman-Ford algorithm:

Figure 1: Graph for Bellman-Ford Application (figure not reproduced; its vertices s, t, x, y, z and edge weights appear in the code of Section 5, numbered 0 to 4)

The graph contains both positive and negative weights. We will perform multiple relaxations, updating the shortest path estimates.

4 Bellman-Ford Algorithm Table

The table below shows the shortest path distances at each iteration:

Iteration  Distance to s  Distance to t  Distance to x  Distance to y  Distance to z
0          0              ∞              ∞              ∞              ∞
1          0              6              ∞              7              9
2          0              6              4              7              2
3          0              2              4              7              2
4          0              2              4              7              -2

Table 9: Distance Table for Bellman-Ford Algorithm

The algorithm terminates after the last iteration when no more updates occur.

5 Bellman-Ford Algorithm Code

Below is the Python code for applying the Bellman-Ford algorithm on this graph (vertex numbering: 0 = s, 1 = t, 2 = y, 3 = x, 4 = z):

# Python code for Bellman-Ford Algorithm
class Graph:
    def __init__(self, vertices):
        self.V = vertices        # number of vertices
        self.graph = []          # edge list of [u, v, w] triples

    def add_edge(self, u, v, w):
        self.graph.append([u, v, w])

    def bellman_ford(self, src):
        # Step 1: initialize all distances to infinity except the source
        dist = [float("Inf")] * self.V
        dist[src] = 0

        # Step 2: relax every edge |V| - 1 times
        for _ in range(self.V - 1):
            for u, v, w in self.graph:
                if dist[u] != float("Inf") and dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w

        # Step 3: one more pass; any further relaxation means a negative cycle
        for u, v, w in self.graph:
            if dist[u] != float("Inf") and dist[u] + w < dist[v]:
                print("Graph contains negative weight cycle")
                return

        print("Vertex Distance from Source")
        for i in range(self.V):
            print("{0}\t\t{1}".format(i, dist[i]))

g = Graph(5)
g.add_edge(0, 1, 6)    # s -> t
g.add_edge(0, 2, 7)    # s -> y
g.add_edge(1, 2, 8)    # t -> y
g.add_edge(1, 3, 5)    # t -> x
g.add_edge(1, 4, -4)   # t -> z
g.add_edge(2, 3, -3)   # y -> x
g.add_edge(2, 4, 9)    # y -> z
g.add_edge(3, 1, -2)   # x -> t
g.add_edge(4, 0, 2)    # z -> s
g.add_edge(4, 3, 7)    # z -> x

g.bellman_ford(0)      # prints the final distances: 0, 2, 7, 4, -2


ITECH WORLD AKTU

Design and Analysis of Algorithm (DAA)

Subject Code: BCS503

UNIT 4:

Syllabus
• Dynamic Programming with Examples Such as Knapsack.

• All Pair Shortest Paths – Warshall’s and Floyd’s Algorithms.

• Resource Allocation Problem.

• Backtracking, Branch and Bound with Examples Such as:

– Travelling Salesman Problem.


– Graph Coloring.
– n-Queen Problem.
– Hamiltonian Cycles.
– Sum of Subsets.

Dynamic Programming
Dynamic Programming (DP) is a technique for solving complex problems by breaking
them down into simpler overlapping subproblems. It applies to problems exhibiting two
main properties:

• Overlapping subproblems: The problem can be broken into smaller, repeating subproblems that can be solved independently.

• Optimal substructure: The solution to a problem can be constructed from the optimal solutions of its subproblems.


Differences Between DP and Other Approaches


1. DP solves each subproblem only once and saves the results.

2. Other approaches like divide and conquer might solve the same subproblem multiple
times.

Example
The Fibonacci sequence is a classic example, where each number is the sum of the two
preceding ones. DP avoids redundant calculations by storing already computed values.
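As a tiny illustration, here is a minimal Python sketch of a memoized Fibonacci (names are illustrative):

# Python sketch of Fibonacci with memoization (illustrative)
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each subproblem fib(n) is computed once and then reused.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55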
