The document outlines the steps for designing and analyzing algorithms, including understanding the problem, specifying the algorithm, verifying correctness, analyzing performance, and implementing/testing. It explains asymptotic notations (Big-O, Big-Omega, Big-Theta) and provides examples of recursive and non-recursive algorithms, such as calculating factorial and finding the maximum element in a list. Additionally, it discusses sorting algorithms like selection sort and bubble sort, detailing their efficiency and application.
ALGORITHMS ASSIGNMENT 1

1. With a neat diagram, explain the different steps in designing and analyzing an algorithm.
Ans:
An algorithm is a finite sequence of unambiguous instructions to solve a
problem. The process of designing and analyzing an algorithm involves the
following steps:
1.1 Algorithm Design Steps
1. Understanding the Problem
   o Clearly define the input, output, and constraints.
2. Algorithm Specification
   o Specify the algorithm using pseudocode, flowcharts, or a programming language.
3. Correctness Verification
   o Ensure the algorithm correctly solves the problem for all valid inputs.
4. Performance Analysis
   o Evaluate time complexity (speed) and space complexity (memory usage).
5. Implementation & Testing
   o Code the algorithm and test it with different inputs.

1.2 Analysis of an Algorithm

1.2.1 Measuring Input Size
• The efficiency of an algorithm depends on the input size n.
• Example: sorting an array of 10,000 elements takes far longer than sorting an array of size 5.
• Input size can be measured in terms of:
   o Number of elements (e.g., sorting, searching)
   o Number of bits (e.g., prime-checking algorithms)

1.2.2 Performance Analysis

An algorithm’s performance is analyzed in two ways:
1. Time Complexity, T(n)
   o Measures the execution time as a function of the input size n.
   o Example: for a sorting algorithm, time complexity is based on the number of comparisons made.
2. Space Complexity, S(n)
   o Measures the memory required by the algorithm.
   o Example: a recursive function uses stack space, which affects S(n).

1.3 Orders of Growth (Big-O Notation)

The order of growth represents how the algorithm’s complexity increases as the input size grows.
Common Growth Rates:
• O(1) – Constant time (e.g., accessing an array element).
• O(log n) – Logarithmic time (e.g., binary search).
• O(n) – Linear time (e.g., a simple loop).
• O(n²) – Quadratic time (e.g., nested loops in sorting).
• O(2ⁿ) – Exponential time (e.g., naive recursive Fibonacci).

2. Define algorithm. Explain the asymptotic notations Big-O, Big-Omega and Big-Theta with suitable examples.
Ans:
Definition of Algorithm:
An algorithm is a finite sequence of well-defined instructions to solve a problem
or perform a computation. It takes an input, processes it, and produces an
output within a finite time.

Asymptotic Notations
Asymptotic notations describe the efficiency of an algorithm in terms of time
and space complexity as the input size n grows. The three main notations are:
1. Big-O Notation (O)
• Represents the upper bound of an algorithm’s running time.
• It defines the worst-case complexity.
• A function t(n) ∈ O(g(n)) if there exist positive constants c and n₀ such that:
  t(n) ≤ c·g(n) for all n ≥ n₀
• Example: 100n + 5 ∈ O(n²), since 100n + 5 ≤ 101n ≤ 101n² for all n ≥ 5 (take c = 101, n₀ = 5).

2. Big-Omega Notation (Ω)
• Represents the lower bound of an algorithm’s running time.
• It defines the best-case complexity.
• A function t(n) ∈ Ω(g(n)) if there exist positive constants c and n₀ such that:
  t(n) ≥ c·g(n) for all n ≥ n₀
• Example: n³ ∈ Ω(n²), since n³ ≥ n² for all n ≥ 0 (take c = 1, n₀ = 0).

3. Big-Theta Notation (Θ)
• Represents the tight bound of an algorithm’s running time.
• It means the function is both O(g(n)) and Ω(g(n)).
• A function t(n) ∈ Θ(g(n)) if there exist positive constants c₁, c₂ and n₀ such that:
  c₂·g(n) ≤ t(n) ≤ c₁·g(n) for all n ≥ n₀
• Example: ½n(n − 1) ∈ Θ(n²), since ¼n² ≤ ½n(n − 1) ≤ ½n² for all n ≥ 2 (take c₁ = ½, c₂ = ¼, n₀ = 2).

3. Explain the general plan for analyzing the efficiency of a recursive algorithm. Suggest recursive algorithms to find the factorial of a number and to solve the Tower of Hanoi. Derive their efficiencies.
Ans:
Analysis of Recursive Algorithm, Factorial, and Tower of Hanoi
1. General Plan for Analyzing Recursive Algorithms
The time efficiency of a recursive algorithm is analyzed using the following
steps:
1. Decide Input Size: Choose an appropriate parameter to represent the
input size (e.g., n for factorial).
2. Identify Basic Operation: Determine the most frequent or costly operation
(e.g., multiplication for factorial).
3. Worst-Case Analysis: Check if the number of times the basic operation is
executed varies for different inputs.
4. Recurrence Relation: Formulate a recurrence relation based on the
function's recursive definition.
5. Solve the Recurrence: Derive an explicit formula or estimate its growth
order using asymptotic analysis.

2. Recursive Algorithm for Factorial Calculation

The factorial function is defined as:
  n! = n × (n − 1) × (n − 2) × ⋯ × 1
with the base case:
  0! = 1
ALGORITHM Factorial(n)
// Computes n! recursively

// INPUT:
// n → A non-negative integer

// OUTPUT:
// Returns the factorial of `n`

if n = 0 then
    return 1                       // Base case: 0! = 1
else
    return Factorial(n - 1) * n    // Recursive case: n! = (n - 1)! * n

Time Complexity Analysis:
• The basic operation is multiplication.
• The recurrence relation is: T(n) = T(n − 1) + O(1)
• Expanding it:
  T(n) = T(n − 1) + 1 = T(n − 2) + 2 = ... = T(1) + (n − 1)
• This simplifies to T(n) = O(n).
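The pseudocode above translates directly into Python (a sketch for illustration):

```python
# A direct Python translation of the Factorial pseudocode above.

def factorial(n):
    """Compute n! recursively; one multiplication per level gives T(n) = O(n)."""
    if n == 0:
        return 1                    # Base case: 0! = 1
    return factorial(n - 1) * n     # Recursive case: n! = (n - 1)! * n

print(factorial(5))  # 120
```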

3. Recursive Algorithm for Tower of Hanoi


The Tower of Hanoi problem involves moving n disks from a source peg to a
destination peg using an auxiliary peg, following the rules:
1. Only one disk can be moved at a time.
2. A larger disk cannot be placed on top of a smaller disk.
ALGORITHM Hanoi(n, src, aux, dest)
// Solves the Tower of Hanoi problem recursively

// INPUT:
// n → Number of disks
// src → Source peg
// aux → Auxiliary peg
// dest → Destination peg

// OUTPUT:
// Sequence of moves to transfer `n` disks from `src` to `dest`

if n = 1 then
    print "Move disk 1 from " src " to " dest
    return

Hanoi(n - 1, src, dest, aux)    // Move n-1 disks from src to aux
print "Move disk " n " from " src " to " dest
Hanoi(n - 1, aux, src, dest)    // Move n-1 disks from aux to dest
Time Complexity Analysis:
• The recurrence relation is: T(n) = 2T(n − 1) + O(1)
• Expanding it (with T(1) = 1):
  T(n) = 2T(n − 1) + 1
       = 4T(n − 2) + 3
       = 8T(n − 3) + 7
       = ...
       = 2ⁿ⁻¹T(1) + (2ⁿ⁻¹ − 1)
       = 2ⁿ − 1
• This simplifies to T(n) = O(2ⁿ), indicating exponential complexity.
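The 2ⁿ − 1 move count can be checked empirically with a small Python sketch (the move-recording list is my own addition for illustration):

```python
# A sketch of the Hanoi pseudocode above, extended to record moves so the
# T(n) = 2^n - 1 result can be verified empirically.

def hanoi(n, src, aux, dest, moves):
    """Append the moves needed to transfer n disks from src to dest."""
    if n == 1:
        moves.append((1, src, dest))
        return
    hanoi(n - 1, src, dest, aux, moves)   # Move n-1 disks from src to aux
    moves.append((n, src, dest))          # Move the largest disk
    hanoi(n - 1, aux, src, dest, moves)   # Move n-1 disks from aux to dest

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))  # 2^3 - 1 = 7
```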

4. Explain the general plan for analyzing the efficiency of a non-recursive algorithm. Suggest a non-recursive algorithm to find the maximum element in a list of n numbers. Derive its efficiency.
Ans:
1. General Plan for Analyzing a Non-Recursive Algorithm (4 Marks)
The efficiency of a non-recursive algorithm is analyzed using these steps:
1. Identify the Input Size (n)
   o The size of the input determines the efficiency.
   o Example: n represents the number of elements in a list.
2. Determine the Basic Operation
   o The most frequently executed operation in the algorithm (e.g., comparisons in searching).
3. Count the Number of Basic Operations
   o Identify how many times the basic operation runs.
4. Formulate the Summation Equation
   o Express the total number of operations as a function of n.
5. Find the Asymptotic Complexity (Big-O Notation)
   o Ignore constants and lower-order terms to determine the growth rate.

2. Non-Recursive Algorithm to Find the Maximum Element (3 Marks)

Problem Statement:
Find the maximum element in a list of n numbers using an iterative approach.

ALGORITHM FindMax(A[0..n − 1])
// Finds the maximum element in a given array

// INPUT:
// A[] → Array of `n` elements

// OUTPUT:
// maxval → Maximum element in the array

maxval ← A[0]               // Assume the first element is the maximum
for i ← 1 to n − 1 do
    if A[i] > maxval then
        maxval ← A[i]       // Update max if a larger element is found
return maxval               // Return the maximum value

3. Efficiency Analysis (3 Marks)

• Basic Operation: the comparison A[i] > maxval.
• The loop runs (n − 1) times → C(n) = n − 1.
• Asymptotic Complexity: O(n) (linear time complexity).

Best-Case, Worst-Case, and Average-Case Analysis
Case    | Condition                        | Complexity
Best    | First element is max (A[0])      | O(n)
Worst   | Last element is max (A[n − 1])   | O(n)
Average | Random distribution of numbers   | O(n)
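The FindMax pseudocode above can be sketched in Python as:

```python
# A Python sketch of the FindMax pseudocode above.

def find_max(a):
    """Return the maximum element using exactly n - 1 comparisons."""
    maxval = a[0]              # Assume the first element is the maximum
    for x in a[1:]:
        if x > maxval:         # Basic operation: one comparison per element
            maxval = x
    return maxval

print(find_max([3, 7, 2, 9, 4]))  # 9
```

Because every element after the first is compared exactly once, all three cases perform the same n − 1 comparisons, which is why best, worst, and average cases are all O(n).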

5. What is the brute force method? Apply selection sort to the word EXAMPLE to arrange its letters in alphabetical order.
Ans:
1. Brute Force Method (5 Marks)
The brute force method is a straightforward approach to solving a problem by
systematically checking all possible solutions. It is often the simplest but least
efficient method.
1.1 Characteristics of the Brute Force Approach
• Simple and easy to implement.
• No optimization – evaluates all possible outcomes.
• Useful for small inputs, but inefficient for large-scale problems.
• Common examples: sorting, searching, string matching, and GCD calculation.
1.2 Examples of the Brute Force Approach in Algorithms
1. Sorting Algorithms: Selection Sort, Bubble Sort.
2. Searching Algorithms: Sequential Search, Brute Force String Matching.
3. Mathematical Computations: Matrix Multiplication, Factorial Calculation.

2. Selection Sort Algorithm (5 Marks)


Selection sort is a brute-force sorting algorithm that repeatedly selects the
smallest element from the unsorted portion and swaps it with the first unsorted
element.
2.1 Working of Selection Sort
1. Find the smallest element in the array and swap it with the first element.
2. Find the second smallest element and swap it with the second element.
3. Repeat until all elements are sorted.
ALGORITHM SelectionSort(A[0..n − 1])
// Sorts a given array using the Selection Sort algorithm

// INPUT:
// A[] → Array of `n` elements

// OUTPUT:
// A[] → Sorted array in non-decreasing order

for i ← 0 to n − 2 do
    minIndex ← i                       // Assume the current element is the smallest

    // Find the smallest element in the remaining array
    for j ← i + 1 to n − 1 do
        if A[j] < A[minIndex] then
            minIndex ← j

    // Swap the smallest element found with A[i]
    if minIndex ≠ i then
        swap A[i] and A[minIndex]
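As a runnable sketch, the pseudocode above maps to Python like this (applied here to the letters of "EXAMPLE", matching the next section):

```python
# A Python sketch of the SelectionSort pseudocode above.

def selection_sort(a):
    """Sort list a in place by repeatedly selecting the minimum."""
    n = len(a)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):       # Find the smallest remaining element
            if a[j] < a[min_index]:
                min_index = j
        if min_index != i:
            a[i], a[min_index] = a[min_index], a[i]
    return a

letters = list("EXAMPLE")
print("".join(selection_sort(letters)))  # AEELMPX
```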

3. Applying Selection Sort on "EXAMPLE" (5 Marks)

Step-by-Step Execution
Original word: EXAMPLE
(ASCII order: E=69, X=88, A=65, M=77, P=80, L=76, E=69)
Pass | List | Smallest Remaining | Swap | Result
1 | EXAMPLE | A | Swap A & E | AXEMPLE
2 | AXEMPLE | E | Swap E & X | AEXMPLE
3 | AEXMPLE | E | Swap E & X | AEEMPLX
4 | AEEMPLX | L | Swap L & M | AEELPMX
5 | AEELPMX | M | Swap M & P | AEELMPX
6 | AEELMPX | P | No swap | AEELMPX

Sorted word: A E E L M P X

4. Efficiency Analysis of Selection Sort

• Time Complexity:
   o Best-case, worst-case, and average-case: O(n²).
   o Always performs n(n − 1)/2 comparisons, making it inefficient for large lists.
• Space Complexity:
   o O(1) (in-place sorting, no extra memory used).

6. With the algorithm, derive the worst-case efficiency of Bubble Sort. Apply bubble sort to these elements: 45, 20, 40, 5, 15.
Ans:
1. Bubble Sort Algorithm (5 Marks)
Bubble Sort is a brute force sorting algorithm that repeatedly swaps adjacent
elements if they are in the wrong order.
ALGORITHM BubbleSort(A[0..n − 1])
// Sorts a given array A[0..n − 1] using bubble sort

// INPUT:
// A[] → Array of `n` elements

// OUTPUT:
// A[] → Sorted array in non-decreasing order

for i ← 0 to n − 2 do
    for j ← 0 to n − 2 − i do
        if A[j + 1] < A[j] then
            swap A[j] and A[j + 1]
Working Mechanism
 Pass 1: Largest element bubbles to the last position.

 Pass 2: Second largest element moves to its correct position.

 Repeats until the list is sorted.

2. Worst-Case Efficiency Derivation (5 Marks)

2.1 Identifying the Basic Operation
• The basic operation in Bubble Sort is the comparison A[j] > A[j + 1].
• The number of swaps depends on the input order; the number of comparisons does not.

2.2 Worst-Case Input
• A list in reversed order (e.g., [45, 40, 20, 15, 5]).
• Every pair of adjacent elements must be swapped.

2.3 Derivation of Time Complexity
• Total number of comparisons (the same in every case):
  C(n) = Σᵢ₌₀ⁿ⁻² Σⱼ₌₀ⁿ⁻²⁻ⁱ 1 = Σᵢ₌₀ⁿ⁻² (n − 1 − i) = n(n − 1)/2
• Total number of swaps in the worst case is also n(n − 1)/2.

Final Complexity:
  T(n) = n(n − 1)/2 ∈ O(n²)
Bubble Sort is inefficient for large inputs, making it only useful for small
datasets.

3. Applying Bubble Sort to the Given List: [45, 20, 40, 5, 15] (5 Marks)
Initial List: [45, 20, 40, 5, 15]
Pass | Comparisons & Swaps | Result after Pass
1 | (45,20), (45,40), (45,5), (45,15) | [20, 40, 5, 15, 45]
2 | (20,40), (40,5), (40,15) | [20, 5, 15, 40, 45]
3 | (20,5), (20,15) | [5, 15, 20, 40, 45]
4 | (5,15) | [5, 15, 20, 40, 45]

Final Sorted List: [5, 15, 20, 40, 45]
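The trace above can be verified with a Python sketch of the BubbleSort pseudocode:

```python
# A Python sketch of the BubbleSort pseudocode above, checked against the
# worked example [45, 20, 40, 5, 15].

def bubble_sort(a):
    """Sort list a in place using n(n-1)/2 adjacent comparisons."""
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j + 1] < a[j]:          # Swap an adjacent out-of-order pair
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([45, 20, 40, 5, 15]))  # [5, 15, 20, 40, 45]
```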

7. Design an algorithm for performing sequential (linear) search and compute its best-case, worst-case and average-case efficiency.
Ans:
Algorithm for Sequential Search
Sequential (Linear) Search is a brute force method that checks each element of
a list one by one until the target value is found or the list is exhausted.
ALGORITHM SequentialSearch(A[0..n − 1], K)
// Searches for a given value in a given array by sequential search

// INPUT:
// A[] → Array of `n` elements
// K → Search key

// OUTPUT:
// Index of `K` in `A`, or `-1` if not found

i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n then return i
else return −1
Working Mechanism:
1. Start from the first element and check each item sequentially.
2. If a match is found, return the index.
3. If the end of the list is reached, return -1 (not found).
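A minimal Python sketch of the pseudocode above:

```python
# A Python sketch of the SequentialSearch pseudocode above.

def sequential_search(a, key):
    """Return the index of key in a, or -1 if not found."""
    i = 0
    while i < len(a) and a[i] != key:   # Basic operation: one comparison per step
        i += 1
    return i if i < len(a) else -1

print(sequential_search([4, 8, 15, 16], 15))  # 2
print(sequential_search([4, 8, 15, 16], 99))  # -1
```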

2. Efficiency Analysis (4 Marks)

2.1 Best-Case Efficiency (O(1))
• Occurs when the target element is at the first position (A[0] = K).
• Only one comparison is needed.

2.2 Worst-Case Efficiency (O(n))
• Occurs when the target is at the last position (A[n − 1] = K) or not in the list.
• The loop runs n times, leading to T(n) = O(n).

2.3 Average-Case Efficiency (O(n))
• If the target is present, it may be at any of the n positions with equal probability, so the average number of comparisons is:
  C_avg(n) = (1 + 2 + ... + n) / n = (n + 1) / 2
• If the target is not found, n comparisons are made.
• Either way, the average case grows linearly: O(n).


8. Explain the concept of divide and conquer. Design an algorithm for
merge sort and derive its time complexity.
Ans:
1. Concept of Divide and Conquer (5 Marks)
The Divide and Conquer strategy is a problem-solving approach that:
1. Divides the problem into smaller subproblems.
2. Conquers by solving each subproblem recursively.
3. Combines the results to form the final solution.
1.1 Characteristics of Divide and Conquer
✅ Recursively divides the problem into subproblems of smaller sizes.
✅ Solves the base cases directly (when the problem is small enough).
✅ Combines solutions to solve the original problem efficiently.
1.2 Applications of Divide and Conquer
 Sorting: Merge Sort, Quick Sort

 Searching: Binary Search

 Mathematics: Fast Exponentiation, Karatsuba Multiplication

2. Merge Sort Algorithm (5 Marks)


Merge Sort follows the Divide and Conquer approach by recursively dividing an
array into two halves, sorting each half, and then merging them.
ALGORITHM MergeSort(A, low, high)
// Sorts the subarray A[low..high] using merge sort

// INPUT:
// A[] → Array of elements
// low → Starting index
// high → Ending index

// OUTPUT:
// A[] → Sorted array

if low < high then
    mid ← ⌊(low + high) / 2⌋
    MergeSort(A, low, mid)
    MergeSort(A, mid + 1, high)
    Merge(A, low, mid, high)

ALGORITHM Merge(A, low, mid, high)
// Merges two sorted halves A[low..mid] and A[mid+1..high]

n1 ← mid − low + 1
n2 ← high − mid

create arrays L[0..n1 − 1] and R[0..n2 − 1]

for i ← 0 to n1 − 1 do
    L[i] ← A[low + i]
for j ← 0 to n2 − 1 do
    R[j] ← A[mid + 1 + j]

i ← 0, j ← 0, k ← low

while i < n1 and j < n2 do
    if L[i] ≤ R[j] then
        A[k] ← L[i]
        i ← i + 1
    else
        A[k] ← R[j]
        j ← j + 1
    k ← k + 1

while i < n1 do
    A[k] ← L[i]
    i ← i + 1
    k ← k + 1

while j < n2 do
    A[k] ← R[j]
    j ← j + 1
    k ← k + 1
2.2 Working of Merge Sort
1. Divide: Recursively split the array into two halves.
2. Conquer: Sort both halves using recursive calls.
3. Combine: Merge the sorted halves into a single sorted array.
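The MergeSort and Merge pseudocode above can be sketched in Python as follows (the sample data list is my own illustration):

```python
# A Python sketch of the MergeSort and Merge pseudocode above.

def merge_sort(a, low, high):
    """Sort a[low..high] in place by divide and conquer."""
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid)        # Divide: sort the left half
        merge_sort(a, mid + 1, high)   # Divide: sort the right half
        merge(a, low, mid, high)       # Combine: merge the sorted halves

def merge(a, low, mid, high):
    """Merge the sorted runs a[low..mid] and a[mid+1..high]."""
    left = a[low:mid + 1]
    right = a[mid + 1:high + 1]
    i = j = 0
    k = low
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = right[j]; j += 1
        k += 1
    while i < len(left):               # Copy any leftover left-run elements
        a[k] = left[i]; i += 1; k += 1
    while j < len(right):              # Copy any leftover right-run elements
        a[k] = right[j]; j += 1; k += 1

data = [38, 27, 43, 3, 9, 82, 10]
merge_sort(data, 0, len(data) - 1)
print(data)  # [3, 9, 10, 27, 38, 43, 82]
```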

3. Time Complexity Derivation (5 Marks)

3.1 Recurrence Relation
Let T(n) represent the time complexity of Merge Sort:
  T(n) = 2T(n/2) + O(n)
• 2T(n/2): two recursive calls on subarrays of size n/2.
• O(n): time to merge the two sorted halves.

3.2 Solving the Recurrence Relation
Expanding the recurrence:
  T(n) = 2T(n/2) + n
       = 4T(n/4) + 2n
       = 8T(n/8) + 3n
       = ...
       = 2ᵏT(n/2ᵏ) + kn
Continuing until n/2ᵏ = 1, i.e., k = log₂ n (base case), we get:
  T(n) = nT(1) + n log₂ n ∈ O(n log n)
3.3 Complexity Analysis


Case Complexity

Best Case O(n log n)

Worst Case O(n log n)

Average Case O(n log n)

9. Design an insertion sort algorithm and obtain its time complexity. Apply insertion sort to these elements: 89, 45, 68, 90, 29, 34, 17.
Ans:
1. Insertion Sort Algorithm (5 Marks)
Insertion Sort is a simple sorting algorithm that builds the sorted array one
element at a time by placing each element in its correct position.
ALGORITHM InsertionSort(A[0..n − 1])
// Sorts an array A[0..n − 1] using insertion sort

// INPUT:
// A[] → Array of `n` elements

// OUTPUT:
// A[] → Sorted array in non-decreasing order

for i ← 1 to n − 1 do
    key ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > key do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← key
Working Mechanism:
1. Start from the second element (A[1]) and compare it with previous
elements.
2. Shift larger elements one position right to make space for the key
element.
3. Insert the key element in the correct position.
4. Repeat the process for all elements until the array is sorted.

2. Time Complexity Analysis (5 Marks)

2.1 Best-Case Complexity (O(n))
• Occurs when the array is already sorted.
• Only one comparison per element → O(n).

2.2 Worst-Case Complexity (O(n²))
• Occurs when the array is in reverse order (each element must be shifted past all previously sorted elements).
• Total comparisons and shifts:
  C(n) = Σᵢ₌₁ⁿ⁻¹ i = n(n − 1)/2 ∈ O(n²)

2.3 Average-Case Complexity (O(n²))
• A randomly ordered list requires shifting past about half the sorted prefix for each element.
• The time complexity remains O(n²) on average.

3. Applying Insertion Sort on the Given List: [89, 45, 68, 90, 29, 34, 17] (5 Marks)
Initial List: [89, 45, 68, 90, 29, 34, 17]
Pass | Key | Shifts | Result after Pass
1 | 45 | 89 shifts right | [45, 89, 68, 90, 29, 34, 17]
2 | 68 | 89 shifts right | [45, 68, 89, 90, 29, 34, 17]
3 | 90 | No shift | [45, 68, 89, 90, 29, 34, 17]
4 | 29 | 90, 89, 68, 45 shift right | [29, 45, 68, 89, 90, 34, 17]
5 | 34 | 90, 89, 68, 45 shift right | [29, 34, 45, 68, 89, 90, 17]
6 | 17 | 90, 89, 68, 45, 34, 29 shift right | [17, 29, 34, 45, 68, 89, 90]
Final Sorted List: [17, 29, 34, 45, 68, 89, 90]
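A Python sketch of the InsertionSort pseudocode, checked against the worked example:

```python
# A Python sketch of the InsertionSort pseudocode above.

def insertion_sort(a):
    """Sort list a in place by inserting each element into the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # Shift larger elements one step right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # Insert key at its correct position
    return a

print(insertion_sort([89, 45, 68, 90, 29, 34, 17]))  # [17, 29, 34, 45, 68, 89, 90]
```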
10. Design the quick sort algorithm. Apply quick sort to these elements: 25, 75, 40, 10, 20, 5, 15.
Ans:
1. Quick Sort Algorithm (5 Marks)
Quick Sort is a Divide and Conquer sorting algorithm that:
1. Picks a pivot element from the array.
2. Partitions the array such that elements smaller than the pivot go left and
larger elements go right.
3. Recursively sorts the left and right subarrays.
ALGORITHM QuickSort(A, low, high)
// Sorts the subarray A[low..high] using quick sort

// INPUT:
// A[] → Array of elements
// low → Starting index
// high → Ending index

// OUTPUT:
// A[] → Sorted array

if low < high then
    pivotIndex ← Partition(A, low, high)
    QuickSort(A, low, pivotIndex − 1)
    QuickSort(A, pivotIndex + 1, high)

ALGORITHM Partition(A, low, high)
// Partitions A[low..high] around the pivot A[high] (Lomuto scheme)

pivot ← A[high]
i ← low − 1
for j ← low to high − 1 do
    if A[j] < pivot then
        i ← i + 1
        swap(A[i], A[j])
swap(A[i + 1], A[high])
return i + 1

Working Mechanism:
• Selects a pivot (here, the last element).
• Rearranges elements so smaller values move to the left of the pivot and larger values to the right.
• Recursively sorts both partitions.

2. Time Complexity Analysis (5 Marks)


2.1 Best-Case Complexity (O(n log n))
 Occurs when the pivot divides the array evenly.

 Results in log n recursive calls with O(n) work per level.

 Total time complexity: O(n log n).

2.2 Worst-Case Complexity (O(n²))


 Occurs when the smallest/largest element is always chosen as pivot.

 Leads to n recursive calls, each with O(n) work.

 Total time complexity: O(n²).

2.3 Average-Case Complexity (O(n log n))


 For randomly chosen pivots, Quick Sort performs O(n log n) in
expectation.

3. Applying Quick Sort on the Given List: [25, 75, 40, 10, 20, 5, 15] (5 Marks)
Step-by-Step Execution (pivot = last element of each subarray)
Initial List: [25, 75, 40, 10, 20, 5, 15]
Step | Subarray | Pivot | After Partition | Outcome
1 | [25, 75, 40, 10, 20, 5, 15] | 15 | [10, 5, 15, 25, 20, 75, 40] | 15 placed; left [10, 5], right [25, 20, 75, 40]
2 | [10, 5] | 5 | [5, 10] | 5 placed; [10] already sorted
3 | [25, 20, 75, 40] | 40 | [25, 20, 40, 75] | 40 placed; left [25, 20], right [75]
4 | [25, 20] | 20 | [20, 25] | 20 placed
Final Sorted List: [5, 10, 15, 20, 25, 40, 75]
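A Python sketch of the QuickSort and Partition pseudocode, checked against the worked example:

```python
# A Python sketch of the QuickSort and Partition (last-element pivot) pseudocode.

def quick_sort(a, low, high):
    """Sort a[low..high] in place around last-element pivots."""
    if low < high:
        p = partition(a, low, high)
        quick_sort(a, low, p - 1)
        quick_sort(a, p + 1, high)

def partition(a, low, high):
    """Place the pivot a[high] in its final position and return that index."""
    pivot = a[high]
    i = low - 1
    for j in range(low, high):
        if a[j] < pivot:               # Smaller elements move left of the pivot
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]
    return i + 1

data = [25, 75, 40, 10, 20, 5, 15]
quick_sort(data, 0, len(data) - 1)
print(data)  # [5, 10, 15, 20, 25, 40, 75]
```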

11. Explain Strassen’s matrix multiplication and derive its time complexity.
Ans:
1. Strassen’s Matrix Multiplication Algorithm
Strassen’s Matrix Multiplication is an efficient Divide and Conquer algorithm
that multiplies two n × n matrices faster than the conventional method.
1.1 Concept of Strassen’s Algorithm
• The standard method requires O(n³) operations for multiplying two n × n matrices.
• Strassen’s algorithm reduces the number of multiplications by decomposing the matrices into smaller parts.
• It uses 7 recursive multiplications instead of 8 (as in the conventional divide-and-conquer method).

1.2 Matrix Partitioning
Given two n × n matrices A and B, each is divided into four (n/2) × (n/2) submatrices:
  A = [ A₁₁ A₁₂ ; A₂₁ A₂₂ ],  B = [ B₁₁ B₁₂ ; B₂₁ B₂₂ ]

Strassen’s algorithm calculates 7 products (P₁ to P₇) using recursive multiplication:
  P₁ = (A₁₁ + A₂₂)(B₁₁ + B₂₂)
  P₂ = (A₂₁ + A₂₂)B₁₁
  P₃ = A₁₁(B₁₂ − B₂₂)
  P₄ = A₂₂(B₂₁ − B₁₁)
  P₅ = (A₁₁ + A₁₂)B₂₂
  P₆ = (A₂₁ − A₁₁)(B₁₁ + B₁₂)
  P₇ = (A₁₂ − A₂₂)(B₂₁ + B₂₂)
Using these, the resultant matrix C is computed as:
  C₁₁ = P₁ + P₄ − P₅ + P₇
  C₁₂ = P₃ + P₅
  C₂₁ = P₂ + P₄
  C₂₂ = P₁ − P₂ + P₃ + P₆

2. Time Complexity Derivation (5 Marks)

2.1 Recurrence Relation
Let T(n) be the time complexity of Strassen’s algorithm:
  T(n) = 7T(n/2) + O(n²)
where:
• 7T(n/2): recursively multiplying 7 submatrices of size (n/2) × (n/2).
• O(n²): time for adding and subtracting submatrices.

2.2 Solving the Recurrence Using the Master Theorem
The recurrence follows the form:
  T(n) = aT(n/b) + O(n^d)
where:
• a = 7, b = 2, d = 2.
• Applying the Master Theorem, we compare log_b a = log₂ 7 ≈ 2.81 with d = 2.
Since log₂ 7 > d, the complexity is:
  T(n) ∈ O(n^(log₂ 7)) ≈ O(n^2.81)

2.3 Comparison with Conventional Matrix Multiplication
Algorithm | Multiplications per Recursion | Additions/Subtractions | Time Complexity
Standard Method | 8 | O(n²) | O(n³)
Strassen’s Method | 7 | O(n²) | O(n^2.81)
Thus, Strassen’s algorithm is asymptotically faster than conventional matrix
multiplication for large n.
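As a minimal sketch (assuming square matrices whose order is a power of two; the helper names `add`, `sub`, and `strassen` are my own, not from the text), the seven-product scheme looks like this:

```python
# A minimal sketch of Strassen's algorithm for square matrices whose size is
# a power of two, using plain Python lists (illustrative, not optimized).

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Split each matrix into four m x m quadrants
    A11 = [row[:m] for row in A[:m]]; A12 = [row[m:] for row in A[:m]]
    A21 = [row[:m] for row in A[m:]]; A22 = [row[m:] for row in A[m:]]
    B11 = [row[:m] for row in B[:m]]; B12 = [row[m:] for row in B[:m]]
    B21 = [row[:m] for row in B[m:]]; B22 = [row[m:] for row in B[m:]]
    # The seven recursive products
    P1 = strassen(add(A11, A22), add(B11, B22))
    P2 = strassen(add(A21, A22), B11)
    P3 = strassen(A11, sub(B12, B22))
    P4 = strassen(A22, sub(B21, B11))
    P5 = strassen(add(A11, A12), B22)
    P6 = strassen(sub(A21, A11), add(B11, B12))
    P7 = strassen(sub(A12, A22), add(B21, B22))
    # Combine into the result quadrants
    C11 = add(sub(add(P1, P4), P5), P7)
    C12 = add(P3, P5)
    C21 = add(P2, P4)
    C22 = add(add(sub(P1, P2), P3), P6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen(A, B))  # [[19, 22], [43, 50]]
```

Each call performs 7 recursive multiplications and a constant number of O(n²) additions, matching the recurrence T(n) = 7T(n/2) + O(n²) derived above.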

12. What is a topological sequence? Explain the DFS and source removal methods, with a suitable example, for finding a topological sequence.
Ans:
1. Topological Sorting (5 Marks)
A topological sequence (or topological ordering) of a Directed Acyclic Graph
(DAG) is a linear ordering of its vertices such that for every directed edge (u →
v), vertex u appears before vertex v in the ordering.
1.1 Properties of Topological Sorting
✅ Only applicable to DAGs (graphs without cycles).
✅ Multiple valid topological orderings may exist.
✅ Used in scheduling tasks, dependency resolution, and precedence
constraints.
1.2 Example DAG & Topological Ordering
Consider a directed graph (edges: A→B, A→C, B→D, B→E, C→E):
A → B → D
↓    ↓
C → E
A valid topological ordering of the vertices:
A, C, B, E, D

2. Topological Sorting Using DFS (5 Marks)


2.1 Explanation
 DFS-based Topological Sorting follows a depth-first traversal approach.

 When visiting a node, all its adjacent vertices are recursively visited first.
 The node is then pushed onto a stack, ensuring it appears after its
dependencies in the final ordering.
2.2 Steps for DFS-based Topological Sorting
1. Start DFS traversal from any unvisited vertex.
2. Visit all adjacent vertices recursively.
3. Push the vertex to a stack once all adjacent vertices are visited.
4. Repeat for all vertices until all are visited.
5. Pop vertices from the stack to get a valid topological order.
2.3 Example
For the given DAG:
A→B→D
↓ ↓
C→E
DFS Visit Order:
1. Start from A → Visit B → Visit D → Push D to stack.
2. Visit E from B → Push E to stack.
3. Push B to stack.
4. Visit C → Push C to stack.
5. Push A to stack.
Final Topological Order (from stack):
A, C, B, E, D
2.4 Time Complexity
 DFS traversal takes O(V + E) (where V is vertices and E is edges).

 Stack operations take O(V).

 Overall complexity: O(V + E).
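The DFS-based method above can be sketched in Python for the example DAG (the adjacency lists below encode the edges A→B, A→C, B→D, B→E, C→E):

```python
# A Python sketch of DFS-based topological sorting for the example DAG.

def topological_sort_dfs(graph):
    """Return vertices in topological order using DFS and a stack."""
    visited, stack = set(), []

    def dfs(v):
        visited.add(v)
        for w in graph.get(v, []):     # Visit all adjacent vertices first
            if w not in visited:
                dfs(w)
        stack.append(v)                # Push after all dependencies are done

    for v in graph:
        if v not in visited:
            dfs(v)
    return stack[::-1]                 # Popping the stack gives the order

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['E'], 'D': [], 'E': []}
print(topological_sort_dfs(graph))  # ['A', 'C', 'B', 'E', 'D']
```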

3. Source Removal Method (5 Marks)


3.1 Explanation
• The Source Removal method works by repeatedly removing vertices with in-degree = 0 and updating the graph.
• A queue is used to process vertices in the correct order.

3.2 Steps for Source Removal Method


1. Compute in-degree (number of incoming edges) for each vertex.
2. Find all vertices with in-degree = 0 and add them to a queue.
3. Process each vertex by removing it and reducing the in-degree of its
neighbors.
4. If a neighbor's in-degree becomes 0, add it to the queue.
5. Repeat until all vertices are processed.
3.3 Example
For the DAG:
A→B→D
↓ ↓
C→E
Step-by-step execution:
1. Initial In-Degrees: A: 0, B: 1, C: 1, D: 1, E: 2
2. Start with vertex A (in-degree 0) and remove it.
o Reduce in-degree of B and C.

o Updated in-degrees: B: 0, C: 0, D: 1, E: 2

3. Remove B and C, reduce in-degrees of their neighbors (D and E).


o Updated in-degrees: D: 0, E: 1

4. Remove D, then remove E.


Final Topological Order:
A, C, B, D, E
3.4 Time Complexity
 Computing in-degree: O(V + E).

 Queue operations: O(V + E).

 Overall complexity: O(V + E).
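The source removal method (also known as Kahn's algorithm) can be sketched in Python for the same DAG:

```python
# A Python sketch of the source-removal (Kahn's) method for the same DAG.

from collections import deque

def topological_sort_kahn(graph):
    """Return a topological order by repeatedly removing in-degree-0 vertices."""
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    queue = deque(v for v in graph if indegree[v] == 0)
    order = []
    while queue:
        v = queue.popleft()            # Remove a current source
        order.append(v)
        for w in graph[v]:             # Its neighbours lose one incoming edge
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    return order

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['E'], 'D': [], 'E': []}
print(topological_sort_kahn(graph))  # ['A', 'B', 'C', 'D', 'E']
```

Note that this run yields A, B, C, D, E while the worked example above gives A, C, B, D, E; both are valid topological orders because B and C become sources simultaneously.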

13. Using the exhaustive search technique, solve the below instance of the knapsack problem.
Item:     1   2   3   4
Weight:   2   1   3   5
Value:    12  10  20  5
Capacity: 5
Ans:
1. Understanding the Knapsack Problem (5 Marks)
The 0/1 Knapsack problem is an optimization problem where:
 Given n items, each with weight w[i] and value v[i].
 A knapsack can hold items up to a maximum capacity W.
 The objective is to maximize the total value while staying within the

weight limit.
1.1 Given Instance of Knapsack Problem
Item 1 2 3 4

Weight (w[i]) 2 1 3 5

Value (v[i]) 12 10 20 5

Knapsack Capacity (W) 5

2. Exhaustive Search Approach (5 Marks)


The Exhaustive Search method generates all possible subsets of items and
selects the one with maximum value within the given weight limit.
2.1 Possible Item Combinations and Their Values
Subset | Weight | Value | Valid?
{} (empty) | 0 | 0 | ✅
{1} | 2 | 12 | ✅
{2} | 1 | 10 | ✅
{3} | 3 | 20 | ✅
{4} | 5 | 5 | ✅
{1, 2} | 3 | 22 | ✅
{1, 3} | 5 | 32 | ✅ (Best Choice)
{1, 4} | 7 | 17 | ❌ (Exceeds Capacity)
{2, 3} | 4 | 30 | ✅
{2, 4} | 6 | 15 | ❌ (Exceeds Capacity)
{3, 4} | 8 | 25 | ❌ (Exceeds Capacity)
{1, 2, 3} | 6 | 42 | ❌ (Exceeds Capacity)
{1, 2, 4} | 8 | 27 | ❌ (Exceeds Capacity)
{1, 3, 4} | 10 | 37 | ❌ (Exceeds Capacity)
{2, 3, 4} | 9 | 35 | ❌ (Exceeds Capacity)
{1, 2, 3, 4} | 11 | 47 | ❌ (Exceeds Capacity)

2.2 Optimal Solution
✅ The best subset is {1, 3} (Items 1 & 3)
✅ Total Weight = 5 (within capacity)
✅ Total Value = 32 (maximum)

3. Time Complexity Analysis (5 Marks)

Since every subset of n items is considered, the time complexity is:
  T(n) = O(2ⁿ)
• Exponential growth: not practical for large n.
• Useful for small problems like this instance (n = 4).
• More efficient approaches: greedy method, dynamic programming.
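The exhaustive search for this instance can be sketched in Python by enumerating all 2ⁿ subsets:

```python
# A Python sketch of the exhaustive-search solution for the given knapsack
# instance, enumerating all 2^n subsets.

from itertools import combinations

weights = {1: 2, 2: 1, 3: 3, 4: 5}
values  = {1: 12, 2: 10, 3: 20, 4: 5}
capacity = 5

best_value, best_subset = 0, ()
for r in range(len(weights) + 1):
    for subset in combinations(weights, r):        # Every subset of items
        w = sum(weights[i] for i in subset)
        v = sum(values[i] for i in subset)
        if w <= capacity and v > best_value:       # Keep the best feasible one
            best_value, best_subset = v, subset

print(best_subset, best_value)  # (1, 3) 32
```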

14. What is string matching? With the algorithm, derive the worst-case efficiency.
Ans:
String matching is the process of finding occurrences of a pattern (P) of length
m in a text (T) of length n.
1.1 Definition
Given:
 A text string T[0...n-1] of length n.

 A pattern string P[0...m-1] of length m.

 Find the index i where T[i...i+m-1] matches P.


1.2 Example
Text:
T = "NOBODY_NOTICED_HIM"
Pattern:
P = "NOT"
 The pattern "NOT" occurs at index 7 in T.

2. Brute-Force String Matching Algorithm (5 Marks)


The Brute-Force (Naïve) String Matching method checks every possible
alignment of the pattern in the text.
2.1 Brute-Force String Matching Algorithm
ALGORITHM BruteForceStringMatch(T[0..n − 1], P[0..m − 1])
// Implements brute-force string matching

// INPUT: An array T[0..n − 1] of n characters (text)
//        An array P[0..m − 1] of m characters (pattern)
// OUTPUT: The index of the first character in the text that starts a
//         matching substring, or −1 if not found

for i ← 0 to n − m do
    j ← 0
    while j < m and P[j] = T[i + j] do
        j ← j + 1
    if j = m then return i
return −1
2.2 Working Mechanism
1. Align P with T at position 0.
2. Compare characters one by one.
3. If a mismatch occurs, shift P one position right.
4. Repeat until P is found or T is exhausted.
2.3 Example Execution
Text:
T = "NOBODY_NOTICED_HIM"
Pattern:
P = "NOT"
• Step 1: Compare "NOT" with "NOB" → mismatch at the third character.
• Step 2: Shift right → compare "NOT" with "OBO" → mismatch at the first character.
• Step 3: Continue shifting until "NOT" matches "NOT" at index 7 → match found.
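A Python sketch of the brute-force matcher, checked against the example:

```python
# A Python sketch of the brute-force string matching algorithm above.

def brute_force_match(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        j = 0
        while j < m and pattern[j] == text[i + j]:
            j += 1
        if j == m:                     # All m characters matched
            return i
    return -1

print(brute_force_match("NOBODY_NOTICED_HIM", "NOT"))  # 7
```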

3. Worst-Case Efficiency Analysis (5 Marks)


3.1 Worst-Case Scenario
Occurs when:
1. Pattern and text share common prefixes but mismatch at the last
character.
2. Example:
o T = "AAAAAAAAAAAB" (Repeated 'A's followed by 'B').

o P = "AAAA".

o Here, for each shift, the algorithm checks all m characters before

failing.
3.2 Time Complexity Analysis
For each of the (n − m + 1) possible shifts, the worst case requires all m comparisons before failing:
  C_worst = (n − m + 1) · m ∈ O(nm)

Thus, the worst-case complexity of brute-force string matching is O(nm).


3.3 Average Case Complexity
 For random text: O(n).

 For worst-case pattern placements: O(nm).

15. Solve the following recurrence relations:
(i) T(n) = 2T(n/2) + 1  (ii) T(n) = T(n − 1) + n
Ans:
(i) T(n) = 2T(n/2) + 1
Expanding (assuming n = 2ᵏ and T(1) = 1):
  T(n) = 2T(n/2) + 1
       = 4T(n/4) + 3
       = 8T(n/8) + 7
       = ...
       = 2ᵏT(1) + (2ᵏ − 1)
       = 2n − 1
So T(n) ∈ O(n). The Master Theorem agrees: a = 2, b = 2, d = 0, and since log₂ 2 = 1 > 0, T(n) ∈ O(n^(log₂ 2)) = O(n).

(ii) T(n) = T(n − 1) + n
Expanding (assuming T(0) = 0):
  T(n) = T(n − 1) + n
       = T(n − 2) + (n − 1) + n
       = ...
       = 1 + 2 + ... + n
       = n(n + 1)/2
So T(n) ∈ O(n²).