Assignment 3 DSA

Ans 1-

Index Sequential Search:

Index Sequential Search is a searching technique that combines elements of both sequential and
indexed searching. It uses an index table that holds pointers to records in a sorted dataset, allowing
for faster access. When searching for an element, the algorithm first checks the index to identify the
relevant block, then performs a sequential search within that block. This reduces the number of
comparisons needed, making it more efficient than basic sequential search, especially in large
datasets where direct access would be time-consuming.
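The idea above can be sketched in Python. This is an illustrative sketch only: the block size, the index layout (first key of each block plus its starting position), and the function names are assumptions, not part of the assignment.

```python
# Indexed sequential search sketch: an index table of (first_key, position)
# pairs points into a sorted list; we pick a block via the index, then scan it.

BLOCK = 3  # illustrative block size

def build_index(data, block_size):
    """Index table holding the first key of each block and its position."""
    return [(data[i], i) for i in range(0, len(data), block_size)]

def index_sequential_search(data, index, target):
    """Use the index to find the candidate block, then scan it sequentially."""
    start = 0
    for key, pos in index:
        if key > target:          # target cannot be in this or later blocks
            break
        start = pos               # remember the last block that could hold it
    for i in range(start, min(start + BLOCK, len(data))):
        if data[i] == target:
            return i
    return -1

data = [2, 4, 7, 9, 15, 20, 25, 30, 35, 40]
index = build_index(data, BLOCK)
print(index_sequential_search(data, index, 20))  # 5
```

Searching for 20 only consults the small index and then scans one block of three elements, instead of scanning the whole list.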

Ans 2-

Steps of Binary Search to Find Element 9

1. Initial Setup:

Array: [2, 4, 7, 9, 15, 20, 25, 30, 35, 40]

Left pointer (low) = 0

Right pointer (high) = 9 (length of array - 1)

2. First Step:

Calculate middle index: mid = (0 + 9) / 2 = 4 (integer division)

Value at mid: 15

Since 9 < 15, adjust right pointer: high = mid - 1 = 3

3. Second Step:

Left pointer (low) = 0, Right pointer (high) = 3

New middle index: mid = (0 + 3) / 2 = 1

Value at mid: 4

Since 9 > 4, adjust left pointer: low = mid + 1 = 2

4. Third Step:

Left pointer (low) = 2, Right pointer (high) = 3

New middle index: mid = (2 + 3) / 2 = 2

Value at mid: 7

Since 9 > 7, adjust left pointer: low = mid + 1 = 3


5. Fourth Step:

Left pointer (low) = 3, Right pointer (high) = 3

New middle index: mid = (3 + 3) / 2 = 3

Value at mid: 9

Found the element 9 at index 3.

The binary search process quickly narrows down the potential location of the target element by
repeatedly dividing the search interval in half, making it efficient for sorted arrays.
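The steps above can be expressed as a short Python function (a standard iterative binary search; the function name is my own):

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted array; returns index or -1."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2   # middle of the current interval
        if arr[mid] == target:
            return mid
        elif target < arr[mid]:
            high = mid - 1        # discard the right half
        else:
            low = mid + 1         # discard the left half
    return -1

arr = [2, 4, 7, 9, 15, 20, 25, 30, 35, 40]
print(binary_search(arr, 9))  # 3
```

Running it on the array above visits exactly the middle indices traced in the steps: mid = 4, 1, 2, then 3.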

Ans 3-

Quadratic Probing Insertion

Hash Table Size: 7

Keys to Insert: 50, 700, 76, 85, 92, 73

Hash Function

To find the initial index, we use the modulo operation:

Index = Key mod Table Size

Insertion Steps

1. Insert 50:

o Index: 50 mod 7 = 1

o Inserted at index 1.

Hash Table: [_, 50, _, _, _, _, _]

2. Insert 700:

o Index: 700 mod 7 = 0

o Inserted at index 0.

Hash Table: [700, 50, _, _, _, _, _]

3. Insert 76:

o Index: 76 mod 7 = 6

o Inserted at index 6.

Hash Table: [700, 50, _, _, _, _, 76]

4. Insert 85:
o Index: 85 mod 7 = 1

o Collision at index 1 (occupied by 50).

o Apply quadratic probing:

▪ i = 1

▪ New index: (1 + 1²) mod 7 = 2

o Inserted at index 2.

Hash Table: [700, 50, 85, _, _, _, 76]

5. Insert 92:

o Index: 92 mod 7 = 1

o Collision at index 1 (occupied by 50).

o Apply quadratic probing:

▪ i = 1

▪ New index: (1 + 1²) mod 7 = 2 (occupied by 85)

▪ Next probe: (1 + 2²) mod 7 = 5

o Inserted at index 5.

Hash Table: [700, 50, 85, _, _, 92, 76]

6. Insert 73:

o Index: 73 mod 7 = 3

o Index 3 is empty, so no probing is needed.

o Inserted at index 3.

Final Hash Table: [700, 50, 85, 73, _, 92, 76]

Summary

The quadratic probing process efficiently resolves collisions by checking subsequent indices based on
the square of the number of attempts. This method helps maintain performance in a hash table by
reducing clustering compared to linear probing.
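The insertion procedure above can be sketched in Python. The probe sequence (h + i²) mod size matches the one used in the steps; the helper name is illustrative.

```python
def quadratic_insert(table, key):
    """Insert key using quadratic probing: try (h + i^2) mod size for i = 0, 1, 2, ...
    Note: probing i = 0..size-1 may not visit every slot, a known
    limitation of quadratic probing on some table sizes."""
    size = len(table)
    h = key % size
    for i in range(size):
        idx = (h + i * i) % size
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("no free slot found in probe sequence")

table = [None] * 7
for key in [50, 700, 76, 85, 92, 73]:
    quadratic_insert(table, key)
print(table)  # [700, 50, 85, 73, None, 92, 76]
```

Running it over the six keys reproduces the final table from the walkthrough, with None marking the empty slot.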
Ans 4-

Double Hashing

Double Hashing is an open addressing collision resolution technique used in hash tables. It employs
two hash functions to compute the index for storing and retrieving keys. When a collision occurs (i.e.,
when the calculated index is already occupied), double hashing uses the second hash function to
determine the next index to try.

How It Works

1. Hash Functions:

o Let h1(key) be the primary hash function that computes the initial index.

o Let h2(key) be the secondary hash function used to determine the step size when a collision occurs.

2. Insertion Process:

o Calculate the initial index: index = h1(key).

o If that index is occupied, compute the next index using the second hash function:
index = (index + h2(key)) mod table size

o Continue this process until an empty index is found.

Minimizing Collisions

Double hashing minimizes collisions by using a secondary hash function that creates a varied step
size for each key. This approach ensures that keys that collide will probe different slots, reducing
clustering:

• Diverse Step Sizes: The second hash function can provide a range of possible probe
sequences, helping to distribute keys more uniformly across the hash table.

• Less Clustering: Unlike linear or quadratic probing, which can create groups of filled slots,
double hashing minimizes clustering by spreading out keys over the hash table.

Ans 5-

Quick Sort

Quick Sort is a divide-and-conquer sorting algorithm that works by selecting a "pivot" element from
the array and partitioning the other elements into two sub-arrays: those less than the pivot and
those greater than the pivot. The sub-arrays are then sorted recursively.

Working of Quick Sort

1. Choose a Pivot: Select an element from the array as the pivot (commonly the last element).

2. Partitioning:
o Rearrange the array so that elements less than the pivot come before it and those
greater come after it.

o Return the final position of the pivot.

3. Recursively Apply:

o Recursively apply the same steps to the sub-arrays formed by splitting at the pivot.

Example

Let's sort the array [10, 7, 8, 9, 1, 5] using Quick Sort:

1. Initial Array: [10, 7, 8, 9, 1, 5]

o Choose pivot: 5.

2. Partitioning:

o Rearranging gives: [1, 5, 8, 9, 10, 7]

o Pivot 5 is now at index 1.

3. Recursive Calls:

o Left sub-array: [1] (already sorted).

o Right sub-array: [8, 9, 10, 7]

▪ Choose pivot: 7.

▪ Rearranging gives: [7, 9, 10, 8] (pivot 7 is now at index 2 of the full array).

o Recursively sort [9, 10, 8]: pivot 8 moves to the front, and sorting the remaining [10, 9] yields [8, 9, 10].

4. Final Sorted Array: [1, 5, 7, 8, 9, 10]

Why Quick Sort is Faster

1. Average Case Time Complexity:

o Quick Sort has an average-case time complexity of O(n log n), making it efficient for large datasets.

2. In-place Sorting:

o It sorts the elements in place, requiring less memory than algorithms that use
additional data structures.

3. Locality of Reference:

o Quick Sort performs well with modern computer architectures due to better locality
of reference, which keeps frequently accessed data close together in memory.

4. Good on Average:

o Although its worst-case time complexity is O(n²) (when the pivot is consistently the smallest or largest element), this scenario is rare with good pivot selection strategies (like choosing a random pivot).
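The algorithm can be written compactly in Python using Lomuto partitioning with the last element as pivot, matching the pivot choice in the example (function names are my own):

```python
def partition(arr, low, high):
    """Lomuto partition: place arr[high] (the pivot) in its final position."""
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] < pivot:        # move smaller elements to the left side
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # drop pivot into place
    return i + 1

def quick_sort(arr, low=0, high=None):
    """In-place quick sort over arr[low..high]."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)   # sort elements left of the pivot
        quick_sort(arr, p + 1, high)  # sort elements right of the pivot

data = [10, 7, 8, 9, 1, 5]
quick_sort(data)
print(data)  # [1, 5, 7, 8, 9, 10]
```

The first call to partition on [10, 7, 8, 9, 1, 5] produces [1, 5, 8, 9, 10, 7] with the pivot 5 at index 1, exactly as in the trace above.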
Ans 6-

Merge Sort

Merge Sort is a divide-and-conquer algorithm that splits an array into halves, recursively sorts each
half, and then merges the sorted halves back together.

Given Array

Initial Array: [12, 11, 13, 5, 6, 7]

Steps of Merge Sort

1. Divide:

o Split the array into two halves:

▪ Left: [12, 11, 13]

▪ Right: [5, 6, 7]

2. Recursive Sort Left Half:

o Split [12, 11, 13]:

▪ Left: [12], Right: [11, 13]

o Sort [11, 13]:

▪ Split: [11], [13]

▪ Both are single-element arrays and are sorted.

o Merge [11] and [13]:

▪ Result: [11, 13]

o Now merge [12] and [11, 13]:

▪ Compare 12 with 11: 11 is smaller. Result: [11]

▪ Compare 12 with 13: 12 is smaller. Result: [11, 12]

▪ Append the remaining 13. Result: [11, 12, 13]

3. Recursive Sort Right Half:

o Split [5, 6, 7]:

▪ Left: [5], Right: [6, 7]

o Sort [6, 7]:

▪ Split: [6], [7] (both single-element arrays).

o Merge [6] and [7]:


▪ Result: [6, 7]

o Now merge [5] and [6, 7]:

▪ Compare 5 with 6: 5 is smaller. Result: [5]

▪ Append the remaining 6 and 7. Result: [5, 6, 7]

4. Merge Sorted Halves:

o Now merge the two sorted halves [11, 12, 13] and [5, 6, 7]:

▪ Compare 11 with 5: 5 is smaller.

▪ Result: [5]

▪ Compare 11 with 6: 6 is smaller.

▪ Result: [5, 6]

▪ Compare 11 with 7: 7 is smaller.

▪ Result: [5, 6, 7]

▪ Append the remaining 11, 12, 13. Result: [5, 6, 7, 11, 12, 13]

Final Sorted Array

The sorted array after performing Merge Sort is:

Final Result: [5, 6, 7, 11, 12, 13]

Summary

Merge Sort efficiently sorts the array by repeatedly dividing it into halves and merging the sorted halves back together. It has a time complexity of O(n log n), making it effective for large datasets.
