Unit 3
Linear Search
Binary Search
Linear Search
Linear search is a sequential searching algorithm: we start from one end and check every element of
the list until the desired element is found. It is the simplest search algorithm, and the figures below
illustrate the process.
Suppose we have an array and we have to find the item k = 1; the search will then proceed as
follows:
// Linear Search in C
#include <stdio.h>

int main()
{
    int array[100], search, c, n;

    printf("Enter number of elements in array\n");
    scanf("%d", &n);

    printf("Enter %d integer(s)\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    printf("Enter a number to search\n");
    scanf("%d", &search);

    for (c = 0; c < n; c++)
    {
        if (array[c] == search) /* If required element is found */
        {
            printf("%d is present at location %d.\n", search, c + 1);
            break;
        }
    }
    if (c == n)
        printf("%d isn't present in the array.\n", search);

    return 0;
}
Binary Search
Binary Search is a searching algorithm used in a sorted array by repeatedly dividing the search interval
in half. The idea of binary search is to use the information that the array is sorted and reduce the time
complexity to O(Log n). The given example will illustrate the Binary Search process in the following
figure. Suppose we have an array and we have to find an item x=4.
FIGURE SHOWS THE BINARY SEARCH
Binary Search Algorithm: The basic steps to perform Binary Search are:
Begin with an interval covering the whole array and compare the search key with its middle element.
If the search key is equal to the middle element, return the middle element's index.
If the search key is less than the middle element, narrow the interval to the lower half.
Otherwise, narrow it to the upper half.
Repeat from the second step until the value is found or the interval is empty.
Binary Search Algorithm can be implemented in the following two ways
1. Iterative Method
2. Recursive Method
1. Iterative Method
binarySearch(arr, x, low, high)
    while low <= high
        mid = low + (high - low) / 2
        if x == arr[mid]
            return mid
        else if x > arr[mid]   // x is on the right side
            low = mid + 1
        else                   // x is on the left side
            high = mid - 1
    return -1                  // x is not in the array
2. Recursive Method (The recursive method follows the divide and conquer approach)
binarySearch(arr, x, low, high)
    if low > high
        return -1              // x is not in the array
    else
        mid = low + (high - low) / 2
        if x == arr[mid]
            return mid
        else if x > arr[mid]   // x is on the right side
            return binarySearch(arr, x, mid + 1, high)
        else                   // x is on the left side
            return binarySearch(arr, x, low, mid - 1)
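Either version translates directly into C. A minimal sketch of the iterative method follows; the function name binary_search and the -1 "not found" convention are my own choices:

```c
/* Iterative binary search over a sorted array.
   Returns the index of x in arr[0..n-1], or -1 if x is absent. */
int binary_search(const int arr[], int n, int x)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; /* avoids overflow of low + high */
        if (arr[mid] == x)
            return mid;            /* found */
        else if (arr[mid] < x)
            low = mid + 1;         /* x is on the right side */
        else
            high = mid - 1;        /* x is on the left side */
    }
    return -1;                     /* interval is empty: not found */
}
```

Computing mid as low + (high - low) / 2 gives the same midpoint as (low + high) / 2 but cannot overflow for large indices.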
Divide and Conquer Algorithm
Divide-and-Conquer is an algorithmic design pattern: take a problem on a large input, break the
input into smaller pieces, solve the problem on each of the small pieces, and then merge the
piecewise solutions into a global solution. A divide-and-conquer algorithm is a strategy for
solving a large problem in three steps:
Divide the problem into a number of subproblems that are smaller instances of the same
problem.
Conquer the subproblems by solving them recursively. If they are small enough, solve the
subproblems as base cases.
Combine the solutions to the subproblems into the solution for the original problem.
In Indexed Sequential Search, a sorted index is kept in addition to the array.
Each element in the index points to a block of elements in the array or to another expanded index.
The index is searched first; it then guides the search within the array.
Note: Indexed Sequential Search may apply indexing multiple times, i.e., creating an index of an
index.
Bubble Sort:
Bubble sort is a sorting algorithm that compares two adjacent elements and swaps them until they are in
the intended order. It is a comparison-based algorithm in which each pair of adjacent elements is
compared and the elements are swapped if they are not in order.
Just like air bubbles in water rise up to the surface, the largest element of the array moves to the end
in each iteration. Therefore, it is called Bubble Sort. This algorithm is not suitable for large data sets,
as its average and worst-case complexity is O(n²), where n is the number of items.
Suppose we are trying to sort the elements in ascending order and the elements are as follows:
COMPARE THE ADJACENT ELEMENTS
In the above figure, starting from the first index, compare the first and the second elements. If the first
element is greater than the second element, they are swapped. Now, compare the second and the third
elements. Swap them if they are not in order. The above process goes on until the last element.
PUT THE LARGEST ELEMENT AT THE END
The same process in the above figure goes on for the remaining iterations. After each iteration, the
largest element among the unsorted elements is placed at the end. In each iteration, the comparison takes
place up to the last unsorted element.
The array is sorted when all the unsorted elements are placed at their correct positions. For this, we
repeat the same process until all the elements are sorted.
THE ARRAY IS SORTED IF ALL ELEMENTS ARE KEPT IN THE RIGHT ORDER
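The passes described above can be sketched as a short C function (the name bubble_sort is my own):

```c
/* Bubble sort in ascending order. After pass i, the i+1 largest
   elements occupy their final positions at the end of the array. */
void bubble_sort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        /* comparisons only run up to the last unsorted element */
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {      /* adjacent pair out of order */
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}
```

The shrinking bound n - 1 - i on the inner loop reflects the observation above: after each iteration, the comparison only needs to go up to the last unsorted element.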
Insertion Sort:
Insertion sort is a simple sorting algorithm that works similarly to the way you sort playing cards in your
hands. The array is virtually split into a sorted part and an unsorted part. Values from the unsorted part
are picked and placed in the correct position in the sorted part.
Although it is simple to use, it is not appropriate for large data sets, as the time complexity of insertion
sort in the average case and worst case is O(n²), where n is the number of items. Insertion sort is less
efficient than other sorting algorithms like heap sort, quick sort, merge sort, etc.
First Pass:
12 11 13 5 6
Here, 12 is greater than 11; they are not in ascending order, and 12 is not at its correct
position. Thus, swap 11 and 12.
So, for now, 11 is stored in a sorted sub-array.
11 12 13 5 6
Second Pass:
Now, move to the next two elements and compare them
11 12 13 5 6
Here, 13 is greater than 12; both elements are in ascending order, hence no swapping
occurs. 12 is also stored in the sorted sub-array, along with 11.
Third Pass:
Now, two elements are present in the sorted sub-array which are 11 and 12
Moving forward to the next two elements which are 13 and 5
11 12 13 5 6
Both 5 and 13 are not at their correct places, so swap them
11 12 5 13 6
After swapping, elements 12 and 5 are still not sorted, thus swap again
11 5 12 13 6
Here, again, 11 and 5 are not sorted, hence swap again
5 11 12 13 6
Now, 5 is at its correct position.
Fourth Pass:
Now, the elements which are present in the sorted sub-array are 5, 11 and 12
Moving to the next two elements 13 and 6
5 11 12 13 6
Clearly, they are not sorted, so swap them
5 11 12 6 13
Now, 6 is smaller than 12, hence, swap again
5 11 6 12 13
Here, swapping makes 11 and 6 unsorted again, hence swap once more
5 6 11 12 13
Finally, the array is completely sorted.
Now, the first two elements are sorted. Take the third element and compare it with the elements to its
left. Place it just behind the element smaller than it. If there is no element smaller than it, place it
at the beginning of the array.
PLACE 4 BEHIND 1
PLACE 3 BEHIND 1 AND THE ARRAY IS SORTED
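The passes traced above can be sketched in C. The shifting variant below is equivalent to the repeated swapping in the walkthrough, but moves each key in one sweep; the name insertion_sort is my own:

```c
/* Insertion sort in ascending order: arr[0..i-1] is the sorted part;
   each key from the unsorted part is shifted into its correct place. */
void insertion_sort(int arr[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];   /* shift larger elements one slot right */
            j--;
        }
        arr[j + 1] = key;          /* place key just behind the smaller element */
    }
}
```

Running it on the example array {12, 11, 13, 5, 6} yields {5, 6, 11, 12, 13}, matching the passes above.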
Selection Sort:
Selection sort works by taking the smallest element in an unsorted array and bringing it to the front. You'll go through
each item (from left to right) until you find the smallest one. The first item in the array is now sorted, while the rest of
the array is unsorted. This process continues till all elements are sorted.
The algorithm maintains two subarrays in a given array and sorts the list by the following steps:
Algorithm
procedure selectionSort(list)
   n : size of list
   for i = 1 to n - 1
      min = i
      for j = i + 1 to n
         if list[j] < list[min] then
            min = j
         end if
      end for
      if min != i then
         swap list[i] and list[min]
      end if
   end for
end procedure
Let's assume that the first element of the array is the minimum; then we can start our sorting.
Compare the minimum with the second element. If the second element is smaller than the minimum,
assign the second element as the minimum. Compare the minimum with the third element. Again, if
the third element is smaller, assign the minimum to the third element; otherwise, do nothing. The
process goes on until the last element, after which the minimum is swapped to the front of the
unsorted part.
THE FIRST ITERATION
Now, this final sorted array is achieved in four steps, which means that if we are going to sort an
array with n elements, then n - 1 passes are needed.
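The corrected pseudocode above maps directly onto C (the name selection_sort is my own):

```c
/* Selection sort: on each pass, find the minimum of the unsorted
   part and swap it to the front of that part. */
void selection_sort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min])
                min = j;               /* remember the new minimum */
        if (min != i) {                /* swap only when needed */
            int tmp = arr[i];
            arr[i] = arr[min];
            arr[min] = tmp;
        }
    }
}
```

As the text notes, the outer loop runs exactly n - 1 times regardless of the input order.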
Radix Sort:
Radix sort is a sorting algorithm that sorts the elements by first grouping the individual digits of the same
place value. Then, sort the elements according to their increasing/decreasing order. Suppose, we have an
array of 8 elements. First, we will sort elements based on the value of the unit place. Then, we will sort
elements based on the value of the tens place. This process goes on up to the most significant place. Let
the initial array be as follows:
In the given array, the largest element is 736 which has 3 digits in it. So, the loop will run up to three
times (i.e., to the hundreds place). That means three passes are required to sort the array. Now, first sort the
elements on the basis of unit place digits (i.e., x = 0). Here, we are using the COUNTING SORT
ALGORITHM to sort the elements.
In the first pass, the list is sorted on the basis of the digits at 0's place.
In the second pass, the list is sorted on the basis of the next significant digits (i.e., digits at the tens place).
After the second pass, the elements of the array are shown like this:
After the third pass, the elements of the array will be sorted and are shown like this:
Radix sort is a non-comparative sorting algorithm. It has linear time complexity, which can make it
faster than comparison-based sorting algorithms on suitable inputs.
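The digit-by-digit process can be sketched in C: a stable counting-sort pass is applied once per digit, from units to the most significant place. The function names and the sample values in the test are my own; the pass over each digit follows the counting sort steps described later in this unit.

```c
#include <string.h>

/* One stable counting-sort pass on the digit selected by exp
   (exp = 1 for units, 10 for tens, 100 for hundreds, ...). */
static void counting_pass(int arr[], int n, int exp)
{
    int output[n];                    /* C99 variable-length array */
    int count[10] = {0};
    for (int i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)      /* cumulative counts -> positions */
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--)  /* scanning backwards keeps the pass stable */
        output[--count[(arr[i] / exp) % 10]] = arr[i];
    memcpy(arr, output, n * sizeof(int));
}

/* LSD radix sort for non-negative integers. */
void radix_sort(int arr[], int n)
{
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max) max = arr[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        counting_pass(arr, n, exp);
}
```

The loop condition max / exp > 0 makes the number of passes equal to the number of digits in the largest element, e.g. three passes when the maximum is 736.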
Quick Sort:
Quick Sort is a Divide and Conquer algorithm. It picks an element as a pivot and partitions the given array
around the picked pivot. There are many different versions of Quick Sort that pick the pivot in different ways.
1. You will pick any pivot, let's say the highest index value.
2. You will take two variables to point left and right of the list, excluding pivot.
3. The left will point to the lower index, and the right will point to the higher index.
4. Now you will move all elements which are greater than pivot to the right.
5. Then you will move all elements smaller than the pivot to the left partition.
There are different variations of quicksort where the pivot element is selected from different positions. Here,
we will be selecting the rightmost element of the array as the pivot element.
A pointer is fixed at the pivot element. The pivot element is compared with the elements beginning from
the first index.
COMPARISON OF PIVOT ELEMENT WITH ELEMENT BEGINNING FROM THE FIRST INDEX
IF THE ELEMENT IS GREATER THAN THE PIVOT ELEMENT, A SECOND POINTER IS SET FOR
THAT ELEMENT
Now, the pivot is compared with other elements. If an element smaller than the pivot element is reached, the
smaller element is swapped with the greater element found earlier.
Again, the process is repeated to set the next greater element as the second pointer. And, swap it with
another smaller element.
THE PROCESS IS REPEATED TO SET THE NEXT GREATER ELEMENT AS THE SECOND
POINTER
1. 44 33 11 55 77 90 40 60 99 22 88
Let 44 be the pivot element, with scanning done from right to left.
Compare 44 with the right-side elements; if a right-side element is smaller than 44, swap them.
As 22 is smaller than 44, swap them.
22 33 11 55 77 90 40 60 99 44 88
Now compare 44 with the left-side elements; an element greater than 44 is swapped with it.
As 55 is greater than 44, swap them.
22 33 11 44 77 90 40 60 99 55 88
Repeat steps 1 and 2 recursively until we get two lists: one to the left of the pivot element 44 and one
to its right.
22 33 11 40 77 90 44 60 99 55 88
22 33 11 40 44 90 77 60 99 55 88
Now, the elements to the right of 44 are all greater than it, and the elements to its left are all smaller.
These sublists are then sorted by the same process as above.
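A common way to code the rightmost-pivot variant is the Lomuto partition scheme, sketched below. Note this is one of several equivalent partitioning schemes, not the exact two-pointer scan traced above; both leave the pivot between a smaller-element sublist and a greater-element sublist. The function names are my own:

```c
static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: uses arr[high] as the pivot and returns the
   pivot's final index; smaller elements end up on its left. */
static int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;                  /* boundary of the "smaller" region */
    for (int j = low; j < high; j++)
        if (arr[j] < pivot)
            swap_int(&arr[++i], &arr[j]);
    swap_int(&arr[i + 1], &arr[high]);  /* move the pivot into place */
    return i + 1;
}

/* Quick sort the sublist arr[low..high]. */
void quick_sort(int arr[], int low, int high)
{
    if (low < high) {
        int p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);   /* left sublist  */
        quick_sort(arr, p + 1, high);  /* right sublist */
    }
}
```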
Merge Sort:
Merge Sort is one of the most popular sorting algorithms based on the Divide and Conquer Algorithm
principle. Here, a problem is divided into multiple sub-problems. Each sub-problem is solved individually.
Finally, sub-problems are combined to form the final solution.
Think of it as a recursive algorithm that continuously splits the array in half until it cannot be divided
further. That is, if the array becomes empty or has only one element left, the dividing stops; this is
the base case that ends the recursion. If the array has multiple elements, split the array into halves and
recursively invoke the merge sort on each of the halves. Finally, when both halves are sorted, the merge
operation is applied. Merge operation takes two smaller sorted arrays and combines them to eventually
make a larger one.
We know that merge sort first divides the whole array recursively into halves until atomic
(single-element) values are reached. We see here that an array of 8 items is divided into two arrays of size 4.
This does not change the sequence of appearance of items in the original. Now we divide these two arrays
into halves.
We further divide these arrays and we achieve atomic value which can no more be divided.
Now, we combine them in exactly the same manner as they were broken down. Please note the color codes
given to these lists.
We first compare the element for each list and then combine them into another list in a sorted manner. We
see that 14 and 33 are in sorted positions. We compare 27 and 10 and in the target list of 2 values we put
10 first, followed by 27. We change the order of 19 and 35 whereas 42 and 44 are placed sequentially.
In the next iteration of the combining phase, we compare lists of two data values and merge them into
lists of four data values, placing all elements in sorted order.
After the final merging, the list should look like this −
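The split-then-merge process can be sketched in C (the function names are my own; the merge step is the two-list comparison described above):

```c
/* Merge the two sorted runs arr[l..m] and arr[m+1..r]. */
static void merge(int arr[], int l, int m, int r)
{
    int tmp[r - l + 1];               /* C99 variable-length array */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)          /* take the smaller head element */
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m) tmp[k++] = arr[i++];   /* drain whichever run remains */
    while (j <= r) tmp[k++] = arr[j++];
    for (k = 0; k < r - l + 1; k++)
        arr[l + k] = tmp[k];
}

/* Recursively split arr[l..r] in half, then merge the sorted halves. */
void merge_sort(int arr[], int l, int r)
{
    if (l < r) {                      /* base case: 0 or 1 element */
        int m = l + (r - l) / 2;
        merge_sort(arr, l, m);
        merge_sort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}
```

On the 8-item example traced above ({14, 33, 27, 10, 35, 19, 42, 44}), the final merge produces {10, 14, 19, 27, 33, 35, 42, 44}.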
Heap Sort:
First convert the array into a heap data structure using heapify; then, one by one, delete the root node of
the max-heap, replace it with the last node in the heap, and heapify the root of the heap. Repeat this
process while the size of the heap is greater than 1.
Build a heap from the given input array.
Repeat the following steps until the heap contains only one element:
Swap the root element of the heap (which is the largest element) with the last element of the
heap.
Remove the last element of the heap (which is now in the correct position).
Heapify the remaining elements of the heap.
Once the heap contains a single element, the array is sorted in ascending order in place.
Transform into max heap: After that, the task is to construct a tree from that unsorted array and try to
convert it into max heap.
To transform a heap into a max-heap, the parent node should always be greater than or equal to the child
nodes
Here, in this example, as the parent node 4 is smaller than the child node 10, thus, swap them to
build a max-heap.
Now, 4 as a parent is smaller than the child 5, thus swap both of these again and the resulted heap and
array should be like this:
Perform heap sort: Remove the maximum element in each step (i.e., move it to the end position and remove
that) and then consider the remaining elements and transform it into a max heap.
Delete the root element (10) from the max heap. In order to delete this node, try to swap it with the last
node, i.e. (1). After removing the root element, again heapify it to convert it into max heap.
Resulted heap and array should look like this:
Repeat the above steps and it will look like the following:
Now remove the root (i.e. 3) again and perform heapify.
When the root is removed once again, the array is fully sorted: arr[] = {1, 3, 4, 5, 10}.
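The build-then-extract process above can be sketched in C (the function names are my own):

```c
/* Sift the subtree rooted at index i of a binary heap stored in
   arr[0..n-1] down until the max-heap property holds again. */
static void heapify(int arr[], int n, int i)
{
    int largest = i;
    int l = 2 * i + 1, r = 2 * i + 2;     /* children of node i */
    if (l < n && arr[l] > arr[largest]) largest = l;
    if (r < n && arr[r] > arr[largest]) largest = r;
    if (largest != i) {
        int tmp = arr[i]; arr[i] = arr[largest]; arr[largest] = tmp;
        heapify(arr, n, largest);         /* continue down the tree */
    }
}

void heap_sort(int arr[], int n)
{
    /* build the max-heap from the last non-leaf node upwards */
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    /* repeatedly swap the root (maximum) to the end, shrink the heap,
       and re-heapify the root */
    for (int i = n - 1; i > 0; i--) {
        int tmp = arr[0]; arr[0] = arr[i]; arr[i] = tmp;
        heapify(arr, i, 0);
    }
}
```

On the example array {4, 10, 3, 5, 1} used above, this produces {1, 3, 4, 5, 10}.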
Counting Sort:
Counting sort is a sorting technique based on keys within a specific range. It works by counting the
number of objects having distinct key values (a kind of hashing), and then doing some arithmetic to
calculate the position of each object in the output sequence. In other words, counting sort is a sorting
algorithm that sorts the elements of an array by counting the number of occurrences of each unique
element in the array. The counts are stored in an auxiliary array, and the sorting is done by mapping
the counts to indices of the auxiliary array. The working of the counting sort algorithm is as follows:
1. Find out the maximum element (let it be max) from the given array:
2. Initialize an array of length max+1 with all elements 0. This array is used for storing the count of
the elements in the array.
3. Store the count of each element at its respective index in the count array. For example: if the count
of element 3 is 2, then 2 is stored at the 3rd position of the count array. If element 5 is not present
in the array, then 0 is stored at the 5th position.
4. Store the cumulative sum of the elements of the count array. It helps in placing the elements into
the correct index of the sorted array.
5. Find the index of each element of the original array in the count array. This gives the cumulative
count. Place the element at the index calculated as shown in the figure below.
6. After placing each element in its correct position, decrease its count by one.
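Steps 1-6 above can be sketched as one C function (the name counting_sort is my own; the sketch assumes non-negative integer keys):

```c
#include <string.h>

/* Counting sort for non-negative integers, following steps 1-6:
   count occurrences, take the cumulative sum, then place each element
   at its cumulative-count position. Scanning the input backwards in
   the placement loop keeps the sort stable. */
void counting_sort(int arr[], int n)
{
    int max = arr[0];                      /* step 1: find the maximum */
    for (int i = 1; i < n; i++)
        if (arr[i] > max) max = arr[i];

    int count[max + 1];                    /* step 2: C99 VLA, zeroed below */
    memset(count, 0, (max + 1) * sizeof(int));
    for (int i = 0; i < n; i++)
        count[arr[i]]++;                   /* step 3: raw counts */
    for (int v = 1; v <= max; v++)
        count[v] += count[v - 1];          /* step 4: cumulative sum */

    int output[n];
    for (int i = n - 1; i >= 0; i--)
        output[--count[arr[i]]] = arr[i];  /* steps 5-6: place, then decrement */
    memcpy(arr, output, n * sizeof(int));
}
```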
Bucket Sort:
Bucket Sort is a sorting algorithm that divides the unsorted array elements into several groups called
buckets. Each bucket is then sorted by using any of the suitable sorting algorithms or recursively applying
the same bucket algorithm. Finally, the sorted buckets are combined to form a final sorted array.
The process of bucket sorting can be understood as a Scatter-Gather Approach. Here, elements are first
scattered into buckets then the elements in each bucket are sorted. Finally, the elements are gathered in
order.
Create an array of size 10. Each slot of this array is used as a bucket for storing elements.
Insert elements into the buckets from the array. The elements are inserted according to the range of the
bucket. In our example code, the buckets cover the ranges 0 to 0.1, 0.1 to 0.2, ..., 0.9 to 1.0.
Suppose the input element .23 is taken. It is multiplied by size = 10 (i.e., .23 * 10 = 2.3). Then, it is
converted into an integer (i.e., 2.3 ≈ 2). Finally, .23 is inserted into bucket 2.
Similarly, .25 is also inserted into the same bucket. Each time, the floor value of the scaled
floating-point number is taken. If we take integer numbers as input, we have to divide them by the
interval (10 here) to get the bucket index. Similarly, other elements are inserted into their respective buckets.
The elements of each bucket are sorted using any suitable sorting algorithm. Here, we have used
quicksort (a built-in function).
The elements from each bucket are gathered. It is done by iterating through the bucket and inserting an
individual element into the original array in each cycle. The element from the bucket is erased once it is
copied into the original array.
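The scatter-sort-gather steps can be sketched in C for floats in [0, 1), using the C library's qsort as the "built-in" per-bucket sort mentioned above. The names and the fixed bucket layout are my own simplifications:

```c
#include <stdlib.h>

#define NBUCKETS 10

static int cmp_float(const void *a, const void *b)
{
    float x = *(const float *)a, y = *(const float *)b;
    return (x > y) - (x < y);
}

/* Bucket sort for floats in [0, 1): scatter into 10 buckets,
   sort each bucket with qsort, then gather them back in order. */
void bucket_sort(float arr[], int n)
{
    if (n <= 0) return;
    float bucket[NBUCKETS][n];        /* worst case: all items in one bucket */
    int count[NBUCKETS] = {0};

    for (int i = 0; i < n; i++) {
        int b = (int)(arr[i] * NBUCKETS);   /* e.g. 0.23 -> bucket 2 */
        bucket[b][count[b]++] = arr[i];
    }
    int k = 0;
    for (int b = 0; b < NBUCKETS; b++) {
        qsort(bucket[b], count[b], sizeof(float), cmp_float);
        for (int i = 0; i < count[b]; i++)
            arr[k++] = bucket[b][i];        /* gather in bucket order */
    }
}
```

Allocating each bucket at size n is wasteful but keeps the sketch simple; a production version would grow buckets dynamically.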
What is Hashing?
Hashing is the process of transforming any given key or a string of characters into another value. This is
usually represented by a shorter, fixed-length value or key that represents and makes it easier to find or
employ the original string. A hash function is used to generate the new value according to a mathematical
algorithm. The result of a hash function is known as a hash value or simply, a hash.
Searching an entire unsorted list to locate one specific value requires a significant amount of time. This
manual process of scanning is not only time-consuming but inefficient too. With hashing in the data
structure, you can narrow down the search and find the value within seconds.
A good hash function uses a one-way hashing algorithm, or in other words, the hash cannot be converted
back into the original key. Keep in mind that two keys can generate the same hash. This phenomenon is
known as a collision. There are several ways to handle collisions.
A hash function in the data structure can also verify a file that has been imported from another source.
A hash key for an item can be used to accelerate the process: it increases the efficiency of retrieval and
optimizes the search. This is how we can simply define hashing in the data structure.
Hashing in the data structure runs into a collision when two keys are assigned the same index number in
the hash table. The collision creates a problem because each index in a hash table is supposed to store
only one value. Hashing uses several collision resolution techniques to maintain the performance of a
hash table. The three main collision resolution techniques are as follows:
Linear Probing: Linear probing is a scheme in computer programming for resolving collisions
in hash tables in Data Structures for maintaining a collection of key-value pairs and looking up
the value associated with a given key. It was invented in 1954 by Gene Amdahl and Elaine M.
McGraw.
EXAMPLE: Explain the linear probing collision resolution technique in hashing. If the size of
the hash table is 11, show the resultant hash table after inserting keys 89, 18,12,45,49, 58, and 9
using linear probing.
Linear probing is a collision resolution technique used in hashing. When a collision occurs,
meaning two keys hash to the same location in the hash table, linear probing attempts to find the
next available slot by linearly probing through the table until an empty slot is found. Here's how
linear probing works in your example, with a hash table of size 11:
1. Initialize the hash table: [_, _, _, _, _, _, _, _, _, _, _] (with 11 empty slots represented by
underscores).
2. Insert key 89: The hash value of 89 modulo 11 is 1. Since the slot at index 1 is empty, we
insert 89 there. The updated hash table becomes: [_, 89, _, _, _, _, _, _, _, _, _].
3. Insert key 18: The hash value of 18 modulo 11 is 7. The slot at index 7 is empty, so we
insert 18 there. The updated hash table becomes: [_, 89, _, _, _, _, _, 18, _, _, _].
4. Insert key 12: The hash value of 12 modulo 11 is 1, but index 1 is already occupied by key 89.
We start linear probing from the next slot, index 2, which is empty. We insert 12 there. The
updated hash table becomes: [_, 89, 12, _, _, _, _, 18, _, _, _].
5. Insert key 45: The hash value of 45 modulo 11 is 1, but index 1 is already occupied. We
continue probing and find that index 2 is occupied by 12. We move to index 3, which is
empty, and insert 45 there. The updated hash table becomes: [_, 89, 12, 45, _, _, _, 18, _, _, _].
6. Insert key 49: The hash value of 49 modulo 11 is 5. We find that index 5 is empty, so we
insert 49 there. The updated hash table becomes: [_, 89, 12, 45, _, 49, _, 18, _, _, _].
7. Insert key 58: The hash value of 58 modulo 11 is 3. We find that index 3 is occupied by 45.
We start linear probing from the next slot, index 4, which is empty. We insert 58 there. The
updated hash table becomes: [_, 89, 12, 45, 58, 49, _, 18, _, _, _].
8. Insert key 9: The hash value of 9 modulo 11 is 9. We find that index 9 is empty, so we insert
9 there. The updated hash table becomes: [_, 89, 12, 45, 58, 49, _, 18, _, 9, _].
9. After inserting all the keys using linear probing, the final hash table is: [_, 89, 12, 45, 58,
49, _, 18, _, 9, _].
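The insertion procedure can be sketched in C. The names LP_SIZE, LP_EMPTY, and lp_insert are my own, the hash function is h(k) = k mod 11, and the sketch assumes the table never fills up:

```c
#define LP_SIZE 11
#define LP_EMPTY (-1)

/* Insert key with linear probing: start at key mod LP_SIZE and step
   forward one slot at a time until an empty slot is found.
   Assumes the table is not full. Returns the index used. */
int lp_insert(int table[], int key)
{
    int idx = key % LP_SIZE;
    while (table[idx] != LP_EMPTY)
        idx = (idx + 1) % LP_SIZE;    /* probe the next slot, wrapping around */
    table[idx] = key;
    return idx;
}
```

Inserting 89, 18, 12, 45, 49, 58, and 9 in order into an empty table reproduces the walkthrough: 89 at index 1, 18 at 7, 12 at 2, 45 at 3, 49 at 5, 58 at 4, and 9 at 9.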
Quadratic Probing: Quadratic probing is another collision resolution technique used in hashing. Instead of linearly
probing through the hash table, quadratic probing uses a quadratic function to probe through
the table. The probing sequence is determined by adding successive squares of an increment
value to the original hash index until an empty slot is found. Here's how quadratic probing works
in your example, with a hash table of size 20:
1. Initialize the hash table: [_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _] (with 20 empty slots
represented by underscores).
2. Insert key 96: The hash value of 96 modulo 20 is 16. The slot at index 16 is empty, so we
insert 96 there. The updated hash table becomes: [_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, 96, _, _, _].
3. Insert key 48: The hash value of 48 modulo 20 is 8. The slot at index 8 is empty, so we
insert 48 there. The updated hash table becomes: [_, _, _, _, _, _, _, _, 48, _, _, _, _, _, _, _, 96, _, _, _].
4. Insert key 63: The hash value of 63 modulo 20 is 3. The slot at index 3 is empty, so we
insert 63 there. The updated hash table becomes: [_, _, _, 63, _, _, _, _, 48, _, _, _, _, _, _, _, 96, _, _, _].
5. Insert key 29: The hash value of 29 modulo 20 is 9. The slot at index 9 is empty, so we
insert 29 there. The updated hash table becomes: [_, _, _, 63, _, _, _, _, 48, 29, _, _, _, _, _, _, 96, _, _, _].
6. Insert key 87: The hash value of 87 modulo 20 is 7. The slot at index 7 is empty, so we
insert 87 there. The updated hash table becomes: [_, _, _, 63, _, _, _, 87, 48, 29, _, _, _, _, _, _, 96, _, _, _].
7. Insert key 77: The hash value of 77 modulo 20 is 17. The slot at index 17 is empty, so we
insert 77 there. The updated hash table becomes: [_, _, _, 63, _, _, _, 87, 48, 29, _, _, _, _, _, _, 96, 77, _, _].
8. Insert key 65: The hash value of 65 modulo 20 is 5. The slot at index 5 is empty, so we
insert 65 there. The updated hash table becomes: [_, _, _, 63, _, 65, _, 87, 48, 29, _, _, _, _, _, _, 96, 77, _, _].
9. Insert key 69: The hash value of 69 modulo 20 is 9, but index 9 is already occupied by 29.
We start quadratic probing by adding successive squares of the increment to the original
index. The first probe is (9 + 1²) modulo 20 = 10. This slot is empty, so we insert 69 there.
The updated hash table becomes: [_, _, _, 63, _, 65, _, 87, 48, 29, 69, _, _, _, _, _, 96, 77, _, _].
10. Insert key 94: The hash value of 94 modulo 20 is 14. The slot at index 14 is empty, so we insert
94 there. The updated hash table becomes: [_, _, _, 63, _, 65, _, 87, 48, 29, 69, _, _, _, 94, _, 96, 77, _, _].
11. Insert key 61: The hash value of 61 modulo 20 is 1. The slot at index 1 is empty, so we insert 61
there. The updated hash table becomes: [_, 61, _, 63, _, 65, _, 87, 48, 29, 69, _, _, _, 94, _, 96, 77, _, _].
The final hash table after inserting all the keys using quadratic probing is: [_, 61, _, 63, _, 65, _, 87,
48, 29, 69, _, _, _, 94, _, 96, 77, _, _].
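A C sketch of the quadratic probe sequence, where the i-th probe lands at (h + i²) mod table size. The names QP_SIZE, QP_EMPTY, and qp_insert are my own, and the sketch assumes a free slot is always reachable:

```c
#define QP_SIZE 20
#define QP_EMPTY (-1)

/* Quadratic probing: try (h + 0), (h + 1), (h + 4), (h + 9), ...
   modulo QP_SIZE until an empty slot is found. Returns the index used. */
int qp_insert(int table[], int key)
{
    int h = key % QP_SIZE;
    for (int i = 0; ; i++) {
        int idx = (h + i * i) % QP_SIZE;
        if (table[idx] == QP_EMPTY) {
            table[idx] = key;
            return idx;
        }
    }
}
```

Unlike this simple sketch, real implementations bound the number of probes, since for some table sizes a quadratic sequence does not visit every slot.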
Double Hashing: Double hashing is a computer programming technique used in conjunction with
open addressing in hash tables to resolve hash collisions, by using a secondary hash of the key as
an offset when a collision occurs. Its invention is commonly credited to Hans Peter Luhn's work
on hashing at IBM.
EXAMPLE: Explain the Double Hashing collision resolution technique in hashing. If the size
of the hash table is 15, show the resultant hash table after inserting keys 17, 71, 43, 29, 97, 59,
15, 39, 84, and 51 using Double Hashing.
Double hashing is a collision resolution technique in hashing where, when a collision occurs, the
algorithm uses a secondary hash function to calculate a step size (or interval) for probing through
the hash table. The secondary hash function ensures that the step size is different for each key,
reducing the likelihood of collisions. Assume the following hash functions, which are consistent
with the step sizes used below: h1(k) = k mod 15 and h2(k) = 7 - (k mod 7).
Here's how double hashing works in your example, with a hash table of size 15 and the given
keys: 17, 71, 43, 29, 97, 59, 15, 39, 84, and 51. Initialize the hash table: [_, _, _, _, _, _, _, _, _, _, _,
_, _, _, _] (with 15 empty slots represented by underscores).
1. Insert key 17: The hash value of 17 modulo 15 is 2. The slot at index 2 is empty, so we
insert 17 there. The updated hash table becomes: [_, _, 17, _, _, _, _, _, _, _, _, _, _, _, _].
2. Insert key 71: The hash value of 71 modulo 15 is 11. The slot at index 11 is empty, so we
insert 71 there. The updated hash table becomes: [_, _, 17, _, _, _, _, _, _, _, 71, _, _, _, _].
3. Insert key 43: The hash value of 43 modulo 15 is 13. The slot at index 13 is empty, so we
insert 43 there. The updated hash table becomes: [_, _, 17, _, _, _, _, _, _, _, 71, _, 43, _, _].
4. Insert key 29: The hash value of 29 modulo 15 is 14. The slot at index 14 is empty, so we
insert 29 there. The updated hash table becomes: [_, _, 17, _, _, _, _, _, _, _, 71, _, 43, _,
29].
5. Insert key 97: The hash value of 97 modulo 15 is 7. The slot at index 7 is empty, so we
insert 97 there. The updated hash table becomes: [_, _, 17, _, _, _, _, 97, _, _, 71, _, 43, _,
29].
6. Insert key 59: The hash value of 59 modulo 15 is 14, but index 14 is already occupied by
29. We use the secondary hash function to calculate the step size: h2(59) = 7 - (59 mod 7) = 4.
We probe index (14 + 4) mod 15 = 3. The slot at index 3 is empty, so we insert 59 there.
The updated hash table becomes: [_, _, 17, 59, _, _, _, 97, _, _, 71, _, 43, _, 29].
7. Insert key 15: The hash value of 15 modulo 15 is 0. Index 0 is empty, so we insert 15
there. The updated hash table becomes: [15, _, 17, 59, _, _, _, 97, _, _, 71, _, 43, _, 29].
8. Insert key 39: The hash value of 39 modulo 15 is 9. The slot at index 9 is empty, so we
insert 39 there. The updated hash table becomes: [15, _, 17, 59, _, _, _, 97, _, 39, 71, _, 43,
_, 29].
9. Insert key 84: The hash value of 84 modulo 15 is 9, but index 9 is already occupied by 39.
We use the secondary hash function to calculate the step size: h2(84) = 7 - (84 mod 7) = 7.
We probe index (9 + 7) mod 15 = 1. The slot at index 1 is empty, so we insert 84 there.
The updated hash table becomes: [15, 84, 17, 59, _, _, _, 97, _, 39, 71, _, 43, _, 29].
10. Insert key 51: The hash value of 51 modulo 15 is 6. The slot at index 6 is empty, so we
insert 51 there. The updated hash table becomes: [15, 84, 17, 59, _, _, 51, 97, _, 39, 71, _,
43, _, 29].
11. The final hash table after inserting all the keys using double hashing is: [15, 84, 17, 59, _,
_, 51, 97, _, 39, 71, _, 43, _, 29].