
Computational Thinking Notes


Computational thinking and Problem-solving

Objectives

Show understanding of linear and binary searching methods
- Describe a linear and binary search.
- Write algorithms to implement a binary and linear search.

Show understanding of insertion sort and bubble sort methods
- Describe an insertion sort and a bubble sort.
- Write algorithms to implement an insertion and bubble sort.

Show understanding of and use Abstract Data Types (ADT)
- Describe linked lists, stacks, queues and binary trees.
- Write algorithms to find items in a linked list and a binary tree.
- Write algorithms to insert items into a stack, a queue, a linked list and a binary tree.
- Write algorithms to delete an item from a stack, a queue and a linked list.

Show how it is possible for ADTs to be implemented from another ADT
- Explain how an ADT can be implemented using a built-in data type and another ADT, and write algorithms to implement this.

Show understanding that different algorithms which perform the same task can be compared by using criteria (e.g. time taken to complete the task and memory used)
- Explain the use of Big O notation to specify time and space complexity.
- Compare algorithms on criteria such as time taken and memory used.
Linear Search Algorithm

The linear search algorithm, also known as sequential search, is a simple method for finding a target value within a list or array by iterating through each element until the target value is found or the end of the list is reached. The time complexity of a linear search is O(n), where n is the number of elements in the list: in the worst case, the algorithm may need to examine every element before finding the target. The space complexity is O(1), meaning the amount of additional memory used does not depend on the size of the input; linear search requires no extra data structures or memory proportional to the input size, it simply iterates through the elements.

Here's the algorithm for linear search:

1. Start from the beginning of the list.

2. Compare each element of the list with the target value.

3. If the element matches the target value, return its index.

4. If the entire list is traversed and the target value is not found, return -1 to indicate that the element is
not present in the list.

Now, let's illustrate this algorithm with an example:

Suppose we have an array arr = [5, 9, 3, 7, 2, 8] and we want to search for the target value 7.

Step-by-step explanation of linear search:

1. Starting the search: Begin at the first element of the array arr.
2. Iteration 1: Compare the first element, arr[0] = 5, with the target value 7. No match.
3. Iteration 2: Move to the second element, arr[1] = 9. No match.
4. Iteration 3: Proceed to the third element, arr[2] = 3. No match.
5. Iteration 4: Check the fourth element, arr[3] = 7. Match found! Return the index `3`.

Python code implementing linear search:

def linear_search(arr, target):
    for index in range(len(arr)):
        if arr[index] == target:
            return index
    return -1

# Example usage:
arr = [5, 9, 3, 7, 2, 8]
target_value = 7

result_index = linear_search(arr, target_value)

if result_index != -1:
print(f"Element found at index {result_index}")
else:
print("Element not found in the array")

Code Explanation
This Python code defines a linear_search function that performs a linear search through the arr list to
find the target value. In the provided example, the target value 7 is searched within the arr list. The
function returns the index of the found element or -1 if the element is not present in the list.

Binary search algorithm

Binary search is a highly efficient algorithm for finding a specific value in a sorted array. It operates by
repeatedly dividing the search space in half. In each step, it compares the target value to the middle
element, eliminating half of the remaining elements. This process continues until the target is found or
the search space becomes empty. Binary search has a time complexity of O(log n), making it
significantly faster than linear search for large datasets.

Purpose: Efficiently finds the position of a target value within a sorted array.
Core Principle: Repetitively divides the search space in half based on comparisons with the middle
element.
Requirements: The array must be sorted in ascending or descending order (for numerical values) or
lexicographically (for strings).
Efficiency:
Time Complexity: O(log n), where n is the number of elements in the array. This is significantly faster
than linear search, which has a time complexity of O(n).
Space Complexity: O(1) (constant), as it uses only a few variables to track the search space.
Python implementation:
def binary_search(array, target):
    """Performs binary search on a sorted array of integers."""
    low = 0
    high = len(array) - 1

    while low <= high:
        mid = (low + high) // 2

        if array[mid] == target:
            return mid
        elif array[mid] < target:
            low = mid + 1
        else:
            high = mid - 1

    return -1  # Target not found

Explanation:

1. Assumption: The array is already sorted in ascending order.
2. Initialisation: Set low to the first index and high to the last index of the array.
3. Iterative Search:
- Calculate the middle index mid using (low + high) // 2.
- Compare array[mid] with target:
- If they're equal, the target is found, so return mid.
- If array[mid] is less than target, the target must be in the right half, so update low to mid + 1.
- If array[mid] is greater than target, the target must be in the left half, so update high to mid - 1.
4. Target Not Found: If the loop completes without finding the target, return -1.

Key Points:

- Binary search has a time complexity of O(log n), making it much faster than linear search for large
arrays.
- It requires a sorted array to function correctly.
- It's a divide-and-conquer algorithm, repeatedly dividing the search space in half.

Example

def binary_search(array, target):
    ...  # (binary_search as defined above)

# Create a sorted array of integers
array = [2, 5, 7, 13, 19, 22, 27, 33]
target = 19  # Value to search for

# Perform the binary search
result = binary_search(array, target)

if result != -1:
print("Target found at index:", result)
else:
print("Target not found in the array")

Output:

Target found at index: 4


Explanation:
1. The array is already sorted in ascending order, which is a requirement for binary search.
2. We set target to 19, the value we want to find.
3. The binary_search function is called with array and target as arguments.
4. The function searches for the target using the binary search algorithm and returns its index (4 in this case) or -1 if not found.
5. The result is printed, indicating that the target was found at index 4.

Comparison of how the performance of binary search varies with the number of data items:

1. Efficiency: Binary search has a time complexity of O(log n), where 'n' is the number of elements in
the sorted array. This means that as the number of items grows, the time taken to search does not
increase linearly; instead, it increases logarithmically. In contrast, linear search has a time complexity of
O(n), where the time taken is directly proportional to the number of items.

2. Speed: As the number of data items increases, the advantages of binary search become more
apparent. With a larger dataset, binary search's logarithmic time complexity allows it to outperform
linear search significantly. It drastically reduces the number of comparisons needed to find an element
compared to linear search.

3. Memory Overhead: Binary search requires a sorted dataset, which might require additional memory
or computational time to sort initially. However, once sorted, the search operation benefits from the
efficiency of binary search.

4. Impact of Data Size: For smaller datasets, the difference in performance might not be as noticeable
since both binary and linear searches can be relatively fast. However, as the dataset grows larger, the
advantage of binary search in terms of reduced time complexity becomes increasingly prominent.

In summary, binary search's efficiency becomes more evident and advantageous as the number of data
items increases. Its logarithmic time complexity allows it to perform significantly better than linear
search, particularly with larger datasets or collections.
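The difference can be made concrete by counting comparisons. A small sketch (the instrumented helper functions here are illustrative, not part of the notes' main implementations), searching for the last element of a million-item sorted list:

```python
def linear_comparisons(arr, target):
    """Return how many comparisons a linear search makes."""
    count = 0
    for element in arr:
        count += 1
        if element == target:
            break
    return count

def binary_comparisons(arr, target):
    """Return how many middle-element probes a binary search makes."""
    low, high, count = 0, len(arr) - 1, 0
    while low <= high:
        mid = (low + high) // 2
        count += 1
        if arr[mid] == target:
            break
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return count

data = list(range(1_000_000))   # sorted 0..999999
target = data[-1]               # worst case for linear search
print(linear_comparisons(data, target))  # one million comparisons
print(binary_comparisons(data, target))  # roughly log2(1,000,000) ≈ 20 probes
```

A million comparisons versus about twenty: this is the O(n) versus O(log n) gap in practice.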

Bubble Sort

Bubble sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. This process continues until the entire list is sorted. The time complexity of bubble sort is O(n^2), where n is the number of elements in the array, because the algorithm iterates through the array multiple times and, on each pass, compares and swaps elements as necessary. The space complexity is O(1), since it only uses a constant amount of extra space for temporary variables regardless of the input size. Bubble sort is an in-place sorting algorithm, meaning it doesn't require additional memory proportional to the size of the input.

Algorithmic explanation of bubble sort:

1. Start with the first element (index 0) and compare it with the next element.
2. If the next element is smaller, swap them. Move to the next pair of elements.
3. Continue this process until the last pair of elements.
4. After the first pass, the largest element will be at the end of the list.
5. Repeat steps 1-4 for the remaining elements (excluding the sorted elements).
6. Continue this process until the entire list is sorted.

Python Implementation:

def bubble_sort(arr):
    n = len(arr)
    # Traverse through all array elements
    for i in range(n):
        # Last i elements are already in place
        for j in range(0, n - i - 1):
            # Swap if the element found is greater than the next element
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

# Example usage:
arr = [64, 34, 25, 12, 22, 11, 90]

print("Original Array:", arr)


bubble_sort(arr)
print("Sorted Array:", arr)

Explanation of the code:

- The bubble_sort function takes an array arr as input.


- It iterates through the array using two nested loops:
- The outer loop (`for i in range(n)`) controls the number of passes needed to sort the array. For each
pass, the largest element settles at the end of the array.
- The inner loop (`for j in range(0, n - i - 1)`) compares adjacent elements and swaps them if they are in
the wrong order.
- The `arr` list is modified in place, and after sorting, the sorted array is printed.

The output demonstrates the original unsorted array and the resulting sorted array after applying the
bubble sort algorithm.
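A common refinement, sketched below, adds an early exit: if a complete pass makes no swaps, the array is already sorted and the remaining passes can be skipped. The `swapped` flag is an addition, not part of the code above; it gives bubble sort a best case of O(n) on already-sorted input.

```python
def bubble_sort_early_exit(arr):
    """Bubble sort that stops as soon as a full pass makes no swaps."""
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps on this pass: the array is sorted
            break

arr = [64, 34, 25, 12, 22, 11, 90]
bubble_sort_early_exit(arr)
print(arr)  # [11, 12, 22, 25, 34, 64, 90]
```

The worst-case complexity is still O(n^2); only the best case improves.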

Insertion Sort

Insertion Sort is a simple and intuitive comparison-based sorting algorithm. It builds the final sorted
array one element at a time. The algorithm iterates through the input array, considering each element
and inserting it into its correct position within the already sorted portion of the array.

1. Algorithm Steps:
- The first element is considered to be a sorted part.
- Iterate through the unsorted part of the array.
- For each element, compare it with elements in the sorted part and insert it at the correct position.

2. Pseudocode:
InsertionSort(arr):
    for i from 1 to length(arr) - 1:
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j = j - 1
        arr[j + 1] = key

3. Explanation
- Insertion Sort is stable, maintaining the relative order of equal elements.
- Its worst-case time complexity is O(n^2), suitable for small datasets or nearly sorted arrays.
- The best-case time complexity is O(n) when the array is already sorted.
- It is an in-place algorithm, requiring only a constant amount of additional memory.
- Insertion Sort is adaptive, performing well on partially sorted data.
- Understanding its mechanics aids in grasping fundamental sorting principles and algorithmic analysis.

Usage and Considerations:


Insertion Sort is practical for small datasets or when the array is nearly sorted. Its simplicity makes it
suitable for educational purposes and understanding basic sorting techniques. While it may not be the
most efficient for large datasets, its adaptive nature and low space complexity contribute to its
relevance in specific scenarios.

Algorithm: Insertion Sort


1. Start with the second element (index 1) of the array.
2. Compare the current element with the one before it.
3. If the current element is smaller, swap it with the previous element.
4. Continue this process towards the beginning of the array until the current element is in its correct
position.
5. Move to the next element and repeat steps 2-4 until the entire array is sorted.

Now, let's implement Insertion Sort in Python:

def insertion_sort(arr):
    # Traverse through the array starting from the second element
    for i in range(1, len(arr)):
        key = arr[i]  # Current element to be compared

        # Move elements of arr[0..i-1] that are greater than key
        # one position ahead of their current position
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

# Example usage:
arr = [12, 11, 13, 5, 6]
print("Original Array:", arr)
insertion_sort(arr)
print("Sorted Array:", arr)
Explanation of the Python code:

- The insertion_sort function takes an array arr as input.


- It traverses through the array starting from the second element (index 1).
- For each element, it compares it with the elements before it and inserts it into the correct position
among the already sorted elements.
- The while loop shifts elements greater than the `key` element to the right to make space for inserting
the key.
- The arr list is modified in place, and after sorting, the sorted array is printed.

The output demonstrates the original unsorted array and the resulting sorted array after applying the
Insertion Sort algorithm.
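Insertion sort's adaptive behaviour can be demonstrated by counting shifts. A sketch (the instrumented `insertion_sort_count` helper is illustrative): on already-sorted input the while loop never runs, giving the O(n) best case, while reversed input forces the maximum n(n-1)/2 shifts.

```python
def insertion_sort_count(arr):
    """Insertion sort that also counts how many elements are shifted."""
    shifts = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            shifts += 1
            j -= 1
        arr[j + 1] = key
    return shifts

print(insertion_sort_count(list(range(10))))         # already sorted: 0 shifts
print(insertion_sort_count(list(range(9, -1, -1))))  # reversed: 45 shifts = 10*9/2
```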

Finding an Item in a Linked List:

A linked list is a dynamic data structure commonly encountered in Computer Science studies. Unlike
arrays, a linked list does not require contiguous memory allocation. Instead, it comprises nodes, each
containing data and a reference (or link) to the next node in the sequence. This arrangement facilitates
efficient insertion and deletion operations, crucial for various algorithms and applications.

Key Concepts:

1. Node Structure:
- Each node in a linked list holds a data element and a reference (link) to the next node.
- The last node typically points to a null reference, indicating the end of the list.

2. Dynamic Memory Allocation:


- Linked lists dynamically allocate memory as nodes are added, providing flexibility in managing
varying amounts of data.

3. Traversal:
- Traversing a linked list involves starting from the head (first node) and navigating through successive
nodes using their references.

4. Types of Linked Lists:


- Singly Linked List: Each node points to the next node.
- Doubly Linked List: Each node points to both the next and the previous node, allowing bidirectional
traversal.
- Circular Linked List: The last node points back to the first, forming a closed loop.

5. Advantages:
- Dynamic Size: Linked lists can easily accommodate changing data sizes.
- Efficient Insertion and Deletion: Inserting or deleting a node requires updating references, making
these operations efficient.

6. Disadvantages:
- Random Access: Unlike arrays, linked lists do not support direct access to elements by index.
Traversal is necessary.
- Memory Overhead: The links between nodes consume additional memory compared to arrays.
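The examples later in these notes use an array-and-pointers representation, but the node structure described above is often written as a class in object-oriented languages. A minimal sketch (the `Node` class and attribute names here are illustrative):

```python
class Node:
    """A node in a singly linked list: data plus a link to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None  # None marks the end of the list

# Build a small list: 27 -> 19 -> 36
head = Node(27)
head.next = Node(19)
head.next.next = Node(36)

# Traversal: follow the next links from the head until None is reached
node = head
while node is not None:
    print(node.data)
    node = node.next
```

The `next` reference plays the same role as the pointer array in the examples below.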
Applications and Real-world Examples:

1. Memory Management:
- Linked lists are integral to dynamic memory allocation and deallocation in languages like C and C++.

2. Data Structures:
- Linked lists serve as foundational structures for more complex data structures, such as stacks,
queues, and hash tables.

3. Game Development:
- In game development, linked lists can be used for managing entities, like enemies or power-ups, in a
flexible manner.

Algorithm for finding an item in a linked list:


1. Start from the head of the linked list.
2. Traverse the list until you find the target item or reach the end.
3. If the target is found, return the item; otherwise, return None.

def find(itemSearch):
    """Search for an item in the linked list."""
    found = False
    itemPointer = startPointer

    while itemPointer != nullPointer and not found:
        if myLinkedList[itemPointer] == itemSearch:
            found = True
        else:
            itemPointer = myLinkedListPointers[itemPointer]

    return itemPointer

Example: Finding an Item in a Linked List

Consider the following linked list:

# Python program for finding an item in a linked list

# Initialise the linked list and pointers
myLinkedList = [27, 19, 36, 42, 16, None, None, None, None, None, None, None]
myLinkedListPointers = [-1, 0, 1, 2, 3, 6, 7, 8, 9, 10, 11, -1]
startPointer = 4
nullPointer = -1

def find(itemSearch):
    """Search for an item in the linked list."""
    found = False
    itemPointer = startPointer
    while itemPointer != nullPointer and not found:
        if myLinkedList[itemPointer] == itemSearch:
            found = True
        else:
            itemPointer = myLinkedListPointers[itemPointer]

    return itemPointer

# Enter item to search for
item = int(input("Please enter item to be found: "))
result = find(item)

if result != -1:
    print("Item found")
else:
    print("Item not found")

Code Explanation:

1. Linked List Representation:


- myLinkedList: Represents the values in the linked list.
- myLinkedListPointers: Represents the pointers connecting the nodes in the linked list.
- startPointer: Indicates the starting position in the linked list.
- nullPointer: Represents the end of the linked list.

2. Search Function (find):

- find(itemSearch): Searches for itemSearch in the linked list.
- found: Boolean flag to indicate whether the item is found.
- itemPointer: Points to the current position in the linked list during traversal.

3. Search Algorithm:
- The search algorithm traverses the linked list, comparing each element with the target itemSearch.
- If the item is found, the found flag is set to True, and the search terminates.
- If the item is not found, the itemPointer moves to the next position using the pointers in
myLinkedListPointers.

4. Result and Output:


- The result of the search is stored in the variable `result`.
- If the result is not -1, the item is found, and a corresponding message is printed.
- If the result is `-1`, the item is not found, and a different message is printed.

This program demonstrates a basic search operation in a linked list, where the user inputs an item to
search, and the program outputs whether the item is present in the linked list or not.

Inserting an Item in a Linked List

def insert(itemAdd):
    # Global variables
    global startPointer, heapStartPointer, myLinkedList, myLinkedListPointers, nullPointer

    # Check if the linked list is full
    if heapStartPointer == nullPointer:
        print("Linked List full")
    else:
        # Save the current start pointer
        tempPointer = startPointer

        # Update the start pointer to the next available space
        startPointer = heapStartPointer

        # Move heap start pointer to the next available space
        heapStartPointer = myLinkedListPointers[heapStartPointer]

        # Insert the item at the new start pointer position
        myLinkedList[startPointer] = itemAdd

        # Update the pointers to link the new item to the rest of the linked list
        myLinkedListPointers[startPointer] = tempPointer

Explanation:

1. `global startPointer, heapStartPointer, myLinkedList, myLinkedListPointers, nullPointer`: Declares the global variables used in the function, including the pointers that manage the linked list.

2. `if heapStartPointer == nullPointer:`: Checks whether the linked list is full by comparing the heap start pointer to the null pointer.

3. `print("Linked List full")`: Prints a message indicating that the linked list is full if the previous condition is true.

4. `tempPointer = startPointer`: Saves the current start pointer in a temporary variable.

5. `startPointer = heapStartPointer`: Updates the start pointer to the next available space in the linked list.

6. `heapStartPointer = myLinkedListPointers[heapStartPointer]`: Moves the heap start pointer to the next available space in the linked list.

7. `myLinkedList[startPointer] = itemAdd`: Inserts the provided item at the new start pointer position in the linked list.

8. `myLinkedListPointers[startPointer] = tempPointer`: Links the new item to the rest of the linked list by connecting it to the previously saved start pointer.

In summary, this program inserts a new item into a linked list. It uses pointers and manages the
available space in the linked list to ensure proper insertion.
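A self-contained usage sketch of this head-insertion approach, assuming an initially empty list of six cells whose free space is chained through the pointer array (the initial values here are illustrative):

```python
# Empty list: all six cells are free, chained 0 -> 1 -> ... -> 5 -> null
myLinkedList = [None] * 6
myLinkedListPointers = [1, 2, 3, 4, 5, -1]
startPointer = -1        # empty list
heapStartPointer = 0     # first free cell
nullPointer = -1

def insert(itemAdd):
    global startPointer, heapStartPointer
    if heapStartPointer == nullPointer:
        print("Linked List full")
    else:
        tempPointer = startPointer
        startPointer = heapStartPointer
        heapStartPointer = myLinkedListPointers[heapStartPointer]
        myLinkedList[startPointer] = itemAdd
        myLinkedListPointers[startPointer] = tempPointer

insert(27)
insert(19)   # each insert happens at the head of the list

# Traverse from startPointer to show the list contents
itemPointer = startPointer
while itemPointer != nullPointer:
    print(myLinkedList[itemPointer])
    itemPointer = myLinkedListPointers[itemPointer]
# prints 19 then 27 (newest item first)
```

Because insertion always happens at the head, items come back out in reverse insertion order.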
Deleting an Item In a Linked List

def delete(itemDelete):
    # Global variables
    global startPointer, heapStartPointer, myLinkedList, myLinkedListPointers, nullPointer

    # Check if the linked list is empty
    if startPointer == nullPointer:
        print("Linked List empty")
    else:
        # Initialise index and oldindex
        index = startPointer
        oldindex = nullPointer

        # Search for the item to delete
        while index != nullPointer and myLinkedList[index] != itemDelete:
            oldindex = index
            index = myLinkedListPointers[index]

        # Check if the item was not found
        if index == nullPointer:
            print("Item", itemDelete, "not found")
        else:
            # Mark the item as deleted
            myLinkedList[index] = None

            # Save the pointer to the rest of the list
            tempPointer = myLinkedListPointers[index]

            # Return the freed cell to the available space
            myLinkedListPointers[index] = heapStartPointer
            heapStartPointer = index

            # Bypass the deleted item
            if oldindex == nullPointer:
                startPointer = tempPointer  # the head item was deleted
            else:
                myLinkedListPointers[oldindex] = tempPointer

Explanation:

1. `global startPointer, heapStartPointer, myLinkedList, myLinkedListPointers, nullPointer`: Declares the global variables used in the function, including the pointers that manage the linked list.

2. `if startPointer == nullPointer:`: Checks whether the linked list is empty by comparing the start pointer to the null pointer.

3. `print("Linked List empty")`: Prints a message indicating that the linked list is empty if the previous condition is true.

4. `index = startPointer`: Initialises the index variable to the start pointer to begin searching for the item to delete.

5. The while loop searches for the item to delete by iterating through the linked list until the item is found or the end of the list is reached.

6. `oldindex = index`: Saves the current index before moving to the next node.

7. `index = myLinkedListPointers[index]`: Moves to the next node in the linked list.

8. `if index == nullPointer:`: Checks whether the item was not found by comparing the index to the null pointer.

9. `print("Item", itemDelete, "not found")`: Prints a message indicating that the item to delete was not found if the previous condition is true.

10. `myLinkedList[index] = None`: Marks the item as deleted by setting its value to None.

11. `tempPointer = myLinkedListPointers[index]`: Saves the pointer to the rest of the list.

12. `myLinkedListPointers[index] = heapStartPointer`: Returns the deleted cell to the available space.

13. `heapStartPointer = index`: Updates the heap start pointer to the position of the deleted item.

14. `myLinkedListPointers[oldindex] = tempPointer`: Updates the pointers to reconnect the linked list without the deleted item.

In summary, this program deletes an item from a linked list. It uses pointers to navigate through the
linked list, marks the item as deleted, and updates pointers to maintain the integrity of the linked list.
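A self-contained usage sketch, assuming a three-item list 16 -> 42 -> 36 stored in cells 2, 1 and 0, with the head-of-list case handled explicitly (the initial values here are illustrative):

```python
# List 16 -> 42 -> 36 stored in cells 2, 1, 0; cells 3..5 are free
myLinkedList = [36, 42, 16, None, None, None]
myLinkedListPointers = [-1, 0, 1, 4, 5, -1]
startPointer = 2
heapStartPointer = 3
nullPointer = -1

def delete(itemDelete):
    global startPointer, heapStartPointer
    if startPointer == nullPointer:
        print("Linked List empty")
    else:
        index = startPointer
        oldindex = nullPointer
        while index != nullPointer and myLinkedList[index] != itemDelete:
            oldindex = index
            index = myLinkedListPointers[index]
        if index == nullPointer:
            print("Item", itemDelete, "not found")
        else:
            myLinkedList[index] = None
            tempPointer = myLinkedListPointers[index]
            # Return the freed cell to the available space
            myLinkedListPointers[index] = heapStartPointer
            heapStartPointer = index
            # Bypass the deleted cell
            if oldindex == nullPointer:
                startPointer = tempPointer  # the head was deleted
            else:
                myLinkedListPointers[oldindex] = tempPointer

delete(42)

# Traverse the remaining list
itemPointer = startPointer
while itemPointer != nullPointer:
    print(myLinkedList[itemPointer])
    itemPointer = myLinkedListPointers[itemPointer]
# prints 16 then 36
```

After deleting 42, its cell (index 1) becomes the new head of the free space, and the pointer in cell 2 is redirected to bypass it.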

Finding an Item in a Binary Tree:

A binary tree is a hierarchical data structure that consists of nodes connected by edges. Each node contains data and two pointers, usually referred to as the left child and the right child. The node without a parent is called the root, while nodes without children are called leaves.

1. Structure:
- Root Node: The topmost node in the tree, serving as the starting point.
- Parent Node: A node that has child nodes connected beneath it.
- Child Node: Nodes directly connected to a parent node.
- Leaf Node: Nodes without children.
- Internal Node: Any node with at least one child.
- Subtree: A tree formed by a node and its descendants.

2. Binary Tree Properties:


- Binary Nature: Each node has at most two children (left and right).
- Child Ordering: Nodes to the left are usually smaller, and nodes to the right are larger, creating an
ordered structure.
- Levels and Depth: The level of a node is its distance from the root, and the depth is the level of the
deepest node.
- Height: The height is the length of the longest path from the root to a leaf.
3. Applications:
- Search and Retrieval: Binary trees facilitate efficient search operations, especially in binary search
trees.
- Sorting: Binary trees are the foundation for sorting algorithms like heapsort.
- Expression Trees: Used in compilers to represent expressions for parsing and evaluation.
- Huffman Coding: Binary trees are employed in data compression algorithms like Huffman coding.

4. Traversal:
- Inorder: Traverse left subtree, visit the root, traverse right subtree.
- Preorder: Visit the root, traverse left subtree, traverse right subtree.
- Postorder: Traverse left subtree, traverse right subtree, visit the root.
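The three traversal orders can be sketched on a small tree of linked nodes (the `Node` class here is illustrative):

```python
class Node:
    def __init__(self, item, left=None, right=None):
        self.item = item
        self.left = left
        self.right = right

# Tree:      5
#           / \
#          3   8
#         / \
#        1   4
root = Node(5, Node(3, Node(1), Node(4)), Node(8))

def inorder(node):
    """Left subtree, root, right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.item] + inorder(node.right)

def preorder(node):
    """Root, left subtree, right subtree."""
    if node is None:
        return []
    return [node.item] + preorder(node.left) + preorder(node.right)

def postorder(node):
    """Left subtree, right subtree, root."""
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.item]

print(inorder(root))    # [1, 3, 4, 5, 8]
print(preorder(root))   # [5, 3, 1, 4, 8]
print(postorder(root))  # [1, 4, 3, 8, 5]
```

Note that an inorder traversal of a binary search tree visits the items in ascending order.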

5. Balanced and Unbalanced Trees:


- Balanced: Ensures that the height difference between left and right subtrees is minimal, promoting
efficient search operations.
- Unbalanced: Height differences may be significant, leading to performance degradation.

Understanding binary trees is crucial for data structuring, efficient search algorithms, and developing a
solid foundation in computer science. Mastery of their properties and applications contributes to a
deeper comprehension of more advanced data structures and algorithms.

Algorithm:
1. Start from the root of the binary tree.
2. If the root is None, the item is not in the tree; return None.
3. If the root contains the target item, return the item.
4. If the target is less than the root's value, recursively search the left subtree.
5. If the target is greater than the root's value, recursively search the right subtree.

Python Implementation:

# Global variables
rootPointer = 0
nullPointer = -1

# Structure to represent a node in the binary tree
class TreeNode:
    def __init__(self, item, leftPointer=nullPointer, rightPointer=nullPointer):
        self.item = item
        self.leftPointer = leftPointer
        self.rightPointer = rightPointer

# Binary tree represented as an array of nodes
myTree = [TreeNode(5, 1, 2), TreeNode(3, 3, 4), TreeNode(8, nullPointer, nullPointer),
          TreeNode(1, nullPointer, nullPointer), TreeNode(4, nullPointer, nullPointer)]

# Function to find an item in the binary tree
def find(itemSearch):
    # Initialise itemPointer to the root of the tree
    itemPointer = rootPointer
    # Search until the item is found or the end of the tree is reached
    while itemPointer != nullPointer and myTree[itemPointer].item != itemSearch:
        # Move to the left subtree if the item is smaller
        if myTree[itemPointer].item > itemSearch:
            itemPointer = myTree[itemPointer].leftPointer
        # Move to the right subtree if the item is larger
        else:
            itemPointer = myTree[itemPointer].rightPointer

    # Return the itemPointer, which is the position of the item or nullPointer if not found
    return itemPointer

# Example usage
itemToSearch = 4
result = find(itemToSearch)

# Output the result
if result == nullPointer:
    print(f"Item {itemToSearch} not found in the binary tree.")
else:
    print(f"Item {itemToSearch} found at position {result} in the binary tree.")

Explanation:

1. `rootPointer = 0`: Initializes the root pointer to the root of the binary tree.

2. `nullPointer = -1`: Defines a constant representing a null pointer.

3. `class TreeNode`: Defines a class to represent a node in the binary tree, including its item value and
left/right pointers.

4. `myTree = [...]`: Represents the binary tree as an array of nodes.

5. `find(itemSearch)`: Defines a function to search for an item in the binary tree using the provided
itemSearch value.

6. Inside the function, a while loop is used to traverse the tree until the item is found or the end of the
tree is reached.

7. The function returns the `itemPointer`, which can be the position of the item in the tree or
`nullPointer` if the item is not found.

8. Example usage and output the result based on whether the item is found or not.

This program implements a binary tree search algorithm where the tree is represented using an array of
nodes. It traverses the tree based on item values, comparing them to the search item until the item is
found or the end of the tree is reached. The result indicates the position of the item or `nullPointer` if
not found.
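The search can also be written recursively over the same array-of-nodes representation; a sketch (the `find_recursive` helper is an illustrative alternative to the iterative `find` above):

```python
nullPointer = -1

class TreeNode:
    def __init__(self, item, leftPointer=nullPointer, rightPointer=nullPointer):
        self.item = item
        self.leftPointer = leftPointer
        self.rightPointer = rightPointer

# Same example tree as above: 5 at the root, 3 and 8 as children, 1 and 4 under 3
myTree = [TreeNode(5, 1, 2), TreeNode(3, 3, 4), TreeNode(8),
          TreeNode(1), TreeNode(4)]

def find_recursive(itemSearch, itemPointer=0):
    """Recursive binary search tree lookup on the array representation."""
    if itemPointer == nullPointer:
        return nullPointer                  # empty subtree: not found
    if myTree[itemPointer].item == itemSearch:
        return itemPointer                  # found at this node
    if myTree[itemPointer].item > itemSearch:
        return find_recursive(itemSearch, myTree[itemPointer].leftPointer)
    return find_recursive(itemSearch, myTree[itemPointer].rightPointer)

print(find_recursive(4))   # 4: found at index 4
print(find_recursive(7))   # -1: not in the tree
```

Each recursive call descends one level, so the recursion depth equals the tree height, matching the O(h) analysis later in these notes.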
Inserting an Item in a Binary tree

# Define a class to represent a node in the binary tree
class Node:
    def __init__(self, item=None, leftPointer=-1, rightPointer=-1):
        self.item = item
        self.leftPointer = leftPointer
        self.rightPointer = rightPointer

# Constants
nullPointer = -1

# Declare an array to represent the binary tree
myTree = [Node() for _ in range(12)]  # Index 0 to 11

# Chain the free nodes together through their left pointers
for i in range(11):
    myTree[i].leftPointer = i + 1
myTree[11].leftPointer = nullPointer

# Global variables
rootPointer = nullPointer
nextFreePointer = 0

# Procedure to add a node to the binary tree
def nodeAdd(itemAdd):
    global nextFreePointer, rootPointer

    # Check for a full tree
    if nextFreePointer == nullPointer:
        print("No nodes free")
    else:
        # Use the next free node
        itemAddPointer = nextFreePointer
        nextFreePointer = myTree[nextFreePointer].leftPointer
        itemPointer = rootPointer

        # Check for an empty tree
        if itemPointer == nullPointer:
            rootPointer = itemAddPointer
        else:
            # Find where to insert the new leaf
            while itemPointer != nullPointer:
                oldPointer = itemPointer
                if myTree[itemPointer].item > itemAdd:
                    # Choose left branch
                    leftBranch = True
                    itemPointer = myTree[itemPointer].leftPointer
                else:
                    # Choose right branch
                    leftBranch = False
                    itemPointer = myTree[itemPointer].rightPointer

            # Attach the new node to the left or right branch
            if leftBranch:
                myTree[oldPointer].leftPointer = itemAddPointer
            else:
                myTree[oldPointer].rightPointer = itemAddPointer

        # Store the item to be added in the new node
        myTree[itemAddPointer].leftPointer = nullPointer
        myTree[itemAddPointer].rightPointer = nullPointer
        myTree[itemAddPointer].item = itemAdd

# Example usage
nodeAdd(5)
nodeAdd(3)
nodeAdd(8)
nodeAdd(1)
nodeAdd(4)

# Output the binary tree structure
print("Binary Tree Structure:")
for i, node in enumerate(myTree):
    print(f"Node {i}: Item = {node.item}, Left = {node.leftPointer}, Right = {node.rightPointer}")

Explanation:

1. Node Class: Defines a class to represent a node in the binary tree, including its item value and left/right pointers.

2. Array `myTree`: Represents the binary tree as an array of nodes.

3. Global Variables: `rootPointer` and `nextFreePointer` are declared as global variables.

4. Constants: `nullPointer` is set to -1, representing a null pointer.

5. Procedure `nodeAdd(itemAdd)`:
- Checks for a full tree and prints an error message if the tree is full.
- Allocates a new node for the item to be added.
- Determines the position to insert the new node based on the item value.
- Updates the pointers accordingly, linking the new node to the tree.
- Stores the item in the new node.

6. Example Usage: Demonstrates the usage of the `nodeAdd` procedure by adding nodes with items 5, 3, 8, 1, and 4 to the binary tree.

7. Output Tree Structure: Prints the structure of the binary tree after the example usage.

This program creates and updates a binary tree based on the provided pseudocode. It uses an array to
represent the binary tree, allocates new nodes as needed, and links them to the existing tree structure.
The example usage illustrates how to add nodes to the binary tree.

Additional Notes:
- Both the linked list and binary tree algorithms have a time complexity of O(n) in the worst case, where
'n' is the number of nodes or elements in the data structure.
- These implementations are for educational purposes, and in real-world scenarios, you might
encounter variations based on the specific requirements of the application.

Let's now examine the time complexity of the algorithms for finding an item in a linked list and in a
binary tree, explaining the complexities in detail.

Finding an Item in a Linked List:

Algorithm:
1. Start from the head of the linked list.
2. Traverse the list until you find the target item or reach the end.
3. If the target is found, return the item; otherwise, return None.
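The steps above can be sketched in Python, assuming a simple `Node` class with `data` and `next` attributes (matching the linked list implementations used later in these notes):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def find_item(head, target):
    # Traverse from the head until the target is found or the list ends
    current = head
    while current is not None:
        if current.data == target:
            return current.data  # target found
        current = current.next
    return None  # reached the end without finding the target

# Build the list 3 -> 5 -> 8 and search it
head = Node(3)
head.next = Node(5)
head.next.next = Node(8)
print(find_item(head, 5))  # 5
print(find_item(head, 9))  # None
```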

Time Complexity (Worst Case): O(n), where n is the number of elements in the linked list.

Explanation:
- In the worst case, you might need to look at each element in the linked list once, making the time
complexity linearly proportional to the number of elements. This is because you may need to visit each
node in the list until you find the target or reach the end.

Finding an Item in a Binary Tree:

Algorithm:
1. Start from the root of the binary tree.
2. If the root is None, the item is not in the tree; return None.
3. If the root contains the target item, return the item.
4. If the target is less than the root's value, recursively search the left subtree.
5. If the target is greater than the root's value, recursively search the right subtree.
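The recursive steps above can be sketched as follows, assuming a `TreeNode` class with `key`, `left` and `right` attributes as used elsewhere in these notes:

```python
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def find_in_tree(root, target):
    # Base case: an empty subtree means the target is not present
    if root is None:
        return None
    if target == root.key:
        return root.key
    elif target < root.key:
        return find_in_tree(root.left, target)   # search left subtree
    else:
        return find_in_tree(root.right, target)  # search right subtree

root = TreeNode(10)
root.left = TreeNode(5)
root.right = TreeNode(20)
print(find_in_tree(root, 20))  # 20
print(find_in_tree(root, 7))   # None
```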

Time Complexity (Worst Case): O(h), where h is the height of the binary tree.

Explanation:
- The time complexity depends on the height of the binary tree. In the worst case, you might need to go
from the root to a leaf node. The height of a balanced binary tree is log2(n), where n is the number of
nodes, so the time complexity is logarithmic. However, in the worst case for an unbalanced tree, the
height is n (the number of nodes), leading to linear time complexity.

Summary:
1. Linked List:
- Worst Case Time Complexity: O(n)
- Explanation: Linear time complexity means the time taken grows proportionally with the number of
elements in the linked list.

2. Binary Tree:
- Worst Case Time Complexity: O(h), where h is the height of the tree.
- Explanation: The time complexity depends on the height of the binary tree. In a balanced tree, it is
logarithmic, but in an unbalanced tree, it can be linear.

Understanding these complexities helps you analyse and compare the efficiency of algorithms,
supporting your problem-solving skills in computer science.

Let's go through algorithms to insert an item into a stack, a queue, a linked list, and a binary tree.

Inserting an Item into a Stack:

Algorithm:
1. Push the new item onto the top of the stack.

Python Implementation:

class Stack:
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

Explanation:
- The `push` operation adds the item to the top of the stack in constant time O(1).
- No complex steps are involved in stack insertion.

Exam Trick:
- Understand the Last-In-First-Out (LIFO) nature of a stack.
- Consider scenarios where a stack can be useful, such as in reversing a sequence.

Inserting an Item into a Queue:

Algorithm:
1. Enqueue (insert) the new item at the rear of the queue.

Python Implementation:
from collections import deque

class Queue:
    def __init__(self):
        self.items = deque()

    def enqueue(self, item):
        self.items.append(item)

Explanation:
- The `enqueue` operation adds the item to the rear of the queue in constant time O(1).
- The `deque` (double-ended queue) is used for efficient insertion and removal at both ends.

Exam Trick:
- Understand the First-In-First-Out (FIFO) nature of a queue.
- Consider situations where a queue is applicable, such as managing tasks in a printer queue.

Inserting an Item into a Linked List:

Algorithm:
1. Create a new node with the given item.
2. Set the next pointer of the new node to point to the current first node.
3. Update the head of the linked list to be the new node.

Python Implementation:

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_at_beginning(self, item):
        new_node = Node(item)
        new_node.next = self.head
        self.head = new_node

Explanation:
- The `insert_at_beginning` operation inserts the item at the beginning of the linked list in constant time
O(1).
- A new node is created, and its next pointer is set to the current first node. The head is then updated to
the new node.

Exam Trick:
- Understand the concept of inserting nodes at the beginning, middle, or end of a linked list.
- Practice traversing linked lists and understanding node relationships.
Inserting an Item into a Binary Tree:

Algorithm:
1. Start from the root of the tree.
2. If the tree is empty, create a new node with the given item and set it as the root.
3. If the item is less than the current node's value, recursively insert it into the left subtree.
4. If the item is greater than the current node's value, recursively insert it into the right subtree.

Python Implementation:

class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert_into_binary_tree(root, item):
    if root is None:
        return TreeNode(item)

    if item < root.key:
        root.left = insert_into_binary_tree(root.left, item)
    elif item > root.key:
        root.right = insert_into_binary_tree(root.right, item)

    return root

Explanation:
- The `insert_into_binary_tree` operation inserts the item into a binary tree in logarithmic time O(log n)
in a balanced tree.
- Recursively navigates the tree based on the comparison of the item with each node's value.

Exam Trick:
- Understand the concept of a binary search tree (BST) and how items are inserted based on their
values.
- Practice tree traversals to reinforce understanding.

Example Questions:

1. Linked List Question:
- Given the following linked list, perform the `insert_at_beginning` operation to add the item 'X'.

linked_list = LinkedList()
linked_list.head = Node(1)

2. Binary Tree Question:
- Given the following binary search tree, perform the `insert_into_binary_tree` operation to add the
item 15.

binary_tree = TreeNode(10)
binary_tree.left = TreeNode(5)
binary_tree.right = TreeNode(20)

3. Stack Question:
- Given a stack that initially contains [1, 2, 3], perform the `push` operation to add the item 4.

4. Queue Question:
- Given a queue that initially contains [A, B, C], perform the `enqueue` operation to add the item 'D'.

These questions assess the ability to apply the insertion operations to various data structures and
demonstrate understanding of their underlying algorithms.

Let's go through the answers to the questions and provide the corresponding implementation
examples for the programs.

Linked List Question:

Question:
Given the following linked list, perform the `insert_at_beginning` operation to add the item 'X'.

linked_list = LinkedList()
linked_list.head = Node(1)

Answer:

# Implementation
linked_list.insert_at_beginning('X')

Explanation:
The `insert_at_beginning` operation adds the item 'X' to the beginning of the linked list. The resulting
linked list will be:

linked_list.head = Node('X')
linked_list.head.next = Node(1)

Binary Tree Question:

Question:
Given the following binary search tree, perform the `insert_into_binary_tree` operation to add the item
15.

binary_tree = TreeNode(10)
binary_tree.left = TreeNode(5)
binary_tree.right = TreeNode(20)

Answer:
# Implementation
binary_tree = insert_into_binary_tree(binary_tree, 15)

Explanation:
The `insert_into_binary_tree` operation inserts the item 15 into the binary search tree. The resulting
tree will be:
    10
   /  \
  5    20
      /
    15
Stack Question:

Question:
Given a stack that initially contains [1, 2, 3], perform the `push` operation to add the item 4.

Answer:

# Implementation
stack = Stack()
stack.items = [1, 2, 3]
stack.push(4)

Explanation:
The `push` operation adds the item 4 to the top of the stack. The resulting stack will be:

stack.items = [1, 2, 3, 4]

Queue Question:
Given a queue that initially contains [A, B, C], perform the `enqueue` operation to add the item 'D'.

Answer:
# Implementation
queue = Queue()
queue.items = deque(['A', 'B', 'C'])
queue.enqueue('D')

Explanation:
The `enqueue` operation adds the item 'D' to the rear of the queue. The resulting queue will be:

queue.items = deque(['A', 'B', 'C', 'D'])

These implementations demonstrate the application of insertion operations for different data
structures, showcasing how items are added to a linked list, binary tree, stack, and queue.
Let's go through algorithms to delete an item from a stack, queue, and linked list, along with their
Python implementations and explanations.

Deleting an Item from a Stack:

Algorithm:
1. Pop the item from the top of the stack.

Python Implementation:

class Stack:
    def __init__(self):
        self.items = []

    def pop(self):
        if not self.is_empty():
            return self.items.pop()
        else:
            return None

    def is_empty(self):
        return len(self.items) == 0

Explanation:
- The `pop` operation removes the item from the top of the stack in constant time O(1).
- The `is_empty` method is used to check if the stack is empty before attempting to pop.

Exam Trick:
- Understand the Last-In-First-Out (LIFO) nature of a stack.
- Be aware of the possibility of attempting to pop from an empty stack, which should be handled to
avoid errors.

Deleting an Item from a Queue:

Algorithm:
1. Dequeue (remove) the item from the front of the queue.

Python Implementation:

from collections import deque

class Queue:
    def __init__(self):
        self.items = deque()

    def dequeue(self):
        if not self.is_empty():
            return self.items.popleft()
        else:
            return None

    def is_empty(self):
        return len(self.items) == 0

Explanation:
- The `dequeue` operation removes the item from the front of the queue in constant time O(1).
- The `is_empty` method is used to check if the queue is empty before attempting to dequeue.

Exam Trick:
- Understand the First-In-First-Out (FIFO) nature of a queue.
- Be cautious about attempting to dequeue from an empty queue, and handle such cases appropriately.

Deleting an Item from a Linked List:

Algorithm:
1. Start from the head of the linked list.
2. Traverse the list until you find the target item or reach the end.
3. If the target item is found, remove the node containing the item.

Python Implementation:

class LinkedList:
    def __init__(self):
        self.head = None

    def delete_item(self, target):
        current = self.head
        previous = None

        while current:
            if current.data == target:
                if previous:
                    previous.next = current.next
                else:
                    self.head = current.next
                return
            else:
                previous = current
                current = current.next

Explanation:
- The `delete_item` operation removes the target item from the linked list.
- The `previous` pointer is used to keep track of the node before the current node containing the target
item.

Exam Trick:
- Understand the concept of traversing a linked list to find and delete a specific node.
- Be aware of edge cases such as deleting the head node or the possibility of the target item not being
in the list.
Example Questions:

1. Linked List Question:
- Given the following linked list, perform the `delete_item` operation to remove the item 5.

linked_list = LinkedList()
linked_list.head = Node(3)
linked_list.head.next = Node(5)
linked_list.head.next.next = Node(8)

2. Stack Question:
- Given a stack that initially contains [1, 2, 3, 4], perform the `pop` operation to remove the top item.

3. Queue Question:
- Given a queue that initially contains ['A', 'B', 'C', 'D'], perform the `dequeue` operation to remove the
front item.

These questions assess the ability to apply deletion operations to different data structures and
demonstrate an understanding of their underlying algorithms.

Let's go through the answers to the questions and provide the corresponding implementation
examples for the programs.

Linked List Question:

Question:
Given the following linked list, perform the `delete_item` operation to remove the item 5.

linked_list = LinkedList()
linked_list.head = Node(3)
linked_list.head.next = Node(5)
linked_list.head.next.next = Node(8)

Answer:

# Implementation
linked_list.delete_item(5)

Explanation:
The `delete_item` operation removes the target item 5 from the linked list. The resulting linked list will
be:
linked_list.head = Node(3)
linked_list.head.next = Node(8)

Stack Question:
Question:
Given a stack that initially contains [1, 2, 3, 4], perform the `pop` operation to remove the top item.
Answer:
# Implementation
stack = Stack()
stack.items = [1, 2, 3, 4]
popped_item = stack.pop()

Explanation:
The `pop` operation removes the top item (4) from the stack. The resulting stack will be:
stack.items = [1, 2, 3]

The variable `popped_item` will be assigned the value 4.

Queue Question:
Given a queue that initially contains ['A', 'B', 'C', 'D'], perform the `dequeue` operation to remove the
front item.

Answer:
# Implementation
queue = Queue()
queue.items = deque(['A', 'B', 'C', 'D'])
dequeued_item = queue.dequeue()

Explanation:
The `dequeue` operation removes the front item ('A') from the queue. The resulting queue will be:
queue.items = deque(['B', 'C', 'D'])
The variable `dequeued_item` will be assigned the value 'A'.

These implementations demonstrate the application of deletion operations for different data structures,
showcasing how items are removed from a linked list, stack, and queue.

Graphs as Abstract Data Types (ADTs)

Introduction to Graphs:

Definition of a Graph:
- A graph is a versatile data structure that consists of a finite set of vertices (nodes) connected by edges.
- In graph theory, it serves as an abstract representation of relationships between different entities.

Key Features of a Graph:

1. Vertices (Nodes):
- Represent entities or objects.
- Can contain information or attributes.

2. Edges:
- Connect pairs of vertices.
- Can be directed or undirected.
3. Weighted Edges:
- Edges may have associated weights.
- Represent the cost, distance, or any relevant measure.

4. Adjacency:
- Describes the relationships between vertices.
- Vertices are adjacent if there is an edge connecting them.

5. Cycles:
- Cycles occur when a sequence of edges forms a closed loop.
- Graphs can be cyclic or acyclic.
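The features above can be captured concretely with an adjacency-list representation; the small undirected, unweighted graph below is a hypothetical example (the vertex names and the helper `are_adjacent` are illustrative, not part of the syllabus):

```python
# Undirected graph stored as an adjacency list (dictionary of sets)
graph = {
    'A': {'B', 'C'},
    'B': {'A', 'C'},
    'C': {'A', 'B', 'D'},
    'D': {'C'},
}

def are_adjacent(g, u, v):
    # Two vertices are adjacent if an edge connects them
    return v in g.get(u, set())

print(are_adjacent(graph, 'A', 'B'))  # True
print(are_adjacent(graph, 'A', 'D'))  # False
```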

Justifying the Use of Graphs:

Use Cases for Graphs:

1. Social Networks:
- Vertices: Users
- Edges: Friendships or connections
- Application: Identify mutual friends, recommend connections.

2. Transportation Networks:
- Vertices: Locations or junctions
- Edges: Roads, railroads, or flight paths
- Application: Find the shortest path, optimize routes.

3. Internet and Web Pages:
- Vertices: Web pages
- Edges: Hyperlinks
- Application: PageRank algorithm, web crawling.

4. Course Prerequisites:
- Vertices: Courses
- Edges: Prerequisites
- Application: Plan academic paths, identify dependencies.

Justification:

1. Efficient Representation:
- Graphs provide a concise representation of complex relationships.

2. Flexibility:
- Graphs can model various scenarios, from social connections to network infrastructures.

3. Optimization:
- Graph algorithms enable optimization of paths, network flows, and resource allocation.

4. Problem Solving:
- Graphs facilitate problem-solving in diverse domains, promoting algorithmic thinking.
Example Questions:

Question 1:
Explain the key features of a graph and how they can represent relationships between entities. Provide
a real-world example.

Answer:
A graph comprises vertices and edges where vertices represent entities, and edges signify relationships.
For example, in a social network graph, users are vertices, and friendships are edges. This
representation allows us to model and analyze complex relationships efficiently.

Question 2:
Justify the use of a graph data structure in the context of transportation networks. Provide specific
applications and benefits.

Answer:
Graphs efficiently model transportation networks with locations as vertices and roads/paths as edges.
Applications include finding optimal routes, minimizing travel time, and optimizing resource allocation.
The graph's flexibility and algorithms make it suitable for complex network analysis.

Question 3:
Discuss the significance of cycles in graphs. Provide an example scenario where cycles are essential for
problem-solving.

Answer:
Cycles in graphs represent closed loops or recurring patterns. In network optimization, identifying
cycles is crucial for detecting feedback loops or repeated patterns, such as in traffic flow analysis.
Understanding cycles helps prevent inefficiencies and aids in optimizing network structures.

These questions assess your understanding of graph concepts, their applications, and the ability to
justify their use in specific contexts.

Implementing ADTs from Others

Introduction:

Abstract Data Types (ADTs):
- ADTs provide a high-level description of data and operations without specifying the underlying
implementation.
- It's possible to implement one ADT using another or built-in types in various programming languages.

Implementing ADTs from Other ADTs or Built-In Types:

1. Stack:
- A stack can be implemented using an array or a linked list.
- Example Implementation:
- Using a Python list: `stack = []`

2. Queue:
- A queue can be implemented using an array or a linked list.
- Example Implementation:
- Using Python's `deque` from the `collections` module: `queue = deque([])`

3. Linked List:
- A linked list can be implemented using nodes and pointers.
- Example Implementation:

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

4. Dictionary:
- A dictionary can be implemented using hash tables or associative arrays.
- Example Implementation:
- Using Python's dictionary: `my_dict = {}`

5. Binary Tree:
- A binary tree can be implemented using nodes with left and right children pointers.
- Example Implementation:

class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

Examples of questions:

Question 1:
Explain how a stack can be implemented using a linked list. Provide an example scenario where a stack
would be beneficial.

Answer:
A stack can be implemented using a linked list by maintaining a pointer to the top of the list. Push
operations add elements at the top, and pop operations remove elements from the top. A scenario
where a stack is beneficial is the parsing of expressions where the last operator encountered needs to
be processed first.
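A minimal sketch of the stack-as-linked-list idea described in this answer (the class name `LinkedStack` is illustrative, not from the syllabus):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedStack:
    def __init__(self):
        self.top = None  # pointer to the top node

    def push(self, item):
        # New node becomes the top; its next pointer is the old top
        new_node = Node(item)
        new_node.next = self.top
        self.top = new_node

    def pop(self):
        if self.top is None:
            return None  # stack is empty
        item = self.top.data
        self.top = self.top.next  # unlink the old top node
        return item

s = LinkedStack()
s.push(1)
s.push(2)
print(s.pop())  # 2
print(s.pop())  # 1
```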

Question 2:
Describe the implementation of a queue using a Python list. Discuss the advantages of using a queue in
scenarios involving task scheduling.

Answer:
A queue can be implemented using a Python list, utilizing the `append` method for enqueueing and the
`pop(0)` method for dequeueing. In task scheduling, a queue ensures that tasks are processed in a first-
come-first-served manner, maintaining fairness and order in execution.
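The list-based queue described in this answer can be sketched as follows; note that `pop(0)` on a Python list shifts every remaining element, so dequeue is O(n) here, which is why `deque` is normally preferred (the class name `ListQueue` is illustrative):

```python
class ListQueue:
    def __init__(self):
        self.items = []

    def enqueue(self, item):
        self.items.append(item)       # add at the rear: O(1)

    def dequeue(self):
        if self.items:
            return self.items.pop(0)  # remove from the front: O(n)
        return None

q = ListQueue()
q.enqueue('task1')
q.enqueue('task2')
print(q.dequeue())  # task1
```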

Question 3:
Discuss the key features of a linked list and explain how it can be implemented in Python. Provide an
example scenario where a linked list is a suitable data structure.

Answer:
A linked list consists of nodes with data and pointers. In Python, it can be implemented using a `Node`
class. A linked list is suitable for scenarios where dynamic data storage is required, such as maintaining
a playlist where songs can be easily added or removed.

Question 4:
Explain the implementation of a dictionary using a hash table. Discuss the advantages of using a
dictionary for quick data retrieval in comparison to a list.

Answer:
A dictionary can be implemented using a hash table, where keys are hashed to index locations. In
Python, dictionaries provide constant-time average-case complexity for data retrieval. Unlike lists,
where retrieval involves linear search, dictionaries excel in scenarios where quick and direct access to
data is essential.

Question 5:
Describe the structure of a binary tree and how it can be implemented using nodes in Python. Provide
an example scenario where a binary tree is a suitable data structure.

Answer:
A binary tree consists of nodes with left and right children pointers. In Python, it can be implemented
using a `TreeNode` class. A binary tree is suitable for scenarios involving hierarchical relationships, such
as representing organizational structures where each node has at most two subordinates.

Comparing Algorithms
Introduction:

Algorithm Comparison:
- Algorithms can be compared based on various criteria, including time complexity, space complexity,
and performance.
- Big O notation is a standardized way to express time and space complexity.

Criteria for Algorithm Comparison:

1. Time Complexity:
- Refers to the amount of time an algorithm takes to complete based on the input size.
- Expressed using Big O notation (e.g., O(1), O(log n), O(n), O(n^2)).

2. Space Complexity:
- Measures the amount of memory an algorithm uses based on the input size.
- Expressed using Big O notation similar to time complexity.

3. Performance:
- Involves practical considerations such as real-world execution time and responsiveness.

Big O Notation:
1. O(1) - Constant Time:
- Running time is independent of the input size.
- Example: Accessing an element in an array by index.

2. O(log n) - Logarithmic Time:
- Running time grows logarithmically; doubling the input size adds only one more step.
- Example: Binary search.

3. O(n) - Linear Time:
- Running time grows linearly with the input size.
- Example: Linear search.

4. O(n^2) - Quadratic Time:
- Running time grows with the square of the input size.
- Example: Bubble sort.
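The difference between O(n) and O(log n) can be made concrete by counting comparisons in a linear and a binary search of the same sorted list. This is a rough illustration with a worst-case target, not a formal benchmark:

```python
def linear_search_count(data, target):
    # Count how many comparisons a linear search makes
    comparisons = 0
    for value in data:
        comparisons += 1
        if value == target:
            break
    return comparisons

def binary_search_count(data, target):
    # Count how many comparisons a binary search makes on a sorted list
    comparisons = 0
    low, high = 0, len(data) - 1
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if data[mid] == target:
            break
        elif data[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return comparisons

data = list(range(1, 1001))  # sorted list of 1000 items
print(linear_search_count(data, 1000))  # 1000 comparisons (worst case)
print(binary_search_count(data, 1000))  # 10 comparisons
```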

Example Exam Questions:

Question 1:
Compare and contrast the time complexities of two sorting algorithms - Bubble Sort (O(n^2)) and
Merge Sort (O(n log n)). Discuss the scenarios where each algorithm is most suitable.

Answer:
- Bubble Sort has quadratic time complexity and is suitable for small datasets.
- Merge Sort has a better time complexity of O(n log n) and is preferable for large datasets.

Question 2:
Explain the concept of space complexity using Big O notation. Provide an example scenario where an
algorithm with O(n) space complexity is preferred over an algorithm with O(n^2) space complexity.

Answer:
- Space complexity refers to the memory used by an algorithm.
- An algorithm with O(n) space complexity is preferred when dealing with large datasets to minimize
memory consumption.

Question 3:
Discuss the importance of performance considerations in algorithm selection. Provide examples of
scenarios where real-world execution time is crucial, and responsiveness is a key factor.

Answer:
- Performance considerations are essential for real-world applications.
- In scenarios like real-time systems or interactive applications, responsiveness is crucial, and algorithms
with lower time complexity are preferred.

Question 4:
Explain the significance of Big O notation in comparing algorithms. Provide an example scenario where
a more efficient algorithm with a lower Big O complexity offers a substantial advantage over a less
efficient algorithm.
Answer:
- Big O notation provides a standardized way to express time and space complexity.
- In scenarios with large datasets, an algorithm with O(n log n) time complexity may offer a substantial
advantage over an O(n^2) algorithm in terms of faster execution.

Recursion
Recursion in computer science is a technique where a function calls itself on progressively smaller
instances of a problem until a base case is reached.

WHAT YOU SHOULD ALREADY KNOW

Remind yourself of the definitions of the following mathematical functions, which many of you will be
familiar with, and see how they are constructed.
 Factorials
 Arithmetic sequences
 Fibonacci numbers
 Compound interest

Key terms
Recursion – a process using a function or procedure that is defined in terms of itself and calls itself.
Base case – a terminating solution to a process that is not recursive.
General case – a solution to a process that is recursively defined.
Winding – process which occurs when a recursive function or procedure is called until the base case is
found.
Unwinding – process which occurs when a recursive function finds the base case and the function
returns the values

Objectives
Show understanding of recursion
 Essential features of recursion.
 How recursion is expressed in a programming language.
 Write and trace recursive algorithms
 When the use of recursion is beneficial

Show awareness of what a compiler has to do to translate recursive programming code


 Use of stacks and unwinding

Understanding recursion
Recursion is a process using a function or procedure that is defined in terms of itself and calls itself. The
process is defined using a base case, a terminating solution to a process that is not recursive, and a
general case, a solution to a process that is recursively defined.

For example, a function to calculate a factorial for any positive whole number n! is recursive. The
definition for the function uses:

a base case of 0! = 1

a general case of n! = n * (n–1)!

This can be written in pseudocode as a recursive function.


FUNCTION factorial (number : INTEGER) RETURNS INTEGER
IF number = 0
THEN answer ← 1 // base case
ELSE
answer ← number * factorial (number - 1)
// recursive call with general case
ENDIF
RETURN answer
ENDFUNCTION

With recursive functions, the statements after the recursive function call are not executed until the base
case is reached; this is called winding. After the base case is reached and can be used in the recursive
process, the function is unwinding.

# Python program: recursive factorial function
def factorial(number):
    if number == 0:
        answer = 1
    else:
        answer = number * factorial(number - 1)
    return answer

print(factorial(0))
print(factorial(5))

Compound interest can be calculated using a recursive function, where principal is the amount of
money invested, rate is the interest multiplier (e.g. 1.05 for 5% interest) and years is the number of
years the money has been invested.
The base case is total0 = principal where years = 0.
The general case is totaln = totaln-1 * rate.

FUNCTION compoundInt(principal, rate, years : REAL) RETURNS REAL
IF years = 0
THEN
total ← principal
ELSE
total ← compoundInt(principal * rate, rate, years - 1)
ENDIF
RETURN total
ENDFUNCTION
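As a sketch, the pseudocode above translates directly into Python; here rate is assumed to be a multiplier such as 1.1 for 10% interest:

```python
def compound_int(principal, rate, years):
    if years == 0:
        return principal  # base case: no years left to apply interest
    # general case: apply one year's interest, then recurse
    return compound_int(principal * rate, rate, years - 1)

print(round(compound_int(100, 1.1, 2), 2))  # 121.0
```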

Essential Features

1. Base Case:
The base case is the terminating condition that stops the recursive calls. It represents the smallest
instance of the problem that can be solved directly without further recursion. Without a base case, the
recursive calls would continue indefinitely, leading to a stack overflow.

2. Progress Toward Base Case:


In a well-structured recursive algorithm, each recursive call should move the problem closer to the
base case. This ensures that the problem size decreases with each recursive call, eventually reaching the
base case.

3. Function Calls Itself:


The defining characteristic of recursion is that a function calls itself. This self-referential behavior
allows the algorithm to break down a complex problem into simpler, more manageable subproblems.

4. Memory Allocation (Stack):


Each recursive call adds a new frame to the call stack, storing information about the state of the
function, including local variables and the return address. The stack is crucial for keeping track of the
sequence of recursive calls and their corresponding results.

5. Elegance and Simplicity:


Recursion can lead to more elegant and concise code, especially when dealing with problems that
naturally exhibit a recursive structure. It often mirrors the mathematical definition of the problem,
making the code more intuitive and easier to understand.

6. Applicability to Problems with Recursive Structure:


Recursion is most effective when applied to problems that can be naturally divided into smaller
instances of the same problem. Examples include problems involving tree structures, mathematical
sequences, and hierarchical relationships.

7. Risk of Stack Overflow:


In recursive algorithms, excessive recursion without proper termination conditions or large input sizes
may lead to a stack overflow. It's essential to design recursive algorithms carefully to avoid this issue.

8. Performance Considerations:
While recursion can be an elegant solution, it may not always be the most efficient. Recursive
function calls involve additional overhead, and certain problems may benefit from alternative approaches
such as iteration or dynamic programming.

9. Tail Recursion Optimization (Optional):


Some programming languages support tail call optimization, a feature that optimizes tail-recursive
functions by reusing the current function's stack frame for the next recursive call. This can improve
performance and reduce the risk of stack overflow.
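A tail-recursive version of factorial passes the running result down as an accumulator, so nothing remains to compute after the recursive call returns. Note that CPython does not actually perform tail call optimization; this sketch only shows the shape such a function takes:

```python
def factorial_tail(number, accumulator=1):
    # The recursive call is the last operation performed, so a language
    # with tail call optimization could reuse the current stack frame.
    if number == 0:
        return accumulator
    return factorial_tail(number - 1, accumulator * number)

print(factorial_tail(5))  # 120
```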
Understanding and carefully implementing these features are crucial for creating correct and efficient
recursive algorithms. Recursion is a powerful tool, but it requires thoughtful design to ensure proper
functioning and to avoid common pitfalls.

Benefits of recursion
 Recursive solutions can contain fewer programming statements than an iterative solution.
 The solutions can solve complex problems in a simpler way than an iterative solution.
 However, if recursive calls to procedures and functions are very repetitive, there is a very heavy use
of the stack, which can lead to stack overflow.
 For example, factorial(100) would require 100 function calls to be placed on the stack before the
function unwinds.

How a compiler implements recursion


 Recursive code needs to make use of the stack; therefore, in order to implement recursive procedures
and functions in a high-level programming language, a compiler must produce object code that
pushes return addresses and values of local variables onto the stack with each recursive call, winding.
 The object code then pops the return addresses and values of local variables off the stack,
unwinding.
The use of stacks and unwinding is an essential aspect of how recursion is implemented and
managed in programming languages. Let's explore these concepts:

1. Function Call Stack:


- When a function is called, a new frame is created on the call stack to store information about the
function call. This includes parameters, local variables, and the return address.
- In the context of recursion, each recursive call adds a new frame to the stack. This creates a stack of
frames, each corresponding to a specific invocation of the recursive function.
- The stack structure is Last In, First Out (LIFO), meaning that the most recently called function must
complete before the previous one can resume.

2. Unwinding the Stack:


- When a base case is reached in a recursive function, the recursion starts to unwind. Unwinding
involves the sequential removal of frames from the stack.
- As each frame is removed, the associated function completes its execution, and control returns to the
previous level of recursion.
- The unwinding process continues until the initial (first) function call is reached. At this point, the
entire stack has been unwound, and the recursion is complete.

3. Memory Management:
- The stack is a region of memory allocated for function calls and local variables. Each frame on the
stack represents a specific context of a function call.
- As the recursion unwinds, the memory occupied by each frame is deallocated, making it available for
other parts of the program.
- Proper memory management is crucial to prevent stack overflow, which occurs when the stack
becomes too large due to excessive recursive calls without reaching a base case.

4. Tail Recursion Optimization:


- Tail recursion is a specific form of recursion where the recursive call is the last operation performed
in the function. Some programming languages, like certain implementations of functional languages,
support tail call optimization.
- Tail call optimization allows the system to reuse the current function's stack frame for the next
recursive call. This optimization can prevent the stack from growing excessively and improve
performance.
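As an illustrative sketch (not part of the syllabus pseudocode), the factorial function can be rewritten in tail-recursive form by carrying the partial product in an accumulator parameter. Note that CPython does not perform tail call optimization, so this version still grows the stack; languages such as Scheme would reuse the frame.

```python
def factorial_tail(n, acc=1):
    # The accumulator carries the partial product, so the recursive
    # call is the very last operation performed (tail position).
    if n == 0:
        return acc
    return factorial_tail(n - 1, acc * n)

print(factorial_tail(5))   # 120
```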
In the factorial example, each recursive call adds a new frame to the stack, and the unwinding process
begins when the base case is reached. The stack is gradually unwound, and the final result is calculated.
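The winding and unwinding can be made visible by printing a message as each frame is pushed and popped. This is a minimal Python sketch; the indentation in the output mirrors the depth of the call stack.

```python
def factorial(n, depth=0):
    indent = "  " * depth
    print(f"{indent}push frame: factorial({n})")              # winding
    result = 1 if n == 0 else n * factorial(n - 1, depth + 1)
    print(f"{indent}pop frame:  factorial({n}) = {result}")   # unwinding
    return result

factorial(3)
```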

Real-World Applications


1. File System Navigation:
Recursive algorithms are commonly used in file system navigation and operations. When you navigate
through folders on your computer, a recursive algorithm can be employed to explore each directory and
its subdirectories. This is especially useful for tasks like searching for a specific file or calculating the
total size of a directory and its subdirectories.
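The directory-size task can be sketched in Python with `os.scandir`: each subdirectory triggers a recursive call, and file sizes are summed on the way back up. The throwaway directory built below is only for demonstration.

```python
import os
import tempfile

def dir_size(path):
    # Recursively sum the sizes of all files in path and its subdirectories.
    total = 0
    for entry in os.scandir(path):
        if entry.is_dir(follow_symlinks=False):
            total += dir_size(entry.path)          # recurse into subdirectory
        elif entry.is_file(follow_symlinks=False):
            total += entry.stat().st_size
    return total

# Demo on a throwaway directory tree: one 5-byte file, one 3-byte file.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
with open(os.path.join(root, "a.txt"), "wb") as f:
    f.write(b"12345")
with open(os.path.join(root, "sub", "b.txt"), "wb") as f:
    f.write(b"123")

print(dir_size(root))   # 8
```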

2. Sorting Algorithms:
Recursive algorithms are commonly used in sorting algorithms, such as the famous Merge Sort and
Quick Sort. These algorithms break down the sorting problem into smaller subproblems, sort each
subproblem, and then combine the results to achieve a fully sorted list. Recursive sorting algorithms are
widely used in various applications, including data analysis and database management.
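Merge Sort illustrates the divide-and-conquer pattern described above: each half is sorted by a recursive call, and the sorted halves are then merged. A minimal Python sketch:

```python
def merge_sort(items):
    # Base case: a list of length 0 or 1 is already sorted.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # recursively sort each half
    right = merge_sort(items[mid:])
    # Merge the two sorted halves back together.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5]))   # [1, 2, 5, 5, 9]
```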

3. Fractal Generation:
Fractals, which are complex geometric patterns, are often generated using recursive algorithms. The
Mandelbrot set is a well-known example of a fractal that can be generated using recursion. In this
application, the algorithm repeatedly applies a mathematical formula to generate intricate and self-
replicating patterns.
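A simpler fractal than the Mandelbrot set, the Cantor set, shows the self-replicating idea directly: each segment is replaced by its two outer thirds, recursively. This sketch returns the segments as (start, length) pairs rather than drawing them.

```python
def cantor(start, length, depth):
    # Base case: at depth 0 the segment is kept as-is.
    if depth == 0:
        return [(start, length)]
    third = length / 3
    # Recursive case: keep the outer thirds, discard the middle third.
    return (cantor(start, third, depth - 1) +
            cantor(start + 2 * third, third, depth - 1))

print(len(cantor(0.0, 1.0, 3)))   # 8 segments after 3 subdivisions
```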

Exam Questions

From Specimen paper 9618/3 number 7


7. An ordered binary tree Abstract Data Type (ADT) has these associated operations:

• create tree
• add new item to tree
• traverse tree

A student is designing a program that will implement a binary tree ADT as a linked list of ten
nodes.

Each node consists of data, a left pointer and a right pointer.

A program is to be written to implement the tree ADT. The variables and procedures to be used are listed
below:

Identifier Data type Description


Node RECORD Data structure to store node data and associated
pointers.
LeftPointer INTEGER Stores index of start of left subtree.
RightPointer INTEGER Stores index of start of right subtree.
Data STRING Data item stored in node.
Tree ARRAY Array to store nodes.
NewDataItem STRING Stores data to be added.
FreePointer INTEGER Stores index of start of free list.
RootPointer INTEGER Stores index of root node.
NewNodePointer INTEGER Stores index of node to be added.
CreateTree() Procedure initialises the root pointer and free pointer
and links all nodes together into the free list.
AddToTree() Procedure to add a new data item in the correct
position in the binary tree.
FindInsertionPoint() Procedure that finds the node where a new node is
to be added.
Procedure takes the parameter NewDataItem and
returns two parameters:
• Index, whose value is the index of the node where the new node is to be added
• Direction, whose value is the direction of the pointer (“Left” or “Right”).
These pseudocode declarations and this procedure can be used to create an empty tree with ten nodes.

TYPE Node
   DECLARE LeftPointer : INTEGER
   DECLARE RightPointer : INTEGER
   DECLARE Data : STRING
ENDTYPE

DECLARE Tree : ARRAY[0 : 9] OF Node
DECLARE FreePointer : INTEGER
DECLARE RootPointer : INTEGER

PROCEDURE CreateTree()
   DECLARE Index : INTEGER
   RootPointer ← -1
   FreePointer ← 0
   FOR Index ← 0 TO 9   // link nodes
      Tree[Index].LeftPointer ← Index + 1
      Tree[Index].RightPointer ← -1
   NEXT
   Tree[9].LeftPointer ← -1
ENDPROCEDURE
(a) Complete the pseudocode to add a data item to the tree.

PROCEDURE AddToTree(BYVALUE NewDataItem : STRING)


// if no free node report an error

IF FreePointer ............................................................................................................
THEN
OUTPUT "No free space left"
ELSE
// add new data item to first node in the free list
NewNodePointer ← FreePointer

.............................................................................................................................
// adjust free pointer

FreePointer ← ..............................................................................................
// clear left pointer

Tree[NewNodePointer].LeftPointer ← ................................................
// is tree currently empty?

IF ......................................................................................................................
THEN // make new node the root node

..............................................................................................................
ELSE // find position where new node is to be added
Index ← RootPointer
CALL FindInsertionPoint(NewDataItem, Index, Direction)
IF Direction = "Left"
THEN // add new node on left

...........................................................................................
ELSE // add new node on right

...........................................................................................

ENDIF
ENDIF

ENDIF
ENDPROCEDURE [8]
(b) The traverse tree operation outputs the data items in alphabetical order.
This can be written as a recursive solution.

Complete the pseudocode for the recursive procedure TraverseTree.

PROCEDURE TraverseTree(BYVALUE Pointer : INTEGER)

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

...................................................................................................................................................

ENDPROCEDURE [5]
ANSWER:

Question Answer Marks


7(a) PROCEDURE AddToTree(BYVALUE NewDataItem : STRING)                            8
        // if no free node report an error
        IF FreePointer = -1                                                     [1]
           THEN
              ERROR("No free space left")
           ELSE // add new data item to first node in the free list
              NewNodePointer ← FreePointer
              Tree[NewNodePointer].Data ← NewDataItem                           [1]
              // adjust free pointer
              FreePointer ← Tree[FreePointer].LeftPointer                       [1]
              // clear left pointer
              Tree[NewNodePointer].LeftPointer ← -1                             [1]
              // is tree currently empty?
              IF RootPointer = -1                                               [1]
                 THEN // make new node the root node
                    RootPointer ← NewNodePointer                                [1]
                 ELSE // find position where new node is to be added
                    Index ← RootPointer
                    CALL FindInsertionPoint(NewDataItem, Index, Direction)
                    IF Direction = "Left"
                       THEN // add new node on left
                          Tree[Index].LeftPointer ← NewNodePointer              [1]
                       ELSE // add new node on right
                          Tree[Index].RightPointer ← NewNodePointer             [1]
                    ENDIF
              ENDIF
        ENDIF
     ENDPROCEDURE
7(b) Marks awarded for:                                                           5
     • test for base case [1]
     • recursive call for left pointer [1]
     • output data [1]
     • recursive call for right pointer [1]
     • order: visit left, output, visit right [1]

     IF Pointer <> -1 THEN
        TraverseTree(Tree[Pointer].LeftPointer)
        OUTPUT Tree[Pointer].Data
        TraverseTree(Tree[Pointer].RightPointer)
     ENDIF
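As a sketch, the mark-scheme traversal can be translated into runnable Python over the same array-of-nodes idea. The three-node tree below is a hypothetical example for testing, not data from the question; -1 marks a missing child, matching the pseudocode.

```python
# Each node is (data, left_index, right_index); -1 means no child.
# Hypothetical tree: "Dog" at the root, "Cat" left, "Fox" right.
tree = [
    ("Dog", 1, 2),    # index 0: root
    ("Cat", -1, -1),  # index 1: left child
    ("Fox", -1, -1),  # index 2: right child
]

def traverse_tree(pointer, out):
    # Mirrors the mark scheme: test base case, visit left, output, visit right.
    if pointer != -1:
        data, left, right = tree[pointer]
        traverse_tree(left, out)
        out.append(data)
        traverse_tree(right, out)

result = []
traverse_tree(0, result)
print(result)   # ['Cat', 'Dog', 'Fox'] — alphabetical order
```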

Prepared By: Sir R. Mazambara
Appendix
1. Factorials:
def factorial(n):
    # Base case: factorial of 0 is 1
    if n == 0:
        return 1
    else:
        # Recursive case: n! = n * (n-1)!
        result = n * factorial(n - 1)
        return result

# Example usage:
result = factorial(5)
print(f"The factorial of 5 is {result}")
Explanation:
- The `factorial` function calculates the factorial of a given number `n`.
- The base case checks if `n` is 0 and returns 1, as the factorial of 0 is defined to be 1.
- The recursive case calculates the factorial using the formula `n! = n * (n-1)!`. It calls
itself with the argument `n - 1`.
- The example calculates and prints the factorial of 5.

2. Arithmetic Sequences:
def arithmetic_sequence(a, d, n):
    # Formula for the nth term of an arithmetic sequence: a_n = a + (n-1)d
    nth_term = a + (n - 1) * d
    return nth_term

# Example usage:
result = arithmetic_sequence(3, 2, 5)
print(f"The 5th term in the arithmetic sequence is {result}")

Explanation:
- The `arithmetic_sequence` function calculates the nth term of an arithmetic
sequence.
- The parameters `a`, `d`, and `n` represent the first term, common difference, and
term number, respectively.
- The formula `a_n = a + (n-1)d` is used to calculate the nth term.
- The example calculates and prints the 5th term in an arithmetic sequence with a first
term of 3, common difference of 2.
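Since these notes focus on recursion, the same nth term can also be computed recursively: each term is the previous term plus the common difference. This sketch gives the same answer as the closed-form version above.

```python
def arithmetic_sequence_recursive(a, d, n):
    # Base case: the first term is simply a.
    if n == 1:
        return a
    # Recursive case: a_n = a_(n-1) + d
    return arithmetic_sequence_recursive(a, d, n - 1) + d

print(arithmetic_sequence_recursive(3, 2, 5))   # 11, matching a + (n-1)d
```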

3. Fibonacci Numbers:
def fibonacci(n):
    # Base cases: fibonacci(0) = 0, fibonacci(1) = 1
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        # Recursive case: fibonacci(n) = fibonacci(n-1) + fibonacci(n-2)
        result = fibonacci(n - 1) + fibonacci(n - 2)
        return result

# Example usage:
result = fibonacci(6)
print(f"The 6th Fibonacci number is {result}")

Explanation:
- The `fibonacci` function calculates the nth Fibonacci number.
- The base case checks if `n` is 0 or 1 and returns the corresponding Fibonacci values.
- The recursive case calculates the Fibonacci number using the formula `fibonacci(n)
= fibonacci(n-1) + fibonacci(n-2)`.
- The example calculates and prints the 6th Fibonacci number.
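The naive version above recomputes the same subproblems repeatedly, giving exponential time. Caching results (memoization) makes each value computed once; this sketch uses Python's `functools.lru_cache` for the cache.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_memo(n):
    # Caching each result turns the O(2^n) recursion into O(n) calls.
    if n < 2:
        return n
    return fibonacci_memo(n - 1) + fibonacci_memo(n - 2)

print(fibonacci_memo(6))    # 8
print(fibonacci_memo(50))   # fast, where the naive version would be infeasible
```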

4. Compound Interest:
def compound_interest(principal, rate, time):
    # Compound interest formula, compounded once per year (n = 1):
    # A = P * (1 + r/100)^t
    # Where:
    #   A is the final amount
    #   P is the principal amount
    #   r is the annual interest rate (as a percentage)
    #   t is the number of years
    A = principal * (1 + rate / 100) ** time
    return A

# Example usage:
result = compound_interest(1000, 5, 3)
print(f"The compound interest after 3 years is {result - 1000}")

Explanation:
- The `compound_interest` function calculates the compound interest using the
compound interest formula.
- The parameters `principal`, `rate`, and `time` represent the principal amount, annual
interest rate, and the number of years, respectively.
- The general formula is `A = P * (1 + r/n)^(nt)`, where `A` is the final amount; the
code assumes interest is compounded once per year (`n = 1`), which simplifies to
`A = P * (1 + r/100)^t`.
- The example calculates and prints the compound interest after 3 years for a principal
of $1000 at an annual interest rate of 5%.
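Compound interest also has a natural recursive definition: the amount after `t` years is the amount after `t - 1` years grown by one year's interest. This sketch assumes annual compounding, like the function above.

```python
def compound_interest_recursive(principal, rate, time):
    # Base case: after 0 years the amount is just the principal.
    if time == 0:
        return principal
    # Recursive case: grow last year's amount by one year of interest.
    return compound_interest_recursive(principal, rate, time - 1) * (1 + rate / 100)

result = compound_interest_recursive(1000, 5, 3)
print(f"Amount after 3 years: {result:.2f}")
```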

Converting a program from recursive to iterative (and vice versa) involves
restructuring the logic to use loops (iterative) or function calls (recursive) in a
different way. Here are the steps and procedures for each conversion, along with an
example in Python:

Converting from Recursive to Iterative:

1. Identify Base Case:


- Identify the base case in the recursive function.

2. Initialize Variables:
- Replace function parameters with loop variables.
- Initialize these variables with values equivalent to the initial recursive call.

3. Use a Loop:
- Replace the recursive calls with a loop.
- Update loop variables according to the logic of the recursive calls.

4. Terminate Loop:
- Ensure that the loop terminates when the base case is satisfied.

Example (Factorial in Recursive to Iterative):

# Recursive Factorial
def factorial_recursive(n):
    if n == 0:
        return 1
    else:
        return n * factorial_recursive(n - 1)

# Iterative Factorial
def factorial_iterative(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# Example Usage
recursive_result = factorial_recursive(5)
iterative_result = factorial_iterative(5)

print(f"Recursive Factorial: {recursive_result}")
print(f"Iterative Factorial: {iterative_result}")

Converting from Iterative to Recursive:

1. Identify Loop Variables:


- Identify the variables modified in the loop.

2. Transform Loop Logic:


- Transform the logic inside the loop into a recursive call.
- Use function parameters instead of loop variables.

3. Base Case in Loop:


- Identify the condition for terminating the loop and use it as the base case for
recursion.

4. Recursive Call:
- Replace the loop with a recursive call to the function.

Example (Factorial in Iterative to Recursive):

# Iterative Factorial
def factorial_iterative(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# Recursive Factorial
def factorial_recursive(n, result=1):
    if n == 0:
        return result
    else:
        return factorial_recursive(n - 1, result * n)

# Example Usage
iterative_result = factorial_iterative(5)
recursive_result = factorial_recursive(5)

print(f"Iterative Factorial: {iterative_result}")
print(f"Recursive Factorial: {recursive_result}")

These examples illustrate the process of converting a factorial function from recursive
to iterative and vice versa in Python. The logic remains the same, but the structure of
the code is adapted to either loops or recursive calls.

Recursion and iteration are both programming techniques used to solve problems and
execute algorithms, but they differ in their approaches. Here's a comparison of the use
of recursion and iteration:

Recursion:

1. Definition:
- Recursion involves a function calling itself to solve a smaller instance of the same
problem.

2. Structure:
- Recursive functions have a base case that stops the recursion and one or more
recursive cases that reduce the problem size.

3. Readability:
- Recursion can lead to more elegant and readable code, especially when dealing
with problems that naturally exhibit a recursive structure.

4. Memory Usage:
- Recursive calls use the call stack, which may lead to stack overflow for deeply
nested calls if not optimized (e.g., tail recursion optimization).
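The stack-overflow risk can be demonstrated directly. CPython caps recursion depth (`sys.getrecursionlimit()`, typically 1000) and raises `RecursionError` rather than crashing; this sketch deliberately exceeds the limit.

```python
import sys

def depth(n):
    # Each recursive call adds a stack frame; CPython enforces a depth
    # limit (sys.getrecursionlimit()) to avoid a real stack overflow.
    if n == 0:
        return 0
    return 1 + depth(n - 1)

print(depth(10))   # 10 — well within the limit

try:
    depth(10 ** 6)   # far beyond the default limit
except RecursionError:
    print("stack overflow caught")
```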

5. Examples:

- Common examples include factorial calculation, tree traversal, and problems with
a recursive structure (e.g., Fibonacci sequence).

6. Ease of Implementation:
- Some problems are naturally expressed with recursion, making the implementation
more straightforward and intuitive.

Iteration:

1. Definition:
- Iteration involves executing a set of instructions repeatedly using loops until a
certain condition is met.

2. Structure:
- Iterative structures, like `for` and `while` loops, control the flow of execution until
a specified condition is satisfied.

3. Readability:
- Iteration can sometimes result in more verbose code compared to recursion,
especially for certain types of problems.

4. Memory Usage:
- Iterative approaches typically use less memory since they don't rely on the call
stack as heavily as recursion.

5. Examples:
- Common examples include searching and sorting algorithms, as well as tasks that
involve repeated execution of a set of instructions.

6. Ease of Implementation:
- Iterative solutions are often more straightforward for certain types of problems,
particularly those that don't have a natural recursive structure.

Considerations for Choosing Between Recursion and Iteration:

- Base Case:
- Problems that naturally lend themselves to a base case and recursive structure are
often more suitable for recursion.
- Problems with well-defined iteration conditions may be better solved using loops.

- Performance:
- Recursive solutions may have higher overhead due to function calls and the use of
the call stack.
- Iterative solutions may be more performant in certain cases.

- Readability vs. Efficiency:
- Choose the approach that balances code readability with efficiency based on the
specific requirements of the problem.

In summary, both recursion and iteration are valuable tools, and the choice between
them depends on the nature of the problem, performance considerations, and personal
coding style. Some problems are naturally solved with recursion, while others are
more suited for iterative approaches.

When a compiler encounters recursive programming code, it needs to perform several
tasks to translate the code into machine code or an intermediate representation.
Here's an overview of the steps a compiler typically takes when dealing with
recursive code:

1. Lexical Analysis:
- The compiler starts with lexical analysis or scanning, breaking the source code
into tokens (keywords, identifiers, operators, etc.).

2. Syntax Analysis (Parsing):


- The syntax analysis phase verifies that the code follows the grammar rules of the
programming language.
- Recursive descent parsing or other parsing techniques are employed to build a
syntax tree.

3. Semantic Analysis:
- The compiler checks for semantic errors and ensures that the code adheres to the
language's semantics.
- It performs type checking and other analyses to catch potential issues.

4. Intermediate Code Generation:


- The compiler generates an intermediate code representation of the source code.
This intermediate code is often closer to the machine code but abstracted for further
optimization.

5. Optimization:
- The compiler applies various optimization techniques to improve the efficiency of
the code.
- Common optimizations include constant folding, loop unrolling, and inlining.
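Constant folding can be observed in CPython itself, which evaluates constant expressions at compile time. This is a CPython-specific illustration: the folded value appears directly in the function's constants table, so no multiplication happens at run time.

```python
def f():
    # CPython's compiler folds the constant expression 2 * 3 into 6
    # before the bytecode is emitted (constant folding).
    return 2 * 3

# The folded constant 6 is stored in the code object's constants table.
print(6 in f.__code__.co_consts)   # True
```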

6. Code Generation:
- The compiler translates the optimized intermediate code into machine code or
another target code.
- This involves mapping the abstracted intermediate code to the specific instructions
of the target architecture.

7. Register Allocation:
- The compiler allocates registers for variables and manages the storage of data.
- This phase aims to optimize the use of CPU registers to minimize memory access.

8. Code Linking (if applicable):


- For compiled languages that use separate compilation units, the compiler may link
the generated code with other modules or libraries.

9. Error Handling:

- The compiler provides meaningful error messages if it encounters issues during
the translation process.
- It helps developers identify and fix errors in their code.

10. Debug Information Generation (optional):


- The compiler may generate debug information to assist in debugging the compiled
code. This information includes mapping machine code back to the original source
code.

11. Output Generation:


- The final step involves generating the executable file or another target output
format, depending on the compiler's purpose.

In the context of recursive programming, the compiler must handle recursive function
calls correctly, ensuring that the generated machine code maintains the necessary
stack frames and manages the call stack appropriately. This involves tracking function
parameters, local variables, and the return address for each recursive invocation. The
compiler must also optimize the code to minimize unnecessary stack operations and
improve overall performance.
