
University M’hamed Bougara-Boumerdes

Faculty of Sciences

IT department

Algorithm & Complexity

Course notes

Directed by:
Benabderrezak Youcef
- PhD student in Cyber Security and future Professor -
y.brnabderrezak@univ-boumerdes.dz

Telegram : https://t.me/infoumbb2

2023 / 2024
Table of contents

1. What is an Algorithm ? .............................................................................................3


2. Characteristics of an algorithm ...................................................................................3
3. Algorithm Complexity ..............................................................................................4
3.1. Definition.............................................................................................................4
4. Why do we need to understand the Complexity? ....................................................5
5. Complexity calculus .................................................................................................5
5.1. What is Complexity calculus ? ..............................................................................5
5.2. Big O classes .......................................................................................................5
5.3. How to calculate Big O – the basics ...................................................................7
6. Sorting Algorithms ...................................................................................................9
6.1. Bubble Sort ..........................................................................................................9
6.2. Selection Sort ....................................................................................................10
6.3. Insertion Sort .....................................................................................................10
6.4. Merge Sort .........................................................................................................11
6.5. QuickSort...........................................................................................................12
7. Trees ........................................................................................................................15
7.1. Terminologies associated with Binary Trees ...................................................15
7.2. Binary trees ........................................................................................................16
a. Full Binary Tree .............................................................................................16
b. Complete Binary Tree ...................................................................................16
c. Perfect Binary Tree ........................................................................................16
d. Balanced Binary tree .....................................................................................17
e. Degenerate Binary Tree .................................................................................18
7.3. CRUD operations in Binary trees .....................................................................18
8. Implementation of trees ..........................................................................................20
8.1. Implementation of general trees ......................................................................20
8.2. Implementation of Binary Search Trees (BST) ................................................22
8.3. Implementation of balanced binary search trees ................................................23
9. Heap Data structure ................................................................................................25
9.1. Definition of Heap.............................................................................................25
9.2. Description of Heap ..........................................................................................26
9.3. Some operations on a heap ....................................................................................26
a. Insertion (enqueue) ........................................................................................26

b. Deletion (Removal of the heap root, dequeue) .............................................27
9.4. Heap Sort ...........................................................................................................27
10. Graphs ...................................................................................................................29
10.1. What is a Graph? ..........................................................................................29
10.2. Particular graphs ............................................................................................30
a. Undirected Graph...........................................................................................30
b. Directed Graph (Digraph)..............................................................................30
c. Weighted Graph .............................................................................................30
d. Unweighted Graph .........................................................................................31
f. Cyclic Graph ..................................................................................................31
g. Acyclic Graph ................................................................................................32
h. Complete Graph .............................................................................................32
i. Sparse Graph .....................................................................................................33
j. Dense Graph ......................................................................................................33
k. Bipartite Graph ..............................................................................................33
l. Connected Graph ...............................................................................................34
m. Tree ................................................................................................................34
n. Forest..............................................................................................................34
10.3. Graph representation ......................................................................................35
a. Adjacency matrix ...........................................................................................35
b. Adjacency list: ...............................................................................................36
c. Incidence matrix ............................................................................................36
d. Edge list .........................................................................................................37
10.4. Graph Traversal .............................................................................................37
a. Breadth-first search (BFS) .............................................................................37
b. Depth-first search (DFS) ...............................................................................38
10.5. Dijkstra algorithm ..........................................................................................39
a. Dijkstra pseudo algorithm .............................................................................39

1. What is an Algorithm ?
 An algorithm is a set of instructions or a step-by-step procedure for solving a
problem or performing a specific task.
 It is a well-defined, finite sequence of computational steps that takes some input
and produces an output

Algorithms are used in a wide range of applications, including computer programming,
mathematics, engineering, and data analysis. They are essential for solving complex
problems, processing large amounts of data, and automating tasks.

2. Characteristics of an algorithm
For a set of instructions to qualify as an algorithm, it must have the following
characteristics:

1. Well-Defined Inputs: If an algorithm asks for inputs, it should clearly state what
inputs it needs, and they should make sense.
2. Well-Defined Outputs: The algorithm must clearly say what output it will
produce, and it should be a sensible and well-defined result.
3. Finiteness: The algorithm should have a clear endpoint; it should not go on
forever.

4. Feasible: The algorithm should be practical and able to be executed with the
resources available.
5. Language Independent: The algorithm should be able to be understood and
implemented in any programming language.
6. Clear and Unambiguous: The instructions or descriptions in an algorithm must be
easy to understand and must not have any ambiguity or confusion in their
meaning.

3. Algorithm Complexity
3.1. Definition

 Algorithm complexity, which covers both time complexity and space complexity,
is a way to measure the efficiency of an algorithm in terms of the resources it
requires to solve a problem.
 Time complexity measures the amount of time an algorithm takes to complete its
task, while space complexity measures the amount of memory the algorithm uses.

4. Why do we need to understand the Complexity?
 The performance of an algorithm influences the execution of the program.
 Hence, understanding the complexity of an algorithm is significant.
 Thus the developer should be aware of how to find the time complexity of an
algorithm.

 The complexity of an algorithm helps the user to :


1. Utilize the algorithm with the optimized data resources.
2. Estimate the time taken by the algorithm to produce the results.
3. Ensure the results produced by the execution of the algorithm are valid and not
outdated.
4. Calculate the space complexity or storage (memory required) for the algorithm.

5. Complexity calculus
5.1. What is Complexity calculus ?
 Algorithm complexity calculus, also known as time complexity analysis, is a
way to measure and understand how the performance of an algorithm changes
with the size of its input.
 It helps us predict how much time an algorithm will take to solve a problem as
the problem's input size increases.
 The most common notation used for expressing algorithm complexity is Big O notation.

5.2. Big O classes

 Big O classes, also known as time complexity classes, are categories that group
algorithms based on their growth rates or upper bounds of time complexity as the
input size increases.
 These classes help us compare and analyze the efficiency of algorithms. Here are
some commonly encountered Big O classes: O(1), O(log n), O(n), O(n log n),
O(n^2), O(n^c), O(2^n), O(n!).

5.2.1. O(1) - Constant Time Complexity
 Algorithms with constant time complexity have a fixed execution time
regardless of the input size.
 These algorithms perform a constant number of operations, making them
highly efficient.
 Example: Accessing an element from an array by its index.

5.2.2. O(log n) - Logarithmic Time Complexity


 Algorithms with logarithmic time complexity grow slowly as the input
size increases.
 They divide the input into smaller parts at each step.
 Example: Binary search on a sorted array.
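
As an illustration, here is a minimal C++ sketch of binary search on a sorted array; the function name binarySearch and the use of std::vector are just one possible way to write it.

#include <vector>

// Returns the index of 'target' in the sorted vector 'arr', or -1 if it is absent.
// Each iteration halves the search range, so the running time is O(log n).
int binarySearch(const std::vector<int>& arr, int target) {
    int low = 0, high = (int)arr.size() - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // Avoids overflow of (low + high)
        if (arr[mid] == target) {
            return mid;                   // Found the target
        } else if (arr[mid] < target) {
            low = mid + 1;                // Search the right half
        } else {
            high = mid - 1;               // Search the left half
        }
    }
    return -1; // Target not present
}
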
5.2.3. O(n) - Linear Time Complexity
 Algorithms with linear time complexity have their execution time directly
proportional to the input size.
 As the input grows, the execution time increases linearly.
 Example: Linear search in an unsorted list.

5.2.4. O(n log n) - Linearithmic Time Complexity


 Algorithms with linearithmic time complexity grow faster than linear but
slower than quadratic.
 They are often found in efficient sorting and searching algorithms.
 Example: Merge Sort and QuickSort.

5.2.5. O(n^2) - Quadratic Time Complexity
 Algorithms with quadratic time complexity have their execution time
proportional to the square of the input size.
 As the input grows, the execution time increases quadratically.
 Example: Bubble Sort and Selection Sort.

5.2.6. O(n^c) - Polynomial Time Complexity


 Algorithms with polynomial time complexity have their execution time
proportional to the input size raised to a constant power 'c'.
 Example: Matrix multiplication using the Strassen algorithm.

5.2.7. O(2^n) - Exponential Time Complexity


 Algorithms with exponential time complexity grow rapidly with the input
size.
 They are generally inefficient for larger inputs.
 Example: Brute-force approaches for some combinatorial problems.

5.2.8. O(n!) - Factorial Time Complexity


 Algorithms with factorial time complexity have an execution time that
grows extremely fast with the input size, making them highly inefficient.
 Example: Brute-force solutions for some permutation-related problems.

5.3. How to calculate Big O – the basics


1. Break your algorithm/function into individual operations
2. Calculate the Big O of each operation
3. Add up the Big O of each operation together
4. Remove the constants
5. Find the highest order term — this will be what we consider the Big O of our
algorithm/function

Let's apply the steps to analyze the "Find max in array" algorithm to calculate its Big O
complexity:

1. Break the algorithm into individual operations:
Initialize a variable to store the maximum value.
Iterate through the array to find the maximum value.
2. Calculate the Big O of each operation:
Initializing a variable takes constant time, O(1).
Iterating through the array takes linear time, O(n).
3. Add up the Big O of each operation together:
The algorithm has two operations, one with O(1) and the other with O(n).
Total Big O: O(1) + O(n)
4. Remove the constants:
In Big O notation, constants are dropped since they don't significantly
affect the growth rate.
Simplified Big O: O(n)
5. Find the highest order term:
The highest order term in the simplified Big O is "n".
Final Big O: O(n)

Therefore, the Big O complexity of the "Find max in array" algorithm is O(n),
meaning its time complexity grows linearly with the size of the input array.
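
To make this analysis concrete, here is a minimal C++ sketch of the "find max in array" routine discussed above (the name findMax is only illustrative), with the cost of each step noted in the comments:

#include <vector>

// Returns the largest element of a non-empty array.
int findMax(const std::vector<int>& arr) {
    int maxValue = arr[0];                    // O(1): initialize the maximum
    for (size_t i = 1; i < arr.size(); i++) { // O(n): one pass over the input
        if (arr[i] > maxValue) {
            maxValue = arr[i];                // O(1) work per element
        }
    }
    return maxValue;                          // Total: O(1) + O(n) = O(n)
}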

6. Sorting Algorithms
A sorting algorithm is an algorithm that arranges elements in a specific
order, such as ascending or descending order.
Sorting is a fundamental operation in computer science and plays a crucial
role in various applications and tasks.
There are numerous sorting algorithms, each with its unique approach and
time complexity.
The primary goal of a sorting algorithm is to reorder elements in a
collection (e.g., an array or a list) so that they follow a specific order.

6.1. Bubble Sort


Definition
 Bubble Sort is a simple comparison-based sorting algorithm that repeatedly
steps through the list, compares adjacent elements, and swaps them if they
are in the wrong order.
 The pass through the list is repeated until the list becomes sorted.
Complexity: O(n^2)
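
A minimal C++ sketch of Bubble Sort as described above (the early-exit flag is an optional optimization, not part of the basic definition):

#include <utility> // for std::swap

// Repeatedly compare adjacent elements and swap them if they are in the wrong order.
// After pass i, the i largest elements are already in their final positions.
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break; // No swaps in this pass: the list is already sorted
    }
}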

6.2. Selection Sort
Definition
Selection Sort is a simple comparison-based sorting algorithm that divides
the input list into a sorted and an unsorted part.
It repeatedly finds the minimum element from the unsorted part and swaps
it with the first element of the unsorted part.
Complexity: O(n^2)
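
A minimal C++ sketch of Selection Sort as described above (one possible implementation among many):

#include <utility> // for std::swap

// Repeatedly find the minimum of the unsorted part and move it to the front.
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;                 // Assume the first unsorted element is the minimum
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex]) {
                minIndex = j;             // Remember the smaller element
            }
        }
        std::swap(arr[i], arr[minIndex]); // Grow the sorted part by one element
    }
}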

6.3. Insertion Sort


Definition
Insertion Sort is a simple comparison-based sorting algorithm that builds
the final sorted array one item at a time.
It takes elements from the unsorted part and inserts them into their
correct position in the sorted part.
Complexity: O(n^2)
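
A minimal C++ sketch of Insertion Sort as described above:

// Build the sorted part one element at a time, shifting larger elements to the right
// to open the correct slot for the current element.
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];        // Element to insert into the sorted part
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j]; // Shift larger elements one position to the right
            j--;
        }
        arr[j + 1] = key;        // Insert the element into its correct position
    }
}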

6.4. Merge Sort
Definition
Merge Sort is a comparison-based divide-and-conquer sorting algorithm.
It divides the input list into two halves, recursively sorts them, and then
merges the sorted halves.
Complexity: O(n log n)

#include <stdio.h>
// Merge helper function for Merge Sort
void merge(int arr[], int l, int m, int r) {
int i, j, k;
int n1 = m - l + 1;
int n2 = r - m;

// Create temporary arrays to store the left and right subarrays


int L[n1], R[n2];
// Copy data from the original array to the temporary arrays
for (i = 0; i < n1; i++) {
L[i] = arr[l + i];
}
for (j = 0; j < n2; j++) {
R[j] = arr[m + 1 + j];
}
// Merge the two temporary arrays back into the original array
i = 0; // Initial index of the left subarray
j = 0; // Initial index of the right subarray
k = l; // Initial index of the merged subarray
while (i < n1 && j < n2) {
// Compare elements of the two subarrays and merge them in ascending order
if (L[i] <= R[j]) {
arr[k] = L[i];
i++;

} else {
arr[k] = R[j];
j++;
}
k++;
}
// Copy the remaining elements, if any, of the left subarray
while (i < n1) {
arr[k] = L[i];
i++;
k++;
}
// Copy the remaining elements, if any, of the right subarray
while (j < n2) {
arr[k] = R[j];
j++;
k++;
}
}
// Merge Sort function
void mergeSort(int arr[], int l, int r) {
if (l < r) {
int m = l + (r - l) / 2;
// Recursively sort the left and right halves
mergeSort(arr, l, m);
mergeSort(arr, m + 1, r);
// Merge the sorted halves
merge(arr, l, m, r);
}
}
int main() {
int arr[] = {38, 27, 43, 3, 9, 82, 10};
int size = sizeof(arr) / sizeof(arr[0]);
printf("Original array: ");
for (int i = 0; i < size; i++) {
printf("%d ", arr[i]);
}
printf("\n");
mergeSort(arr, 0, size - 1);
printf("Sorted array: ");
for (int i = 0; i < size; i++) {
printf("%d ", arr[i]);
}
printf("\n");
return 0;
}

6.5. QuickSort
Definition

QuickSort is a comparison-based divide-and-conquer sorting algorithm.
It selects a pivot element and partitions the array around the pivot,
recursively sorting the sub-arrays.
Complexity: O(n log n)

#include <stdio.h>

// Partition helper function for QuickSort


// This function selects a pivot element and partitions the array into two parts:
// -Elements less than or equal to the pivot on the left side.
// - Elements greater than the pivot on the right side.
// It returns the index of the pivot after partitioning.
int partition(int arr[], int low, int high) {
int pivot = arr[high]; // Choose the last element as the pivot
int i = low - 1; // Index of the smaller element
// Traverse the subarray from 'low' to 'high - 1'
for (int j = low; j < high; j++) {
// If the current element is smaller than or equal to the pivot
if (arr[j] <= pivot) {
i++; // Increment the index of the smaller element
// Swap arr[i] and arr[j] to move the smaller element to the left side
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
}
// Swap arr[i + 1] and arr[high] to place the pivot in its correct position
int temp = arr[i + 1];
arr[i + 1] = arr[high];
arr[high] = temp;
return i + 1; // Return the index of the pivot
}
// QuickSort function

// This function recursively sorts the subarrays by selecting a pivot element and
// partitioning the array around it. It then applies QuickSort to the left and right subarrays.
void quickSort(int arr[], int low, int high) {
if (low < high) {
// Partition the array and get the index of the pivot
int pi = partition(arr, low, high);
// Recursively sort the left and right subarrays
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}
int main() {
int arr[] = {38, 27, 43, 3, 9, 82, 10};
int size = sizeof(arr) / sizeof(arr[0]);
printf("Original array: ");
for (int i = 0; i < size; i++) {
printf("%d ", arr[i]);
}
printf("\n");
quickSort(arr, 0, size - 1);
printf("Sorted array: ");
for (int i = 0; i < size; i++) {
printf("%d ", arr[i]);
}
printf("\n");
return 0;
}

7. Trees
In computer science, a tree is a hierarchical data structure that consists of
nodes connected by edges.
Each tree has a root node, which serves as the starting point, and all other
nodes are organized in a parent-child relationship.
The topmost node in the tree is the root, and nodes without children are
called leaves.

7.1. Terminologies associated with Binary Trees


 Node: The basic element of a tree; it stores a value and links to its child nodes (if any).
 Root: A tree’s topmost node.
 Parent: Each node (apart from the root) in a tree that has at least one sub-node of
its own is called a parent node.
 Child: A node that straightway came from a parent node when moving away from
the root is the child node.
 Leaf Node: These are external nodes. They are the nodes that have no child.
 Internal Node: As the name suggests, these are inner nodes with at least one
child.
 Depth of a node: The number of edges on the path from the root down to that node.
 Height of a Tree: The number of edges on the longest path from a node down to its
deepest leaf. The height of the tree is the height of its root.

7.2. Binary trees
A binary tree is a tree in which each parent has at most two children. There are
different types of binary trees:

a. Full Binary Tree

Full Binary Tree is a Binary Tree in which every node has 0 or 2 children

b. Complete Binary Tree

A Complete Binary Tree has all levels completely filled with nodes except possibly the
last level, and in the last level all the nodes are placed as far to the left as possible.

c. Perfect Binary Tree

Tree in which all internal nodes have 2 children and all the leaf nodes are at the same
depth or same level.

d. Balanced Binary tree

A balanced binary tree is a binary tree that satisfies the following 3 conditions:

1. The heights of the left and right subtrees of any node do not
differ by more than 1.
2. The left subtree of that node is also balanced.
3. The right subtree of that node is also balanced.

A single node is always balanced. A balanced binary tree is also referred to as a
height-balanced binary tree.

e. Degenerate Binary Tree

A tree in which every parent node has only one child node.

7.3. CRUD operations in Binary trees


#include <iostream>
// Define a structure for the binary tree node
struct TreeNode {
int data; // Data stored in the node
TreeNode* left; // Pointer to the left child
TreeNode* right; // Pointer to the right child
// Constructor to initialize node with data
TreeNode(int value) : data(value), left(nullptr), right(nullptr) {}
};
// Function to insert a new node into the binary tree
TreeNode* insert(TreeNode* root, int value) {
if (root == nullptr) {
return new TreeNode(value); // Create a new node if tree is empty
}
if (value < root->data) {
root->left = insert(root->left, value); // Traverse left if value is smaller
} else {
root->right = insert(root->right, value); // Traverse right if value is greater or equal
}
return root;
}
// Function to display the binary tree using in-order traversal
void display(TreeNode* root) {
if (root != nullptr) {
display(root->left); // Traverse left subtree
std::cout << root->data << " "; // Print current node's data
display(root->right); // Traverse right subtree
}
}
// Function to search for a value in the binary tree

bool search(TreeNode* root, int value) {
if (root == nullptr) {
return false; // Value not found if the tree is empty
}
if (root->data == value) {
return true; // Value found at the current node
} else if (value < root->data) {
return search(root->left, value); // Search in the left subtree
} else {
return search(root->right, value); // Search in the right subtree
}
}

// Function to update a value in the binary tree


void update(TreeNode* root, int oldValue, int newValue) {
if (root == nullptr) {
return; // If the tree is empty, nothing to update
}
if (root->data == oldValue) {
root->data = newValue; // Update the value at the current node (note: this may violate BST ordering)
} else if (oldValue < root->data) { // Navigate by the value we are searching for
update(root->left, oldValue, newValue); // Update in the left subtree
} else {
update(root->right, oldValue, newValue); // Update in the right subtree
}
}
// Function to delete a node from the binary tree
TreeNode* deleteNode(TreeNode* root, int value) {
if (root == nullptr) {
return root; // Return null if tree is empty
}
if (value < root->data) {
root->left = deleteNode(root->left, value); // Traverse left for deletion
} else if (value > root->data) {
root->right = deleteNode(root->right, value); // Traverse right for deletion
} else {
if (root->left == nullptr) {
TreeNode* temp = root->right;
delete root;
return temp;
} else if (root->right == nullptr) {
TreeNode* temp = root->left;
delete root;
return temp;
}
TreeNode* temp = root->right;
while (temp->left != nullptr) {
temp = temp->left;
}
root->data = temp->data;
root->right = deleteNode(root->right, temp->data);
}

return root;
}
int main() {
TreeNode* root = nullptr;
// Create: Insertion
root = insert(root, 50);
root = insert(root, 30);
root = insert(root, 70);
root = insert(root, 20);
root = insert(root, 40);
// Read: Display the tree
std::cout << "In-order traversal: ";
display(root);
std::cout << std::endl;
// Read: Searching
int searchValue = 30;
if (search(root, searchValue)) {
std::cout << searchValue << " found in the tree." << std::endl;
} else {
std::cout << searchValue << " not found in the tree." << std::endl;
}
// Update: Changing a value
int oldValue = 30;
int newValue = 60;
update(root, oldValue, newValue);

// Read: Display the updated tree


std::cout << "In-order traversal after update: ";
display(root);
std::cout << std::endl;
// Delete: Removing a node (20 is deleted here, since the value 30 was changed to 60 above)
int deleteValue = 20;
root = deleteNode(root, deleteValue);
// Read: Display the tree after deletion
std::cout << "In-order traversal after deletion: ";
display(root);
std::cout << std::endl;
return 0;
}

8. Implementation of trees
8.1. Implementation of general trees
Implementing a general tree involves creating a data structure that can
represent nodes with any number of child nodes.
Each node in a general tree can have an arbitrary number of children,
unlike a binary tree where each node has at most two children.

Here's a simple implementation of a general tree in C++ with comments explaining
each part:

#include <iostream>
#include <vector>
// Define a structure for the general tree node
struct TreeNode {
int data; // Data stored in the node
std::vector<TreeNode*> children; // Vector to store child nodes
// Constructor to initialize node with data
TreeNode(int value) : data(value) {}
};
// Function to create a new node
TreeNode* createNode(int value) {
return new TreeNode(value);
}
// Function to add a child node to a parent node
void addChild(TreeNode* parent, TreeNode* child) {
parent->children.push_back(child);
}
// Function to perform a depth-first traversal of the general tree
void depthFirstTraversal(TreeNode* root) {
if (root == nullptr) {
return;
}
std::cout << root->data << " "; // Process current node
for (TreeNode* child : root->children) {
depthFirstTraversal(child); // Recurse on child nodes
}
}
int main() {
// Creating nodes
TreeNode* root = createNode(1);
TreeNode* child2 = createNode(2);
TreeNode* child3 = createNode(3);
TreeNode* child4 = createNode(4);
TreeNode* child5 = createNode(5);
// Adding children
addChild(root, child2);
addChild(root, child3);
addChild(child2, child4);
addChild(child2, child5);
// Performing depth-first traversal
std::cout << "Depth-first traversal: ";
depthFirstTraversal(root);
std::cout << std::endl;
return 0;
}

8.2. Implementation of Binary Search Trees (BST)
 Binary search trees (BSTs) are a specific type of binary tree where each node's
left subtree contains only nodes with values less than the node's value, and the
right subtree contains nodes with values greater than the node's value.
 Searching in a binary search tree involves efficiently finding a specific value
within the tree.

#include <iostream>
// Define a structure for the binary search tree node
struct TreeNode {
int data; // Data stored in the node
TreeNode* left; // Pointer to the left child
TreeNode* right; // Pointer to the right child

// Constructor to initialize node with data


TreeNode(int value) : data(value), left(nullptr), right(nullptr) {}
};
// Function to insert a new node into the binary search tree
TreeNode* insert(TreeNode* root, int value) {
if (root == nullptr) {
return new TreeNode(value); // Create a new node if tree is empty
}
if (value < root->data) {
root->left = insert(root->left, value); // Traverse left if value is smaller
} else {
root->right = insert(root->right, value); // Traverse right if value is greater or equal
}
return root;
}
// Function to search for a value in the binary search tree
bool search(TreeNode* root, int value) {
if (root == nullptr) {
return false; // Value not found if the tree is empty
}
if (root->data == value) {
return true; // Value found at the current node
} else if (value < root->data) {
return search(root->left, value); // Search in the left subtree
} else {
return search(root->right, value); // Search in the right subtree
}
}
int main() {
TreeNode* root = nullptr;
// Insertion
root = insert(root, 50);
root = insert(root, 30);
root = insert(root, 70);
root = insert(root, 20);

root = insert(root, 40);
// Searching
int searchValue = 30;
if (search(root, searchValue)) {
std::cout << searchValue << " found in the binary search tree." << std::endl;
} else {
std::cout << searchValue << " not found in the binary search tree." << std::endl;
}
return 0;
}

8.3. Implementation of balanced binary search trees


 A balanced binary search tree is a type of binary search tree where the height of
the left and right subtrees of any node differs by at most one.
 An AVL tree (Adelson-Velsky and Landis tree) is a self-balancing binary search
tree data structure.
 It was named after the inventors Georgy Adelson-Velsky and Evgenii Landis,
who introduced it in 1962.
 The primary feature of an AVL tree is that it maintains its balance property,
which ensures that the height difference between the left and right subtrees of
any node is at most one.

#include <iostream>
#include <algorithm> // for std::max, used in updateHeight
// Define a structure for the AVL tree node
struct TreeNode {
int data; // Data stored in the node
int height; // Height of the node
TreeNode* left; // Pointer to the left child
TreeNode* right; // Pointer to the right child
// Constructor to initialize node with data and height
TreeNode(int value) : data(value), height(1), left(nullptr), right(nullptr) {}
};
// Function to calculate the height of a node
int getHeight(TreeNode* node) {
if (node == nullptr) {
return 0;
}
return node->height;
}
// Function to calculate the balance factor of a node
int getBalanceFactor(TreeNode* node) {
if (node == nullptr) {
return 0;
}
return getHeight(node->left) - getHeight(node->right);

}
// Function to update the height of a node
void updateHeight(TreeNode* node) {
if (node == nullptr) {
return;
}
node->height = 1 + std::max(getHeight(node->left), getHeight(node->right));
}
// Function to perform a right rotation
TreeNode* rotateRight(TreeNode* y) {
TreeNode* x = y->left;
TreeNode* T2 = x->right;
x->right = y;
y->left = T2;

updateHeight(y);
updateHeight(x);
return x;
}
// Function to perform a left rotation
TreeNode* rotateLeft(TreeNode* x) {
TreeNode* y = x->right;
TreeNode* T2 = y->left;
y->left = x;
x->right = T2;
updateHeight(x);
updateHeight(y);
return y;
}
// Function to balance a node and maintain AVL property
TreeNode* balance(TreeNode* node) {
if (node == nullptr) {
return node;
}
updateHeight(node);
int balanceFactor = getBalanceFactor(node);
// Left Heavy
if (balanceFactor > 1) {
if (getBalanceFactor(node->left) >= 0) {
return rotateRight(node);
} else {
node->left = rotateLeft(node->left);
return rotateRight(node);
}
}
// Right Heavy
if (balanceFactor < -1) {
if (getBalanceFactor(node->right) <= 0) {
return rotateLeft(node);
} else {
node->right = rotateRight(node->right);
return rotateLeft(node);

}
}
return node;
}
// Function to insert a new node into the AVL tree
TreeNode* insert(TreeNode* root, int value) {
if (root == nullptr) {
return new TreeNode(value);
}

if (value < root->data) {


root->left = insert(root->left, value);
} else {
root->right = insert(root->right, value);
}
return balance(root);
}
// Function to perform an in-order traversal of the AVL tree
void inorderTraversal(TreeNode* root) {
if (root != nullptr) {
inorderTraversal(root->left);
std::cout << root->data << " ";
inorderTraversal(root->right);
}
}
int main() {
TreeNode* root = nullptr;
// Insertion
root = insert(root, 10);
root = insert(root, 20);
root = insert(root, 30);
root = insert(root, 40);
root = insert(root, 50);
root = insert(root, 25);

// Displaying the AVL tree using in-order traversal


std::cout << "In-order traversal of AVL tree: ";
inorderTraversal(root);
std::cout << std::endl;
return 0;
}

9. Heap Data structure


9.1. Definition of Heap
 Heap is a tree-like data structure that allows you to directly find the element
you want to process first.
 It is an almost complete binary tree that is ordered according to the heap property (see 9.2).

 A binary tree is said to be almost complete if all its levels are filled, except
possibly the last one, which must be filled on the left.
 Its leaves are therefore all at roughly the same distance from the root, differing by
at most 1.

[Figure: an example heap containing 9 items; the highest-priority item (100) is at the root.]

9.2. Description of Heap


A tree is said to be heap-ordered when one of the following properties holds:

1. For all nodes A and B of the tree such that B is a child of A: key(A) >= key(B) (max-heap)
2. For all nodes A and B of the tree such that B is a child of A: key(A) <= key(B) (min-heap)

9.3. Some operations on a heap


a. Insertion (enqueue)

b. Deletion (Removal of the heap root, dequeue)
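
To illustrate both operations, here is a minimal C++ sketch on an array-based max-heap, a common representation in which the children of index i sit at indices 2*i+1 and 2*i+2 (the function names heapInsert and heapExtractMax are only illustrative). Insertion appends the new element and sifts it up; deletion of the root moves the last element to the root and sifts it down.

#include <vector>
#include <utility>   // for std::swap
#include <stdexcept> // for std::out_of_range

// Insertion (enqueue): append the value, then sift it up while it is larger than its parent.
void heapInsert(std::vector<int>& heap, int value) {
    heap.push_back(value);
    size_t i = heap.size() - 1;
    while (i > 0) {
        size_t parent = (i - 1) / 2;
        if (heap[i] <= heap[parent]) break; // Heap property restored
        std::swap(heap[i], heap[parent]);
        i = parent;
    }
}

// Deletion (dequeue): remove and return the root (maximum), move the last element
// to the root, then sift it down while one of its children is larger.
int heapExtractMax(std::vector<int>& heap) {
    if (heap.empty()) throw std::out_of_range("empty heap");
    int maxValue = heap[0];
    heap[0] = heap.back();
    heap.pop_back();
    size_t i = 0, n = heap.size();
    while (true) {
        size_t left = 2 * i + 1, right = 2 * i + 2, largest = i;
        if (left < n && heap[left] > heap[largest]) largest = left;
        if (right < n && heap[right] > heap[largest]) largest = right;
        if (largest == i) break;            // Heap property restored
        std::swap(heap[i], heap[largest]);
        i = largest;
    }
    return maxValue;
}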

9.4. Heap Sort


Heap Sort is a comparison-based sorting algorithm that uses the properties of a heap
data structure to efficiently sort an array.

The algorithm begins by constructing a max heap from the input array, transforming the
array into a structure where the largest element is at the root.

It then repeatedly extracts the root (the largest element), swaps it with the last element
in the heap (array), and restores the heap property through a process called "heapify."
This continues until the entire array is sorted.

Heap Sort has an average and worst-case time complexity of O(n log n), making it
efficient for larger datasets.
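
A minimal C++ sketch of Heap Sort following this description: build a max heap, then repeatedly swap the root with the last unsorted element and restore the heap property (the function names are only illustrative).

#include <utility> // for std::swap

// Restore the max-heap property for the subtree rooted at index i,
// assuming the subtrees below it are already heaps. 'n' is the current heap size.
void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest]) largest = left;
    if (right < n && arr[right] > arr[largest]) largest = right;
    if (largest != i) {
        std::swap(arr[i], arr[largest]);
        heapify(arr, n, largest); // Continue sifting down
    }
}

void heapSort(int arr[], int n) {
    // Build a max heap from the unsorted array.
    for (int i = n / 2 - 1; i >= 0; i--) {
        heapify(arr, n, i);
    }
    // Repeatedly move the current maximum to the end and shrink the heap.
    for (int i = n - 1; i > 0; i--) {
        std::swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}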

10. Graphs
10.1. What is a Graph?
 A graph consists of a collection of nodes (also called vertices) and edges
connecting these nodes.

 Graphs are used to represent and model various relationships, connections, and
interactions among different entities.
 They have wide applications in various fields, including computer science,
social networks, transportation systems, and more.

10.2. Particular graphs
a. Undirected Graph

A basic graph in which edges have no direction. It models relationships without
distinguishing between a source and a destination.

b. Directed Graph (Digraph)

A graph in which edges have a direction. It represents relationships where there is a
clear flow or direction between nodes.

c. Weighted Graph

A graph in which edges have weights or costs associated with them. Weighted graphs
are used to model scenarios where the connections between nodes have varying costs.

d. Unweighted Graph

A graph in which all edges have the same weight or no weight at all. The focus is on
the presence or absence of edges rather than their weights.

f. Cyclic Graph

A graph that contains at least one cycle, which is a path that starts and ends at the same
node.

g. Acyclic Graph

A graph that does not contain any cycles. Directed acyclic graphs (DAGs) are
particularly useful in modeling situations where dependencies exist, such as in task
scheduling or hierarchical relationships.

h. Complete Graph

A graph where there is an edge between every pair of distinct nodes. In an n-node
complete graph, there are n*(n-1)/2 edges.

i. Sparse Graph

A graph in which the number of edges is much less than the possible number of edges.

j. Dense Graph

A graph in which the number of edges is close to the possible number of edges.

k. Bipartite Graph

A graph whose nodes can be divided into two distinct sets such that there are no edges
between nodes within the same set.

l. Connected Graph

A graph in which there is a path between any pair of nodes. If a graph is not fully
connected, it can have connected components, subgraphs that are themselves
connected.

m. Tree

A connected acyclic graph with a single root node and a branching structure.

Trees are used to model hierarchical relationships and are fundamental in data
structures and algorithms.

n. Forest

A collection of disjoint trees (multiple trees not connected to each other).

o. Planar Graph

A graph that can be embedded in a plane without any edges crossing. Planar graphs
have applications in geographical networks and circuit design.

10.3. Graph representation
A graph representation is a way to store a graph in a computer so that it can be
manipulated and analyzed efficiently.

 Some of the most common graph representations are:

a. Adjacency matrix

 This is a square matrix where each row and column corresponds to a vertex in
the graph.
 The entry in row i, column j is 1 if there is an edge between vertex i and vertex j,
and 0 otherwise.
 Adjacency matrices are easy to implement and efficient for dense graphs, but they
can be wasteful of space for sparse graphs.

b. Adjacency list:

 This is a list of lists, where each list stores the vertices that are adjacent to a
given vertex.
 Adjacency lists are more space-efficient than adjacency matrices for sparse
graphs, but checking whether a particular edge exists can be slower than with a
matrix.
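
As a small illustration of these two representations, here is a C++ sketch that stores the same undirected 4-node graph both as an adjacency matrix and as an adjacency list; the graph itself is made up for the example.

#include <iostream>
#include <vector>

int main() {
    const int n = 4; // Vertices 0..3; edges: 0-1, 0-2, 1-2, 2-3 (example graph)

    // Adjacency matrix: matrix[i][j] == 1 if there is an edge between i and j.
    std::vector<std::vector<int>> matrix(n, std::vector<int>(n, 0));
    // Adjacency list: adj[i] holds the neighbours of vertex i.
    std::vector<std::vector<int>> adj(n);

    int edges[][2] = {{0, 1}, {0, 2}, {1, 2}, {2, 3}};
    for (auto& e : edges) {
        int u = e[0], v = e[1];
        matrix[u][v] = matrix[v][u] = 1; // Undirected: symmetric entries
        adj[u].push_back(v);
        adj[v].push_back(u);
    }

    // Print the neighbours of each vertex from the adjacency list.
    for (int u = 0; u < n; u++) {
        std::cout << u << ":";
        for (int v : adj[u]) std::cout << " " << v;
        std::cout << std::endl;
    }
    return 0;
}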

c. Incidence matrix

 This is a matrix where each row corresponds to an edge in the graph and each
column corresponds to a vertex in the graph.

 The entry in row i, column j is 1 if vertex j is an endpoint of edge i, and 0
otherwise.

 Incidence matrices are less common than adjacency matrices and adjacency lists,
but they can be useful for some applications.

d. Edge list

 This is a list of all the edges in the graph, where each edge is represented by a
pair of vertices.

 Edge lists are the simplest graph representation to implement, but they can be
inefficient to query for large graphs (for example, finding the neighbours of a
vertex requires scanning the whole list).

10.4. Graph Traversal

 Graph traversal is the process of visiting each vertex in a graph.

 Such traversals are classified by the order in which the vertices are visited.

 Tree traversal is a special case of graph traversal.

There are two main types of graph traversal algorithms:

a. Breadth-first search (BFS)

 BFS is an algorithm used for searching or traversing graph and tree data
structures.

 The BFS algorithm is used to search a graph data structure for a node that meets a
set of criteria. It starts at a chosen source node and visits all nodes at the current
depth level before moving on to the nodes at the next depth level.

1. Mark any node as the starting (initial) node to begin traversing. BFS visits this node,
marks it as visited, and places it in the queue.

2. BFS then visits the nearest unvisited nodes, marks them, and adds them to the queue.
The queue works on the FIFO (first-in, first-out) model.

3. In the same manner, the remaining nearest unvisited nodes of the graph are analyzed,
marked, and added to the queue. Items are removed from the queue as they are
processed and printed as the result.

Read more : Breadth First Search (BFS) Algorithm with EXAMPLE (guru99.com)

b. Depth-first search (DFS)

 Depth First Traversal (or DFS) for a graph is similar to Depth First Traversal of
a tree.

 Unlike trees, graphs may contain cycles, so a node may be reached more than once.

 To avoid processing a node more than once, use a boolean visited array.

 A graph can have more than one DFS traversal.
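
A minimal C++ sketch of recursive DFS using a boolean visited array, as described above:

#include <iostream>
#include <vector>

// Visit 'u' and then recurse into each unvisited neighbour.
// The 'visited' array prevents processing a node more than once when the graph has cycles.
void dfs(const std::vector<std::vector<int>>& adj, int u, std::vector<bool>& visited) {
    visited[u] = true;
    std::cout << u << " "; // Process the node (here: print it)
    for (int v : adj[u]) {
        if (!visited[v]) {
            dfs(adj, v, visited);
        }
    }
}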

10.5. Dijkstra algorithm

 Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes
in a weighted graph.

 It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published in 1959.

a. Dijkstra pseudo algorithm

procedure Dijkstra(G, s)
    // Initialize the distance to all nodes to infinity.
    for each vertex v in G
        dist[v] := infinity
        prev[v] := nil
    // Initialize the distance to the source node to 0.
    dist[s] := 0
    // Create a priority queue of unvisited nodes.
    Q := {s}
    // While the queue is not empty:
    while Q is not empty
        // Remove the node with the shortest distance from the queue.
        u := vertex in Q with minimum dist[u]
        remove u from Q
        // For each neighbor v of u:
        for each v in adj[u]
            // If the new distance is shorter than the current distance,
            // update the distance and the predecessor.
            if dist[v] > dist[u] + weight(u, v)
                dist[v] := dist[u] + weight(u, v)
                prev[v] := u
                add v to Q

Here are the comments for the pseudocode:

1. G is the graph.

2. s is the source node.

3. dist[v] is the shortest distance from the source node to vertex v.

4. prev[v] is the predecessor of vertex v.

5. Q is the priority queue of unvisited nodes.

6. adj[v] is the list of neighbors of vertex v.

7. weight(u, v) is the weight of the edge between vertices u and v.
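
For comparison with the pseudocode, here is a minimal C++ sketch of Dijkstra's algorithm using a min-priority queue of (distance, vertex) pairs; stale queue entries are simply skipped when popped. It assumes the graph is given as an adjacency list of (neighbour, weight) pairs and that all edge weights are non-negative.

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Returns dist[v] = length of the shortest path from 's' to v ("infinity" if unreachable).
std::vector<long long> dijkstra(const std::vector<std::vector<std::pair<int, int>>>& adj, int s) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    // Min-priority queue of (distance, vertex) pairs, smallest distance first.
    std::priority_queue<std::pair<long long, int>,
                        std::vector<std::pair<long long, int>>,
                        std::greater<>> pq;
    dist[s] = 0;
    pq.push({0, s});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;       // Stale entry: a shorter path to u was already found
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) { // Relax the edge (u, v)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
        }
    }
    return dist;
}

Skipping stale entries when they are popped avoids the need for a decrease-key operation, which std::priority_queue does not provide.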

