
(R18A0503) DATA STRUCTURES

II Year B.Tech CSE - I Sem        L  T/P/D  C
                                  3  -/-/-  3

LECTURE NOTES
B.TECH II YEAR - I SEM (R18)
(2019-20)

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY
(Autonomous Institution - UGC, Govt. of India)
(Recognized under 2(f) and 12(B) of UGC ACT 1956)
(Affiliated to JNTUH, Hyderabad; Approved by AICTE; Accredited by NBA & NAAC - 'A' Grade; ISO 9001:2015 Certified)
Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad - 500100, Telangana State, India

Prerequisites: A course on "Programming for Problem Solving".

Course Objectives:
- To impart the basic concepts of data structures.
- To explore basic data structures such as stacks, queues and lists.
- To introduce a variety of data structures such as hash tables, search trees, heaps and graphs.
- To understand searching and sorting techniques.
- To implement sorting algorithms and know their applications.

Course Outcomes: At the end of the course the students are able to:
- Select the data structures that efficiently model the information in a problem.
- Assess efficiency trade-offs among different data structure implementations or combinations.
- Design programs using a variety of data structures, including hash tables, binary and general tree structures, search trees, AVL trees, heaps and graphs.

UNIT-I
Introduction: Abstract data types. Singly linked list: definition, operations: traversing, searching, insertion and deletion. Doubly linked list: definition, operations: traversing, searching, insertion and deletion. Circular linked list: definition, operations: traversing, searching, insertion and deletion.

UNIT-II
Stack: Stack ADT, array and linked list implementation, applications - expression conversion and evaluation. Queue: types of queue: simple queue, circular queue; Queue ADT - array and linked list implementation. Priority queue, heaps.

UNIT-III
Searching: linear and binary search methods. Sorting: selection sort, bubble sort, insertion sort, quick sort, merge sort, heap sort; time complexities. Graphs: basic terminology, representation of graphs, graph traversal methods DFS, BFS.

UNIT-IV
Dictionaries: linear list representation, skip list representation, operations - insertion, deletion and searching. Hash table representation: hash functions, collision resolution - separate chaining, open addressing - linear probing, quadratic probing, double hashing, rehashing, extendible hashing.

UNIT-V
Binary Search Trees: various binary tree representations, definition, BST ADT, implementation, operations - searching, insertion and deletion, binary tree traversals, threaded binary trees.
AVL Trees: definition, height of an AVL tree, operations - insertion, deletion and searching.
B-Trees: B-tree of order m, height of a B-tree, insertion, deletion and searching, B+ tree.

TEXTBOOKS:
1. Data Structures using C++, Special Edition-MRCET, Tata McGraw-Hill Publishers, 2017.
2. Data Structures, Algorithms and Applications in C++, S. Sahni, Universities Press (India) Pvt. Ltd., 2nd edition.

REFERENCE BOOKS:
1. Data Structures and Algorithms in C++, Michael T. Goodrich, R. Tamassia and D. Mount, Wiley student edition, John Wiley and Sons.
2. Data Structures and Algorithm Analysis in C++, Mark Allen Weiss, Pearson Education Ltd., Second Edition.
INDEX

UNIT    TOPICS
I       Introduction; Singly linked list; Doubly linked list; Circular linked list
II      Stack ADT (array implementation, linked list implementation); Queue ADT (array implementation, linked list implementation); Circular queue; Priority queue; Heaps
III     Searching (linear search, binary search); Sorting (bubble sort, selection sort, insertion sort, quick sort, merge sort, heap sort); Time complexities; Graphs (basic terminology, representation of graphs, graph traversal methods)
IV      Dictionaries (linear list representation, skip list representation); Hash table representation; Rehashing; Extendible hashing
V       Binary search trees (basics, binary tree traversals, binary search tree); AVL trees; B-trees; B+ tree

UNIT-I
Introduction: Abstract data types. Singly linked list: definition, operations: traversing, searching, insertion and deletion. Doubly linked list: definition, operations: traversing, searching, insertion and deletion. Circular linked list: definition, operations: traversing, searching, insertion and deletion.

Data structure
A data structure is a specialized format for organizing and storing data. General data structure types include the array, the file, the record, the table, the tree, and so on. Any data structure is designed to organize data to suit a specific purpose so that it can be accessed and worked with in appropriate ways.
- Linear data structures are data structures in which data is arranged in a list or a sequence.
- Non-linear data structures are data structures in which data may be arranged in a hierarchical manner.

Abstract Data Type
In computer science, an abstract data type (ADT) is a mathematical model for data types where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. When a class is used as a type, it is an abstract type that refers to a hidden representation. In this model an ADT is typically implemented as a class, and each instance of the ADT is usually an object of that class. In an ADT all the implementation details are hidden.

LIST ADT
A list is basically a collection of elements arranged in a sequential manner. In memory we can store a list in two ways: one way is to store the elements in sequential memory locations, that is, in an array. The other way is to use pointers or links to associate elements sequentially; this is known as a linked list.

LINKED LISTS
The linked list is a very different type of collection from an array. Using such lists, we can store collections of information limited only by the total amount of memory that the OS will allow us to use. Furthermore, there is no need to specify our needs in advance. The linked list is a very flexible dynamic data structure: items may be added to it or deleted from it at will. A programmer need not worry about how many items a program will have to accommodate in advance. This allows us to write robust programs which require much less maintenance.

The linked allocation has the following drawbacks:
1. No direct access to a particular element.
2. Additional memory is required for the pointers.

Linked lists are of 3 types:
1. Singly linked list
2. Doubly linked list
3. Circular linked list

SINGLY LINKED LIST
A singly linked list, or simply a linked list, is a linear collection of data items. The linear order is given by means of pointers. These types of lists are often referred to as linear linked lists.
* Each item in the list is called a node.
* Each node of the list has two fields:
  1. Information - contains the item being stored in the list.
  2. Next address - contains the address of the next item in the list.
* The last node in the list contains a NULL pointer to indicate that it is the end of the list.

Conceptual view of a singly linked list:

    | data | link | --> | data | link | --> | data | link | --> NULL

Operations on a singly linked list:
- Insertion of a node
- Deletion of a node
- Traversing the list

Structure of a node:

Method 1: using a structure

struct node
{
    int data;
    struct node *link;
};

Method 2: using a class

class node
{
public:
    int data;
    node *link;
};
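Before the insertion and deletion routines, here is a minimal, self-contained sketch (our own illustration, assuming the Method-1 structure above with int data; the helper name make_node is hypothetical, not part of the lecture code) showing how the link fields chain nodes together:

#include<iostream>
using namespace std;

struct node
{
    int data;
    struct node *link;
};

// hypothetical helper: allocate a node and initialize its fields
node* make_node(int value)
{
    node *n = new node;
    n->data = value;
    n->link = NULL;
    return n;
}

int main()
{
    // build 10 -> 20 -> 30 -> NULL by hand
    node *head = make_node(10);
    head->link = make_node(20);
    head->link->link = make_node(30);

    // traverse exactly as the display() routine later in the notes does
    for(node *t = head; t != NULL; t = t->link)
        cout << t->data << "->";
    cout << "NULL\n";
    return 0;
}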
Insertions: To place an element in the list there are 3 cases:
1. At the beginning
2. At the end of the list
3. At a given position

Case 1: Insert at the beginning
head is the pointer variable which contains the address of the first node and temp contains the address of the new node to be inserted. Sample code:

temp->link=head;
head=temp;

After insertion, temp becomes the first node of the list.

Code for insert front:

template <class T>
void list<T>::insert_front()
{
    struct node <T>*t,*temp;
    cout<<"Enter data into node:";
    cin>>item;
    temp=create_node(item);
    if(head==NULL)
        head=temp;
    else
    {
        temp->link=head;
        head=temp;
    }
}

Case 2: Insert at the end of the list
head is the pointer variable which contains the address of the first node and temp contains the address of the new node to be inserted. Sample code:

t=head;
while(t->link!=NULL)
{
    t=t->link;
}
t->link=temp;

After insertion, temp becomes the last node of the list.

Code for insert end:

template <class T>
void list<T>::insert_end()
{
    struct node<T> *t,*temp;
    int n;
    cout<<"Enter data into node:";
    cin>>n;
    temp=create_node(n);
    if(head==NULL)
        head=temp;
    else
    {
        t=head;
        while(t->link!=NULL)
            t=t->link;
        t->link=temp;
    }
}

Case 3: Insert at a position (for example, insert a node at position 3)
head is the pointer variable which contains the address of the first node and temp contains the address of the new node to be inserted. Sample code:

c=1;
while(c<pos)
{
    prev=cur;
    cur=cur->link;
    c++;
}
prev->link=temp;
temp->link=cur;

Code for inserting a node at a given position:

template <class T>
void list<T>::Insert_at_pos(int pos)
{
    struct node<T>*cur,*prev,*temp;
    int c=1;
    cout<<"Enter data into node:";
    cin>>item;
    temp=create_node(item);
    if(head==NULL)
        head=temp;
    else
    {
        prev=cur=head;
        if(pos==1)
        {
            temp->link=head;
            head=temp;
        }
        else
        {
            while(c<pos)
            {
                c++;
                prev=cur;
                cur=cur->link;
            }
            prev->link=temp;
            temp->link=cur;
        }
    }
}
Deletions: Removing an element from the list, without destroying the integrity of the list itself. To remove an element from the list there are 3 cases:
1. Delete a node at the beginning of the list
2. Delete a node at the end of the list
3. Delete a node at a given position

Case 1: Delete a node at the beginning of the list
head is the pointer variable which contains the address of the first node. Sample code:

t=head;
head=head->link;
cout<<"node "<<t->data<<" Deletion is success";
delete(t);

Code for deleting a node at the front:

template <class T>
void list<T>::delete_front()
{
    struct node<T>*t;
    if(head==NULL)
        cout<<"List is Empty\n";
    else
    {
        t=head;
        head=head->link;
        cout<<"node "<<t->data<<" Deletion is success";
        delete(t);
    }
}

Case 2: Delete a node at the end of the list
To delete the last node, first find it using the following code:

struct node<T>*cur,*prev;
cur=prev=head;
while(cur->link!=NULL)
{
    prev=cur;
    cur=cur->link;
}
prev->link=NULL;
cout<<"node "<<cur->data<<" Deletion is success";
delete cur;

Code for deleting a node at the end of the list:

template <class T>
void list<T>::delete_end()
{
    struct node<T>*cur,*prev;
    if(head==NULL)
        cout<<"List is Empty\n";
    else
    {
        cur=prev=head;
        if(head->link==NULL)
        {
            cout<<"node "<<cur->data<<" Deletion is success";
            delete cur;
            head=NULL;
        }
        else
        {
            while(cur->link!=NULL)
            {
                prev=cur;
                cur=cur->link;
            }
            prev->link=NULL;
            cout<<"node "<<cur->data<<" Deletion is success";
            delete cur;
        }
    }
}

Case 3: Delete a node at a given position (for example, delete the node at position 3)
head is the pointer variable which contains the address of the first node. In the list 10 -> 20 -> 30 -> 40 -> NULL the node to be deleted is the node containing value 30. Finding the node at position 3:

c=1;
while(c<pos)
{
    c++;
    prev=cur;
    cur=cur->link;
}

    10 -> 20 -> 30 -> 40 -> NULL
          prev  cur

cur is the node to be deleted. Before deleting, update the links:

prev->link=cur->link;
cout<<cur->data <<" is deleted successfully";
delete cur;
Traversing the list: Assuming we are given the pointer to the head of the list, we walk node by node through the links until we reach the end of the list.

template <class T>
void list<T>:: display()
{
    struct node<T>*t;
    if(head==NULL)
    {
        cout<<"List is Empty\n";
    }
    else
    {
        t=head;
        while(t!=NULL)
        {
            cout<<t->data<<"->";
            t=t->link;
        }
    }
}

Dynamic Implementation of list ADT

#include<iostream>
#include<cstdlib>
using namespace std;
template <class T>
struct node
{
    T data;
    struct node<T> *link;
};
template <class T>
class list
{
    int item;
    struct node<T>*head;
public:
    list();
    void display();
    struct node<T>*create_node(int n);
    void insert_end();
    void insert_front();
    void Insert_at_pos(int pos);
    void delete_end();
    void delete_front();
    void Delete_at_pos(int pos);
    void Node_count();
};

template <class T>
list<T>::list()
{
    head=NULL;
}

template <class T>
struct node<T>* list<T>::create_node(int n)
{
    struct node<T> *t;
    t=new struct node<T>;
    t->data=n;
    t->link=NULL;
    return t;
}

The member functions display(), insert_end(), insert_front(), Insert_at_pos(), delete_end() and delete_front() are defined exactly as shown in the preceding sections. The remaining member functions are:
template <class T>
void list<T>::Node_count()
{
    struct node<T>*t;
    int c=0;
    t=head;
    if(head==NULL)
    {
        cout<<"List is Empty\n";
    }
    else
    {
        while(t!=NULL)
        {
            c++;
            t=t->link;
        }
        cout<<"Node Count="<<c<<endl;
    }
}

template <class T>
void list<T>::Delete_at_pos(int pos)
{
    struct node<T>*cur,*prev,*temp;
    int c=1;
    if(head==NULL)
    {
        cout<<"List is Empty\n";
    }
    else
    {
        prev=cur=head;
        if(pos==1)
        {
            head=head->link;
            cout<<cur->data <<" is deleted successfully";
            delete cur;
        }
        else
        {
            while(c<pos)
            {
                c++;
                prev=cur;
                cur=cur->link;
            }
            prev->link=cur->link;
            cout<<cur->data <<" is deleted successfully";
            delete cur;
        }
    }
}

int main()
{
    int ch,pos;
    list <int> L;
    while(1)
    {
        cout<<"\n ***Operations on Linked List***"<<endl;
        cout<<"\n1.Insert node at End"<<endl;
        cout<<"2.Insert node at Front"<<endl;
        cout<<"3.Delete node at END"<<endl;
        cout<<"4.Delete node at Front"<<endl;
        cout<<"5.Insert at a position "<<endl;
        cout<<"6.Delete at a position "<<endl;
        cout<<"7.Node Count"<<endl;
        cout<<"8.Display nodes "<<endl;
        cout<<"9.Clear Screen "<<endl;
        cout<<"10.Exit "<<endl;
        cout<<"Enter Your choice:";
        cin>>ch;
        switch(ch)
        {
            case 1: L.insert_end();
                break;
            case 2: L.insert_front();
                break;
            case 3: L.delete_end();
                break;
            case 4: L.delete_front();
                break;
            case 5: cout<<"Enter position to insert";
                cin>>pos;
                L.Insert_at_pos(pos);
                break;
            case 6: cout<<"Enter position to delete";
                cin>>pos;
                L.Delete_at_pos(pos);
                break;
            case 7: L.Node_count();
                break;
            case 8: L.display();
                break;
            case 9: system("cls");
                break;
            case 10: exit(0);
            default: cout<<"Invalid choice";
        }
    }
}

DOUBLY LINKED LIST
A singly linked list has the disadvantage that we can only traverse it in one direction. Many applications require searching backwards and forwards through sections of a list. A useful refinement that can be made to the singly linked list is to create a doubly linked list. The distinction made between the two list types is that while singly linked lists have pointers going in one direction, doubly linked lists have pointers both to the next and to the previous element in the list. The main advantage of a doubly linked list is that it permits traversing or searching of the list in both directions.

In this linked list each node contains three fields:
a) one to store data;
b) the remaining two are self-referential pointers which point to the previous and next nodes in the list.

    | prev | data | next |
Implementation of node using a structure

Method 1:

struct node
{
    int data;
    struct node *prev;
    struct node *next;
};

Implementation of node using a class

Method 2:

class node
{
public:
    int data;
    node *prev;
    node *next;
};

Operations on a doubly linked list:
- Insertion of a node
- Deletion of a node
- Traversing the list

Doubly linked list ADT:

template <class T>
class dlist
{
    int data;
    struct dnode<T>*head;
public:
    dlist()
    {
        head=NULL;
    }
    void display();
    struct dnode<T>*create_dnode(int n);
    void insert_end();
    void insert_front();
    void delete_end();
    void delete_front();
    void dnode_count();
    void Insert_at_pos(int pos);
    void Delete_at_pos(int pos);
};

Insertions: To place an element in the list there are 3 cases:
1. At the beginning
2. At the end of the list
3. At a given position

Case 1: Insert at the beginning

    head -> NULL|10| <-> |20| <-> |30|NULL        temp -> NULL|40|NULL

head is the pointer variable which contains the address of the first node and temp contains the address of the new node to be inserted. Sample code:

temp->next=head;
head->prev=temp;
head=temp;

Code for insert front:

template <class T>
void dlist<T>::insert_front()
{
    struct dnode <T>*t,*temp;
    cout<<"Enter data into node:";
    cin>>data;
    temp=create_dnode(data);
    if(head==NULL)
        head=temp;
    else
    {
        temp->next=head;
        head->prev=temp;
        head=temp;
    }
}

Case 2: Insert at the end of the list
head is the pointer variable which contains the address of the first node and temp contains the address of the new node to be inserted. Sample code:

t=head;
while(t->next!=NULL)
    t=t->next;
t->next=temp;
temp->prev=t;

Code to insert a node at the end:

template <class T>
void dlist<T>::insert_end()
{
    struct dnode<T> *t,*temp;
    int n;
    cout<<"Enter data into dnode:";
    cin>>n;
    temp=create_dnode(n);
    if(head==NULL)
        head=temp;
    else
    {
        t=head;
        while(t->next!=NULL)
            t=t->next;
        t->next=temp;
        temp->prev=t;
    }
}
Case 3: Inserting at a given position (for example, insert 40 at position 2)
head is the pointer variable which contains the address of the first node and temp contains the address of the new node to be inserted. Sample code:

while(count<pos)
{
    count++;
    pr=cr;
    cr=cr->next;
}
pr->next=temp;
temp->prev=pr;
temp->next=cr;
cr->prev=temp;

Code to insert a node at a position:

template <class T>
void dlist<T>::Insert_at_pos(int pos)
{
    struct dnode<T>*cr,*pr,*temp;
    int count=1;
    cout<<"Enter data into dnode:";
    cin>>data;
    temp=create_dnode(data);
    if(head==NULL)
    {   //when the list is empty
        head=temp;
    }
    else
    {
        pr=cr=head;
        if(pos==1)
        {   //inserting at pos=1
            temp->next=head;
            head=temp;
        }
        else
        {
            while(count<pos)
            {
                count++;
                pr=cr;
                cr=cr->next;
            }
            pr->next=temp;
            temp->prev=pr;
            temp->next=cr;
            cr->prev=temp;
        }
    }
}

Deletions: Removing an element from the list, without destroying the integrity of the list itself. To remove an element from the list there are 3 cases:
1. Delete a node at the beginning of the list
2. Delete a node at the end of the list
3. Delete a node at a given position

Case 1: Delete a node at the beginning of the list
head is the pointer variable which contains the address of the first node. Sample code:

t=head;
head=head->next;
head->prev=NULL;
cout<<"dnode "<<t->data<<" Deletion is success";
delete(t);

Code for deleting a node at the front:

template <class T>
void dlist<T>:: delete_front()
{
    struct dnode<T>*t;
    if(head==NULL)
        cout<<"List is Empty\n";
    else
    {
        t=head;
        head=head->next;
        head->prev=NULL;
        cout<<"dnode "<<t->data<<" Deletion is success";
        delete(t);
    }
}

Case 2: Delete a node at the end of the list
To delete the last node, first find it using the following code:

struct dnode<T>*pr,*cr;
pr=cr=head;
while(cr->next!=NULL)
{
    pr=cr;
    cr=cr->next;
}
pr->next=NULL;
cout<<"dnode "<<cr->data<<" Deletion is success";
delete(cr);
Code for deleting a node at the end of the list:

template <class T>
void dlist<T>::delete_end()
{
    struct dnode<T>*pr,*cr;
    if(head==NULL)
        cout<<"List is Empty\n";
    else
    {
        cr=pr=head;
        if(head->next==NULL)
        {
            cout<<"dnode "<<cr->data<<" Deletion is success";
            delete(cr);
            head=NULL;
        }
        else
        {
            while(cr->next!=NULL)
            {
                pr=cr;
                cr=cr->next;
            }
            pr->next=NULL;
            cout<<"dnode "<<cr->data<<" Deletion is success";
            delete(cr);
        }
    }
}

Case 3: Delete a node at a given position (for example, delete the node at position 2)
head is the pointer variable which contains the address of the first node. Finding the node at position 2 and unlinking it:

while(count<pos)
{
    pr=cr;
    cr=cr->next;
    count++;
}
pr->next=cr->next;
cr->next->prev=pr;

Code for deleting a node at a position:

template <class T>
void dlist<T>::Delete_at_pos(int pos)
{
    struct dnode<T>*cr,*pr,*temp;
    int count=1;
    display();
    if(head==NULL)
    {
        cout<<"List is Empty\n";
    }
    else
    {
        pr=cr=head;
        if(pos==1)
        {
            head=head->next;
            head->prev=NULL;
            cout<<cr->data <<" is deleted successfully";
            delete cr;
        }
        else
        {
            while(count<pos)
            {
                count++;
                pr=cr;
                cr=cr->next;
            }
            pr->next=cr->next;
            cr->next->prev=pr;
            cout<<cr->data <<" is deleted successfully";
            delete cr;
        }
    }
}

Dynamic Implementation of Doubly linked list ADT

#include<iostream>
#include<cstdlib>
using namespace std;
template <class T>
struct dnode
{
    T data;
    struct dnode<T> *prev;
    struct dnode<T> *next;
};
template <class T>
class dlist
{
    int data;
    struct dnode<T>*head;
public:
    dlist();
    struct dnode<T>*create_dnode(int n);
    void insert_front();
    void insert_end();
    void Insert_at_pos(int pos);
    void delete_front();
    void delete_end();
    void Delete_at_pos(int pos);
    void dnode_count();
    void display();
};

template <class T>
dlist<T>::dlist()
{
    head=NULL;
}

template <class T>
struct dnode<T>*dlist<T>::create_dnode(int n)
{
    struct dnode<T> *t;
    t=new struct dnode<T>;
    t->data=n;
    t->next=NULL;
    t->prev=NULL;
    return t;
}
The member functions insert_front(), insert_end(), Insert_at_pos(), delete_front(), delete_end() and Delete_at_pos() are defined exactly as shown in the preceding sections. The remaining member functions are:

template <class T>
void dlist<T>::dnode_count()
{
    struct dnode<T>*t;
    int count=0;
    display();
    t=head;
    if(head==NULL)
        cout<<"List is Empty\n";
    else
    {
        while(t!=NULL)
        {
            count++;
            t=t->next;
        }
        cout<<"node count is "<<count;
    }
}
template <class T>
void dlist<T>::display()
{
    struct dnode<T>*t;
    if(head==NULL)
    {
        cout<<"List is Empty\n";
    }
    else
    {
        cout<<"Nodes in the linked list are ...\n";
        t=head;
        while(t!=NULL)
        {
            cout<<t->data<<"<=>";
            t=t->next;
        }
    }
}
int main()
{
    int ch,pos;
    dlist <int> DL;
    while(1)
    {
        cout<<"\n ***Operations on Doubly List***"<<endl;
        cout<<"\n1.Insert dnode at End"<<endl;
        cout<<"2.Insert dnode at Front"<<endl;
        cout<<"3.Delete dnode at END"<<endl;
        cout<<"4.Delete dnode at Front"<<endl;
        cout<<"5.Display nodes "<<endl;
        cout<<"6.Count Nodes"<<endl;
        cout<<"7.Insert at a position "<<endl;
        cout<<"8.Delete at a position "<<endl;
        cout<<"9.Exit "<<endl;
        cout<<"10.Clear Screen "<<endl;
        cout<<"Enter Your choice:";
        cin>>ch;
        switch(ch)
        {
            case 1: DL.insert_end();
                break;
            case 2: DL.insert_front();
                break;
            case 3: DL.delete_end();
                break;
            case 4: DL.delete_front();
                break;
            case 5: DL.display();
                break;
            case 6: DL.dnode_count();
                break;
            case 7: cout<<"Enter position to insert";
                cin>>pos;
                DL.Insert_at_pos(pos);
                break;
            case 8: cout<<"Enter position to Delete";
                cin>>pos;
                DL.Delete_at_pos(pos);
                break;
            case 9: exit(0);
            case 10: system("cls");
                break;
            default: cout<<"Invalid choice";
        }
    }
}

CIRCULARLY LINKED LIST
A circularly linked list, or simply a circular list, is a linked list in which the last node always points to the first node. This type of list can be built just by replacing the NULL pointer at the end of the list with a pointer which points to the first node. There is no first or last node in a circular list.

Advantages:
- Any node can be traversed starting from any other node in the list.
- There is no need of a NULL pointer to signal the end of the list and hence all pointers contain valid addresses.
- In contrast to a singly linked list, the deletion operation in a circular list is simplified, as the search for the previous node of the element to be deleted can be started from that item itself.

Dynamic Implementation of Circular linked list ADT

#include<iostream>
#include<cstdlib>
using namespace std;
template <class T>
struct cnode
{
    T data;
    struct cnode<T> *link;
};
//Code for circular linked list ADT
template <class T>
class clist
{
    int data;
    struct cnode<T>*head;
public:
    clist();
    struct cnode<T>* create_cnode(int n);
    void display();
    void insert_end();
    void insert_front();
    void delete_end();
    void delete_front();
    void cnode_count();
};
//code for default constructor
template <class T>
clist<T>::clist()
{
    head=NULL;
}

//code to display elements in the list
template <class T>
void clist<T>::display()
{
    struct cnode<T>*t;
    if(head==NULL)
    {
        cout<<"clist is Empty\n";
    }
    else
    {
        t=head;
        if(t->link==head)
            cout<<t->data<<"->";
        else
        {
            cout<<t->data<<"->";
            t=t->link;
            while(t!=head)
            {
                cout<<t->data<<"->";
                t=t->link;
            }
        }
    }
}
//code to create a node
template <class T>
struct cnode<T>* clist<T>::create_cnode(int n)
{
    struct cnode<T> *t;
    t=new struct cnode<T>;
    t->data=n;
    t->link=NULL;
    return t;
}
//code to insert a node at the end
template <class T>
void clist<T>::insert_end()
{
    struct cnode<T>*t;
    struct cnode<T>*temp;
    int n;
    cout<<"Enter data into cnode:";
    cin>>n;
    temp=create_cnode(n);
    if(head==NULL)
    {
        head=temp;
        temp->link=temp;
    }
    else
    {
        t=head;
        if(t->link==head)    // list containing only one node
        {
            t->link=temp;
            temp->link=t;
        }
        else
        {
            //code to find the last node
            while(t->link!=head)
            {
                t=t->link;
            }
            t->link=temp;    //linking last and first node
            temp->link=head;
        }
    }
    cout<<"Node inserted"<<endl;
}

//code to insert a node at the front
template <class T>
void clist<T>::insert_front()
{
    struct cnode <T>*t;
    struct cnode<T>*temp;
    cout<<"Enter data into cnode:";
    cin>>data;
    temp=create_cnode(data);
    if(head==NULL)
    {
        head=temp;
        temp->link=temp;
    }
    else
    {
        t=head;
        if(t->link==head)    // list containing only one node
        {
            t->link=temp;
            temp->link=t;
        }
        else
        {
            //code to find the last node
            while(t->link!=head)
            {
                t=t->link;
            }
            t->link=temp;    //linking last and first node
            temp->link=head;
        }
        head=temp;           //new node becomes the first node
    }
    cout<<"Node inserted \n";
}

//code to delete a node at the end
template <class T>
void clist<T>::delete_end()
{
    struct cnode<T>*cur,*prev;
    if(head==NULL)
        cout<<"clist is Empty\n";
    else
    {
        cur=prev=head;
        if(cur->link==head)
        {
            cout<<"cnode "<<cur->data<<" Deletion is success";
            delete cur;
            head=NULL;
        }
        else
        {
            while(cur->link!=head)
            {
                prev=cur;
                cur=cur->link;
            }
            prev->link=head;   //last node now points to head
            cout<<"cnode "<<cur->data<<" Deletion is success";
            delete cur;
        }
    }
}
//code to delete a node at the front
template <class T>
void clist<T>::delete_front()
{
    struct cnode<T>*t,*temp;
    if(head==NULL)
        cout<<"circular list is Empty\n";
    else
    {
        t=head;
        if(t->link==head)
        {
            head=NULL;
            cout<<"cnode "<<t->data<<" Deletion is success";
            delete(t);
        }
        else
        {
            //code to find the last node
            while(t->link!=head)
            {
                t=t->link;
            }
            temp=head;
            t->link=head->link;   //linking last and second node
            cout<<"cnode "<<temp->data<<" Deletion is success";
            head=head->link;
            delete(temp);
        }
    }
}
//code to count nodes in the circular linked list
template <class T>
void clist<T>::cnode_count()
{
    struct cnode<T>*t;
    int c=0;
    t=head;
    if(head==NULL)
    {
        cout<<"circular list is Empty\n";
    }
    else
    {
        t=t->link;
        c++;
        while(t!=head)
        {
            c++;
            t=t->link;
        }
        cout<<"Node Count="<<c;
    }
}
int main()
{
    int ch,pos;
    clist <int> L;
    while(1)
    {
        cout<<"\n ***Operations on Circular Linked clist***"<<endl;
        cout<<"\n1.Insert cnode at End"<<endl;
        cout<<"2.Insert Cnode at Front"<<endl;
        cout<<"3.Delete Cnode at END"<<endl;
        cout<<"4.Delete Cnode at Front"<<endl;
        cout<<"5.Display Nodes "<<endl;
        cout<<"6.Cnode Count"<<endl;
        cout<<"7.Exit "<<endl;
        cout<<"8.Clear Screen "<<endl;
        cout<<"Enter Your choice:";
        cin>>ch;
        switch(ch)
        {
            case 1: L.insert_end();
                break;
            case 2: L.insert_front();
                break;
            case 3: L.delete_end();
                break;
            case 4: L.delete_front();
                break;
            case 5: L.display();
                break;
            case 6: L.cnode_count();
                break;
            case 7: exit(0);
            case 8: system("cls");
                break;
            default: cout<<"Invalid choice";
        }
    }
}

UNIT-II
Stack: Stack ADT, array and linked list implementation, applications - expression conversion and evaluation. Queue: types of queue: simple queue, circular queue; Queue ADT - array and linked list implementation. Priority queue, heaps.

STACK ADT: A stack is a linear data structure where insertion and deletion of items take place at one end, called the top of the stack. A stack is defined as a data structure which operates on a last-in first-out basis, so it is also referred to as Last-In First-Out (LIFO).
A stack uses a single index or pointer to keep track of the information in the stack. The basic operations associated with the stack are:
a) push (insert) an item onto the stack;
b) pop (remove) an item from the stack.

The general terminology associated with the stack is as follows:
A stack pointer keeps track of the current position on the stack. When an element is placed on the stack, it is said to be pushed onto the stack. When an object is removed from the stack, it is said to be popped off the stack. Two additional terms almost always used with stacks are overflow, which occurs when we try to push more information onto a stack than it can hold, and underflow, which occurs when we try to pop an item off a stack which is empty.

Pushing items onto the stack:
Assume that the array elements begin at 0 (because the array subscript starts from 0) and the maximum number of elements that can be placed in the stack is max. The stack pointer, top, is considered to be pointing to the top element of the stack. A push operation thus involves adjusting the stack pointer to point to the next free slot and then copying data into that slot of the stack. Initially top is initialized to -1.

//code to push an element onto the stack
template<class T>
void stack<T>::push()
{
    if(top==max-1)
        cout<<"Stack Overflow...\n";
    else
    {
        cout<<"Enter an element to be pushed:";
        top++;
        cin>>data;
        stk[top]=data;
        cout<<"Pushed Successfully....\n";
    }
}

Popping an element from the stack:
To remove an item, first extract the data from the top position in the stack and then decrement the stack pointer, top.

//code to remove an element from the stack
template<class T>
void stack<T>::pop()
{
    if(top==-1)
        cout<<"Stack is Underflow";
    else
    {
        data=stk[top];
        top--;
        cout<<data<<" is popped Successfully....\n";
    }
}

Static implementation of Stack ADT

#include<cstdlib>
#include<iostream>
using namespace std;
#define max 4
template<class T>
class stack
{
private:
    int top;
    T stk[max],data;
public:
    stack();
    void push();
    void pop();
    void display();
};
template<class T>
stack<T>::stack()
{
    top=-1;
}
The push() and pop() member functions are defined exactly as shown above. The remaining code is:

//code to display stack elements
template<class T>
void stack<T>::display()
{
    if(top==-1)
        cout<<"Stack Under Flow";
    else
    {
        cout<<"Elements in the Stack are....\n";
        for(int i=top;i>-1;i--)
        {
            cout<<stk[i]<<"\n";
        }
    }
}
int main()
{
    int choice;
    stack <int>st;
    while(1)
    {
        cout<<"\n*****Menu for Stack operations*****\n";
        cout<<"1.PUSH\n2.POP\n3.DISPLAY\n4.EXIT\n";
        cout<<"Enter Choice:";
        cin>>choice;
        switch(choice)
        {
            case 1: st.push();
                break;
            case 2: st.pop();
                break;
            case 3: st.display();
                break;
            case 4: exit(0);
            default: cout<<"Invalid choice...Try again...\n";
        }
    }
}

Output (the menu is reprinted before every prompt):

*****Menu for Stack operations*****
1.PUSH
2.POP
3.DISPLAY
4.EXIT
Enter Choice:1
Enter an element to be pushed:11
Pushed Successfully....
Enter Choice:1
Enter an element to be pushed:22
Pushed Successfully....
Enter Choice:1
Enter an element to be pushed:44
Pushed Successfully....
Enter Choice:1
Enter an element to be pushed:55
Pushed Successfully....
Enter Choice:1
Stack Overflow...
Enter Choice:2
55 is popped Successfully....
Enter Choice:3
Elements in the Stack are....
44
22
11
Enter Choice:4

Dynamic implementation of Stack ADT

#include<iostream>
#include<cstdlib>
using namespace std;
template <class T>
struct node
{
    T data;
    struct node<T> *link;
};
template <class T>
class stack
{
    int data;
    struct node<T>*top;
public:
    stack()
    {
        top=NULL;
    }
    void display();
    void push();
    void pop();
};
template <class T>
void stack<T>::display()
{
    struct node<T>*t;
    if(top==NULL)
    {
        cout<<"stack is Empty\n";
    }
    else
    {
        t=top;
        while(t!=NULL)
        {
            cout<<"|"<<t->data<<"|"<<endl;
            t=t->link;
        }
    }
}

template <class T>
void stack<T>::push()
{
    struct node <T>*t,*temp;
    cout<<"Enter data into node:";
    cin>>data;
    temp=new struct node<T>;
    temp->data=data;
    temp->link=NULL;
    if(top==NULL)
        top=temp;
    else
    {
        temp->link=top;
        top=temp;
    }
}

template <class T>
void stack<T>::pop()
{
    struct node<T>*t;
    if(top==NULL)
        cout<<"stack is Empty\n";
    else
    {
        t=top;
        top=top->link;
        cout<<"node "<<t->data<<" Deletion is success";
        delete(t);
    }
}

int main()
{
    int ch;
    stack <int> st;
    while(1)
    {
        cout<<"\n ***Operations on Dynamic stack***"<<endl;
        cout<<"\n1.PUSH"<<endl;
        cout<<"2.POP"<<endl;
        cout<<"3.Display "<<endl;
        cout<<"4.Exit "<<endl;
        cout<<"Enter Your choice:";
        cin>>ch;
        switch(ch)
        {
            case 1: st.push();
                break;
            case 2: st.pop();
                break;
            case 3: st.display();
                break;
            case 4: exit(0);
            default: cout<<"Invalid choice";
        }
    }
}

Applications of Stack:
1. Stacks are used in conversion of infix to postfix expressions.
2. Stacks are also used in evaluation of postfix expressions.
3. Stacks are used to implement recursive procedures.
4. Stacks are used in compilers.
5. Reversing a string.

An arithmetic expression can be written in three different but equivalent notations, i.e., without changing the essence or output of the expression. These notations are:
1. Infix notation
2. Prefix (Polish) notation
3. Postfix (Reverse Polish) notation

Conversion of Infix Expressions to Prefix and Postfix

Convert the following infix expression to prefix and postfix:
(A + B) * C - (D - E) * (F + G)
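Working the expression above by hand, the postfix form is A B + C * D E - F G + * - and the prefix form is - * + A B C * - D E + F G. The sketch below is our own illustration of the usual stack-based infix-to-postfix conversion, not the textbook routine; it assumes single-character operands, only the operators + - * / and parentheses, and a well-formed expression.

#include<iostream>
#include<stack>
#include<string>
#include<cctype>
using namespace std;

// precedence of a binary operator; higher binds tighter
int prec(char op)
{
    if(op=='*' || op=='/') return 2;
    if(op=='+' || op=='-') return 1;
    return 0;                           // '(' and anything else
}

string infix_to_postfix(const string& infix)
{
    stack<char> st;                     // holds operators and '('
    string postfix;
    for(char c : infix)
    {
        if(isalnum((unsigned char)c))   // operand: emit immediately
            postfix += c;
        else if(c=='(')
            st.push(c);
        else if(c==')')
        {                               // pop until the matching '('
            while(!st.empty() && st.top()!='(')
            {
                postfix += st.top();
                st.pop();
            }
            if(!st.empty()) st.pop();   // discard '('
        }
        else if(c=='+'||c=='-'||c=='*'||c=='/')
        {                               // pop operators of equal or higher precedence
            while(!st.empty() && prec(st.top())>=prec(c))
            {
                postfix += st.top();
                st.pop();
            }
            st.push(c);
        }
        // spaces and any other characters are ignored
    }
    while(!st.empty())                  // flush the remaining operators
    {
        postfix += st.top();
        st.pop();
    }
    return postfix;
}

int main()
{
    // prints AB+C*DE-FG+*-
    cout << infix_to_postfix("(A + B) * C - (D - E) * (F + G)") << "\n";
    return 0;
}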
The Tower of Hanoi (also called the Tower of Brahma or Lucas' Tower, and sometimes pluralized) is a mathematical game or puzzle. It consists of three rods and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.

The objective of the puzzle is to move the entire stack to another rod, obeying the following simple rules:
1. Only one disk can be moved at a time.
2. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack, i.e. a disk can only be moved if it is the uppermost disk on a stack.
3. No disk may be placed on top of a smaller disk.
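The puzzle has a well-known recursive solution in which the rods themselves behave like stacks; moving n disks takes 2^n - 1 moves. The minimal sketch below is our own illustration (the function name hanoi and the rod labels A, B, C are our choices, not from the notes):

#include<iostream>
using namespace std;

// move n disks from rod 'from' to rod 'to', using 'aux' as the spare rod
void hanoi(int n, char from, char to, char aux)
{
    if(n == 0)
        return;
    hanoi(n-1, from, aux, to);      // park the n-1 smaller disks on the spare rod
    cout << "Move disk " << n << " from " << from << " to " << to << "\n";
    hanoi(n-1, aux, to, from);      // bring the smaller disks back on top
}

int main()
{
    hanoi(3, 'A', 'C', 'B');        // prints the 2^3 - 1 = 7 moves
    return 0;
}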
QUEUE ADT
A queue is an ordered collection of data such that the data is inserted at one end and deleted from the other end. The key difference when compared to stacks is that in a queue the information stored is processed first-in first-out, or FIFO. In other words, the information retrieved from a queue comes out in the same order that it was placed on the queue.

Operations on Queue:
A queue has two basic operations:
a) adding a new item to the queue;
b) removing an item from the queue.
The operation of adding a new item to the queue occurs only at one end of the queue, called the rear (or back). The operation of removing items from the queue occurs at the other end, called the front.

Representing a Queue:
One of the most common ways to implement a queue is using an array. An easy way to do so is to define an array Queue and two additional variables, front and rear. The rules for manipulating these variables are simple:
- Each time information is added to the queue, increment rear.
- Each time information is taken from the queue, increment front.
- Whenever front > rear or front = rear = -1, the queue is empty.
Array implementation of a queue does have drawbacks. The maximum queue size has to be set at compile time, rather than at run time. Space can be wasted if we do not use the full capacity of the array.

For insertion and deletion of an element from a queue, the array elements begin at 0 and the maximum number of elements of the array is maxSize. The variable front will hold the index of the item that is considered the front of the queue, while the rear variable will hold the index of the last item in the queue.
Assume that initially the front and rear variables are initialized to -1. Like stacks, underflow and overflow conditions are to be checked before operations on a queue.

Queue empty or underflow condition:

if((front>rear)||front==-1)
    cout<<"Queue is empty";

Queue full or overflow condition:

if(rear==max-1)
    cout<<"Queue is full";

Static implementation of Queue ADT

#include<cstdlib>
#include<iostream>
using namespace std;
#define max 4
template <class T>
class queue
{
    T q[max],item;
    int front,rear;
public:
    queue();
    void insert_q();
    void delete_q();
    void display_q();
};
template <class T>
queue<T>::queue()
{
    front=rear=-1;
}
//code to insert an item into the queue
template <class T>
void queue<T> ::insert_q()
{
    if(front>rear)
        front=rear=-1;
    if(rear==max-1)
        cout<<"queue Overflow...\n";
    else
    {
        if(front==-1)
            front=0;
        rear++;
        cout<<"Enter an item to be inserted:";
        cin>>item;
        q[rear]=item;
        cout<<"inserted Successfully..into queue..\n";
    }
}
template <class T>
void queue<T>::delete_q()
{
    if((front==-1&&rear==-1)||front>rear)
    {
        front=rear=-1;
        cout<<"queue is Empty..\n";
    }
    else
    {
        item=q[front];
        front++;
        cout<<item<<" is deleted Successfully...\n";
    }
}
template <class T>
void queue<T>::display_q()
{
    if((front==-1&&rear==-1)||front>rear)
    {
        front=rear=-1;
        cout<<"queue is Empty..\n";
    }
    else
    {
        for(int i=front;i<=rear;i++)
            cout<<"|"<<q[i]<<"|<--";
    }
}
cout<<"Enter Choice:"; cin>>item; while(1)
cin>>choice; p=new node<T>; {
switch(choice) p->data=item; cout<<"\n\n***Menu for operations on Queue***\n\n";
{ p->next=NULL; cout<<"1.Insert\n2.Delete\n3.DISPLAY\n4.EXIT\n";
case 1: q.insert_q(); if(front==NULL) cout<<"Enter Choice:";
break; { cin>>choice;
case 2: q.delete_q(); rear=front=p; switch(choice)
break; } {
case 3: cout<<"Elements in the queue are ... \n"; else case 1: q1.insert_q();
q.display_q(); { break;
break; rear->next=p; case 2: q1.delete_q();
case 4: exit(0); rear=p; break;
default: cout<<"Invalid choice...Try again...\n"; } case 3: q1.display_q();
} cout<<"\nInserted into Queue Sucesfully ... \n"; break;
} } case 4: exit(0);
} //code to delete an elementfrom queue default: cout<<"Invalid choice...Try again...\n";
template <class T> }
void queue<T>::delete_q() }
{ }
Dynamic implementation of Queue ADT node<T>*t;
if(front==NULL)
cout<<"\nQueue is Underflow"; Application of Queue:
#include<stdlib.h> else Queue, as the name suggests is used whenever we need to have any group of objects in an order in
#include<iostream.h> { which the first one coming in, also gets out first while the others wait for there turn, like in the
template <class T> item=front->data; following scenarios :
struct node t=front; 1. Serving requests on a single shared resource, like a printer, CPU task scheduling etc.
{ front=front->next; 2. In real life, Call Center phone systems will use Queues, to hold people calling them in an order,
T data; cout<<"\n"<<item<<" is deleted from Queue ... \n"; until a service representative is free.
struct node<T>*next; } 3. Handling of interrupts in real-time systems. The interrupts are handled in the same order as they
}; delete(t); arrive, First come first served.
template <class T> }
class queue //code to display elements in queue CIRCULAR QUEUE
{ template <class T> Once the queue gets filled up, no more elements can be added to it even if any element is removed
private: void queue<T>::display_q() from it consequently. This is because during deletion, rear pointer is not adjusted.
T item; {
node<T> *front,*rear; node<T>*t;
public: if(front==NULL)
queue(); cout<<"\nQueue Under Flow";
void insert_q(); else
void delete_q(); {
void display_q(); cout<<"\nElements in the Queue are ... \n";
}; t=front;
template <class T> while(t!=NULL)
queue<T>::queue() {
{ cout<<"|"<<t->data<<"|<-";
front=rear=NULL; t=t->next;
} }
//code to insert an item into queue; } When the queue contains very few items and the rear pointer points to last element. i.e.
template <class T> } rear=maxSize-1, we cannot insert any more items into queue because the overflow condition satisfies.
void queue<T>::insert_q() int main() That means a lot of space is wasted
{ { .Frequent reshuffling of elements is time consuming. One solution to this is arranging all
node<T>*p; int choice; elements in a circular fashion. Such structures are often referred to as Circular Queues.
cout<<"Enter an element to be inserted:"; queue<int>q1;

Page 45 Page 46 Page 47


45 46 47
A circular queue is a queue in which all locations are treated as circular, such that the first location CQ[0] follows the last location CQ[max-1].

Circular queue empty or underflow condition:

if(front==-1)
    cout<<"Queue is empty";

Circular queue full or overflow condition:

if(front==(rear+1)%max)
    cout<<"Circular Queue is full\n";

Insertion into a Circular Queue:
Algorithm CQueueInsertion(Q, maxSize, Front, Rear, item)
Step 1: If Front = (Rear + 1) mod maxSize then
            print "Queue Overflow"
            Return
Step 2: If Rear = maxSize-1 then
            Rear = 0
        else
            Rear = Rear + 1
Step 3: Q[Rear] = item
Step 4: If Front = -1 then
            Front = 0
Step 5: Return

Deletion from a Circular Queue:
Algorithm CQueueDeletion(Q, maxSize, Front, Rear, item)
Step 1: If Front = -1 then
            print "Queue Underflow"
            Return
Step 2: K = Q[Front]
Step 3: If Front = Rear then
            begin
                Front = -1
                Rear = -1
            end
        else
            If Front = maxSize-1 then
                Front = 0
            else
                Front = Front + 1
Step 4: Return K

Static implementation of Circular Queue ADT

#include<iostream>
#include<cstdlib>
using namespace std;
#define max 4
template <class T>
class CircularQ
{
    T cq[max];
    int front,rear;
public:
    CircularQ();
    void insertQ();
    void deleteQ();
    void displayQ();
};
template <class T>
CircularQ<T>::CircularQ()
{
    front=rear=-1;
}
template <class T>
void CircularQ<T>:: insertQ()
{
    int num;
    if(front==(rear+1)%max)
    {
        cout<<"Circular Queue is full\n";
    }
    else
    {
        cout<<"Enter an element";
        cin>>num;
        if(front==-1)
            front=rear=0;
        else
            rear=(rear+1)%max;
        cq[rear]=num;
        cout<<num <<" is inserted ...";
    }
}
template <class T>
void CircularQ<T>::deleteQ()
{
    int num;
    if(front==-1)
        cout<<"Queue is empty";
    else
    {
        num=cq[front];
        cout<<"Deleted item is "<< num;
        if(front==rear)
            front=rear=-1;
        else
            front=(front+1)%max;
    }
}
template <class T>
void CircularQ<T>::displayQ()
{
    int i;
    if(front==-1)
        cout<<"Queue is empty";
    else if(front<=rear)
    {
        cout<<"Queue elements are\n";
        for(i=front;i<=rear;i++)
            cout<<cq[i]<<"\t";
    }
    else
    {
        cout<<"Queue elements are\n";
        for(i=front;i<max;i++)
            cout<<cq[i]<<"\t";
        for(i=0;i<=rear;i++)
            cout<<cq[i]<<"\t";
    }
}
int main()
{
    CircularQ<int> obj;
    int choice;
    while(1)
    {
        cout<<"\n*** Circular Queue Operations***\n";
        cout<<"\n1.Insert Element into CircularQ";
        cout<<"\n2.Delete Element from CircularQ";
        cout<<"\n3.Display Elements in CircularQ";
        cout<<"\n4.Exit ";
        cout<<"\nEnter Choice:";
        cin>>choice;
        switch(choice)
        {
            case 1: obj.insertQ();
                break;
            case 2: obj.deleteQ();
                break;
            case 3: obj.displayQ();
                break;
            case 4: exit(0);
        }
    }
}

Priority Queue
DEFINITION:
A priority queue is a collection of zero or more elements. Each element has a priority or value. Unlike queues, which are FIFO structures, the order of deletion from a priority queue is determined by the element priority. Elements are removed/deleted either in increasing or decreasing order of priority rather than in the order in which they arrived in the queue.

There are two types of priority queues:
- Min priority queue
- Max priority queue

Min priority queue: a collection of elements into which items can be inserted arbitrarily, but from which only the smallest element can be removed.
Max priority queue: a collection of elements into which items can be inserted in any order, but from which only the largest element can be removed.

In a priority queue the elements are arranged in any order, and only the smallest or largest element is allowed to be deleted each time. The implementation of a priority queue can be done using arrays or linked lists. The heap data structure is used to implement the priority queue effectively.

APPLICATIONS:
1. The typical example of a priority queue is scheduling the jobs in an operating system. Typically the OS allocates priority to jobs; the jobs are placed in the queue and the position of a job in the priority queue determines its priority. In an OS there are 3 kinds of jobs: real-time jobs, foreground jobs and background jobs. The OS always schedules the real-time jobs first. If there are no real-time jobs pending then it schedules the foreground jobs. Lastly, if no real-time and foreground jobs are pending, the OS schedules the background jobs.
2. In network communication, priority queues are used to manage the limited bandwidth for transmission.
3. In simulation modeling, priority queues are used to manage the discrete events.

Various operations that can be performed on a priority queue are:
1. Find an element
2. Insert a new element
3. Remove or delete an element

The abstract data type specification for a max priority queue is given below. The specification for a min priority queue is the same, except that deletion, find and remove operate on the element with minimum priority.

ABSTRACT DATA TYPE (ADT):

AbstractDataType maxPriorityQueue
{
    Instances
        finite collection of elements, each with a priority
    Operations
        empty()  : return true iff the queue is empty
        size()   : return the number of elements in the queue
        top()    : return the element with maximum priority
        del()    : remove the element with largest priority from the queue
        insert(x): insert the element x into the queue
}
HEAPS
A heap is a tree data structure denoted by either a max heap or a min heap.
A max heap is a tree in which the value of each node is greater than or equal to the values of its children nodes. A min heap is a tree in which the value of each node is less than or equal to the values of its children nodes.

Max heap example:              Min heap example:

        18                            4
       /  \                          / \
     12    4                       12   14
    /  \                          /  \
  11    10                      18    20

Insertion of an element into the heap:

Consider the max heap shown above (18 at the root, children 12 and 4, and 11 and 10 below 12). Now suppose we want to insert 7. We cannot simply attach 7 as the left child of 4, because the max heap has the property that the value of any node is never greater than the value of its parent. Hence 7 will bubble up and 4 will become the left child of 7.
Note: when a new node is to be inserted in a complete binary tree we start from the bottom, at the leftmost free position on the current level. The heap is always a complete binary tree.

After inserting 7, the heap has 18 at the root with children 12 and 7; 11 and 10 are below 12, and 4 is below 7.

If we next want to insert 25, then as 25 is the greatest element it should be the root. Hence 25 bubbles up and 18 moves down, giving 25 at the root with children 12 and 18; 11 and 10 are below 12, and 4 and 7 are below 18.

The insertion strategy just outlined makes a single bubbling pass from a leaf toward the root. At each level we do O(1) work, so we should be able to implement the strategy with complexity O(height) = O(log n).

void Heap::insert(int item)
{
    int temp;
    //temp starts at the new leaf position and moves up
    temp=++size;
    while(temp!=1 && H[temp/2]<item)
    {
        H[temp]=H[temp/2];   //move the smaller parent down
        temp=temp/2;         //move to the parent position
    }
    H[temp]=item;            //place the item at its final position
}

Deletion of an element from the heap:

For the deletion operation, always the maximum element is deleted from the heap. In a max heap the maximum element is always present at the root, and if the root element is deleted we need to re-heapify the tree.

Consider the max heap with 25 at the root, children 12 and 18, and 11, 10, 4 below. Delete the root element 25: we cannot put either 12 or 18 at the root directly unless it is greater than all of its children, and we cannot put the last element 4 at the root either, as that would not satisfy the heap property. Hence we bubble up 18 and place 18 at the root, with 4 taking the old position of 18. If 18 were then deleted, 12 would become the root and 11 would become the parent node of 10.

1. Remove the maximum element, which is present at the root. This creates a hole at the root.
2. Now re-heapify the tree. Start moving from the root toward the children nodes; whenever a larger child is found, move it up into the hole, ensuring the tree keeps satisfying the heap property.
3. Repeat steps 1 and 2 if any more elements are to be deleted.

void Heap::delet()
{
    int item, temp, child;
    if(size==0)
        cout<<"Heap is empty\n";
    else
    {
        //the maximum is at the root; remove the last element and re-heapify
        item=H[size--];
        temp=1;              //start at the root
        child=2;
        while(child<=size)
        {
            if(child<size && H[child]<H[child+1])
                child++;                 //pick the larger of the two children
            if(item>=H[child])
                break;
            H[temp]=H[child];            //move the larger child up
            temp=child;
            child=child*2;
        }
        H[temp]=item;                    //place the last item at its final position
    }
}

Thus the deletion operation can be performed. The time complexity of the deletion operation is O(log n).

Applications of Heap:
1. Heaps are used in sorting algorithms; one such algorithm is known as heap sort.
2. Heaps are used in the implementation of priority queues.
HEAP SORT

Heap sort is a method in which a binary tree is used. In this method first the heap is created using a binary tree and then the heap is sorted using a priority queue.

Example: sort the elements

25  57  48  38  10  91  84  33

In the heap sort method we first take all these elements in the array A:

A[0]  A[1]  A[2]  A[3]  A[4]  A[5]  A[6]  A[7]
 25    57    48    38    10    91    84    33

Now start building the heap structure. In forming the heap the key point is to build the heap in such a way that the highest value in the array will always be the root.

Insert 25 first, then the remaining elements one by one, letting each new element bubble up to its correct position (the intermediate heap diagrams are omitted here). For example, when the element 84 is inserted, since 91 > 84 > 57, 84 becomes the parent of 57; to keep the binary tree complete, 57 is attached as the right child of 84.

Now the heap is formed. Let us sort it. For sorting the heap remember two main things: first, the binary-tree form of the heap should not be disturbed at all, so the complete binary tree is maintained throughout the sort; second, we place the higher elements at the end of the array in sorted order, i.e. A[7]=91, A[6]=84 and so on.

Step 1: Exchange A[0] with A[7], then re-heapify the remaining elements. This exchange-and-reheapify step is repeated on a shrinking heap (Step 5, for example, exchanges A[0] with A[2]) until the whole array is sorted in ascending order.
Write a program to implement heap sort

#include<iostream>
using namespace std;
void swap(int *a,int *b)
{
    int t;
    t=*a;
    *a=*b;
    *b=t;
}
void heapify(int arr[], int n, int i)
{
    int largest = i;     // Initialize largest as root
    int l = 2*i + 1;     // left = 2*i + 1
    int r = 2*i + 2;     // right = 2*i + 2

    // If left child is larger than root
    if (l < n && (arr[l] > arr[largest]))
        largest = l;

    // If right child is larger than largest so far
    if (r < n && (arr[r] > arr[largest]))
        largest = r;
UNIT -3 UNIT -3

// If largest is not root searching every element of the list till the required record is found. The elements in the list may be
Searching: Linear and binary search methods.
if (largest != i) in any order. i.e. sorted or unsorted.
Sorting: Bubble sort, selection sort, Insertion sort, Quick sort, Merge sort, Heap sort. Time complexities.
{ We begin search by comparing the first element of the list with the target element. If it
Graphs: Basic terminology, representation of graphs, graph traversal methods DFS, BFS.
swap(&arr[i], &arr[largest]); matches, the search ends and position of the element is returned. Otherwise, we will move to next
element and compare. In this way, the target element is compared with all the elements until a
match occurs. If the match do not occur and there are no more elements to be compared, we
// Recursively heapify the affected sub-tree ALGORITHMS conclude that target element is absent in the list by returning position as -1.
heapify(arr, n, largest); Definition: An Algorithm is a method of representing the step-by-step procedure for solving a
} problem. It is a method of finding the right answer to a problem or to a different problem by For example consider the following list of elements.
} breaking the problem into simple cases. 55 95 75 85 11 25 65 45
Suppose we want to search for element 11(i.e. Target element = 11). We first compare the
// function to do heap sort It must possess the following properties: target element with first element in list i.e. 55. Since both are not matching we move on the next
void heapSort(int arr[], int n) elements in the list and compare. Finally we will find the match after 5 comparisons at position 4
{ int i; 1. Finiteness: An algorithm should terminate in a finite number of steps. starting from position 0.
// Build heap (rearrange array) Linear search can be implemented in two ways.i)Non recursive ii)recursive
for ( i = n / 2 - 1; i >= 0; i--) 2. Definiteness: Each step of the algorithm must be precisely (clearly) stated.
heapify(arr, n, i);
Algorithm for Linear search
3. Effectiveness: Each step must be effective.i.e; it should be easily convertible into
// One by one extract an element from heap program statement and can be performed exactly in a finite amount of time.
for ( i=n-1; i>=0; i--) Linear_Search (A[ ], N, val , pos )
{ Step 1 : Set pos = -1 and k = 0
4. Generality: Algorithm should be complete in itself, so that it can be used to solve all
Step 2 : Repeat while k < N
// Move current root to end problems of given type for any input data.
Begin
swap(&arr[0], &arr[i]);
Step 3 : if A[ k ] = val
5. Input/Output: Each algorithm must take zero, one or more quantities as input data
Set pos = k
// call max heapify on the reduced heap and gives one of more output values.
print pos
heapify(arr, i, 0); An algorithm can be written in English like sentences or in any standard
Goto step 5
} representations. The algorithm written in English language is called Pseudo code.
End while
} Step 4 : print “Value is not present”
/* A utility function to print array of size n */ Example: To find the average of 3 numbers, the algorithm is as shown below.
Step 5 : Exit
void printArray(int arr[], int n) Step1: Read the numbers a, b, c, and d.
{ Step2: Compute the sum of a, b, and c.
for (int i=0; i<n; ++i) Step3: Divide the sum by 3. Non recursive C++ program for Linear search
cout << arr[i] << " "; Step4: Store the result in variable of d.
cout << "\n"; Step5: End the program.
} #include<iostream>
int main() using namespace std;
{ int Lsearch(int list[ ],int n,int key);
int n,i; int main()
int list[30]; {
cout<<"enter no of elements\n"; int n,i,key,list[25],pos;
cin>>n; Searching: Searching is the technique of finding desired data items that has been stored cout<<"enter no of elements\n";
cout<<"enter "<<n<<" numbers "; within some data structure. Data structures can include linked lists, arrays, search trees, hash cin>>n;
tables, or various other storage methods. The appropriate search algorithm often depends on the
for(i=0;i<n;i++) cout<<"enter "<<n<<" elements ";
data structure being searched. for(i=0;i<n;i++)
cin>>list[i];
Search algorithms can be classified based on their mechanism of searching. They are cin>>list[i];
heapSort(list, n);
 Linear searching cout<<"enter key to search";
cout << "Sorted array is \n";
printArray(list, n);  Binary searching cin>>key;
return 0; Linear or Sequential searching: Linear Search is the most natural searching method and pos= Lsearch (list,n,key);
} It is very simple but very poor in performance at times .In this method, the searching begins with if(pos==-1)
cout<<"\nelement not found";
else


cout<<"\n element found at index "<<pos; /*recursive function for linear search*/
} int Rec_Lsearch(int list[],int n,int key)
/*function for linear search*/ {
int Lsearch(int list[ ],int n,int key) if(n<0)
{ return -1;
int i,pos=-1; if(list[n]==key)
for(i=0;i<n;i++) return n;
if(key==list[i]) else
{ return Rec_Lsearch(list,n-1,key);
pos=i; }
break;
} RUN1:
return pos; enter no of elements 5
} enter 5 elements 5 55 -4 99 7
enter key to search-4
Run 1: element found at index 2
enter no of elements 5
enter 5 elements 99 88 7 2 4 RUN 2:
enter key to search 7 enter no of elements 5 Algorithm:
element found at index 2 enter 5 elements 5 55 -4 99 7 Binary_Search (A [ ], U_bound, VAL)
enter key to search77 Step 1 : set BEG = 0 , END = U_bound , POS = -1
Run 2: element not found Step 2 : Repeat while (BEG <= END )
enter no of elements 5 Step 3 : set MID = ( BEG + END ) / 2
enter 5 elements 99 88 7 2 4 Step 4 : if A [ MID ] == VAL then
enter key to search 88 POS = MID
element not found print VAL “ is available at “, POS
BINARY SEARCHING GoTo Step 6
End if
Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search if A [ MID ] > VAL then
algorithm works on the principle of divide and conquer. Binary search looks for a particular item set END = MID – 1
Recursive C++ program for Linear search Else
by comparing the middle most item of the collection. If a match occurs, then the index of item is
returned. If the middle item is greater than the item, then the item is searched in the sub-array to set BEG = MID + 1
#include<iostream> the left of the middle item. Otherwise, the item is searched for in the sub-array to the right of the End if
using namespace std; middle item. This process continues on the sub-array as well until the size of the subarray reduces End while
int Rec_Lsearch(int list[ ],int n,int key); to zero. Step 5 : if POS = -1 then
int main() Before applying binary searching, the list of items should be sorted in ascending or print VAL “ is not present “
{ descending order. End if
int n,i,key,list[25],pos; Best case time complexity is O(1) Step 6 : EXIT
cout<<"enter no of elements\n"; Worst case time complexity is O(log n)
cin>>n;
cout<<"enter "<<n<<" elements ";
for(i=0;i<n;i++)
cin>>list[i]; Non recursive C++ program for binary search
cout<<"enter key to search";
cin>>key; #include<iostream>
pos=Rec_Lsearch(list,n-1,key); using namespace std;
if(pos==-1) int binary_search(int list[],int key,int low,int high);
cout<<"\nelement not found"; int main()
else {
cout<<"\n element found at index "<<pos; int n,i,key,list[25],pos;
} cout<<"enter no of elements\n" ;


cin>>n; int n,i,key,list[25],pos;  selection sort


cout<<"enter "<<n<<" elements in ascending order "; cout<<"enter no of elements\n" ;  Insertion sort
for(i=0;i<n;i++) cin>>n;  Quick sort
cin>>list[i]; cout<<"enter "<<n<<" elements in ascending order ";  Merge sort
cout<<"enter key to search" ; for(i=0;i<n;i++)  Heap sort
cin>>key; cin>>list[i];
pos=binary_search(list,key,0,n-1); cout<<"enter key to search" ;
if(pos==-1) cin>>key; Bubble sort
cout<<"element not found" ; pos=rbinary_search(list,key,0,n-1);
else if(pos==-1) The bubble sort is an example of exchange sort. In this method, repetitive comparison is
cout<<"element found at index "<<pos; cout<<"element not found" ; performed among elements and essential swapping of elements is done. Bubble sort is commonly
} else used in sorting algorithms. It is easy to understand but time consuming i.e. takes more
/* function for binary search*/ cout<<"element found at index "<<pos; number of comparisons to sort a list . In this type, two successive elements are compared and
int binary_search(int list[],int key,int low,int high) } swapping is done. Thus, step-by-step entire array elements are checked. It is different from the
{ /*recursive function for binary search*/ selection sort. Instead of searching the minimum element and then applying swapping, two
int mid,pos=-1; int rbinary_search(int list[ ],int key,int low,int high) records are swapped instantly upon noticing that they are not in order.
while(low<=high) {
{ int mid,pos=-1;
mid=(low+high)/2; if(low<=high)
if(key==list[mid]) {
{ mid=(low+high)/2; ALGORITHM:
pos=mid; if(key==list[mid]) Bubble_Sort ( A [ ] , N )
break; { Step 1: Start
pos=mid; Step 2: Take an array of n elements
} return pos; Step 3: for i=0, .................. n-2
else if(key<list[mid]) } Step 4: for j=i+1,…….n-1
high=mid-1; else if(key<list[mid]) Step 5: if arr[j]>arr[j+1] then
else return rbinary_search(list,key,low,mid-1); Interchange arr[j] and arr[j+1]
low=mid+1; else End of if
} return rbinary_search(list,key,mid+1,high); Step 6: Print the sorted array arr
return pos; } Step 7:Stop
} return pos; #include<iostream>
Run 1: } using namespace std;
enter no of elements5 void bubble_sort(int list[30],int n);
enter 5 elements in ascending order 11 22 33 44 55 RUN 1: int main()
enter key to search33 enter no of elements 5 {
element found at index 2 enter 5 elements in ascending order 11 22 33 44 66 int n,i;
Run 2: enter key to search33 int list[30];
enter no of elements5 element found at index 2 cout<<"enter no of elements\n";
enter 5 elements in ascending order 11 22 33 44 55 RUN 2: cin>>n;
enter key to search21 enter no of elements 5 cout<<"enter "<<n<<" numbers ";
element Not found enter 5 elements in ascending order 11 22 33 44 66 for(i=0;i<n;i++)
enter key to search77 cin>>list[i];
element not found bubble_sort (list,n);
Recursive C++ program for binary search cout<<" after sorting\n";
for(i=0;i<n;i++)
#include<iostream> SORTING cout<<list[i]<<endl;
using namespace std; return 0;
int rbinary_search(int list[ ],int key,int low,int high); Arranging the elements in a list either in ascending or descending order. various sorting }
int main() algorithms are
{  Bubble sort void bubble_sort (int list[30],int n)


{ Step 4 : Repeat for J = K + 1 to N – 1


int temp ; Begin
int i,j; If A[ J ] < A [ POS ] INSERTION SORT
for(i=0;i<n;i++) Set POS = J
for(j=0;j<n-1;j++) End For Insertion sort: It iterates, consuming one input element each repetition, and growing a sorted
output list. Each iteration, insertion sort removes one element from the input data, finds the
if(list[j]>list[j+1]) Step 5 : Swap A [ K ] with A [ POS ]
location it belongs within the sorted list, and inserts it there. It repeats until no input elements
{ End For
remain.
temp=list[j]; Step 6 : stop
list[j]=list[j+1];
list[j+1]=temp; #include<iostream>
} using namespace std;
} void selection_sort (int list[],int n);
int main()
RUN 1: {
enter no of elements int n,i;
5 int list[30];
enter 5 numbers 5 4 3 2 1 cout<<"enter no of elements\n";
after sorting 1 2 3 4 5.. cin>>n;
cout<<"enter "<<n<<" numbers ";
for(i=0;i<n;i++)
cin>>list[i];
selection_sort (list,n);
Selection sort cout<<" after sorting\n";
for(i=0;i<n;i++)
selection sort:- Selection sort ( Select the smallest and Exchange ): cout<<list[i]<<endl;
The first item is compared with the remaining n-1 items, and whichever of all is lowest, is return 0;
put in the first position.Then the second item from the list is taken and compared with the }
remaining (n-2) items, if an item with a value less than that of the second item is found on the (n- void selection_sort (int list[],int n)
2) items, it is swapped (Interchanged) with the second item of the list and so on. {
int min,temp,i,j;
for(i=0;i<n;i++)
ALGORITHM:
{
min=i; Step 1: start
for(j=i+1;j<n;j++) Step 2: for i ← 1 to length(A)
{ Step 3: j ← i
if(list[j]<list[min]) Step 4: while j > 0 and A[j-1] > A[j]
min=j; Step 5: swap A[j] and A[j-1]
}
temp=list[i]; Step 6: j←j-1
list[i]=list[min]; Step 7: end while
list[min]=temp; Step 8: end for
} Step9: stop
}
program to implement insertion sort
RUN 1:
Algorithm: #include<iostream>
Selection_Sort ( A [ ] , N ) enter no of elements
using namespace std;
Step 1 :start 5 void insertion_sort(int a[],int n)
Step 2: Repeat For K = 0 to N – 2 enter 5 numbers 5 4 3 2 1 {
Begin after sorting 1 2 3 4 5 int i,t,pos;
Step 3 : Set POS = K for(i=0;i<n;i++)
{
t=a[i];
pos=i;
while(pos>0&&a[pos-1]>t)
{
a[pos]=a[pos-1];
pos--;
}
a[pos]=t;
}
}
int main()
{
int n,i;
int list[30];
cout<<"enter no of elements\n";
cin>>n;
cout<<"enter "<<n<<" numbers ";
for(i=0;i<n;i++)
cin>>list[i];
insertion_sort(list,n);
cout<<" after sorting\n";
for(i=0;i<n;i++)
cout<<list[i]<<endl;
return 0;
}

RUN 1:
enter no of elements 5
enter 5 numbers 55 44 33 22 11

after sorting 11 22 33 44 55

Quick sort

Quick sort: It is a divide and conquer algorithm. Developed by Tony Hoare in 1959. Quick sort
first divides a large array into two smaller sub-arrays: the low elements and the high elements.
Quick sort can then recursively sort the sub-arrays.
ALGORITHM:

Step 1: Pick an element, called a pivot, from the array.


Step 2: Partitioning: reorder the array so that all elements with values less than the pivot come
before the pivot, while all elements with values greater than the pivot come after it (equal
values can go either way). After this partitioning, the pivot is in its final position. This is
called the partition operation.
Step 3: Recursively apply the above steps to the sub-array of elements with smaller values and
separately to the sub-array of elements with greater values.
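The quick sort program used in these notes appears on the following pages. As a compact, self-contained reference, here is a minimal sketch of the same idea using the Lomuto partition scheme; taking the last element as the pivot is an assumption of this sketch, not of the program in the notes.

#include <iostream>
#include <utility>
using namespace std;

// Lomuto partition: places the pivot (last element) in its final position
// and returns that position.
int partitionL(int a[], int low, int high)
{
    int pivot = a[high];
    int i = low - 1;                 // boundary of the "less than pivot" region
    for (int j = low; j < high; j++)
        if (a[j] < pivot)
            swap(a[++i], a[j]);
    swap(a[i + 1], a[high]);         // put the pivot between the two partitions
    return i + 1;
}

void quicksortL(int a[], int low, int high)
{
    if (low < high)
    {
        int p = partitionL(a, low, high);
        quicksortL(a, low, p - 1);   // sort elements before the pivot
        quicksortL(a, p + 1, high);  // sort elements after the pivot
    }
}

int main()
{
    int list[] = {5, 4, 3, 2, 1};
    quicksortL(list, 0, 4);
    for (int x : list)
        cout << x << " ";            // prints: 1 2 3 4 5
    cout << "\n";
    return 0;
}

After partitionL() returns, the pivot is already in its final sorted position, so the two recursive calls never need to look at it again.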


cin>>n; int i,j,k;


cout<<"enter "<<n<<" numbers "; i=low;
program to implement Quick sort j=mid+1;
for(i=0;i<n;i++)
k=low;
#include<iostream.h> cin>>list[i]; while((i<=mid)&&(j<=high))
int partition(int x[],int low,int high) quicksort(list,0,n-1); {
{ cout<<" after sorting\n"; if(a[i]<=a[j])
int down,up,pivot,t; for(i=0;i<n;i++) {
if(low<high) cout<<list[i]<<endl; temp[k]=a[i];
++i;
{ return 0;
}
down=low; } else
up=high; {
pivot=down; enter no of elements temp[k]=a[j];
while(down<up) 5 ++j;
{ enter 5 numbers 5 4 3 2 1 }
++k;
while((x[down]<=x[pivot])&&(down<high))down++; after sorting 1 2 3 4 5
}
while(x[up]>x[pivot])up--; if(i>mid)
if(down<up) {
Merge sort
{ while(j<=high)
t=x[down]; {
x[down]=x[up]; Merge sort is a sorting technique based on divide and conquer technique. In merge sort the temp[k]=a[j];
unsorted list is divided into N sublists, each having one element, because a list of one element is ++j;
x[up]=t;
considered sorted. Then, it repeatedly merge these sublists, to produce new sorted sublists, and at ++k;
}/*endif*/ lasts one sorted list is produced. Merge Sort is quite fast, and has a time complexity of O(n log n). }
} }
t=x[pivot]; Conceptually, merge sort works as follows: else
x[pivot]=x[up]; 1. Divide the unsorted list into two sub lists of about half the size. {
x[up]=t; 2. Divide each of the two sub lists recursively until we have list sizes of length 1, in which case the while(i<=mid)
list itself is returned. {
}
3. Merge the two sub lists back into one sorted list. temp[k]=a[i];
return up; ++i;
} ++k;
void quicksort(int x[],int low,int high) }
{ }
int p; for(int i=low;i<=high;i++)
a[i]=temp[i];
if(low<high)
}
{
p=partition(x,low,high); void mergesort(int a[],int low,int high)
quicksort(x,low,p-1); {
quicksort(x,p+1,high); int mid;
} if(low<high)
{
}
#include<iostream> mid=(low+high)/2;
int main() mergesort(a,low,mid);
using namespace std;
{ void merge(int a[ ],int low,int mid,int high) mergesort(a,mid+1,high);
int n,i; { merge(a,low,mid,high);
int list[30]; int temp[100]; }
cout<<"enter no of elements\n"; }


int main()
{
int n,i;
int list[30];
cout<<"enter no of elements\n";
cin>>n;
cout<<"enter "<<n<<" numbers ";
for(i=0;i<n;i++)
cin>>list[i];
mergesort(list,0,n-1);
cout<<" after sorting\n";
for(i=0;i<n;i++)
cout<<list[i]<<"\t";
return 0;
}

RUN 1:
enter no of elements 5
enter 5 numbers 44 33 55 11 -1
after sorting -1 11 33 44 55

Heap sort

A heap is a complete binary tree with the property that a parent is always greater than or equal to either of its children (if they exist). First the heap (max or min) is created from the input elements and then the heap is sorted using the priority-queue idea.

Steps Followed:
a) Start with just one element. One element will always satisfy the heap property.
b) Insert the next element and make this a heap again.
c) Repeat step b until all elements are included in the heap.
Steps of Sorting:
a) Exchange the root and the last element in the heap.
b) Make this a heap again, but this time do not include the last node.
c) Repeat steps a and b until there is no element left.

C++ program for implementation of Heap Sort

#include <iostream>
using namespace std;
// To heapify a subtree rooted with node i which is
// an index in arr[]. n is size of heap
void heapify(int arr[], int n, int i)
{
int largest = i; // Initialize largest as root
int L = 2*i + 1; // left = 2*i + 1
int R = 2*i + 2; // right = 2*i + 2
// If left child is larger than root
if (L < n && arr[L] > arr[largest])
largest = L;
// If right child is larger than largest so far
if (R < n && arr[R] > arr[largest])
largest = R;
// If largest is not root
if (largest != i)
{
swap(arr[i], arr[largest]);
// Recursively heapify the affected sub-tree
heapify(arr, n, largest);
}
}

void heapSort(int arr[], int n)
{
int i;
// Build heap (rearrange array)
for (i = n / 2 - 1; i >= 0; i--)
heapify(arr, n, i);
// One by one extract an element from heap
for (i = n - 1; i >= 0; i--)
{
// Move current root to end
swap(arr[0], arr[i]);
// call max heapify on the reduced heap
heapify(arr, i, 0);
}
}

/* A utility function to print array of size n */
void printArray(int arr[], int n)
{
for (int i = 0; i < n; ++i)
cout << arr[i] << " ";
cout << "\n";
}

int main()
{
int n,i;
int list[30];
cout<<"enter no of elements\n";
cin>>n;
cout<<"enter "<<n<<" numbers ";
for(i=0;i<n;i++)
cin>>list[i];
heapSort(list, n);
cout << "Sorted array is \n";
printArray(list, n);
return 0;
}

RUN 1:
enter no of elements 5
enter 5 numbers 11 99 22 101 1
Sorted array is
1 11 22 99 101

Time complexities:
Algorithm         Worst case      Average case    Best case
Bubble sort       O(n^2)          O(n^2)          O(n^2)
Selection sort    O(n^2)          O(n^2)          O(n^2)
Insertion sort    O(n^2)          O(n^2)          O(n)
Quick sort        O(n^2)          O(n log n)      O(n log n)
Merge sort        O(n log n)      O(n log n)      O(n log n)
Heap sort         O(n log n)      O(n log n)      O(n log n)
Linear search     O(n)            O(n)            O(1)
Binary search     O(log n)        O(log n)        O(1)


Graphs:-
A graph G is a discrete structure consisting of nodes (called vertices) and lines joining the nodes (called edges). Two vertices are adjacent to each other if they are joined by an edge. The edge joining the two vertices is said to be incident with them. We use V(G) and E(G) to denote the sets of vertices and edges of G respectively.

Terminology of Graph (the labelled example graph for this terminology is not reproduced here)

Euler Circuit and Euler Path


An Euler circuit in a graph G is a simple circuit containing every edge of G. An Euler path in G is a
simple path containing every edge of G.

Graph Representations
A graph data structure can be represented using the following representations...
1. Adjacency Matrix
2. Incidence Matrix
3. Adjacency List
Adjacency Matrix
In this representation, a graph is represented using a matrix of size (total number of vertices) by (total number of vertices). That means a graph with 4 vertices can be represented using a 4X4 matrix. In this matrix, both rows and columns represent vertices. The matrix is filled with either 1 or 0: 1 represents that there is an edge from the row vertex to the column vertex, and 0 represents that there is no edge from the row vertex to the column vertex.

For example, consider the following undirected graph representation (the example graph and its matrix are not reproduced here).

Incidence Matrix
In this representation, a graph is represented using a matrix of size (total number of vertices) by (total number of edges). That means a graph with 4 vertices and 6 edges can be represented using a 4X6 matrix. In this matrix, rows represent vertices and columns represent edges. The matrix is filled with 0, 1 or -1: 0 means the column edge is not incident on the row vertex, 1 means the column edge is connected as an outgoing edge from the row vertex, and -1 means the column edge is connected as an incoming edge to the row vertex.
For example, consider the following directed graph representation (figure omitted).

Adjacency List
In this representation, every vertex of the graph contains a list of its adjacent vertices.
For example, consider the following directed graph representation implemented using a linked list (figure omitted). This representation can also be implemented using an array of lists (figure omitted).

Graph traversals
Graph traversal means visiting every vertex and edge exactly once in a well-defined order. While using certain graph algorithms, you must ensure that each vertex of the graph is visited exactly once. The order in which the vertices are visited is important and may depend upon the algorithm or question that you are solving.
During a traversal, it is important that you track which vertices have been visited. The most common way of tracking vertices is to mark them.

Depth First Search (DFS)

The DFS algorithm is a recursive algorithm that uses the idea of backtracking. It involves exhaustive searches of all the nodes by going ahead, if possible, else by backtracking.
Here, the word backtrack means that when you are moving forward and there are no more nodes along the current path, you move backwards on the same path to find nodes to traverse. All the nodes will be visited on the current path till all the unvisited nodes have been traversed, after which the next path will be selected.
This recursive nature of DFS can be implemented using stacks. The basic idea is as follows:
Pick a starting node and push all its adjacent nodes into a stack.
Pop a node from the stack to select the next node to visit and push all its adjacent nodes into the stack.
Repeat this process until the stack is empty. However, ensure that the nodes that are visited are marked. This will prevent you from visiting the same node more than once. If you do not mark the nodes that are visited and you visit the same node more than once, you may end up in an infinite loop.

DFS-iterative (G, s): //Where G is graph and s is source vertex
let S be stack
S.push( s ) //Inserting s in stack
mark s as visited
while ( S is not empty):
//Pop a vertex from stack to visit next
v = S.top( )
S.pop( )
//Push all the neighbours of v in stack that are not visited
for all neighbours w of v in Graph G:
if w is not visited :
S.push( w )
mark w as visited

DFS-recursive(G, s):
mark s as visited
for all neighbours w of s in Graph G:
if w is not visited:
DFS-recursive(G, w)
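To connect the adjacency-list representation with the DFS pseudocode above, the following is a minimal C++ sketch (not taken from the notes); the sample graph, the vertex numbering and the name dfs_iterative are assumptions made for illustration.

#include <iostream>
#include <stack>
#include <vector>
using namespace std;

// Iterative DFS from source vertex s, following the stack-based idea above.
// adj[v] holds the list of neighbours of vertex v.
void dfs_iterative(const vector<vector<int>>& adj, int s)
{
    vector<bool> visited(adj.size(), false);
    stack<int> st;
    st.push(s);
    visited[s] = true;               // mark s as visited when it is pushed
    while (!st.empty())
    {
        int v = st.top();            // pop a vertex from the stack to visit next
        st.pop();
        cout << v << " ";
        for (int w : adj[v])         // push all unvisited neighbours of v
            if (!visited[w])
            {
                visited[w] = true;
                st.push(w);
            }
    }
}

int main()
{
    // Sample graph with 5 vertices (0..4) stored as adjacency lists.
    vector<vector<int>> adj = { {1, 2}, {0, 3, 4}, {0}, {1}, {1} };
    dfs_iterative(adj, 0);           // one possible visiting order: 0 2 1 4 3
    cout << "\n";
    return 0;
}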

Breadth First Search (BFS):
There are many ways to traverse graphs. BFS is the most commonly used approach. BFS is a traversing algorithm where you start traversing from a selected node (source or starting node) and traverse the graph layerwise, thus exploring the neighbour nodes (nodes which are directly connected to the source node). You must then move towards the next-level neighbour nodes. As the name BFS suggests, you are required to traverse the graph breadthwise as follows:
1. First move horizontally and visit all the nodes of the current layer.
2. Move to the next layer.
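The notes give no program for BFS, so the following is a minimal illustrative C++ sketch of BFS over an adjacency-list graph using a queue; the sample graph and the function name bfs are assumptions made for the example.

#include <iostream>
#include <queue>
#include <vector>
using namespace std;

// Breadth-first traversal starting from vertex s.
// adj[v] holds the list of neighbours of vertex v.
void bfs(const vector<vector<int>>& adj, int s)
{
    vector<bool> visited(adj.size(), false);
    queue<int> q;
    visited[s] = true;          // mark the source before enqueuing it
    q.push(s);
    while (!q.empty())
    {
        int v = q.front();
        q.pop();
        cout << v << " ";       // process the vertex (here: just print it)
        for (int w : adj[v])    // enqueue every unvisited neighbour
            if (!visited[w])
            {
                visited[w] = true;
                q.push(w);
            }
    }
}

int main()
{
    // A small sample graph with 5 vertices (0..4), stored as adjacency lists.
    vector<vector<int>> adj = { {1, 2}, {0, 3}, {0, 4}, {1}, {2} };
    bfs(adj, 0);                // visits the vertices level by level: 0 1 2 3 4
    cout << "\n";
    return 0;
}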
UNIT-4
Dictionaries: linear list representation, skip list representation, operations - insertion, deletion and searching, hash table representation, hash functions, collision resolution - separate chaining, open addressing - linear probing, quadratic probing, double hashing, rehashing, extendible hashing.

DICTIONARIES:
A dictionary is a collection of pairs of key and value where every value is associated with the corresponding key.
Basic operations that can be performed on a dictionary are:
1. Insertion of a value in the dictionary
2. Deletion of a particular value from the dictionary
3. Searching of a specific value with the help of its key

Linear List Representation
The dictionary can be represented as a linear list. The linear list is a collection of key and value pairs. There are two methods of representing a linear list.
1. Sorted Array - an array data structure is used to implement the dictionary.
2. Sorted Chain - a linked list data structure is used to implement the dictionary.

Structure of linear list for dictionary:

class dictionary
{
private:
int k,data;
struct node
{
public:
int key;
int value;
struct node *next;
} *head;
public:
dictionary();
void insert_d( );
void delete_d( );
void display_d( );
void length();
};

Insertion of new node in the dictionary:

Consider that initially the dictionary is empty, so head = NULL. We create a new node with some key and value contained in it, say <1,10>. As head is NULL, this new node becomes head, and the dictionary contains only one record. This node acts as both 'curr' and 'prev': the 'curr' pointer always points to the node currently being visited and 'prev' always points to the node previous to 'curr'. As there is only one node in the list, mark the 'curr' node as the 'prev' node as well.

New/head/curr/prev
[1 | 10 | NULL]

Insert a record with key=4 and value=20. Compare the key values of the 'curr' and 'New' nodes. If New->key > curr->key then attach the New node after the 'curr' node:
curr->next = New;
prev = curr;

prev/head      New
[1 | 10] -> [4 | 20 | NULL]

Add a new node <7,80>; it is again attached at the end:

head/prev      curr        New
[1 | 10] -> [4 | 20] -> [7 | 80 | NULL]

If we insert <3,15> then we have to search for its proper position by comparing key values. The condition (curr->key < New->key) becomes false at the node <4,20>, hence the else part of the insertion routine executes and <3,15> is linked in between <1,10> and <4,20>.

[1 | 10] -> [4 | 20] -> [7 | 80 | NULL]      new node: [3 | 15]

void dictionary::insert_d( )
{
node *p,*curr,*prev;
cout<<"Enter an key and value to be inserted:";
cin>>k;
cin>>data;

p=new node; head=curr->next;


p->key=k; else
p->value=data; Case 2: prev->next=curr->next;
p->next=NULL;
if(head==NULL) delete curr;
If the node to be deleted is head node
head=p; cout<<"Item deleted from dictionary...";
i.e.. if(curr==head)
else }
{ Then, simply make ‘head’ node as next node and delete ‘curr’ }
curr=head;
while((curr->key<p->key)&&(curr->next!=NULL))
{ curr head The length operation:
prev=curr;
curr=curr->next; 1 10 3 15 4 20 7 80 NULL int dictionary::length()
} {
if(curr->next==NULL) struct node *curr;
{ int count;
if(curr->key<p->key) count=0;
{ curr=head;
Hence the list becomes if(curr==NULL)
curr->next=p;
prev=curr; {
} head cout<<”The list is empty”;
else return 0;
3 15 4 20 7 80 NULL }
{
p- >next=prev->next; while(curr!=NULL)
prev->next=p; {
} count++;
void dictionary::delete_d( ) cur=curr->next;
}
else { }
{ node*curr,*prev; return count;
p->next=prev->next; cout<<"Enter key value that you want to delete..."; }
prev->next=p; cin>>k;
} if(head==NULL)
cout<<"\nInserted into dictionary Sucesfully.... \n"; SKIP LIST REPRESENTATION
cout<<"\ndictionary is Underflow";
} Skip list is a variant list for the linked list. Skip lists are made up of a
} else series of nodes connected one after the other. Each node contains a key and value pair as well as
{ curr=head; one or more references, or pointers, to nodes further along in the list. The number of references
The delete operation: while(curr!=NULL) each node contains is determined randomly. This gives skip lists their probabilistic nature, and the
{ number of references a node contains is called its node level.
Case 1: Initially assign ‘head’ node as ‘curr’ node.Then ask for a key value of the node which is if(curr->key==k) There are two special nodes in the skip list one is head node which is the starting node of the list
to be deleted. Then starting from head node key value of each jode is cked and compared with the and tail node is the last node of the list
break;
desired node’s key value. We will get node which is to be deleted in variable ‘curr’. The node
prev=curr;
given by variable ‘prev’ keeps track of previous node of ‘cuu’ node. For eg, delete node with key
value 4 then curr=curr->next;
}
cur }
if(curr==NULL)
1 2 3 4 5 6 7
cout<<"Node not found...";
head tail
1 10 3 15 4 20 7 80 NULL else node node
{ The skip list is an efficient implementation of dictionary using sorted chain. This is because in
if(curr==head) skip list each node consists of forward references of more than one node at a time.


The individual node looks like this: {


Eg: temp->element.value=New_pair.value;
return;
}
Key value array of pointer
null
if(New_Level > levels)
{
Now to search any node from above given sorted chain we have to search the sorted chain from New_Level = ++levels;
head node by visiting each node. But this searching time can be reduced if we add one level in last[New_Level] = header;
Element *next
every alternate node. This extra level contains the forward pointer of some node. That means in }
Searching:
sorted chain come nodes can holds pointers to more than one node.
The desired node is searched with the help of a key value.
skipNode<K,E> *newNode = new skipNode<K,E>(New_pair, New_Level+1);

template<class K, class E> for(int i=0;i<=New_Level;i++)


skipnode<K,E>* skipLst<K,E>::search(K& Key_val) {
NULL { newNode->next[i] = last[i]->next[i];
skipnode<K,E>* Forward_Node = header; last[i]->next[i] = newNode;
for(int i=level;i>=0;i--) }
{ len++;
while (Forward_Node->next[i]->element.key < key_val) return;
If we want to search node 40 from above chain there we will require comparatively less time. This Forward_Node = Forward_Node->next[i]; }
search again can be made efficient if we add few more pointers forward references. last[i] = Forward_Node;
} Determining the level of each node:
return Forward_Node->next[0];
template <class K, class E>
}
int skipLst<K,E>::randomlevel()
Searching for a key within a skip list begins with starting at header at the overall list level and {
NULL moving forward in the list comparing node keys to the key_val. If the node key is less than the int lvl=0;
key_val, the search continues moving forward at the same level. If o the other hand, the node key while(rand() <= Lvl_No)
is equal to or greater than the key_val, the search drops one level and continues forward. This lvl=lvl+1;
skip list process continues until the desired key_val has been found if it is present in the skip list. If it is if(lvl<=MaxLvl)
not, the search will either continue at the end of the list or until the first key with a value greater return lvl;
than the search key is found. else
Node structure of skip list: Insertion: return MaxLvl;
There are two tasks that should be done before insertion operation: }
template <class K, class E> 1. Before insertion of any node the place for this new node in the skip list is searched. Hence
struct skipnode before any insertion to take place the search routine executes. The last[] array in the search
{ routine is used to keep track of the references to the nodes where the search, drops down Deletion:
typedef pair<const K,E> pair_type; one level. First of all, the deletion makes use of search algorithm and searches the node that is to be deleted.
pair_type element; 2. The level for the new node is retrieved by the routine randomelevel() If the key to be deleted is found, the node containing the key is removed.
skipnode<K,E> **next;
skipnode(const pair_type &New_pair, int MAX):element(New_pair) template<class K,class E> template<class K, class E>
{ void skipLst<K,E>::insert(pair<K,E>& New_pair) void skipLst<K,E>::delet(K& Key_val)
next=new skipnode<K,E>*[MAX]; { {
} if(New_pair.key >= tailkey) if(key_val>=tailKey)
}; { return;
cout<<”Key is too large”; skipNode<K,E>* temp = search(Key_val);
} if(temp->elemnt.key != Key_val)
return;
skipNode<K,E>* temp = search(New_pair.key);
if(temp->element.key == New_pair.key) for(int i=0;i<=levels;i++)


{ h(key) = record % table size 0 COLLISION


if(last[i]->next[i] == temp) 1
last[i]=>next[i] = temp->next[i]; 54%10=4 2 72 the hash function is a function that returns the key value using which the record can be placed in
} 72%10=2 3 the hash table. Thus this function helps us in placing the record in the hash table at appropriate
89%10=9 4 54 position and due to this we can retrieve the record directly from that location. This function need
while(level>0 && header->next[level] == tail) 37%10=7 5 to be designed very carefully and it should not return the same hash key address for two different
levels--; 6 records. This is an undesirable situation in hashing.
delete temp; 7 37
len--; 8 Definition: The situation in which the hash function returns the same hash key (home bucket) for
} 9 89 more than one record is called collision and two same hash keys returned for different records is
2. Mid Square: called synonym.
HASH TABLE REPRESENTATION In the mid square method, the key is squared and the middle or mid part of the result is used as the
 Hash table is a data structure used for storing and retrieving data very quickly. Insertion of index. If the key is a string, it has to be preprocessed to produce a number.
data in the hash table is based on the key value. Hence every entry in the hash table is Consider that if we want to place a record 3111 then Similarly when there is no room for a new pair in the hash table then such a situation is
associated with some key. called overflow. Sometimes when we handle collision it may lead to overflow conditions.
 Using the hash key the required piece of data can be searched in the hash table by few or 31112 = 9678321 Collision and overflow show the poor hash functions.
more key comparisons. The searching time is then dependent upon the size of the hash for the hash table of size 1000
H(3111) = 783 (the middle 3 digits) For example, 0
table.
1 131
 The effective representation of dictionary can be done using hash table. We can place the
3. Multiplicative hash function: Consider a hash function. 2
dictionary entries in the hash table using hash function.
HASH FUNCTION The given record is multiplied by some constant value. The formula for computing the hash key 3 43
 Hash function is a function which is used to put the data in the hash table. Hence one can is- H(key) = recordkey%10 having the hash table size of 10 4 44
use the same hash function to retrieve the data from the hash table. Thus hash function is 5
used to implement the hash table. H(key) = floor(p *(fractional part of key*A)) where p is integer constant and A is constant real The record keys to be placed are 6 36
 The integer returned by the hash function is called hash key. number. 7 57
131, 44, 43, 78, 19, 36, 57 and 77 8 78
For example: Consider that we want place some employee records in the hash table The record of Donald Knuth suggested to use constant A = 0.61803398987 131%10=1 9 19
employee is placed with the help of key: employee ID. The employee ID is a 7 digit number for 44%10=4
placing the record in the hash table. To place the record 7 digit number is converted into 3 digits If key 107 and p=50 then 43%10=3
by taking only last three digits of the key. 78%10=8
H(key) = floor(50*(107*0.61803398987)) 19%10=9
th
If the key is 496700 it can be stored at 0 position. The second key 8421002, the record of those = floor(3306.4818458045) 36%10=6
key is placed at 2nd position in the array. = 3306 57%10=7
Hence the hash function will be- H(key) = key%1000 At 3306 location in the hash table the record 107 will be placed. 77%10=7
Where key%1000 is a hash function and key obtained by hash function is called hash key.
4. Digit Folding:
 Bucket and Home bucket: The hash function H(key) is used to map several dictionary The key is divided into separate parts and using some simple operation these parts are Now if we try to place 77 in the hash table then we get the hash key to be 7 and at index 7 already
entries in the hash table. Each position of the hash table is called bucket. combined to produce the hash key. the record key 57 is placed. This situation is called collision. From the index 7 if we look for next
For eg; consider a record 12365412 then it is divided into separate parts as 123 654 12 and these vacant position at subsequent indices 8.9 then we find that there is no room to place 77 in the hash
The function H(key) is home bucket for the dictionary with pair whose value is key. are added together table. This situation is called overflow.

TYPES OF HASH FUNCTION H(key) = 123+654+12 COLLISION RESOLUTION TECHNIQUES


There are various types of hash functions that are used to place the record in the hash table- = 789 If collision occurs then it should be handled by applying some techniques. Such a
The record will be placed at location 789 technique is called collision handling technique.
1. Division Method: The hash function depends upon the remainder of division. 1. Chaining
Typically the divisor is table length. 5. Digit Analysis: 2. Open addressing (linear probing)
For eg; If the record 54, 72, 89, 37 is placed in the hash table and if the table size is 10 then The digit analysis is used in a situation when all the identifiers are known in advance. We 3.Quadratic probing
first transform the identifiers into numbers using some radix, r. Then examine the digits of each 4. Double hashing
identifier. Some digits having most skewed distributions are deleted. This deleting of digits is 5. Double hashing
continued until the number of remaining digits is small enough to give an address in the range of 6.Rehashing
the hash table. Then these digits are used to calculate the hash address.
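As a small illustrative sketch (not code from the notes) of two of the hash functions described above, the following computes the division-method and mid-square indices for the worked examples; the function names divisionHash and midSquareHash are assumptions.

#include <iostream>
using namespace std;

// Division method: h(key) = key % table_size
int divisionHash(int key, int tableSize)
{
    return key % tableSize;
}

// Mid-square method: square the key and take its middle digits as the index.
// Here the middle three digits are extracted for a table of size 1000,
// matching the 3111 -> 783 example above.
int midSquareHash(int key)
{
    long long square = 1LL * key * key;   // 3111 * 3111 = 9678321
    return (int)((square / 100) % 1000);  // middle three digits: 783
}

int main()
{
    cout << divisionHash(54, 10) << "\n";   // 4, so record 54 goes to bucket 4
    cout << divisionHash(72, 10) << "\n";   // 2
    cout << midSquareHash(3111) << "\n";    // 783
    return 0;
}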


CHAINING The next record key is 9. According to decision hash function it demands for the home bucket 9.
Initially, we will put the following keys in the hash table. Hence we will place 9 at index 9. Now the next final record key 29 and it hashes a key 9. But
In collision handling method chaining is a concept which introduces an additional field with data We will use Division hash function. That means the keys are placed using the formula home bucket 9 is already occupied. And there is no next empty bucket as the table size is limited
i.e. chain. A separate chain table is maintained for colliding data. When collision occurs then a to index 9. The overflow occurs. To handle it we move back to bucket 0 and is the location over
linked list(chain) is maintained at the home bucket. H(key) = key % tablesize there is empty 29 will be placed at 0th index.
H(key) = key % 10 Problem with linear probing:
For eg; One major problem with linear probing is primary clustering. Primary clustering is a process in
For instance the element 131 can be placed at which a block of data is formed in the hash table when collision is resolved.
Consider the keys to be placed in their home buckets are Key
131, 3, 4, 21, 61, 7, 97, 8, 9 H(key) = 131 % 10
39
=1 19%10 = 9 cluster is formed
then we will apply a hash function as H(key) = key % D 18%10 = 8 29
Index 1 will be the home bucket for 131. Continuing in this fashion we will place 4, 8, 7. 39%10 = 9 8
Where D is the size of table. The hash table will be- 29%10 = 9
Now the next key to be inserted is 21. According to the hash function 8%10 = 8
Here D = 10
H(key)=21%10 rest of the table is empty
H(key) = 1
0 this cluster problem can be solved by quadratic probing.
1 131 21 But the index 1 location is already occupied by 131 i.e. collision occurs. To resolve this collision
61 NULL
we will linearly move down and at the next empty location we will prob the element. Therefore 18
21 will be placed at the index 2. If the next element is 5 then we get the home bucket for 5 as
3 NULL QUADRATIC PROBING: 19
index 5 and this bucket is empty so we will put the element 5 at index 5.

61 NULL Quadratic probing operates by taking the original hash value and adding successive values of an
131
arbitrary quadratic polynomial to the starting value. This method uses following formula.
Index Key Key Key

7 97 NULL
NULL NULL NULL H(key) = (Hash(key) + i2) % m)
0
131 131 131 where m can be table size or any prime number.
1
NULL 21 21 for eg; If we have to insert following elements in the hash table with table size 10:
A chain is maintained for colliding elements. for instance 131 has a home bucket (key) 1. 2
similarly key 21 and 61 demand for home bucket 1. Hence a chain is maintained at index 1. NULL NULL 31 37, 90, 55, 22, 17, 49, 87 0 90
3 1 11
OPEN ADDRESSING – LINEAR PROBING 4 4 4 37 % 10 = 7 2 22
4 90 % 10 = 0 3
This is the easiest method of handling collision. When collision occurs i.e. when two records NULL 5 5 55 % 10 = 5 4
demand for the same home bucket in the hash table then collision can be solved by placing the 5 22 % 10 = 2 55
5
second record linearly down whenever the empty bucket is found. When use linear probing (open NULL NULL 61 11 % 10 = 1 6
addressing), the hash table is represented as a one-dimensional array with indices that range from 6
7 37
0 to the desired table size-1. Before inserting any elements into this table, we must initialize the 7 7 7 Now if we want to place 17 a collision will occur as 17%10 = 7 and 8
table to represent the situation where all slots are empty. This allows us to detect overflows and 7 bucket 7 has already an element 37. Hence we will apply 9
collisions when we inset elements into the table. Then using some suitable hash function the 8 8 8 quadratic probing to insert this record in the hash table.
element can be inserted into the hash table. 8
NULL NULL NULL Hi (key) = (Hash(key) + i2) % m
For example: 9
Consider i = 0 then
Consider that following keys are to be inserted in the hash table (17 + 02) % 10 = 7
after placing keys 31, 61
131, 4, 8, 7, 21, 5, 31, 61, 9, 29
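A minimal sketch of open addressing with linear probing for the keys listed above, assuming a table of size 10 and using -1 as the empty-slot marker (the function names are assumptions of this sketch, not code from the notes):

#include <iostream>
using namespace std;

const int SIZE = 10;
const int EMPTY = -1;                    // marker for an unused bucket

int hashKey(int key) { return key % SIZE; }

// Insert with linear probing: on collision, step to the next bucket
// (wrapping around) until an empty one is found. Returns false on overflow.
bool insertKey(int table[], int key)
{
    int home = hashKey(key);
    for (int i = 0; i < SIZE; i++)
    {
        int pos = (home + i) % SIZE;
        if (table[pos] == EMPTY)
        {
            table[pos] = key;
            return true;
        }
    }
    return false;                        // table is full: overflow
}

// Search follows the same probe sequence as insertion.
int searchKey(int table[], int key)
{
    int home = hashKey(key);
    for (int i = 0; i < SIZE; i++)
    {
        int pos = (home + i) % SIZE;
        if (table[pos] == key)
            return pos;
        if (table[pos] == EMPTY)
            return -1;                   // hit an empty slot: key is absent
    }
    return -1;
}

int main()
{
    int table[SIZE];
    for (int i = 0; i < SIZE; i++)
        table[i] = EMPTY;
    int keys[] = {131, 4, 8, 7, 21, 5, 31, 61, 9, 29};
    for (int k : keys)
        insertKey(table, k);             // 29 collides at bucket 9 and wraps to bucket 0
    cout << "29 stored at index " << searchKey(table, 29) << "\n";  // prints 0
    return 0;
}

Searching follows exactly the same probe sequence as insertion, stopping either at the key or at the first empty bucket.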


(17 + 12) % 10 = 8, when i =1 Now if 17 to be inserted then In such situations, we have to transfer entries from old table to the new table by re computing
Key their positions using hash functions.
The bucket 8 is empty hence we will place the element at index 8. 0 90 H1(17) = 17 % 10 = 7 90
Then comes 49 which will be placed at index 9. 1 11 H2(key) = M – (key % M) Consider we have to insert the elements 37, 90, 55, 22, 17, 49, and 87. the table size is 10 and will
17
2 22 use hash function.,
49 % 10 = 9 3 Here M is prime number smaller than the size of the table. Prime number 22

4 smaller than table size 10 is 7 H(key) = key mod tablesize


5 55
6 Hence M = 7 37 % 10 = 7
7 37
45 90 % 10= 0
8 49 H2(17) = 7-(17 % 7) 55 % 10 = 5
9 =7–3=4 22 % 10 = 2
37 17 % 10 = 7 Collision solved by linear probing
Now to place 87 we will use quadratic probing. That means we have to insert the element 17 at 4 places from 37. In short we ha ve to take 4 49 % 10 = 9
0 90 jumps. Therefore the 17 will be placed at index 1.
(87 + 0) % 10 = 7 1 11 49 Now this table is almost full and if we try to insert more elements collisions will occur and
(87 + 1) % 10 = 8… but already occupied 2 22 Now to insert number 55 eventually further insertions will fail. Hence we will rehash by doubling the table size. The old
(87 + 22) % 10 = 1.. already occupied 3 table size is 10 then we should double this size for new table, that becomes 20. But 20 is not a
Key
(87 + 32) % 10 = 6 4 H1(55) = 55 % 10 =5 Collision prime number, we will prefer to make the table size as 23. And new hash function will be
5 90
It is observed that if we want place all the necessary elements in 55
6 87 H2(55) = 7-(55 % 7) 17 H(key) key mod 23 0 90
the hash table the size of divisor (m) should be twice as large as
7 37 =7–6=1 1 11
total number of elements. 22
8 49 37 % 23 = 14 2 22
9 That means we have to take one jump from index 5 to place 55. 90 % 23 = 21 3
DOUBLE HASHING Finally the hash table will be - 55 % 23 = 9 4
45
22 % 23 = 22 5 55
Double hashing is technique in which a second hash function is applied to the key when a 17 % 23 = 17 6 87
collision occurs. By applying the second hash function we will get the number of positions from 55
49 % 23 = 3 7 37
the point of collision to insert. 87 % 23 = 18 8 49
There are two important rules to be followed for the second function: 37
9
 it must never evaluate to zero. 10
 must make sure that all cells can be probed. 49 11
The formula to be used for double hashing is Comparison of Quadratic Probing & Double Hashing 12
13
H1(key) = key mod tablesize The double hashing requires another hash function whose probing efficiency is same as
Key 14
some another hash function required when handling random collision.
H2(key) = M – (key mod M) 15
90 The double hashing is more complex to implement than quadratic probing. The quadratic
16
probing is fast technique than double hashing.
where M is a prime number smaller than the size of the table. 17
22 18
Consider the following elements to be placed in the hash table of size 10 19
REHASHING
37, 90, 45, 22, 17, 49, 55 20
Initially insert the elements using the formula for H1(key). 21
Rehashing is a technique in which the table is resized, i.e., the size of table is doubled by creating
Insert 37, 90, 45, 22 45
a new table. It is preferable is the total size of table is a prime number. There are situations in 22
which the rehashing is required. 23
H1(37) = 37 % 10 = 7
H1(90) = 90 % 10 = 0 37
H1(45) = 45 % 10 = 5  When table is completely full Now the hash table is sufficiently large to accommodate new insertions.
H1(22) = 22 % 10 = 2  With quadratic probing when the table is filled half.
H1(49) = 49 % 10 = 9 49
 When insertions fail due to overflow. Advantages:


1. This technique provides the programmer a flexibility to enlarge the table size if required.
2. Only the space gets doubled with simple hash function which avoids occurrence of
collisions.
1 = 001 Thus the data is inserted using extensible hashing.
EXTENSIBLE HASHING 0 1
4 = 100 Deletion Operation:
 Extensible hashing is a technique which handles a large amount of data. The data to be (0) (1)
placed in the hash table is by extracting certain number of bits. 5 = 101 If we wan tot delete 10 then, simply make the bucket of 10 empty.
 Extensible hashing grow and shrink similar to B-trees. 100 001
 In extensible hashing referring the size of directory the elements are to be placed in 010 Based on last bit the data
buckets. The levels are indicated in parenthesis. is inserted. 00 01 10 11

For eg: Directory (1) (2) (2)

Step 2: Insert 7 100 001 111


0 1 7 = 111 1000 101
Levels But as depth is full we can not insert 7 here. Then double the directory and split the bucket.
(0) (1) After insertion of 7. Now consider last two bits.
001 111
data to be
010
placed in bucket 00 01 10 11
(1) (2) (2) Delete 7.

100 001 111 00 01 10 11


010
 The bucket can hold the data of its global depth. If data in bucket is more than global (1) (1)
depth then, split the bucket and double the directory.
100 001 Note that the level was increased
1000 101 when we insert 7. Now on deletion
Consider we have to insert 1, 4, 5, 7, 8, 10. Assume each page can hold 2 data entries (2 is the of 7, the level should get decremented.
Step 3: Insert 8 i.e. 1000
depth).

00 01 10 11
Step 1: Insert 1, 4 Delete 8. Remove entry from directory 00.
(2)
1 = 001
0 (1)
001 111
4 = 100 100
010 00 00 10 11
(0) 1000
We will examine last bit
001 (1) (1)
of data and insert the data Step 4: Insert 1 0
010
in bucket. 100 001
101

Insert 5. The bucket is full. Hence double the directory.

Applications of hashing:

UNIT -4 Binary Search Trees: Various Binary tree representation, definition, BST ADT, Implementation, 4. Internal nodes: The nodes other than the root node and the leaves are called the internal nodes. Eg:
Operations- Searching, Insertion and Deletion, Binary tree traversals, threaded binary trees,
B, C, D, G
AVL Trees : Definition, Height of an AVL Tree, Operations – Insertion, Deletion and Searching
B-Trees: B-Tree of order m, height of a B-Tree, insertion, deletion and searching, B+ Tree. 5. Parent nodes: The node which is having further sub-trees(branches) is called the parent node of
1. In compilers to keep track of declared variables.
2. For online spelling checking the hashing functions are used. those sub-trees. Eg: B is the parent node of E and F.
3. Hashing helps in Game playing programs to store the moves made.
6. Predecessor: While displaying the tree, if some particular node occurs previous to some other node
4. For browser program while caching the web pages, hashing is used. TREES
5. Construct a message authentication code (MAC) then that node is called the predecessor of the other node. Eg: E is the predecessor of the node B.
6. Digital signature. A Tree is a data structure in which each element is attached to one or more elements directly beneath it.
7. Successor: The node which occurs next to some other node is a successor node. Eg: B is the
7. Time stamping
8. Key updating: key is hashed at specific intervals resulting in new key successor of E and F.
Level 0
A 8. Level of the tree: The root node is always considered at level 0, then its adjacent children are
supposed to be at level 1 and so on. Eg: A is at level 0, B,C,D are at level 1, E,F,G,H,I,J are at level 2,
K,L are at level 3.
9. Height of the tree: The maximum level is the height of the tree. Here height of the tree is 3. The
B 1 height if the tree is also called depth of the tree.
C D
10. Degree of tree: The maximum degree of the node is called the degree of the tree.
E F G BINARY TREES
H I J
2 Binary tree is a tree in which each node has at most two children, a left child and a right child. Thus the
K L 3 order of binary tree is 2.

A binary tree is either empty or consists


Terminology of a) a node called the root
b) left and right sub trees are themselves binary trees.
 The connections between elements are called branches.
 A tree has a single root, called root node, which is shown at the top of the tree. i.e. root is always A binary tree is a finite set of nodes which is either empty or consists of a root and two disjoint
at the highest level 0. trees called left sub-tree and right sub-tree.
 Each node has exactly one node above it, called parent. Eg: A is the parent of B,C and D. In binary tree each node will have one data field and two pointer fields for representing the
 The nodes just below a node are called its children. ie. child nodes are one level lower than the sub-branches. The degree of each node in the binary tree will be at the most two.
parent node.
 A node which does not have any child is called a leaf or terminal node. Eg: E, F, K, L, H, I and M are leaves.
 Nodes with at least one child are called non terminal or internal nodes.
 lNeaovdes. with at least one child are called non terminal or internal nodes.
 The child nodes of same parent are said to be siblings. There are 3 types of binary trees:
 A path in a tree is a list of distinct nodes in which successive nodes are connected by branches in
the tree. 1. Left skewed binary tree: If the right sub-tree is missing in every node of a tree we call it as left
skewed tree.
 The length of a particular path is the number of branches in that path. The degree of a node
of a tree is the number of children of that node. A
 The maximum number of children a node can have is often referred to as the order of a
tree. The height or depth of a tree is the length of the longest path from root to any leaf.
B
1. Root: This is the unique node in the tree to which further sub trees are attached. Eg: A
Degree of the node: The total number of sub-trees attached to the node is called the degree of the
node.Eg: For node A degree is 3. For node K degree is 0
C
3. Leaves: These are the terminal nodes of the tree. The nodes with degree 0 are always the leaf nodes.
Eg: E, F, K, L,H, I, J

2. Right skewed binary tree: If the left sub-tree is missing in every node of a tree we call it is right Disadvantages of linked representation:
sub-tree.
1. This representation does not provide direct access to a node and special algorithms are
A required.
2. This representation needs additional space in each node for storing the left and right sub-
trees.
B
TRAVERSING A BINARY TREE

C Traversing a tree means that processing it so that each node is visited exactly once. A binary
tree can be
3. Complete binary tree: traversed a number of ways.The most common tree traversals are
The tree in which degree of each node is at the most two is called a complete binary tree. In
a complete binary tree there is exactly one node at level 0, two nodes at level 1 and four nodes at level
l  In-order
2 and so on. So we can say that a complete binary tree depth d will contain exactly 2 nodes at each  Pre-order and
level l, where l is from 0 to d.
 Post-order
A
Pre-order 1.Visit the root Root | Left | Right
Advantages of sequential representation: 2.Traverse the left sub tree in pre-order
The only advantage with this type of representation is that the 3.Traverse the right sub tree in pre-order.
B C direct access to any node can be possible and finding the parent or left children of any particular node In-order 1.Traverse the left sub tree in in-order Left | Root | Right
is fast because of the random access. 2.Visit the root
3.Traverse the right sub tree in in-order.
Disadvantages of sequential representation: Post-order 1.Traverse the left sub tree in post-order Left | Right | Root
D E F G 1. The major disadvantage with this type of representation is wastage of memory. For example in 2.Traverse the right sub tree in post-order.
the skewed tree half of the array is unutilized.
3.Visit the root
Note: 2. In this type of representation the maximum depth of the tree has to be fixed. Because we have
n decide the array size. If we choose the array size quite larger than the depth of the tree, then it
1. A binary tree of depth n will have maximum 2 -1 nodes. will be wastage of the memory. And if we coose array size lesser than the depth of the tree then A
2. A complete binary tree of level l will have maximum 2l nodes at each level, where l starts from 0. we will be unable to represent some part of the tree.
3. Any binary tree with n nodes will have at the most n+1 null branches. 3. The insertions and deletion of any node in the tree will be costlier as other nodes has to be B C
4. The total number of edges in a complete binary tree with n terminal nodes are 2(n-1). adjusted at appropriate positions so that the meaning of binary tree can be preserved.
As these drawbacks are there with this sequential type of representation, we will search for more
Binary Tree Representation flexible representation. So instead of array we will make use of linked list to represent the tree.
D E F G
b) Linked Representation
A binary tree can be represented mainly in 2 ways: Linked representation of trees in memory is implemented using pointers. Since each node in a
binary tree can have maximum two children, a node in a linked representation has two pointers for both
left and right child, and one information field. If a node does not have any child, the corresponding H I J
a) Sequential Representation
pointer field is made NULL pointer.
b) Linked Representation
In linked list each node will look like this:
a) Sequential Representation K
The simplest way to represent binary trees in memory is the sequential representation that uses one- Left Child Data Right Child The pre-order traversal is: ABDEHCFGIKJ
dimensional array. The in-order traversal is : DBHEAFCKIGJ
Advantages of linked representation:
1) The root of binary tree is stored in the 1 st location of array
th 1. This representation is superior to our array representation as there is no wastage of The post-order traversal is:DHEBFKIJGCA
2) If a node is in the j location of array, then its left child is in the location 2J+1 and its right memory. And so there is no need to have prior knowledge of depth of the tree.
child in the location 2J+2 Using dynamic memory concept one can create as much memory(nodes) as
d+1 required. By chance if some nodes are unutilized one can delete the nodes by
The maximum size that is required for an array to store a tree is 2 -1, where d is the depth of the tree.
making the address free.

2. Insertions and deletions which are the most common operations can be done without
moving the nodes.



Inorder Traversal:

[Figure: a binary tree with root A (printed 3rd), left child B (printed 2nd), B's left child C (printed 1st), right child D (printed 4th) and D's right child E (printed last).]

C-B-A-D-E is the inorder traversal, i.e. first we go towards the leftmost node, i.e. C, so we print that node C. Then we go back to the node B and print B, then the root node A, and then we move towards the right sub-tree and print D and finally E. Thus we are following the tracing sequence Left|Root|Right. This type of traversal is called inorder traversal. The basic principle is to traverse the left sub-tree, then the root, and then the right sub-tree.

Pseudo Code:

template <class T>
void inorder(bintree<T> *temp)
{
    if(temp!=NULL)
    {
        inorder(temp->left);    // traverse the left sub-tree
        cout<<temp->data;       // visit the root
        inorder(temp->right);   // traverse the right sub-tree
    }
}

Preorder Traversal:

A-B-C-D-E is the preorder traversal of the above figure. We are following the Root|Left|Right path, i.e. the data at the root node is printed first, then we move on to the left sub-tree and go on printing the data till we reach the leftmost node. Print the data at that node and then move to the right sub-tree. Follow the same principle at each sub-tree and go on printing the data accordingly.

Pseudo Code:

template <class T>
void preorder(bintree<T> *temp)
{
    if(temp!=NULL)
    {
        cout<<temp->data;       // visit the root first
        preorder(temp->left);   // then the left sub-tree
        preorder(temp->right);  // then the right sub-tree
    }
}

Postorder Traversal:

From the figure the postorder traversal is C-D-B-E-A. In the postorder traversal we are following the Left|Right|Root principle, i.e. move to the leftmost node and print it, then visit the right sub-tree (if there is one) and move towards its rightmost node, and only then print the root. The key idea here is that at each sub-tree we are following the Left|Right|Root principle and printing the data accordingly.

Pseudo Code:

template <class T>
void postorder(bintree<T> *temp)
{
    if(temp!=NULL)
    {
        postorder(temp->left);   // left sub-tree first
        postorder(temp->right);  // then the right sub-tree
        cout<<temp->data;        // visit the root last
    }
}

BINARY SEARCH TREE

In a simple binary tree the nodes are arranged in any fashion: depending on the user's desire, new nodes can be attached as a left or right child of any desired node. In such a case finding any node is a lengthy procedure, because we may have to search the entire tree, and thus the searching time complexity increases unnecessarily. So to make the searching algorithm faster in a binary tree we build a binary search tree. The binary search tree is based on the binary search algorithm. While creating the binary search tree the data is systematically arranged, that is: values in the left sub-tree < root node value < values in the right sub-tree.

Operations On Binary Search Tree:

The basic operations which can be performed on a binary search tree are:
1. Insertion of a node in binary search tree.
2. Deletion of a node from binary search tree.
3. Searching for a particular node in binary search tree.

Insertion of a node in binary search tree.

While inserting any node in a binary search tree, we look for its appropriate position in the tree. We start comparing the new node with each node of the tree. If the value of the node which is to be inserted is greater than the value of the current node we move on to the right sub-branch, otherwise we move on to the left sub-branch. As soon as the appropriate position is found we attach the new node as left or right child appropriately.

[Figure: "Before Insertion" - a binary search tree with root 10 whose rightmost path contains 20, 22 and 24.]

In the above figure, if we want to insert 23, we start comparing 23 with the value of the root node, i.e. 10. As 23 is greater than 10, we move to the right sub-tree. Now we compare 23 with 20 and move right, compare 23 with 22 and move right. Now we compare 23 with 24, but it is less than 24, so we move to the left branch of 24. As there is no node as left child of 24, we can attach 23 as the left child of 24.
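The insertion logic just described can be sketched in C++ as follows. This is only an illustrative sketch that reuses the node structure shown earlier; sending duplicate keys to the right sub-branch is a choice of this sketch, not something the notes specify.

template <class T>
node<T>* insertBST(node<T> *root, const T &value)
{
    if (root == NULL)                  // appropriate position found:
        return new node<T>(value);     // attach the new node here
    if (value < root->data)            // smaller keys go to the left sub-branch
        root->left = insertBST(root->left, value);
    else                               // larger (or equal) keys go to the right sub-branch
        root->right = insertBST(root->right, value);
    return root;
}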
Deletion of a node from binary search tree.

For deletion of any node from a binary search tree there are three cases which are possible:
i. Deletion of a leaf node.
ii. Deletion of a node having one child.
iii. Deletion of a node having two children.

Deletion of a leaf node.

This is the simplest deletion, in which we set the left or right pointer of the parent node to NULL.

[Figure: "Before deletion" - a binary search tree with root 10, children 7 and 15, and leaves 5, 9, 12, 18.]

From the above figure, if we want to delete the node having value 5, then we set the left pointer of its parent node to NULL; that is, the left pointer of the node having value 7 is set to NULL.

Deletion of a node having one child.

To explain this kind of deletion, consider the tree given in the figure. If we want to delete the node 15, then we simply copy node 18 in place of 15 and then set the deleted node free.

Deletion of a node having two children.

Consider a tree as given in the figure. Let us consider that we want to delete the node having value 7. We first find the inorder successor of node 7; the inorder successor is then simply copied at the location of node 7. That means we copy 8 at the position where the value of the node is 7 and set the left pointer of 9 to NULL. This completes the deletion procedure.

Searching for a node in binary search tree.

In searching, the node which we want to search for is called the key node. The key node is compared with each node starting from the root node; if the value of the key node is greater than the current node then we search for it in the right sub-branch, otherwise in the left sub-branch. If we reach a leaf node and still do not find the value of the key node, then we declare "node is not present in the tree".
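The searching procedure just described can be written as a short C++ routine. This is a sketch using the same illustrative node structure as before, not code from the notes.

template <class T>
bool searchBST(node<T> *root, const T &key)
{
    while (root != NULL)
    {
        if (key == root->data)        // key node found
            return true;
        else if (key > root->data)    // key is greater: search the right sub-branch
            root = root->right;
        else                          // key is smaller: search the left sub-branch
            root = root->left;
    }
    return false;                     // reached a NULL link: node is not present in the tree
}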
[Figure: the binary search tree built above, with root 10.]

In the above tree, if we want to search for the value 9, then we compare 9 with the root node 10. As 9 is less than 10 we search in the left sub-branch. Now we compare 9 with 5; since 9 is greater than 5, we move to the right sub-tree. Now we compare 9 with 8; since 9 is greater than 8, we move to the right sub-branch. The node we reach holds the value 9. Thus the desired node can be searched.

AVL TREES

Adelson-Velskii and Landis in 1962 introduced a binary tree structure that is balanced with respect to the heights of its sub-trees. The tree can be kept balanced, and because of this the retrieval of any node can be done in O(log n) time, where n is the total number of nodes. From the names of these scientists the tree is called an AVL tree.

Definition:

An empty tree is height balanced. If T is a non-empty binary tree with TL and TR as its left and right sub-trees, then T is height balanced if and only if
i. TL and TR are height balanced, and
ii. |hL - hR| <= 1, where hL and hR are the heights of TL and TR.

The idea of balancing a tree is obtained by calculating the balance factor of a tree.

Definition of Balance Factor:

The balance factor BF(T) of a node in a binary tree is defined to be hL - hR, where hL and hR are the heights of the left and right sub-trees of T.
For any node in an AVL tree the balance factor, i.e. BF(T), is -1, 0 or +1.

Height of AVL Tree:

Theorem: The height of an AVL tree with n elements (nodes) is O(log n).

Proof: Consider an AVL tree with n nodes, and let N(h) be the minimum number of nodes in an AVL tree of height h. In the worst case one sub-tree has height h-1 and the other sub-tree has height h-2, and both these sub-trees are AVL trees. Since for every node in an AVL tree the heights of the left and right sub-trees differ by at most 1, we get

    N(h) = N(h-1) + N(h-2) + 1

where N(h) denotes the minimum number of nodes in an AVL tree of height h, with N(0) = 1 and N(1) = 2.

We can also write it as

    n >= N(h) = N(h-1) + N(h-2) + 1
              > 2 N(h-2)
              > 4 N(h-4)
              ...
              > 2^i N(h-2i)

If the value of h is even, let i = h/2 - 1. Then the inequality becomes

    n > 2^(h/2-1) N(2) = 2^(h/2-1) x 4        (N(2) = 4)

so h = O(log n).

If the value of h is odd, let i = (h-1)/2. Then the inequality becomes

    n > 2^((h-1)/2) N(1) = 2^((h-1)/2) x 2    (N(1) = 2)

so h = O(log n).

This proves that the height of an AVL tree is always O(log n). Hence search, insertion and deletion can all be carried out in logarithmic time.

Representation of AVL Tree

• The AVL tree follows the property of a binary search tree. In fact AVL trees are basically binary search trees with balance factors of -1, 0, or +1.
• After insertion of any node in an AVL tree, if the balance factor of any node becomes other than -1, 0, or +1, then it is said that the AVL property is violated, and we have to restore the destroyed balance condition. The balance factor is denoted at the right top corner inside the node.
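The balance factor BF(T) = hL - hR can be computed directly from the recursive height of a node. The following is a rough sketch using the illustrative node structure from earlier; real AVL implementations usually store the height (or balance factor) inside each node instead of recomputing it on every visit.

template <class T>
int height(node<T> *t)
{
    if (t == NULL)
        return 0;                                     // height of an empty tree taken as 0
    int hl = height(t->left), hr = height(t->right);
    return 1 + (hl > hr ? hl : hr);                   // one more than the taller sub-tree
}

template <class T>
int balanceFactor(node<T> *t)                         // BF(T) = hL - hR
{
    return height(t->left) - height(t->right);
}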
Insertion of a node.

There are four different cases when rebalancing is required after insertion of a new node:
1. An insertion of a new node into the left sub-tree of the left child (LL).
2. An insertion of a new node into the right sub-tree of the left child (LR).
3. An insertion of a new node into the left sub-tree of the right child (RL).
4. An insertion of a new node into the right sub-tree of the right child (RR).

The modifications done on an AVL tree in order to rebalance it are called rotations of the AVL tree. There are two types of rotations:

Single rotation: Left-Left (LL rotation), Right-Right (RR rotation)
Double rotation: Left-Right (LR rotation), Right-Left (RL rotation)

• After insertion of a new node, if the balance condition gets destroyed, then the nodes on the path from the insertion point of the new node to the root need to be readjusted. That means only the affected sub-tree is to be rebalanced.
• The rebalancing should be such that the entire tree satisfies the AVL property.

1. LL rotation:
In the given example (figure), when node '1' gets inserted as a left child of node 'C', the AVL property gets destroyed, i.e. node A has balance factor +2. The LL rotation has to be applied to rebalance the nodes.

2. RR rotation:
When node '4' gets attached as the right child of node 'C', node 'A' gets unbalanced. The rotation which needs to be applied is the RR rotation, as shown in the figure.

Insertion Algorithm:
1. Insert the new node as a new leaf just as in an ordinary binary search tree.
2. Now trace the path from the insertion point (the new node inserted as a leaf) towards the root. For each node 'n' encountered, check if the heights of left(n) and right(n) differ by at most 1.
   a) If yes, move towards parent(n).
   b) Otherwise restructure by doing either a single rotation or a double rotation.
Thus once we perform a rotation at node 'n', we do not require to perform any rotation at any ancestor of 'n'.
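A single rotation is just a local rearrangement of pointers. The helpers below are illustrative (the notes themselves do not give rotation code): rotateRight handles the LL case and rotateLeft handles the RR case, each returning the new root of the affected sub-tree. In a full implementation the stored heights or balance factors of the two nodes would also be updated here.

// LL case: the left sub-tree of the left child grew; rotate right about 'a'
template <class T>
node<T>* rotateRight(node<T> *a)
{
    node<T> *b = a->left;
    a->left  = b->right;   // b's right sub-tree becomes a's left sub-tree
    b->right = a;          // a becomes the right child of b
    return b;              // b is the new root of this sub-tree
}

// RR case: the right sub-tree of the right child grew; rotate left about 'a'
template <class T>
node<T>* rotateLeft(node<T> *a)
{
    node<T> *b = a->right;
    a->right = b->left;    // b's left sub-tree becomes a's right sub-tree
    b->left  = a;          // a becomes the left child of b
    return b;              // b is the new root of this sub-tree
}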
3. LR rotation:
When node '3' is attached as a right child of node 'C', unbalancing occurs because of the LR case. Hence the LR rotation needs to be applied.

4. RL rotation:
When node '2' is attached as a left child of node 'C', node 'A' gets unbalanced as its balance factor becomes -2. Then the RL rotation needs to be applied to rebalance the AVL tree.

Example:
Insert 1, 25, 28 and 12 in the following AVL tree (figure).

Insert 1
To insert node '1' we have to attach it as a left child of '2'. This will unbalance the tree; we apply an LL rotation to preserve the AVL property.

Insert 25
We attach 25 as a right child of 18. No rebalancing is required, as the entire tree still preserves the AVL property.

Insert 28
The node '28' is attached as a right child of 25. An RR rotation is required to rebalance.
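A double rotation is simply two single rotations applied one after the other. The sketch below builds the LR and RL cases described above out of the single-rotation helpers shown earlier; again this is illustrative, not code from the notes.

// LR case: first rotate the left child to the left, then rotate the node to the right
template <class T>
node<T>* rotateLeftRight(node<T> *a)
{
    a->left = rotateLeft(a->left);
    return rotateRight(a);
}

// RL case: first rotate the right child to the right, then rotate the node to the left
template <class T>
node<T>* rotateRightLeft(node<T> *a)
{
    a->right = rotateRight(a->right);
    return rotateLeft(a);
}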
Insert 12
To rebalance the tree we have to apply an LR rotation. The tree becomes as shown in the figure.

Deletion:

Even after deletion of any particular node from an AVL tree, the tree has to be restructured in order to preserve the AVL property, and thereby various rotations may need to be applied.

Algorithm for deletion:

The deletion algorithm is more complex than the insertion algorithm.

1. Search for the node which is to be deleted.
2. a) If the node to be deleted is a leaf node, then simply set the corresponding pointer to NULL to remove it.
   b) If the node to be deleted is not a leaf node, i.e. the node may have one or two children, then the node must be swapped with its inorder successor. Once the node is swapped, we can remove this node.
3. Now we have to traverse back up the path towards the root, checking the balance factor of every node along the path. If we encounter unbalancing in some sub-tree, then balance that sub-tree using an appropriate single or double rotation.

The deletion algorithm takes O(log n) time to delete any node.
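The deletion algorithm above can be outlined in C++ as a recursive routine. This is only a sketch under the same assumptions as the earlier pieces: it reuses the illustrative node structure, the balanceFactor() helper and the four rotation sketches, and the rebalance() helper below is a hypothetical name introduced here, not something defined in the notes.

// Hypothetical helper: restores the AVL property at node t by checking the
// balance factor and applying a single or double rotation as needed.
template <class T>
node<T>* rebalance(node<T> *t)
{
    int bf = balanceFactor(t);
    if (bf > 1)            // left side too tall
        return (balanceFactor(t->left) >= 0) ? rotateRight(t)       // LL case
                                             : rotateLeftRight(t);  // LR case
    if (bf < -1)           // right side too tall
        return (balanceFactor(t->right) <= 0) ? rotateLeft(t)       // RR case
                                              : rotateRightLeft(t); // RL case
    return t;              // already balanced
}

template <class T>
node<T>* deleteAVL(node<T> *t, const T &key)
{
    if (t == NULL) return NULL;                        // key not found
    if (key < t->data)
        t->left = deleteAVL(t->left, key);
    else if (key > t->data)
        t->right = deleteAVL(t->right, key);
    else if (t->left != NULL && t->right != NULL)      // two children:
    {
        node<T> *succ = t->right;                      // find the inorder successor
        while (succ->left != NULL) succ = succ->left;
        t->data = succ->data;                          // copy it into this node
        t->right = deleteAVL(t->right, succ->data);    // and delete it from the right sub-tree
    }
    else                                               // leaf or single child
    {
        node<T> *child = (t->left != NULL) ? t->left : t->right;
        delete t;
        return child;                                  // NULL for a leaf
    }
    return rebalance(t);   // rebalance on the way back up towards the root
}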
Searching:

The searching of a node in an AVL tree is very simple. As an AVL tree is basically a binary search tree, the algorithm used for searching a node in a binary search tree is the same one used to search a node in an AVL tree. The searching of a node in an AVL tree takes O(log n) time.

B-TREES

• Multi-way trees are tree data structures with more than two branches at a node. The data structures of m-way search trees, B trees and Tries belong to this category of tree structures.
• AVL search trees are height balanced versions of binary search trees and provide efficient retrieval and storage operations. The complexity of insert, delete and search operations on AVL search trees is O(log n).
• For applications such as file indexing, where the entries in an index may be very large, maintaining the index as an m-way search tree provides a better option than AVL search trees, which are only balanced binary search trees.
• While binary search trees are two-way search trees, m-way search trees are extended binary search trees and hence provide efficient retrievals.
• B trees are height balanced versions of m-way search trees, and they do not recommend representation of keys with varying sizes.
• Tries are tree based data structures that support keys with varying sizes.

Definition:

A B tree of order m is an m-way search tree and hence may be empty. If non-empty, then the following properties are satisfied on its extended tree representation:
i. The root node must have at least two child nodes and at most m child nodes.
ii. All internal nodes other than the root node must have at least ⌈m/2⌉ non-empty child nodes and at most m non-empty child nodes.
iii. The number of keys in each internal node is one less than its number of child nodes, and these keys partition the keys of the tree into sub-trees.
iv. All external nodes are at the same level.

Example: a B tree of order 4 (figure), with root keys F, K, O at level 1 and the remaining keys C, D, G, M, N, P, Q, S, T, W, X, Y, Z in the nodes of the lower levels.

Insertion

For example, construct a B-tree of order 5 using the following numbers: 3, 14, 7, 1, 8, 5, 11, 17, 13, 6, 23, 12, 20, 26, 4, 16, 18, 24, 25, 19.
Order 5 means that at most 4 keys are allowed in a node, each internal node must have at least 3 non-empty children, and each leaf node must contain at least 2 keys.

Step 1: Insert 3, 14, 7, 1.

    [1 3 7 14]

Step 2: Insert 8. Since the node is full, split the node at the median of 1, 3, 7, 8, 14; the median 7 moves up.

          [7]
    [1 3]     [8 14]

Step 3: Insert 5, 11, 17, which can be easily inserted in the B-tree without any split.

            [7]
    [1 3 5]     [8 11 14 17]

Step 4: Now insert 13. But if we insert 13, the leaf node will have 5 keys, which is not allowed. Hence 8, 11, 13, 14, 17 is split and the median node 13 is moved up.

            [7 13]
    [1 3 5]   [8 11]   [14 17]
Step 5: Now insert 6, 23, 12, 20 without any split.

               [7 13]
    [1 3 5 6]    [8 11 12]    [14 17 20 23]

Step 6: 26 is inserted into the rightmost leaf node. Hence the node 14, 17, 20, 23, 26 is split and 20 is moved up.

               [7 13 20]
    [1 3 5 6]    [8 11 12]    [14 17]    [23 26]

Step 7: Insertion of node 4 causes the leftmost node to split: 1, 3, 4, 5, 6 causes key 4 to move up. Then insert 16, 18, 24, 25.

                   [4 7 13 20]
    [1 3]  [5 6]  [8 11 12]  [14 16 17 18]  [23 24 25 26]

Step 8: Finally insert 19. The leaf 14, 16, 17, 18, 19 is split and 17 moves up, so the root 4, 7, 13, 17, 20 also needs to be split. The median 13 is moved up to form a new root node. The tree then will be:

                         [13]
           [4 7]                     [17 20]
    [1 3]  [5 6]  [8 11 12]    [14 16]  [18 19]  [23 24 25 26]

Thus the B tree is constructed.

Deletion

Consider the B-tree constructed above:

                         [13]
           [4 7]                     [17 20]
    [1 3]  [5 6]  [8 11 12]    [14 16]  [18 19]  [23 24 25 26]

Delete 8: this is very simple, since 8 is in a leaf node which still has enough keys after the removal.

                         [13]
           [4 7]                     [17 20]
    [1 3]  [5 6]  [11 12]    [14 16]  [18 19]  [23 24 25 26]

Now we will delete 20. The key 20 is not in a leaf node, so we find its successor, which is 23; hence 23 is moved up to replace 20.

                         [13]
           [4 7]                     [17 23]
    [1 3]  [5 6]  [11 12]    [14 16]  [18 19]  [24 25 26]

Next we will delete 18. Deletion of 18 from the corresponding node leaves that node with only one key, which is not allowed in a B-tree of order 5 (each leaf must contain at least 2 keys). The sibling node to the immediate right has an extra key. In such a case we can borrow a key from the parent and move the spare key of the sibling up.
The tree becomes:

                         [13]
           [4 7]                     [17 24]
    [1 3]  [5 6]  [11 12]    [14 16]  [19 23]  [25 26]

Now delete 5. The deletion of 5 is not so easy. Firstly, 5 is in a leaf node; secondly, this leaf node has no extra keys, and neither do the siblings to its immediate left or right. In such a situation we can combine this node with one of its siblings: remove 5 and combine 6 with the node 1, 3. To keep the tree balanced we have to move the parent's key down; hence we move 4 down, as 4 lies between 1, 3 and 6. The tree will be:

                         [13]
           [7]                     [17 24]
    [1 3 4 6]  [11 12]    [14 16]  [19 23]  [25 26]

But now the internal node 7 contains only one key, which is not allowed in a B-tree of order 5. We then try to borrow a key from a sibling, but the sibling 17, 24 has no spare key. Hence what we can do is combine 7 with 13 and with 17, 24. The B-tree will be:

                  [7 13 17 24]
    [1 3 4 6]  [11 12]  [14 16]  [19 23]  [25 26]

Searching

The search operation on a B-tree is similar to a search on a binary search tree. Instead of choosing between a left and a right child as in a binary tree, a B-tree makes an m-way choice. Consider the B-tree given below.

                         [13]
           [4 7]                     [17 20]
    [1 3]  [5 6]  [8 11 12]    [14 16]  [18 19]  [23 24 25 26]

If we want to search for 11, then:
i. 11 < 13; hence search the left sub-tree.
ii. 11 > 7; hence follow the rightmost pointer of the node 4, 7.
iii. 11 > 8; move to the second key in the block.
iv. The key 11 is found.

The running time of the search operation depends upon the height of the tree; it is O(log n).

Height of B-tree

The maximum height of a B-tree gives an upper bound on the number of disk accesses. The minimum number of keys in a B-tree of order 2m and depth h is

    1 + 2m + 2m(m+1) + 2m(m+1)^2 + . . . + 2m(m+1)^(h-1)
        = 1 + ∑ (i = 1 to h) 2m(m+1)^(i-1)
        = 2(m+1)^h - 1

Hence the maximum height of a B-tree with n keys is

    log_(m+1) ((n+1)/2) = O(log n)
B+ Trees

• Most implementations use the B-tree variation, the B+-tree.
• In the B-tree, every value of the search field appears once at some level in the tree, along with the data pointer to the record or block where the record is stored.
• In a B+ tree, data pointers are stored only at the leaf nodes; therefore the structure of the leaf nodes differs from the structure of the internal (non-leaf) nodes.
• If the search field is a key field, the leaf nodes have a value for every value of the search field, along with the data pointer to the record or block.
• If the search field is a non-key field, the pointer points to a block containing pointers to the data file records, creating an extra level of indirection (similar to option 3 for the secondary indexes).
• The leaf nodes of the B+ tree are linked to provide ordered access on the search field to the records. The first level is similar to the base level of an index.
• Some search field values in the leaf nodes are repeated in the internal nodes of the B+ tree, in order to guide the search.

B+ Tree Example

The following example has p = 3 and pleaf = 2. (Figure: the internal nodes contain the keys 5, 3, 7 and 8; the leaf nodes contain 1, 3, 5, 6, 7, 8, 9 and 12 and are linked from left to right.)

B+ Tree Internal Node Structure
1. Each internal node is of the form <P1, K1, P2, K2, ..., Pq-1, Kq-1, Pq>, where q <= p and each Pi is a tree pointer.
2. Within each internal node, K1 < K2 < ... < Kq-1.
3. For all search field values X in the subtree pointed at by Pi, we have:
   • Ki-1 < X <= Ki for 1 < i < q;
   • X <= Ki for i = 1;
   • and Ki-1 < X for i = q.
4. Each internal node has at most p tree pointers.
5. Each internal node, except the root, has at least ⌈p/2⌉ tree pointers. The root node has at least two tree pointers if it is an internal node.
6. An internal node with q pointers, q <= p, has q-1 search field values.

B+ Tree Leaf Node Structure
1. Each leaf node is of the form <<K1, Pr1>, <K2, Pr2>, ..., <Kq-1, Prq-1>, Pnext>, where q <= p, each Pri is a data pointer, and Pnext points to the next leaf node of the B+ tree.
2. Within each leaf node, K1 < K2 < ... < Kq-1, q <= p.
3. Each Pri is a data pointer that points to the record whose search field value is Ki, or to a file block containing the record (or a block of pointers if the search field is not a key field).
4. Each leaf node has at least ⌈p/2⌉ values.
5. All leaf nodes are at the same level.

B+ Tree Information
• By starting at the leftmost block, it is possible to traverse the leaf nodes as a linked list using the Pnext pointers. This provides ordered access to the data records on the indexing field.
• Entries in the internal nodes of a B+ tree include search values and tree pointers, without any data pointers, so more entries can be stored in an internal node of a B+ tree than in a B-tree.
• Therefore the order p will be larger for a B+ tree, which leads to fewer B+ tree levels, improving the search time.
• The order p can be different for the internal and leaf nodes, because of the structural differences of the nodes.

Example 6 from Text

To calculate the order p of a B+ tree, suppose the search key field is V = 9 bytes long, the block size is B = 512 bytes, a record pointer is Pr = 7 bytes and a block pointer is P = 6 bytes. An internal node of the B+ tree can have up to p tree pointers and p - 1 search field values, which must fit into a single block.

Calculate the value of p for an internal node:

    p*P + (p-1)*V <= 512
    p*6 + (p-1)*9 <= 512
    6p + 9p - 9 <= 512
    15p <= 521

p = 34, which means that each internal node can hold up to 34 tree pointers and 33 search key values.

Calculate the value of pleaf for a leaf node:

    pleaf*(Pr + V) + P <= 512
    pleaf*(7 + 9) + 6 <= 512
    16*pleaf + 6 <= 512
    pleaf <= 506/16

pleaf = 31, which means each leaf node can hold up to pleaf = 31 value/data-pointer combinations, assuming the data pointers are record pointers.

Example 7 from Text

Suppose that we construct a B+ tree on the field of Example 6. To calculate the approximate number of entries of the B+ tree, we assume that each node is 69 percent full. On average, each internal node will have 34 * 0.69, or approximately 23, pointers and hence 22 values. Each leaf node, on average, will hold 0.69 * pleaf = 0.69 * 31, or approximately 21, data record pointers. The B+ tree will have the following average number of entries at each level:

    Root:       1 node         22 entries        23 pointers
    Level 1:    23 nodes       506 entries       529 pointers
    Level 2:    529 nodes      11,638 entries    12,167 pointers
    Leaf level: 12,167 nodes   255,507 record pointers

When we compare this result with the previous B-tree example (Example 5), we can see that the B+ tree can hold up to 255,507 record pointers, whereas the corresponding B-tree can only hold 65,535 entries.

Insertion and Deletion with B+ Trees

Points to note:
• Every key value must exist at the leaf level, because all data pointers are at the leaf level.
• Every value appearing in an internal node also appears as the rightmost value in the leaf level of the subtree pointed at by the tree pointer to the left of that value.
• When a leaf node is full and a new entry is inserted there, the node overflows and must be split. The first j = ⌈(pleaf + 1)/2⌉ entries (in the example, 2 entries) in the original node are kept there, and the remaining entries are moved to the new leaf node. The entry at position j is copied/replicated and moved to the parent node.
• When an internal node is full and a new entry is to be inserted, the node overflows and must be split into 2 nodes. The entry at position j is moved to the parent node. The first j-1 entries are kept in the original node, and the last j+1 entries are moved to the new node.

To practice B+ tree insertion, complete Exercise 14.15 in Chapter 14 of the course text.
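The order calculations in Example 6 can be reproduced with a few lines of arithmetic. The small C++ program below simply re-evaluates the two inequalities p*P + (p-1)*V <= B and pleaf*(Pr + V) + P <= B for the block and field sizes quoted above; it is only a checking aid written for these notes, not part of the course text.

#include <iostream>

int main()
{
    const int B  = 512;   // block size in bytes
    const int V  = 9;     // search key size
    const int Pr = 7;     // record pointer size
    const int P  = 6;     // block (tree) pointer size

    // p*P + (p-1)*V <= B   ==>   p <= (B + V) / (P + V)
    int p = (B + V) / (P + V);          // 521 / 15 = 34 tree pointers per internal node

    // pleaf*(Pr + V) + P <= B   ==>   pleaf <= (B - P) / (Pr + V)
    int pleaf = (B - P) / (Pr + V);     // 506 / 16 = 31 key/record-pointer pairs per leaf

    std::cout << "p = " << p << ", pleaf = " << pleaf << std::endl;   // prints p = 34, pleaf = 31
    return 0;
}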