Handout Data Structures Final
What is a computer?
It is a device that performs tasks based on a given set of step-by-step instructions
(procedures).
These sets of procedures (step-by-step instructions) are called algorithms.
The primary purpose of a computer is not to perform calculations, but to store and retrieve
data as fast as possible.
[Diagram: examples of data structures, such as a linked list and a linear queue.]
When selecting a data structure to solve a problem, we should follow these steps:
1. Analyze the problem to determine the basic operations that must be supported
2. Quantify the resource constraints for each operation
3. Select the data structure that best meets these requirements
How does Google quickly find pages that contain a search term?
What is the fastest way to broadcast a message to a network of computers?
How does your operating system track which memory (disk or RAM) is free?
To handle these issues, a mechanism known as the “data type” system is devised.
A data type identifies the kind of item to be stored in memory. It is used to determine how much
memory should be assigned to a given data item and what kinds of operations can be performed
on that item.
In the C++ programming language, data types are categorized as primitive/built-in and
derived/user-defined, as shown below.
[Diagram: classification of data types]
Primitive / built-in / intrinsic:
Numeric: integer, floating point (double)
Non-numeric: character and string
Derived / user-defined / non-primitive: array, interface, …
Examples:-
struct Time{
int hour;
int minute;
int second;
};
struct Box {
double length;
double breadth;
double height;
};
class Box {
public:
double length;
double breadth;
double height;
};
Note:
The difference between structure and class data types is that classes contain both data members
and the operations (member functions) that control the data.
User-defined data types help us to create “abstract data types” (ADTs).
From our logical point of view, a book is whatever we have defined it to be above: it is a kind of
model/abstract view created by us. This process of modeling/constructing a logical view or picture
of an item is called data abstraction.
To transform this abstract/logical view into a computer program, we use the concept of user-defined
data types, such as the class data type.
Therefore, an ADT is defined as a set of objects together with a set of operations; in other words, an
ADT is an externally defined data type that holds some kind of data together with the operations that
operate on that data. Users of an ADT do not need to have any detailed information about the internal
representation of the data storage or the implementation of the operations. It is called abstract because
it doesn't specify how the operations of the ADT are going to be implemented.
Generally, in data abstraction, the model or logical view/picture of an object/item in the real world is
created in order to store information about it without specifying the implementation details.
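As a small added illustration (not part of the original notes), the Time structure from the earlier example can be turned into an abstract data type by hiding its representation behind a set of operations; users of the type depend only on the operations, never on how the data is stored:

#include <iostream>
using namespace std;

// A tiny "Time" ADT: the data is private and can only be manipulated
// through the operations, so the internal representation stays hidden.
class Time {
public:
    Time(int h = 0, int m = 0, int s = 0) : hour(h), minute(m), second(s) { }
    void tick() {                              // advance the time by one second
        if (++second == 60) { second = 0; ++minute; }
        if (minute == 60)   { minute = 0; hour = (hour + 1) % 24; }
    }
    void print() const {
        cout << hour << ":" << minute << ":" << second << endl;
    }
private:
    int hour, minute, second;                  // hidden implementation detail
};

int main() {
    Time t(10, 59, 59);
    t.tick();                                  // becomes 11:0:0
    t.print();
    return 0;
}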
[Diagram: a real-world object/item to be modeled is represented as an ADT, the logical view of that real-world object.]
Algorithm
The way data are organized in a computer's memory is called a data structure, and a
sequence of computational steps to solve a problem is called an algorithm; in other
words, an algorithm is a clearly specified set of simple instructions to be followed to
solve a problem. Therefore, a program/software is nothing but data structures plus
algorithms.
Properties of an algorithm
Sequence:
Each step must have a unique defined preceding and succeeding step. The first step
(start step) and last step (halt step) must be clearly noted.
Language Independence:
It must not depend on any one programming language.
Effectiveness:
It must be possible to perform each step exactly and in a finite amount of time.
Efficiency:
It must solve the problem using the least amount of computational resources such as time
and space.
Generality:
Algorithm should be valid on all possible inputs.
#include<iostream>
using namespace std;
// Example: an algorithm that finds the minimum value in an array
int main(){
int a[] = { 23, 45, 71, 12, 87, 66, 20, 33, 15, 69 };
int min = a[0];
for (int i = 1; i < 10; i++) {
if (a[i] < min) min = a[i]; }
cout<<"The minimum value is: "<<min;
return 0;
}
Algorithm Analysis
Algorithm analysis is the process of determining the amount of computing time and storage space
required by different algorithms, which is called the complexity of the algorithms. It is a study of
computer program performance and resource usage.
It is the process of establishing the function T(n) or S(n) for a given algorithm. The resources considered are:
Running Time
Memory Usage
Communication Bandwidth
“What is the performance and category of a given algorithm under the three conditions (best case,
average case, and worst case), in terms of the notations (O, Ω, Θ, o, ω), using the formal and
informal methods of counting operations?”
Complexity of Algorithm
Complexity of an algorithm is the function T(n) or S(n) describing the efficiency of an algorithm in
terms of the amount of data the algorithm must process.
T(n) – is a function describing the complexity (amount of time an algorithm takes) in terms
of the number of inputs “n” to the algorithm. It means that “the algorithm ‘A’ takes ‘T’
amount of time for the given set of ‘n’ inputs”.
S(n) – is a function describing the complexity (amount of memory space an algorithm takes)
in terms of the number of inputs “n” to the algorithm. It means that “the algorithm ‘A’
takes ‘S’ amount of memory space for the given set of ‘n’ inputs”.
Note: the most common measure of the complexity of an algorithm is time complexity, T(n). The
arrangement of the inputs gives us the three analysis conditions:
If the input arrangement makes the algorithm do the least possible work, it is called the best case.
If we consider the expected behaviour over typical inputs, it is called the average case.
If the input arrangement makes the algorithm do the most possible work, it is called the worst case.
The complexity function T(n) of a given algorithm is determined by counting the number of
operations of the algorithm.
It determines the rate at which the storage or time grows as a function of the size n, number of inputs.
E.g. O(n), O(n²), O(2ⁿ), O(log n), O(n log n), O(n!) …
During algorithm analysis, first we derive the function T(n) for the algorithm, and then we determine the
order/category of T(n).
The order of a given algorithm or a function T(n) is written as O(T(n)). So O(T(n)) means:
“the category/order of the function T(n).”
Example: if the time complexity function T(n) is categorized in logarithmic time, it is written as
O(T(n)) = log n, which is read as “the order/category of T(n) is equal to log n.”
3. Loops: The running time of a loop is equal to the running time of the statements inside
the loop multiplied by the number of iterations. For nested loops, the running time of the
innermost statements is multiplied by the product of the sizes of all the loops.
4. Blocks of sequences of statements:
Add the time complexities of each statement and use the order-arithmetic addition rule,
O(f(n) + g(n)) = O(max(f(n), g(n))), as in the fragment shown after these rules.
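For instance (an added illustration, not from the notes), the program below contains a single loop followed by a nested loop; by the loop rule the first contributes O(n) and the second O(n²), and by the addition rule the whole sequence is O(max(n, n²)) = O(n²):

#include <iostream>
using namespace std;

int main() {
    int n = 100;                           // assumed input size
    long long sum = 0;
    for (int i = 0; i < n; i++)            // single loop: n iterations -> O(n)
        sum += i;
    for (int i = 0; i < n; i++)            // outer loop: n iterations
        for (int j = 0; j < n; j++)        // inner loop: n iterations each
            sum += (long long)i * j;       // innermost statement runs n*n times -> O(n^2)
    cout << sum << endl;
    return 0;
}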
By ignoring the initializations and the loop conditionals, we construct the complexity function
as follows: the four operations (two assignments, one multiplication and one addition) are executed N times, i.e.
P = P * I; // one assignment and one multiplication, executed N times
I = I + 1; // one assignment and one addition, executed N times
Therefore,
T(n) = N * 4
T(n) = 4N
So, the complexity function T(n) of the given algorithm is T(n) = 5N – 2 using the
informal method and T(n) = 4N using the formal way.
But in both methods, the order/category of T(n) is linear time, O(n), which is regarded as
good efficiency.
Example2:
int count ( )
{
int k=0; // 1 assignment
cout << “Enter an integer”; // 1 output
cin >> n; // 1 input
Exercises:
Compute the resource requirements in terms of the input size as a function T(n) and determine the
complexity of the function.
o If growth rate of T(n) is greater than or equal to f(n), then order of the algorithm is said to
be T(n)= Ω(f(n)). This is read as “T(n) is Big-omega of f(n)”
o If growth rate of T(n) is the same as f(n), then order of the algorithm is said to be T(n)=
Θ(f(n)). This is read as “T(n) is Theta of f(n)”
o If growth rate of T(n) is less than f(n), then order of the algorithm is said to be
T(n)=o(f(n)). This is read as “T(n) is little-oh of f(n)”
o If growth rate of T(n) is greater than f(n), then order of the algorithm is said to be T(n)=
ω(f(n)). This is read as “T(n) is little-omega of f(n)”
In summary (comparing growth rates):
T(n) = Θ(f(n))  means  T(n) and f(n) have the same growth rate
T(n) = O(f(n))  means  T(n) <= f(n)
T(n) = o(f(n))  means  T(n) < f(n)
T(n) = Ω(f(n))  means  T(n) >= f(n)
T(n) = ω(f(n))  means  T(n) > f(n)
The following figure shows most commonly used asymptotes (growth functions) in asymptotic
notation.
[Figure: curves of the common growth functions log n, n, n log n, n², n³, and 2ⁿ, from slowest-growing to fastest-growing.]
These are the notations that are used to represent and explain the growth rate of the complexity
function, T(n), of an algorithm in terms of a simpler function f(n).
Theta notation
If f is a given function, and some function g is a tight bound for f, then we say that f is Θ
of g, which is written as f(n) = Θ(g(n)).
In simple terms, f(n) = Θ(g(n)) means that “the growth rate of f(n) is the same as that of g(n), or f(n)
and g(n) have the same growth rate”, which indicates that g(n) is a tight bound of f(n).
If f(n) is Θ(g(n))
o f has an order of magnitude g
o g is asymptotically tight bound for f(n)
o f is an order of g
o f and g grow at the same rate for large n
o f and g have the same rate of growth
Note: properties of Θ:
Reflexive: f(n)= Θ(f(n))
Transitive : T(n)= Θ(f(n)), f(n)= Θ(g(n)) => T(n)= Θ(g(n))
Example:
f(n) = 2n² + 3n + 5
f(n) is O(n²) and also f(n) is Ω(n²). Therefore, f(n) is Θ(n²).
If f(n) = 2n + 1, then f(n) = Θ(n).
If f(n) = 2n², then f(n) = O(n⁴), f(n) = O(n³), and f(n) = O(n²).
All these are technically correct, but the last expression is the best and tightest one. Since 2n²
and n² have the same growth rate, it can be written as f(n) = Θ(n²).
Little-o Notation
If f is a given function, and some function g is a non-tight upper bound for f, then we say
that f is little-o of g, which is written as f(n) = o(g(n)).
In simple terms, f(n) = o(g(n)) means that “f(n) has a strictly smaller growth rate than g(n)”;
the growth rates can never be equal, whereas big-O also allows equal growth rates.
Example:
The function f(n) = 3n + 4 is o(n²).
If we have g(n) = 2n², then g(n) = o(n³) and g(n) = O(n²), but g(n) is not o(n²).
If we have f(n) = 3n and g(n) = n², then f(n) = O(n²) but f(n) ≠ Ω(n²); therefore, f(n) = o(n²).
Note: little-oh is neither reflexive nor symmetric, but it is transitive.
Little-Omega (ω notation)
If f is a given function, and some function g is a non-tight lower bound for f, then we say
that f is little-omega of g, which is written as f(n) = ω(g(n)).
In simple terms, f(n) = ω(g(n)) means that “f(n) has a strictly greater growth rate than g(n)”;
the growth rates can never be equal, whereas big-omega also allows equal growth rates.
Example:
2n² = ω(n), but it is not ω(n²).
Reflexivity
• f(n)=Θ(f(n)),
• f(n)=O(f(n)),
• f(n)=Ω(f(n)).
Conclusion
Chapter overview
Sorting algorithms
Sorting
o the process of reordering a list of items in either increasing or decreasing order.
o the efficiency of a sorting algorithm is measured using
o the number of comparisons and
o the number of data movements made by the algorithm.
o Sorting algorithms are categorized as:
o simple/elementary and
o advanced.
o Simple sorting algorithms, like Selection sort, Bubble sort and Insertion sort, are
only used to sort small-sized lists of items.
1. Selection Sort
Given an array of length n,
Search elements 0 through n-1 and select the smallest
Swap it with the element in location 0
Search elements 1 through n-1 and select the smallest
Swap it with the element in location 1
Search elements 2 through n-1 and select the smallest
Swap it with the element in location 2
Search elements 3 through n-1 and select the smallest
Swap it with the element in location 3
Continue in this fashion until there’s nothing left to search
We repeatedly find the next largest (or smallest) element in the array and move it to its final
position in the sorted array.
Analysis:
The outer loop executes n-1 times
The inner loop executes about n/2 times on average (from n-1 down to 1 times)
Work done in the inner loop is constant (a comparison, plus one swap per pass of the outer loop)
Time required is roughly (n-1)*(n/2)
You should recognize this as O(n²)
i.e.
How many comparisons?
(n-1)+(n-2)+…+1 = O(n²)
How many swaps?
n = O(n)
void selectionSort(int a[], int n)
{
int outer, inner, min;
for (outer = 0; outer < n - 1; outer++)
{
min = outer;
for (inner = outer + 1; inner < n; inner++)
{
if (a[inner] < a[min])
{
min = inner;
}
}
int temp = a[outer]; // swap the smallest remaining element into position
a[outer] = a[min];
a[min] = temp;
}
} //end of function
Example 1:
#include<iostream>
using namespace std;
main()
{
int arr[5];
int mini, temp;
cout<<"Enter 5 numbers: "<<endl;
for(int i=0; i<5; i++)
cin>>arr[i];
// selection sort (the remainder of this example falls on a page that is not reproduced;
// completed here in the same style)
for(int i=0; i<4; i++) {
mini=i;
for(int j=i+1; j<5; j++)
if(arr[j]<arr[mini]) mini=j;
temp=arr[mini]; arr[mini]=arr[i]; arr[i]=temp;
}
cout<<"The sorted numbers are: ";
for(int i=0; i<5; i++) cout<<arr[i]<<" ";
}
2. Bubble Sort
Also called Exchange sort
simplest algorithm to implement and the slowest algorithm on very large inputs.
Basic Idea:
Loop through array from i=0 to n and swap adjacent elements if they are out of order.
Repeatedly compares adjacent elements of an array.
Compare each element (except the last one) with its neighbor to the right
If they are out of order, swap them
This puts the largest element at the very end
The last element is now in the correct and final place
Compare each element (except the last two) with its neighbor to the right
If they are out of order, swap them
This puts the second largest element next to last
The last two elements are now in their correct and final places
Compare each element (except the last three) with its neighbor to the right
Continue as above until you have no unsorted elements on the left
Code for Bubble Sort
void bubbleSort(int a[], int n)
{
int outer, inner;
for (outer = n - 1; outer > 0; outer--) //counts down
{
for (inner = 0; inner < outer; inner++)
{
if (a[inner] > a[inner + 1])
{
int temp = a[inner]; // swap the out-of-order neighbours
a[inner] = a[inner + 1];
a[inner + 1] = temp;
}
} }}
Example 2:
#include<iostream>
using namespace std;
main()
{
const int array_size = 4;
int x[array_size], hold;
3. Insertion sort
It inserts each item into its proper place in the final list.
The simplest implementation of this requires two list structures – the source list and the list
into which sorted items are inserted.
Basic Idea:
Find the location for an element and move all others up, and insert the element.
The approach is the same approach that we use for sorting a set of cards in our hand.
While playing cards, we pick up a card, start at the beginning of our hand and find the
place to insert the new card, insert it and move all the others up one place.
Sort: 34 8 64 51 32 21
34 8 64 51 32 21
The algorithm sees that 8 is smaller than 34 so it swaps.
8 34 64 51 32 21
51 is smaller than 64, so they swap.
8 34 51 64 32 21
The algorithm sees 32 as another smaller number and moves it to its
appropriate location between 8 and 34.
8 32 34 51 64 21
The algorithm sees 21 as another smaller number and moves it in between 8
and 32.
Final sorted numbers:
8 21 32 34 51 64
void insertionSort(int array[], int n) // (function header reconstructed; the original falls on a page that is not reproduced)
{
int inner, outer;
for (outer = 1; outer < n; outer++)
{
int temp = array[outer];
inner = outer;
while (inner > 0 && array[inner - 1] >= temp)
{
array[inner] = array[inner - 1];
inner--;
}
array[inner] = temp; }}
Example 3:
#include<iostream>
using namespace std;
void insertion(int array[], int n);
main(){
int array[20];
int n;
cout<<"How many numbers are you going to insert?"<<endl;
cin>>n;
cout<<"\nEnter "<<n<<" elements: ";
for(int i=0;i<n;i++){
cin>>array[i];
}
insertion(array, n);
}
void insertion(int array[], int n)
{
int a[30];
a[0] = array[0];
for (int i = 1; i < n; i++)
{
int temp = array[i];
int j = i - 1;
while ((j>=0) && (a[j] > temp))
{
a[j+1] = a[j];
j--;
}
a[j+1] = temp;
}
for (int k = 0; k < n; k++)
{
array[k] = a[k];
}
cout<<"The sorted numbers are: \n";
for(int z=0;z<n;z++){
cout<<array[z]<<endl;
}
}
i.e.
How many comparisons?
1+2+3+…+(n-1) = O(n²)
How many swaps?
1+2+3+…+(n-1) = O(n²)
Selection Sort and Insertion Sort are “good enough” for small arrays
Searching algorithms
Searching is a process of looking for a specific element in a list of items or determining that
the item is not in the list.
1. Linear Searching
Example 1:
#include<iostream>
using namespace std;
main()
{
int arr1[5];
int req;
int location=-5;
cout<<"Enter 5 numbers to store in array: "<<endl;
for(int i=0; i<5; i++)
{
cin>>arr1[i];
}
cout<<endl;
cout<<"Enter the number you want to found :";
cin>>req;
cout<<endl;
for(int w=0;w<5;w++)
{
if(arr1[w] == req)
location=w;
}
if(location !=-5)
{
cout<<"Required number is found out at the location:"<<location+1;
cout<<endl;
}
else
cout<<"Number is not found ";
}
2. Binary Searching
This searching algorithm works only on an ordered list.
It uses the principle of divide and conquer.
Though there is an additional cost in keeping the list in order, binary search is more efficient
than linear search.
The basic idea is:
Locate midpoint of array to search
Determine if target is in lower half or upper half of an array.
If in lower half, make this half the array to search
If in the upper half, make this half the array to search
Loop back to step 1 until the size of the array to search is one, and this element does
not match, in which case return –1.
Analysis:
computational time for this algorithm is proportional to log₂ n
Therefore the time complexity is O(log n)
// (The heading of this function is on a page that is not reproduced; the signature and the
// first half of the loop below are reconstructed to match the remaining fragment.)
int binarySearch(int list[], int n, int key){
int left=0, right=n-1, mid, found=0, index;
do{
mid=(left+right)/2;
if(list[mid]==key)
found=1;
else{
if(key<list[mid])
right=mid-1;
else
left=mid+1;
}
}while(found==0&&left<=right);
if(found==0)
index=-1;
else
index=mid;
return index;
}
Example:
#include<iostream>
using namespace std;
main()
{
int a[100],n,i,beg,end,mid,item;
cout<<"\n------------ BINARY SEARCH ------------ \n\n";
cout<<"Enter No. of Elements= ";
cin>>n;
cout<<"\nEnter Elements:\n";
for(i=1;i<=n;i++)
{
cin>>a[i];
}
cout<<"\nEnter Item you want to Search= ";
cin>>item;
beg=1;
end=n;
mid=(beg+end)/2;
while(beg<=end && a[mid]!=item)
{
if(a[mid]<item)
beg=mid+1;
else
end=mid-1;
mid=(beg+end)/2;
}
if(a[mid]==item)
{
cout<<"\nData is Found at Location : "<<mid;
}
else
{
cout<<"Data is Not Found";
}}
[Figure: plot of running time (0 to 2500) against n (0 to 60), comparing n log n and n²; the n² curve grows much more steeply.]
& - is a unary operator that returns the memory address of its operand.
m = &count;
* - is a unary operator that returns the value located at the address that follows.
q = *m;
Example1:
#include<iostream>
using namespace std;
main()
{
int a = 100;
int *p = &a;
cout << a << endl;
cout << &a << endl;
cout << p << " " << *p << endl;
cout << &p << endl;
}
OUTPUT:
100
0x23fd6c
0x23fd6c 100
0x23fd6c
Example2:
#include<iostream>
using namespace std;
int var = 1;
int *ptr;
main() {
ptr = &var;
cout<<"\nDirect access, var = "<<var;
cout<<"\nIndirect access, var = "<<*ptr;
cout<<"\n\nThe address of var = "<<&var;
cout<<"\nThe address of var = "<<ptr; }
OUTPUT:
Direct access, var = 1
Indirect access, var = 1
The address of var = 0x489010
The address of var = 0x489010
Arrays of Pointers
When a program is compiled, the size of the data the program will need to handle is often an unknown
factor; in other words there is no way to estimate the memory requirements of the program. In cases
like this you will need to allocate memory dynamically, that is, while the program is running.
Dynamically allocated memory can be released to continually optimize memory usage with respect
to current requirements. This in turn provides a high level of flexibility, allowing a programmer to
represent dynamic data structures, such as trees and linked lists.
C++ uses the new and delete operators to allocate and release memory, and this means that objects of
any type can be created and destroyed.
new Operator
The new operator is an operator that expects the type of object to be created as an argument.
In its simplest form, a call to new follows this syntax
Syntax:
ptr = new type;
Where ptr is a pointer to type. The new operator creates an object of the specified type and returns the
address of that object. The address is normally assigned to a pointer variable.
Example1:
#include<iostream>
using namespace std;
main()
{
int *pnValue = new int; // dynamically allocate an integer
*pnValue = 7; // assign 7 to this integer
cout<<*pnValue;
}
OUTPUT:
7

Example2:
#include<iostream>
using namespace std;
main(){
double *pld = new double; // dynamically allocate a double
*pld = 1000; // (completed from the output shown; the rest of this example falls on a missing page)
cout<<*pld;
}
OUTPUT:
1000
delete operator
Memory that has been allocated by a call to new can be released using the delete operator. A call to
delete follows this syntax.
Syntax:
delete ptr;
The operand ptr addresses the memory space to be released. But make sure that this memory space
was dynamically allocated by a call to new!
Example1:
#include<iostream>
using namespace std;
main(){
int *pnValue = new int; // dynamically allocate an integer
*pnValue = 7; // assign 7 to this integer
cout<<*pnValue<<endl;
delete pnValue; // release the memory (this part of the example falls on a missing page;
cout<<*pnValue; // reading freed memory is undefined - it printed 0 here)
}
OUTPUT:
7
0
Example2:
#include<iostream>
using namespace std;
main()
{
int *ptr, *p;
ptr = new int[100];
p = new int;
delete[] ptr;
delete p;
cout<<*p<<" "<<*ptr;}
OUTPUT:
3956208 3956208
Review on Structure
Structure is an aggregate data type built using elements of primitive data type. It is a user defined data
type.
E.g.
struct Time1 {
int hours;
int minute;
int seconds;
} T1, T2; // declare Time1 data type variables T1, T2, etc. at the time of defining
// the user-defined data type.
You can initialize structures at the time of declaration or at any other time in the program.
Syntax:
struct struct-tag {
Type1 member variable1;
Type2 member variable2;
….
Type n member variablen;
} variable-name1= {value of member variable1, value of member variable2 ….};
E.g.
struct student {
string LName;
int age;
int Id;
string department;
} Abebe = {“Kebede”, 24, 150, “computer science”};
Alternatively, individual members can be assigned after declaration using the dot operator.
Syntax:
variable-name.member-variable;
E.g.
T1.hours=1;
T1.minute=30;
T1.seconds=00;
main ()
{
printStudent(Mesfin); // function call in the program
}
Example program:
#include <iostream>
using namespace std;
struct Person{
string name;
int age;
char gender;
};
main(){
Person p;
p.name = "Christopher";
p.age = 34;
p.gender = 'M';
cout << "Name: " << p.name << endl;
cout << "Age: " << p.age << endl;
cout << "Gender: " << p.gender << endl;
}
Pointers to structures
Like any other data type, structures can be pointed by its own data type of pointers.
E.g.
Student * Std;
Student Abebe;
Std = &Abebe;
The value of the pointer Std would be assigned to a reference to the object Abebe (its memory
address).
The arrow operator (->), a dereferencing member-access operator, is used exclusively with pointers to
objects that have members. It serves to access a member of the object to which we have a reference,
i.e. to access a member through a pointer, we append the member's name to the pointer's name, separated by the arrow.
E.g.
Std->age; is equivalent to (* Std).age;
Both are to mean, we are accessing age member variable of structure pointed by a pointer called Std.
But *Std.age, which is equivalent to *(Std.age), would mean evaluating the value pointed to by the
member variable age of the object Std (which only makes sense if age itself were a pointer).
In general, the dot and arrow (dereference) operators are summarized in a table as follows.
Example:
#include <iostream>
using namespace std;
struct Point{
int x;
int y;
};
main(){
Point* p = new Point;
p->x = 9;
p->y = 4;
cout << p->x << " " << p->y << endl;
}
[Diagram: a singly linked list accessed through a pointer named head.]
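// Inserting a node at the beginning of the list (p points to the first node, x is the value to insert):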
node *q;
q=p;
p=new node;
p->data=x;
p->link=q;
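// Inserting a node at the end of the list: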
node *q,*t;
if(p==NULL)
{
p=new node;
p->data=x;
p->link=NULL;
}
else
{
q=p;
while(q->link!=NULL)
q=q->link;
t=new node;
t->data=x;
t->link=NULL;
q->link=t;
}
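// Inserting a node at a given position in the list: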
node *temp,*temp1;
temp=p;
if(temp1==NULL)
{
temp1= new node;
temp1->data=value;
temp1->link=NULL;
p=temp1;
return;
}
for(int i=0;((i<position)&&(temp->link!=NULL)) ;i++)
{
if(i==(position-1))
{
temp1= new node;
temp1->data= value;
temp1->link=temp->link;
temp->link=temp1;
}
temp=temp->link;
}
A node can be deleted from the head of the list, end of the list or from somewhere in the middle.
E.g. delete temp; // release from the memory pointed to by temp.
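// Deleting the node at the beginning of the list: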
node *q;
q=p;
if(q==NULL)
{
cout<<" \nNo data is present..";
return;
}
p=q->link;
delete q;
return;
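// Deleting the node at the end of the list: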
node *q,*t;
q=p;
if(q==NULL) {
cout<<" \nThere is no data in the list..";
return; }
if(q->link==NULL) {
p=q->link;
delete q;
return; }
while(q->link->link!=NULL)
q=q->link;
q->link=NULL;
return; }
list::~list() {
node *q;
if(p==NULL) return;
while(p!=NULL)
{
q=p->link;
delete p;
p=q;
}
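}
// Deleting the node whose data equals x: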
node *q,*r;
q=p;
if(q->data==x) {
p=q->link;
delete q;
return; }
r=q;
while(q!=NULL) {
if(q->data==x) {
r->link=q->link;
delete q;
return; }
r=q;
q=q->link; }
cout<<"\n Element u entered "<<x<<" is not found..";
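// Displaying the items in the list: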
node *q;
q=p;
if(q==NULL) {
cout<<" \nNo data is in the list..";
return; }
cout<<" \nThe items present in the list are :";
while(q!=NULL) {
cout<<" "<<q->data;
q=q->link; }
4.1 Stack
Stack is a data structure that provides temporary storage in such a way that the element stored last
will be retrieved first (last in, first out, or LIFO). Items can be inserted (“pushed”) and
deleted (“popped”), but only the most recently inserted element can be operated on. This
element is called the “Top” of the stack.
A stack is useful for temporary storage, especially for dealing with nested structures or
processes: expressions within expressions, functions calling other functions, directories within
directories.
Operations:
Push(s, k):- push an item k into stack s.
Pop(s):- deleting the top element, returning its value
Peek(s):- returning the value of the top elements
IsEmpty(s):- return true if and only if the stack is empty
Create(s):- make an empty stack (remove any existing items from the stack and
initialize the stack to empty)
Isfull(): return true if and only if the stack is full
Algorithm:
Step-1: Increment the Stack TOP by 1. Check whether it is always less than the Upper
Limit of the stack. If it is less than the Upper Limit go to step-2 else report -"Stack
Overflow"
Step-2: Put the new element at the position pointed by the TOP
Algorithm:
Step-1: If the Stack is empty then give the alert "Stack underflow" and quit; or else go
to step-2
Step-2:
a) Hold the value for the element pointed by the TOP
b) Put a NULL value instead
c) Decrement the TOP by 1
Sample program:
#include<iostream>
#include<stdlib.h>
using namespace std;
int const MAX=5;
int top=-1;
int stack[MAX];
void push();
void pop();
void peek();
void display();
main()
{
int ch;
while(true)
{
cout<<"\n1.Push\n2.Pop\n3.Display\n4.Peek\n5.Exit\n";
cout<<"Choose your option\n";
cin>>ch;
switch(ch) {
case 1:push();
break;
case 2:pop();
break;
case 3:display();
break;
case 4: peek();
break;
case 5:exit(0);
} }}
void push() {
int element;
if(top==MAX-1) {
cout<<"Stack overflow\n";
return;
}
else {
cout<<"Enter the element\n";
cin>>element;
top=top+1;
stack[top]=element;
cout<<"Element added\n"<<element;
}}
void pop() {
int element;
if(top<0) {
cout<<"Stack underflow\n";
return;
}
else {
element=stack[top];
top=top-1;
cout<<"Element is removed\n"<<element;
}}
void display() {
int i;
if(top<0) {
cout<<"Empty stack\n";
return;
}
else {
cout<<"Stack is\n";
for(i=top;i>=0;i--)
cout<<"\n"<<stack[i];
}}
void peek() {
if(top<0)
cout<<"Empty stack\n";
else
cout<<"The value in top is: "<<stack[top]; }
4.2 Queue
Is a data structure that has access to its data at the front and rear.
Three kinds
Normal queue, usually just called a queue
Priority queue
Doubly-ended queue (deque)
operates on a FIFO (First In First Out) basis.
uses two pointers/indices to keep track of information/data.
has two basic operations:
enqueue - inserting data at the rear of the queue
dequeue – removing data at the front of the queue
[Diagram: a queue, with dequeue at the Front and enqueue at the Rear/back.]
Analysis:
Consider the following:
an array of size MAX_SIZE , int num[MAX_SIZE];
We need to have two integer variables that tell:
o the index of the front element int FRONT =-1;
o the index of the rear element int REAR =-1;
Sample program:
#include<iostream>
#include<stdlib.h>
using namespace std;
// Reconstructed global declarations (the originals, together with a menu-driven main()
// similar to the one in the stack example above, fall on a page that is not reproduced):
int const MAX=5;
int front=-1, rear=-1;
int queue[MAX];
void enqueue(){
int element;
if((rear-front)==MAX-1)
cout<<"Queue overflow\n";
else
{
if(front==-1)
front=0;
cout<<"Enter the new element\n";
cin>>element;
rear=rear+1;
queue[rear]=element;
cout<<"Element is added\n"<<element;
}}
void dequeue()
{
int i;
if(front==-1||front>rear)
{
cout<<"Queue underflow\n";
return;
}
else
{
cout<<"Element is deleted from queue\n"<<queue[front];
for(i=front;i<rear;i++)
{
queue[i]=queue[i+1];
}
rear=rear-1;
}
}
void display()
{
int i;
if(front==-1||front>rear)
{
cout<<"Queue is empty\n";
}
else
{
cout<<"Queue\n";
for(i=front;i<=rear;i++)
{
cout<<"\n"<<queue[i];
}
}
}
void peek(){
if(front==-1||front>rear)
cout<<"Queue is empty\n";
else
cout<<"The value in front is: "<<queue[front];
}
A tree is a set of nodes and edges that connect pairs of nodes. It is an abstract model of a
hierarchical structure. A rooted tree has the following structure:
One node is distinguished as the root.
Every node C except the root is connected from exactly one other node P. P is C's parent,
and C is one of P's children.
There is a unique path from the root to each node.
The number of edges in a path is the length of the path.
Tree Terminologies
Full binary tree: a binary tree where each node has either 0 or 2 children.
Balanced binary tree: a binary tree where each node except the leaf nodes has left and right children
and all the leaves are at the same level.
Complete binary tree: a binary tree in which the length from the root to any leaf node is either h or
h-1 where h is the height of the tree. The deepest level should also be filled from left to right.
Binary search tree (ordered binary tree): a binary tree that may be
empty, but if it is not empty it satisfies the following.
Every node has a key and no two elements have the same key.
The keys in the right subtree are larger than the keys in the root.
The keys in the left subtree are smaller than the keys in the root.
The left and the right subtrees are also binary search trees.
Insertion
When a node is inserted, the definition of a binary search tree should be preserved.
Suppose there is a binary search tree whose root node is pointed to by RootNodePtr and we want
to insert a node (that stores 17) pointed to by InsNodePtr, as sketched below.
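The figures for this insertion are not reproduced here. As an added sketch (reusing the names Num, RootNodePtr and InsNodePtr from the text; the function itself is an assumption, not the author's code), the idea can be expressed as:

#include <iostream>
#include <cstddef>
using namespace std;

struct Node {
    int Num;
    Node *Left, *Right;
};

// Insert the node pointed to by InsNodePtr into the BST whose root is pointed
// to by RootNodePtr, preserving the binary-search-tree property.
void insertBST(Node *&RootNodePtr, Node *InsNodePtr)
{
    if (RootNodePtr == NULL)                        // empty spot found: attach the node here
        RootNodePtr = InsNodePtr;
    else if (InsNodePtr->Num < RootNodePtr->Num)
        insertBST(RootNodePtr->Left, InsNodePtr);   // smaller keys go left
    else
        insertBST(RootNodePtr->Right, InsNodePtr);  // larger keys go right
}

int main()
{
    Node *RootNodePtr = NULL;
    int keys[] = { 10, 6, 15, 17 };                 // 17 ends up as the right child of 15
    for (int i = 0; i < 4; i++) {
        Node *InsNodePtr = new Node;
        InsNodePtr->Num = keys[i];
        InsNodePtr->Left = InsNodePtr->Right = NULL;
        insertBST(RootNodePtr, InsNodePtr);
    }
    cout << RootNodePtr->Right->Right->Num << endl; // prints 17
    return 0;
}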
Traversing
Traversing means visiting every node of the tree exactly once. The common orders are preorder (node, left subtree, right subtree), inorder (left subtree, node, right subtree) and postorder (left subtree, right subtree, node).
Example:
Implementation of traversal:
void Binary_tree::pretrav(tree *t = root){
if(root == NULL){
cout<<"Nothing to display";
}else
if(t != NULL){
cout<<t->info<<" ";
pretrav(t->Left);
pretrav(t->Right); } }
void Binary_tree::intrav(tree *t = root){
if(root==NULL){
cout<<"Nothing to display";
}else
if(t!=NULL){
intrav(t->Left);
cout<<t->info<<" ";
intrav(t->Right);
}}
void Binary_tree::posttrav(tree *t = root){
if(root==NULL){
cout<<"Nothing to display";
}else
if(t!=NULL){
posttrav(t->Left);
posttrav(t->Right);
cout<<t->info<<" "; }}
Deletion
To delete a node (whose Num value is N) from binary search tree (whose root node is pointed by
RootNodePtr), four cases should be considered. When a node is deleted the definition of binary search
tree should be preserved.
#include<iostream>
#include<cstdlib>
using namespace std;
struct tree{
int info;
tree *Left, *Right;
};
tree *root;
class Binary_tree{
public:
Binary_tree();
void insert1(int);
tree *insert2(tree *, tree *);
void Delete(int);
void pretrav(tree *);
void intrav(tree *);
void posttrav(tree *);
};
Binary_tree::Binary_tree(){
root = NULL;
}
tree* Binary_tree::insert2(tree *temp,tree *newnode){
if(temp==NULL){
temp=newnode;
}
else if(temp->info < newnode->info){
insert2(temp->Right,newnode);
if(temp->Right==NULL)
temp->Right=newnode;
}
else{
insert2(temp->Left,newnode);
if(temp->Left==NULL)
temp->Left=newnode;
}
return temp;
}
void Binary_tree::insert1(int n){
tree *temp=root,*newnode;
newnode=new tree;
newnode->Left=NULL;
newnode->Right=NULL;
newnode->info=n;
root=insert2(temp,newnode);
}
void Binary_tree::pretrav(tree *t = root){
if(root == NULL){
cout<<"Nothing to display";
}else
if(t != NULL){
cout<<t->info<<" ";
pretrav(t->Left);
pretrav(t->Right);
}
}
void Binary_tree::intrav(tree *t = root){
if(root==NULL){
cout<<"Nothing to display";
}else
if(t!=NULL){
intrav(t->Left);
cout<<t->info<<" ";
intrav(t->Right);
}
}
void Binary_tree::posttrav(tree *t = root){
if(root==NULL){
cout<<"Nothing to display";
}else
if(t!=NULL){
posttrav(t->Left);
posttrav(t->Right);
cout<<t->info<<" ";
}
}
void Binary_tree::Delete(int key)
{
tree *temp = root,*parent = root, *marker;
if(temp==NULL)
cout<<"The tree is empty"<<endl;
else{
while(temp!=NULL && temp->info!=key){
parent=temp;
if(temp->info<key){
temp=temp->Right;
}else{
temp=temp->Left;
}
}
}
marker=temp;
if(temp==NULL)
cout<<"No node present";
else if(temp==root){
if(temp->Right==NULL && temp->Left==NULL){
root=NULL;
}
else if(temp->Left==NULL){
root=temp->Right;
}
else if(temp->Right==NULL){
root=temp->Left;
}
else{
tree *temp1;
temp1 = temp->Right;
while(temp1->Left!=NULL){
temp=temp1;
temp1=temp1->Left;
}
if(temp1!=temp->Right){
temp->Left=temp1->Right;
temp1->Right=root->Right;
}
temp1->Left=root->Left;
root=temp1;
}
}
else{
if(temp->Right==NULL && temp->Left==NULL){
if(parent->Right==temp)
parent->Right=NULL;
else
parent->Left=NULL;
}
else if(temp->Left==NULL){
if(parent->Right==temp)
parent->Right=temp->Right;
else
parent->Left=temp->Right;
}
else if(temp->Right==NULL){
if(parent->Right==temp)
parent->Right=temp->Left;
else
parent->Left=temp->Left;
}else{
tree *temp1;
parent=temp;
temp1=temp->Right;
while(temp1->Left!=NULL){
parent=temp1;
temp1=temp1->Left;
}
if(temp1!=temp->Right){
temp->Left=temp1->Right;
temp1->Right=parent->Right;
}
temp1->Left=parent->Left;
parent=temp1;
}
}
delete marker;
}
main(){
Binary_tree bt;
int choice, n, key;
while(1){
cout<<"\n\t1. Insert\n\t2. Delete\n\t3. Preorder Traversal\n\t4. Inorder Treversal\n\t5. Postorder
Traversal\n\t6. Exit"<<endl;
cout<<"Enter your choice: ";
cin>>choice;
switch(choice){
case 1:
cout<<"Enter item: ";
cin>>n;
bt.insert1(n);
break;
case 2:
cout<<"Enter element to delete: ";
cin>>key;
bt.Delete(key);
break;
case 3:
cout<<endl;
bt.pretrav();
break;
case 4:
cout<<endl;
bt.intrav();
break;
case 5:
cout<<endl;
bt.posttrav();
break;
case 6:
exit(0);
}
}
}
//insert: 100 80 60 40 20 10 200 109 120 140 160 180
Advanced Sorting
1. Heap Sort
The heap sort algorithm uses the data structure called the heap. A heap is defined as a
complete binary tree in which each node has a value greater than both its children (if any).
Each node in the heap corresponds to an element of the array, with the root node
corresponding to the element with index 0 in the array. Considering a node corresponding to
index i, then its left child has index (2*i + 1) and its right child has index (2*i + 2). If any or
both of these elements do not exist in the array, then the corresponding child node does not
exist either. Note that in a heap the largest element is located at the root node.
It uses a process called “adjust” to accomplish its task (building a heap tree): whenever a value
is larger than its parent, the two are exchanged.
The root of the binary tree (i.e., the first array element) holds the largest key in the heap. This
type of heap is usually called a descending heap or max heap, as the path from the root node to
a terminal node forms an ordered list of elements arranged in descending order. Fig. 6.1
shows a heap.
We can also define an ascending heap as an almost complete binary tree in which the value of
each node is greater than or equal to the value of its parent. The root node holds the smallest
element of the heap. This type of heap is also called a min heap.
Algorithm:
1. Construct a binary tree
· The root node corresponds to Data[0].
· If we consider the index associated with a particular node to be i, then the
left child of this node corresponds to the element with index 2*i+1 and the right child
corresponds to the element with index 2*i+2. If any or both of these elements do not
exist in the array, then the corresponding child node does not exist either.
2. Construct the heap tree from initial binary tree using "adjust" process.
3. Sort by repeatedly swapping the root value with the lowest, rightmost value, deleting
that lowest, rightmost node from the heap, and inserting the deleted value into its
proper position in the array.
C++ Implementation:
#include <iostream>
using namespace std;
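The heapsort() and build_maxheap() functions below call a max_heapify routine that is not reproduced on this page of the notes. The following is a minimal sketch of a standard sift-down helper written to match the 1-based indexing used in main() below; it is an assumed reconstruction, not the author's original code.

// Assumed helper: sift the element at index i down so that the subtree rooted at i
// (within a[1..n]) satisfies the max-heap property.
void max_heapify(int *a, int i, int n)
{
    int largest = i;
    int left = 2 * i;                          // left child (1-based indexing)
    int right = 2 * i + 1;                     // right child (1-based indexing)
    if (left <= n && a[left] > a[largest])
        largest = left;
    if (right <= n && a[right] > a[largest])
        largest = right;
    if (largest != i)
    {
        int temp = a[i];                       // swap a[i] with its larger child
        a[i] = a[largest];
        a[largest] = temp;
        max_heapify(a, largest, n);            // continue sifting down
    }
}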
void heapsort(int *a, int n) // (signature reconstructed; the original heading of this function is on a page that is not reproduced)
{
int i, temp;
for (i = n; i >= 2; i--)
{
temp = a[i];
a[i] = a[1];
a[1] = temp;
max_heapify(a, 1, i - 1);
}}
void build_maxheap(int *a, int n)
{
int i;
for(i = n/2; i >= 1; i--)
{
max_heapify(a, i, n);
}}
main()
{
int n, i, x;
cout<<"enter no of elements of array\n";
cin>>n;
int a[20];
for (i = 1; i <= n; i++)
{
cout<<"enter element"<<(i)<<endl;
cin>>a[i];
}
build_maxheap(a,n);
heapsort(a, n);
cout<<"sorted output\n";
for (i = 1; i <= n; i++)
{
cout<<a[i]<<endl; }}
2. Quick Sort
Quick sort is one of the fastest known sorting algorithms. It uses a divide and conquer strategy;
in the worst case its complexity is O(n²), but its expected complexity is O(n log n).
The quick sort algorithm works by partitioning the array to be sorted, and each
partition is then sorted recursively. In partitioning, one element of the array is
chosen as the key (pivot) value; this key value can be the first element of the array. That is, if A
is an array, then key = A[0], and the rest of the elements are grouped into two portions
such that,
(a) one partition contains elements smaller than the key value
(b) another partition contains elements larger than the key value
Algorithm:
1. Choose a pivot value (mostly the first element is taken as the pivot value)
2. Position the pivot element and partition the list so that:
· the left part has items less than or equal to the pivot value
· the right part has items greater than or equal to the pivot value
3. Recursively sort the left part
4. Recursively sort the right part
C++ Implementation:
#include<iostream>
using namespace std;
int Partition(int a[], int beg, int end) //Function to Find Pivot Point
{
int p=beg, pivot=a[beg], loc;
for(loc=beg+1;loc<=end;loc++)
{
if(pivot>a[loc])
{
a[p]=a[loc];
a[loc]=a[p+1];
a[p+1]=pivot;
p=p+1;
}}
return p;
}
void QuickSort(int a[], int beg, int end)
{
if(beg<end)
{
int p=Partition(a,beg,end); //Calling Procedure to Find Pivot
QuickSort(a,beg,p-1); //Calls Itself (Recursion)
QuickSort(a,p+1,end); //Calls Itself (Recursion)
}}
main()
{
int a[100],i,n,beg,end;
cout<<"\n------- QUICK SORT -------\n\n";
cout<<"Enter the No. of Elements : ";
cin>>n;
for(i=1;i<=n;i++)
{
cin>>a[i];
}
beg=1;
end=n;
QuickSort(a,beg,end); //Calling of QuickSort Function
cout<<"\nAfter Sorting : \n";
for(i=1;i<=n;i++)
{
cout<<a[i]<<endl;
}}
3. Merge Sort
Like quick sort, merge sort uses a divide and conquer strategy, and its time complexity
is O(n log n). It begins by dividing a list into two sublists, and then recursively divides
each of those sublists until there are sublists with one element each. These sublists are
then combined using a simple merging technique. In order to combine two lists, the
first value of each is compared, and the smaller value is added to the output list. This
process continues until one of the lists is exhausted, at which point the
remainder of the other list is simply appended to the output list. Pairs of adjacent sublists are
combined at each level, until all the elements are merged back into a single list.
Algorithm:
1. Divide the array in to two halves.
2. Recursively sort the first n/2 items.
3. Recursively sort the last n/2 items.
4. Merge sorted items (using an auxiliary array).
C++ Implementation:
#include<iostream>
using namespace std;
void mergesort(int[],int,int);
void merge(int[],int,int,int);
main()
{
int a[10],p,q,r,i,n;
cout<<"Enter the number of elements";
cin>>n;
p=0;
r=n-1;
cout<<"Enter the array";
for(i=0;i<n;i++) {
cin>>a[i];
}
mergesort(a,p,r);
cout<<"The sorted array is:";
for(i=0;i<n;i++) {
cout<<"\n"<<a[i];
}}
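The definitions of mergesort() and merge() declared above are not reproduced in these notes (the page containing them is missing). A minimal sketch matching the prototypes and the call mergesort(a, p, r) in main() is given below; it is an assumed reconstruction, not the author's original code.

// Assumed reconstruction: recursively sort a[p..r] (inclusive).
void mergesort(int a[], int p, int r)
{
    if (p < r)
    {
        int q = (p + r) / 2;        // midpoint
        mergesort(a, p, q);         // sort the left half
        mergesort(a, q + 1, r);     // sort the right half
        merge(a, p, q, r);          // merge the two sorted halves
    }
}

// Assumed reconstruction: merge the sorted ranges a[p..q] and a[q+1..r]
// using an auxiliary array (same maximum size as the array in main()).
void merge(int a[], int p, int q, int r)
{
    int temp[10];
    int i = p, j = q + 1, k = p;
    while (i <= q && j <= r)
        temp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= q)
        temp[k++] = a[i++];
    while (j <= r)
        temp[k++] = a[j++];
    for (k = p; k <= r; k++)
        a[k] = temp[k];
}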
4. Shell Sort
Algorithm:
1. Choose a gap gk between elements to be partly ordered.
2. Generate an increment sequence gk, gk-1, …, g2, g1 such that after sorting with gap gi the list
satisfies A[j] <= A[j+gi] for 0 <= j <= n-1-gi, for k >= i >= 1.
3. For each gap in the sequence, repeatedly compare and exchange elements that are gap positions
apart until no more swaps are needed, then move on to the next (smaller) gap; the final pass uses gap 1.
Example: Sort the following list using shell sort algorithm.
C++ Implementation:
#include<iostream>
using namespace std;
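The read() and display() helpers called in main() below, together with the heading of shellsort(), are not shown on this page of the notes. The two helpers sketched here are assumed reconstructions consistent with those calls, not the author's original code.

// Assumed helper: read n values into the array a.
void read(int a[], int n)
{
    cout << "enter " << n << " elements\n";
    for (int i = 0; i < n; i++)
        cin >> a[i];
}

// Assumed helper: print the n values of the array a.
void display(int a[], int n)
{
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
    cout << "\n";
}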
void shellsort(int a[], int n) // (signature reconstructed; the original is on a page that is not reproduced)
{
int gap=n/2;
do {
int swap;
do
{
swap=0;
for(int i=0;i<n-gap;i++)
if(a[i]>a[i+gap])
{
int t=a[i];
a[i]=a[i+gap];
a[i+gap]=t;
swap=1;
}}
while(swap); }
while(gap=gap/2); }
main()
{
int a[10];
int n;
cout<<"enter n\n";
cin>>n;
read(a,n);
cout<<"before sorting\n";
display(a,n);
shellsort(a,n);
cout<<"\nafter sorting\n";
display(a,n);
}
Advanced Searching
Hashing is a technique where we can compute the location of the desired record in order
to retrieve it in a single access (or comparison).
Suppose there is a table of n employee records, and each employee record consists of a
unique employee code, which is the key to the record, and an employee name. If the key (the
employee code) is converted into an array index, also called a hash code, then the record can be
accessed directly by the key, as in the following example.
The array indices are called hash codes. The table containing the hash codes is called the hash
table, and the process of converting the keys (item or employee codes) into hash codes/array
indices is called hashing. The function or formula used to convert keys into hash codes
is called the hash function.
It is possible that two different keys k1 and k2 will yield the same hash address/code. This
situation is called Hash Collision.
Hash Function
The basic idea of hash function is the transformation of the key into the corresponding
location in the hash table. A Hash function H can be defined as a function that takes key
as input and transforms it into a hash table index. Hash functions are of two types:
1. Distribution- Independent function
2. Distribution- Dependent function
Following are the most popular Distribution - Independent hash functions:
1. Division method
2. Mid Square method
3. Folding method
Division Method
The hash function H is defined by:
H(k) = k (mod m), where k is the key and m is the number of memory locations in the hash table
(m is usually chosen to be a prime number).
So if you enter the employee code k to the hash function, we can retrieve the details TABLE[H(k)]
directly.
Note: if the memory addresses run from 1 to m instead of 0 to m-1, then we have to choose the
hash function H(k) = k (mod m) + 1.
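As a small added illustration (not from the notes), the division method can be coded directly; the table size m below is an assumed example value:

#include <iostream>
using namespace std;

// Division-method hash: map a key k to an index in a table of m locations.
// m is commonly chosen to be a prime number to spread the keys more evenly.
int hashDivision(int k, int m)
{
    return k % m;                            // H(k) = k mod m
}

int main()
{
    int m = 97;                              // assumed table size
    cout << hashDivision(3205, m) << endl;   // hash code of employee code 3205
    return 0;
}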
Hash Collision
It is possible that two non-identical keys K1, K2 are hashed into the same hash
address/code. This situation is called Hash Collision.
Let us consider a hash table having 10 locations, with the division method used to hash the
key: H(k) = k (mod m), with m chosen as 10. The hash function produces an
integer between 0 and 9 inclusive, depending on the value of the key. If we want to
insert a new record with key 500, then H(500) = 500 (mod 10) = 0. But location 0
in the table may already be filled (i.e., not empty); thus a collision has occurred.
Collisions are almost impossible to avoid, but they can be minimized considerably by
introducing any one of the following three techniques:
1. Open addressing
2. Chaining
3. Bucket addressing
Chaining technique
In the chaining technique the entries in the hash table are dynamically allocated and entered
into a linked list associated with each hash key, as in the sketch below.
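The diagram referred to above is not reproduced in this text. As an added illustration (an assumed sketch, not the notes' own code), a chained hash table can be written so that each of the m table slots heads a linked list of the keys that hash to it:

#include <iostream>
#include <cstddef>
using namespace std;

const int M = 10;                         // assumed table size

struct ChainNode {                        // one entry in a chain
    int key;
    ChainNode *next;
};

ChainNode *table[M] = { NULL };           // every slot starts as an empty list

int h(int key) { return key % M; }        // division-method hash

// Insert a key at the front of the chain of its slot.
void insertKey(int key) {
    ChainNode *n = new ChainNode;
    n->key = key;
    n->next = table[h(key)];
    table[h(key)] = n;
}

// Search the chain of the key's slot.
bool searchKey(int key) {
    for (ChainNode *p = table[h(key)]; p != NULL; p = p->next)
        if (p->key == key) return true;
    return false;
}

int main() {
    insertKey(500);                       // 500 and 210 both hash to slot 0,
    insertKey(210);                       // so they share one chain (a collision)
    cout << searchKey(210) << " " << searchKey(7) << endl;   // prints 1 0
    return 0;
}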