Data-Structure complete unit 3
Data structure is a branch of computer science. The study of data structure helps you to
understand how data is organized and how data flow is managed to increase efficiency of
any process or program. Data structure is the structural representation of logical
relationship between data elements. This means that a data structure organizes data items
based on the relationship between the data elements.
Example:
A house can be identified by its name, location, number of floors and so on.
This structured set of variables depends on each other to identify the exact house.
Similarly, a data structure is a structured set of variables that are linked to each other,
and it forms the basic component of a system.
Basic Terminology
Data structures are the building blocks of any program or the software. Choosing the
appropriate data structure for a program is the most difficult task for a programmer.
Following terminology is used as far as data structures are concerned
Data: Data can be defined as an elementary value or a collection of values, for
example, a student's name and ID are data about the student.
Group Items: Data items which have subordinate data items are called Group item, for
example, name of a student can have first name and the last name.
Record: Record can be defined as the collection of various data items, for example, if we
talk about the student entity, then its name, address, course and marks can be grouped
together to form the record for the student.
File: A file is a collection of various records of one type of entity, for example, if there
are 60 employees in an organization, then there will be 60 records in the related file, where
each record contains the data about one employee.
Attribute and Entity: An entity represents a class of certain objects and contains
various attributes. Each attribute represents a particular property of that entity.
Correctness: Data structure is designed such that it operates correctly for all kinds of
input, which is based on the domain of interest. In other words, correctness forms the
primary goal of data structure, which always depends on the specific problems that the
data structure is intended to solve.
Efficiency: A data structure also needs to be efficient. It should process the data at high
speed without utilizing much of the computer resources, such as memory space. In a real-time
system, the efficiency of a data structure is an important factor that determines the
success or failure of the process.
Adaptability: Developing software projects such as word processors, Web browsers and
Internet search engine involves large software systems that work or execute correctly and
efficiently for many years. Moreover, software evolves due to ever changing market
conditions or due to emerging technologies.
A data structure provides a structured set of variables that are associated with each other
in different ways. It forms a basis of programming tool that represents the relationship
between data elements and helps programmers to process the data easily.
Primitive data structures consist of the numbers and characters that are
built into programs. These can be manipulated or operated on directly by
machine-level instructions. Basic data types such as integer, real, character,
and Boolean come under primitive data structures. These data types are also
known as simple data types because they consist of values that cannot be
divided further.
I) Array
When we declare an array, we can assign initial values to each of its elements by
enclosing the values in braces { }.
Num 26 7 67 50 66
Figure 1.2 Array
The number of values inside braces { } should be equal to the number of elements that
we declare for the array inside the square brackets [ ]. In the example of array Num, we
have declared 5 elements and in the list of initial values within braces { } we have
specified 5 values, one for each element. After this declaration, array Num will have five
integers, as we have provided 5 initialization values.
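A minimal C sketch of this kind of declaration, using the array Num and the five values from Figure 1.2 (the loop that prints the elements is only for illustration):

#include <stdio.h>

int main(void)
{
    /* Declare an array of 5 integers and initialize it with braces { };
       the number of initializers matches the declared size [5]. */
    int Num[5] = {26, 7, 67, 50, 66};

    /* Access each element by its index (0 to 4) and print it. */
    for (int i = 0; i < 5; i++)
        printf("Num[%d] = %d\n", i, Num[i]);

    return 0;
}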
Limitations:
Arrays are of fixed size.
Data elements are stored in contiguous memory locations, which may not
always be available.
Insertion and deletion of elements can be problematic because of shifting of
elements from their positions.
However, these limitations can be solved by using linked lists.
Applications:
Storing list of data elements belonging to same data type
Auxiliary storage for other data structures
Storage of binary tree elements of fixed count
Storage of matrices
A linked list is a data structure in which each data element contains a pointer
or link to the next element in the list. Through linked list, insertion and
deletion of the data element is possible at all places of a linear list. Also in
linked list, it is not necessary to have the data elements stored in consecutive
locations. It allocates space for each data item in its own block of memory.
Thus, a linked list is considered as a chain of data elements or records called
nodes. Each node in the list contains information field and a pointer field. The
information field contains the actual data and the pointer field contains address
of the subsequent nodes in the list.
Figure 1.3 represents a linked list with 4 nodes. Each node has two parts. The left part in
the node represents the information part which contains an entire record of data items and
the right part represents the pointer to the next node. The pointer of the last node contains
a null pointer.
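A small C sketch of such a node and of walking the chain until the null pointer is reached (the node type, field names and sample values are illustrative assumptions, not taken from Figure 1.3):

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;               /* information field: the actual data */
    struct node *next;      /* pointer field: address of the next node */
};

int main(void)
{
    /* Build a two-node list; the pointer of the last node is NULL. */
    struct node *second = malloc(sizeof(struct node));
    second->data = 20;
    second->next = NULL;

    struct node *head = malloc(sizeof(struct node));
    head->data = 10;
    head->next = second;

    /* Traverse from the head, following the pointer field of each node. */
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d\n", p->data);

    free(second);
    free(head);
    return 0;
}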
Applications:
Implementing stacks, queues, binary trees and graphs whose size is not known in advance.
Implement dynamic memory management functions of operating system.
Polynomial implementation for mathematical operations
Circular linked list is used to implement OS or application functions that
require round robin execution of tasks.
Circular linked list is used in a slide show where a user wants to go back to
the first slide after last slide is displayed.
Doubly linked list is used in the implementation of forward and backward buttons
in a browser to move backwards and forward in the opened pages of a website.
Circular queue is used to maintain the playing sequence of multiple players in
a game.
III) Stacks
A stack is a linear data structure in which insertion and deletion of elements are done at
only one end, which is known as the top of the stack. Stack is called a last-in, first-out
(LIFO) structure because the last element which is added to the stack is the first
element which is deleted from the stack.
Applications:
Temporary storage structure for recursive operations
Auxiliary storage structure for nested operations, function
calls, deferred/postponed functions
Manage function calls
Evaluation of arithmetic expressions in various programming languages
Conversion of infix expressions into postfix expressions
Checking syntax of expressions in a programming environment
Matching of parentheses
String reversal
Solutions to problems based on backtracking.
Depth-first search in graph and tree traversal.
Operating System functions
UNDO and REDO functions in an editor.
IV) Queues
A queue is a first-in, first-out (FIFO) data structure in which the element that is
inserted first is the first one to be taken out. The elements in a queue are added at one
end called the rear and removed from the other end called the front. Like stacks,
queues can be implemented by using either arrays or linked lists.
Figure 1.5 shows a queue with 4 elements, where 55 is the front element and 65 is the
rear element. Elements can be added from the rear and deleted from the front.
Applications:
It is used in the breadth-first search operation in graphs.
Job scheduler operations of OS like a print buffer queue, keyboard buffer queue
to store the keys pressed by users
Job scheduling, CPU scheduling, Disk Scheduling
Priority queues are used in file downloading operations in a browser
Data transfer between peripheral devices and CPU.
Interrupts generated by the user applications for CPU
Calls from customers handled in a BPO (call centre)
V) Trees
A tree is a non-linear data structure in which data is organized in branches. The data
elements in a tree are arranged in a sorted order. It imposes a hierarchical structure on
the data elements.
Figure 1.6 represents a tree which consists of 8 nodes. The root of the tree is the node
60 at the top. Nodes 29 and 44 are the successors of the node 60. The nodes 6, 4, 12
and 67 are the terminal nodes as they do not have any successors.
Applications:
Implementing the hierarchical structures in computer systems like directory
and file system.
Implementing the navigation structure of a website.
Code generation like Huffman’s code.
Decision making in gaming applications.
Implementation of priority queues for priority-based OS scheduling functions
Parsing of expressions and statements in programming language compilers
For storing data keys for DBMS for indexing
Spanning trees for routing decisions in computer and communications networks
Hash trees
Path-finding algorithms used in AI, robotics and video game applications
VI) Graphs
A graph is also a non-linear data structure. In a tree data structure, all data
elements are stored in a definite hierarchical structure; in other words, each
node has only one parent node. In a graph, each data element is called a
vertex and can be connected to many other vertices through connections called
edges.
Applications:
Representing networks and routes in communication, transportation and
travel applications
Routes in GPS
Interconnections in social networks and other network-based applications
Mapping applications
Ecommerce applications to present user preferences
Utility networks to identify the problems posed to municipal or local corporations
Resource utilization and availability in an organization
Document link map of a website to display connectivity between pages
through hyperlinks
Robotic motion and neural networks
A data structure is a way of storing and organising data efficiently so that the required
operations on it can be performed efficiently with respect to time as well as
memory. Simply put, data structures are used to reduce the complexity (mostly the time
complexity) of the code.
Data structures can be two types:
1. Static Data Structure
2. Dynamic Data Structure
What is a Static Data structure?
In a static data structure, the size of the structure is fixed. The content of the data
structure can be modified, but without changing the memory space allocated to it.
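A brief C illustration of the contrast (the sizes and names are only examples; the dynamically allocated block stands in for the dynamic data structures discussed later):

#include <stdlib.h>

int main(void)
{
    /* Static data structure: the size is fixed at compile time; only the
       contents can change, not the allocated memory. */
    int fixed[10];
    fixed[0] = 1;

    /* Dynamic allocation: the block can grow while the program runs. */
    int *dynamic = malloc(10 * sizeof(int));
    if (dynamic != NULL)
    {
        int *bigger = realloc(dynamic, 20 * sizeof(int));  /* grow to 20 elements */
        if (bigger != NULL)
            dynamic = bigger;
    }
    free(dynamic);
    return 0;
}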
This section discusses the different operations that can be performed on the various data
structures previously mentioned.
Traversing It means to access each data item exactly once so that it can be processed.
For example, to print the names of all the students in a class.
Searching It is used to find the location of one or more data items that satisfy the given
constraint. Such a data item may or may not be present in the given collection of data
items. For example, to find the names of all the students who secured 100 marks in
mathematics.
Inserting It is used to add new data items to the given list of data items. For example, to
add the details of a new student who has recently joined the course.
Deleting It means to remove (delete) a particular data item from the given collection of
data items. For example, to delete the name of a student who has left the course.
Sorting Data items can be arranged in some order like ascending order or descending
order depending on the type of application. For example, arranging the names of students
in a class in an alphabetical order, or calculating the top three winners by arranging the
participants’ scores in descending order and then extracting the top three.
Merging Lists of two sorted data items can be combined to form a single list of sorted
data items.
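As a rough C sketch of the merging operation, combining two already sorted arrays into one sorted array (the input arrays and the function name merge are made-up examples):

#include <stdio.h>

/* Merge two ascending arrays a and b into the output array c. */
void merge(const int a[], int na, const int b[], int nb, int c[])
{
    int i = 0, j = 0, k = 0;
    while (i < na && j < nb)
        c[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < na) c[k++] = a[i++];   /* copy what is left of a */
    while (j < nb) c[k++] = b[j++];   /* copy what is left of b */
}

int main(void)
{
    int a[] = {2, 5, 9};
    int b[] = {1, 6, 7, 10};
    int c[7];

    merge(a, 3, b, 4, c);
    for (int k = 0; k < 7; k++)
        printf("%d ", c[k]);          /* prints 1 2 5 6 7 9 10 */
    printf("\n");
    return 0;
}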
Unit 2
5.0 Objective
5.1. What is a Stack?
5.2. Working of Stack
Standard Stack Operations
PUSH operation
POP operation
Applications of Stack
Objective
This chapter would make you understand the following concepts:
What is meant by a Stack
Different Operations on stack
Application of stack
Linked List Implementation of stack
What is a Stack?
A Stack is a linear data structure that follows the LIFO (Last-In-First-Out)
principle. A stack has only one end, whereas a queue has two ends (front and rear). It
contains only one pointer, the top pointer, which points to the topmost element of the
stack. Whenever an element is added to the stack, it is added on the top
of the stack, and an element can be deleted only from the top of the stack. In other
words, a stack can be defined as a container in which insertion and deletion
can be done from one end, known as the top of the stack.
Some key points related to stack
o It is called a stack since it behaves like a real-world stack, for example a pile of
books.
o A Stack is an abstract data type with a pre-defined capacity, which
means that it can store only a limited number of elements.
o It is a data structure that follows some order to insert and delete
the elements, and that order can be LIFO or FILO.
Working of Stack
A stack works on the LIFO pattern. As we can see in the figure below, there
are five memory blocks in the stack; therefore, the size of the stack is 5.
Suppose we want to store the elements in a stack and let us assume that the
stack is empty. We have taken a stack of size 5, as shown below, in which we
are pushing the elements one by one until the stack becomes full.
Our stack is now full, as the size of the stack is 5. In the above case, we can see that
we moved from the top to the bottom while entering the new elements into the
stack. The stack gets filled up from the bottom to the top.
When we perform the delete operation on the stack, there is only one
route for entry and exit, as the other end is closed. It follows the LIFO pattern,
which means that the value entered first will be removed last. In the above case,
the value 5 is entered first, so it will be removed only after the deletion of
all the other elements.
PUSH operation
The steps involved in the PUSH operation are given below (a C sketch follows the steps):
o Before inserting an element into a stack, we check whether the stack is
full.
o If we try to insert an element into a stack that is already full, the
overflow condition occurs.
o When we initialize a stack, we set the value of top as -1 to indicate that
the stack is empty.
o When a new element is pushed onto a stack, first, the value of the top
gets incremented, i.e., top = top + 1, and the element is placed at the new position of
the top.
o Elements can be inserted until we reach the maximum size of the
stack.
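A minimal C sketch of the PUSH steps, assuming an array-based stack of size 5 (the names stack, top, MAX and push are illustrative, not fixed by the text):

#include <stdio.h>

#define MAX 5

int stack[MAX];
int top = -1;                  /* top = -1 indicates an empty stack */

/* PUSH: check for overflow, increment top, then place the element. */
void push(int value)
{
    if (top == MAX - 1)
    {
        printf("OVERFLOW\n");  /* the stack is already full */
        return;
    }
    top = top + 1;
    stack[top] = value;
}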
POP operation
The steps involved in the POP operation are given below (a C sketch follows the steps):
o Before deleting an element from the stack, we check whether the stack is
empty.
o If we try to delete an element from an empty stack, the
underflow condition occurs.
o If the stack is not empty, we first access the element which is pointed to by
the top.
o Once the pop operation is performed, the top is decremented by 1, i.e., top = top - 1.
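Continuing the same sketch, a POP function that follows these steps (it reuses the stack and top variables assumed above; returning -1 on underflow is an illustrative choice):

/* POP: check for underflow, read the element pointed to by top,
   then decrement top by 1. */
int pop(void)
{
    if (top == -1)
    {
        printf("UNDERFLOW\n"); /* the stack is empty */
        return -1;             /* assumed sentinel value */
    }
    int value = stack[top];
    top = top - 1;
    return value;
}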
6.0 Objective
6.1. Queue
6.2. Applications of Queue
6.3. Types of Queues
6.4. Operations on Queue
6.5. Implementation of Queue
Sequential allocation
Linked list allocation
6.6. What are the use cases of Queue?
6.7. Types of Queue
6.7.1. Linear Queue
6.7.2. Circular Queue
6.7.3. Priority Queue
6.7.4. Deque
6.8. Array representation of Queue
Objective
This chapter would make you understand the following concepts:
Queue: Definition
Operations, implementation of a simple queue (array and linked list) and applications of queue (BFS)
Types of queues: Circular, Double ended, Priority
6.1. Queue
1. A Queue can be defined as an ordered list which enables insert
operations to be performed at one end called REAR and delete operations to
be performed at the other end called FRONT.
2. A Queue is referred to as a First In First Out list.
3. For example, people waiting in line for a rail ticket form a queue.
Applications of Queue
Because a queue performs operations on a first in first out basis, it is
quite suitable for ordering actions. There are various applications of queues,
discussed below.
1. Queues are widely used as waiting lists for a single
shared resource like a printer, disk, or CPU.
2. Queues are used in the asynchronous transfer of data (where
data is not being transferred at the same rate between two processes), for example pipes,
file IO, sockets.
3. Queues are used as buffers in most of the applications like MP3
media players, CD players, and so on.
4. Queues are used to maintain the playlist in media players, to add
and remove the songs from the playlist.
5. Queues are used in operating systems for handling interrupts.
Types of Queues
Before understanding the types of queues, we first look at 'what is a queue'.
What is the Queue?
A queue in data structures can be considered similar to a queue in the real world. A
queue is a data structure in which whatever comes first will go out first.
It follows the FIFO (First-In-First-Out) policy. In a queue, the insertion is done
from one end known as the rear end or the tail of the queue, whereas the deletion is
done from the other end known as the front end or the head of the queue. In other
words, it can be defined as a list or a collection with the
constraint that insertion can be performed at one end called the rear end
or tail of the queue, and deletion is performed at the other end called the front
end or head of the queue.
Operations on Queue
o Enqueue: The enqueue operation is used to insert an element at the
rear end of the queue. It returns void.
o Dequeue: The dequeue operation performs the deletion from the front end of
the queue. It also returns the element which has been removed from the
front end. It returns an integer value. The dequeue operation can also be designed
to return void.
o Peek: This is the third operation that returns the element pointed to by
the front pointer in the queue but does not delete it.
o Queue overflow (isfull): When the queue is completely full, it shows
the overflow condition.
o Queue underflow (isempty): When the queue is empty, i.e.,
no elements are in the queue, it throws the underflow condition.
A queue can be represented as a container open from both sides, in which an
element can be enqueued from one side and dequeued from the other side, as
shown in the figure below:
Implementation of Queue
There are two ways of implementing the Queue:
Sequential allocation: The sequential allocation in a
queue can be implemented using an array.
Linked list allocation: The linked list allocation in a queue can be
implemented using a linked list.
What are the use cases of Queue?
Here, we will see the real-world scenarios where we can use the queue
data structure. The queue data structure is mainly used where
there is a shared resource that has to serve multiple requests but can serve a
single request at a time. In such cases, we need to use the queue data
structure for queuing up the requests. The request that arrives first in the queue
will be served first. The following are the real-world scenarios in which the
queue concept is used:
o Suppose we have a printer shared among multiple machines in a
network, and any machine or PC in the network can send a print request
to the printer. However, the printer can serve a single request at a time, i.e., a
printer can print a single document at a time. When any print request
comes from the network, and if the printer is busy, the printer's program will
put the print request in a queue.
o If requests are available in the queue, the printer takes a
request from the front of the queue and serves it.
o The processor in a PC is also used as a shared resource. There are
many requests that the processor has to execute, but the processor
can serve a single request or execute a single process at a time. Consequently,
the processes are kept in a queue for execution.
Types of Queue
There are four types of queues:
Linear Queue
In a linear queue, an insertion takes place from one end while the deletion occurs from
the other end. The end at which the insertion takes place is known as the rear end, and the
end at which the deletion takes place is known as the front end. It strictly follows the FIFO
rule. The linear queue can be represented as shown in the figure below:
The above figure shows that the elements are inserted from the rear end, and if
we insert more elements in a queue, then the rear
value gets incremented on every insertion. If we want to show a
deletion, then it can be represented as:
In the above figure, we can see that the front pointer points to the next
element, and the element which was previously pointed to by the front pointer has been
deleted.
The major drawback of using a linear queue is that insertion is done
only from the rear end. If the first three elements are
deleted from the queue, we cannot insert more elements even though
space is available in the linear queue. In this
case, the linear queue shows the overflow condition, as the rear is pointing to the
last element of the queue.
Circular Queue
In a circular queue, all the nodes are represented as circular. It is similar to the linear queue
except that the last element of the queue is connected to the first
element. It is also known as a Ring Buffer, as all the ends are connected to
another end. The circular queue can be represented as:
Queue
The above figure shows a queue of characters forming the English word "Hi".
Since no deletion has been performed in the queue till now, the value of front
remains -1. However, the value of rear increases by one every
time an insertion is performed in the queue. After inserting an element into
the queue shown in the above figure, the queue will look like the following. The
value of rear will become 5 while the value of front remains the same.
Algorithm to insert an element in the queue (a C sketch follows the steps)
Step 1: IF REAR = MAX - 1
Write OVERFLOW
Go to step 4
[END OF IF]
Step 2: IF FRONT = -1 and REAR = -1
SET FRONT = REAR = 0
ELSE
SET REAR = REAR + 1
[END OF IF]
Step 3: Set QUEUE[REAR] = NUM
Step 4: EXIT
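The same insertion steps can be sketched in C (QUEUE, FRONT, REAR, MAX and NUM are the names used in the algorithm above; the insert function name is an assumption):

#define MAX 5

int QUEUE[MAX];
int FRONT = -1, REAR = -1;

/* Insert NUM at the rear of a simple (linear) queue, following the steps. */
void insert(int NUM)
{
    if (REAR == MAX - 1)              /* Step 1: overflow check */
        return;                       /* OVERFLOW */
    if (FRONT == -1 && REAR == -1)    /* Step 2: first element in the queue */
        FRONT = REAR = 0;
    else
        REAR = REAR + 1;
    QUEUE[REAR] = NUM;                /* Step 3: store the element */
}                                     /* Step 4: exit */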
Objective
This chapter would make you understand the following concepts:
Understand the concept of Circular Queue
Operation of Circular Queue
Application of Circular Queue
Implementation of Circular Queue
Circular Queue
There was one limitation in the array implementation of the queue. If the rear reaches
the end position of the queue, there might be a possibility that some vacant
spaces are left at the beginning which cannot be used. So, to overcome such limitations, the concept
of the circular queue was introduced.
As we can see in the above image, the rear is at the last position of the queue and
the front is pointing somewhere other than the 0th position. In the above array, there
are only two elements and the other three positions are empty. The rear is at the last
position of the queue; if we try to insert an element, it
will show that there are no empty spaces in the queue. One solution
to avoid such wastage of memory space is to shift both
the elements to the left and adjust the front and rear accordingly. It is
not a practically good approach because shifting all the
elements will consume a lot of time. The efficient way to
avoid the wastage of memory is to use the circular queue data structure.
What is a Circular Queue?
A circular queue is similar to a linear queue, as it is also based on the FIFO (First In
First Out) principle, except that the last position is connected to the first
position in a circular queue, which forms a circle. It is also known as a Ring Buffer.
7.2.1. Operations on Circular Queue
The following are the operations that can be performed on a circular queue:
Front: It is used to get the front element from the queue.
Rear: It is used to get the rear element from the queue.
enQueue(value): This function is used to insert the new value in the queue.
The new element is always inserted from the rear end.
deQueue(): This function deletes an element from the queue. The deletion in a
queue
always takes place from the front end.
Enqueue operation
The steps of the enqueue operation are given below:
Step 1: IF (REAR + 1) mod MAX = FRONT
Write OVERFLOW
Go to step 4
[END OF IF]
Step 2: IF FRONT = -1 and REAR = -1
SET FRONT = REAR = 0
ELSE IF REAR = MAX - 1 and FRONT != 0
SET REAR = 0
ELSE
SET REAR = REAR + 1
[END OF IF]
Step 3: SET QUEUE[REAR] = VAL
Step 4: EXIT
Dequeue Operation
The steps of the dequeue operation are given below:
First, we check whether the queue is empty. If the queue is empty,
we cannot perform the dequeue operation.
When the element is deleted, the value of front gets incremented
by 1.
If there is only a single element left to be deleted, then
the front and rear are reset to -1.
Step 1: IF FRONT = -1
Write " UNDERFLOW "
Goto Step 4
[END of IF]
Step 2: SET VAL = QUEUE[FRONT]
Step 3: IF FRONT = REAR
SET FRONT = REAR = -1
ELSE
IF FRONT = MAX -1
SET FRONT = 0
ELSE
SET FRONT = FRONT + 1
[END of IF]
[END OF IF]
Step 4: EXIT
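A minimal C sketch of this dequeue algorithm for the circular queue (QUEUE, FRONT, REAR, MAX and VAL follow the algorithm above; returning -1 on underflow is an assumption made for the sketch):

#define MAX 5

int QUEUE[MAX];
int FRONT = -1, REAR = -1;

/* Delete an element from the front of a circular queue, following the steps. */
int dequeue(void)
{
    if (FRONT == -1)                  /* Step 1: underflow check */
        return -1;                    /* UNDERFLOW (assumed sentinel) */

    int VAL = QUEUE[FRONT];           /* Step 2: read the front element */

    if (FRONT == REAR)                /* Step 3: the last element was removed */
        FRONT = REAR = -1;
    else if (FRONT == MAX - 1)        /* wrap around to the start */
        FRONT = 0;
    else
        FRONT = FRONT + 1;

    return VAL;                       /* Step 4: exit */
}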
Let's understand the enqueue and dequeue operation through the diagrammatic
representation.
Deque
Deque stands for Double Ended Queue. In a queue, the insertion takes place
from one end while the deletion occurs from the other end. The end at which the
insertion occurs is known as the rear end, whereas the end at which the deletion occurs
is known as the front end.
A deque is a linear data structure in which the insertion and deletion operations
are performed from both ends. We can say that a deque is a generalized version of
the queue.
Let's look at some properties of a deque.
A deque can be used both as a stack and as a queue, as it allows the insertion and
deletion operations on both ends.
In a deque, the insertion and deletion operations can be performed from one side only. The
stack follows the LIFO rule in which both the insertion and deletion can be
performed only from one end; therefore, we conclude that a deque can be
considered as a stack.
In a deque, the insertion can be performed at one end, and the deletion can be
done at the other end. The queue follows the FIFO rule in which the element
is inserted at one end and deleted from the other end. Therefore, we conclude that the
deque can also be considered as a queue.
There are two types of deques: the input-restricted queue and the output-restricted queue.
Input-restricted queue: The input-restricted queue means that some restrictions are
applied to the insertion. In an input-restricted queue, the insertion is applied to one end
while the deletion is applied from both ends.
Output-restricted queue: The output-restricted queue means that some restrictions are applied
to the deletion operation. In an output-restricted queue, the deletion can be applied
only from one end, whereas the insertion is possible from both ends.
Operations on Deque
The following are the operations applied on a deque:
Insert at front
Delete from front
Insert at rear
Delete from rear
Besides insertion and deletion, we can also perform the peek operation on a
deque. Through the peek operation, we can get the front and the rear element of the
deque.
We can perform two more operations on a deque:
isFull(): This function returns a true value if the deque is full; otherwise, it returns a
false value.
isEmpty(): This function returns a true value if the deque is empty; otherwise it
returns a false value.
A C sketch of a deque on a circular array is given below.
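A minimal C sketch of a deque built on a circular array, showing only the two insert operations (the names deque, front, rear, MAX and the size 5 are illustrative assumptions; the delete operations mirror these steps in the opposite direction):

#include <stdbool.h>

#define MAX 5

int deque[MAX];
int front = -1, rear = -1;

bool is_empty(void) { return front == -1; }
bool is_full(void)  { return (front == 0 && rear == MAX - 1) || front == rear + 1; }

/* Insert at the front end: move front one step backwards (circularly). */
void insert_front(int val)
{
    if (is_full()) return;                   /* overflow */
    if (is_empty())      front = rear = 0;   /* first element */
    else if (front == 0) front = MAX - 1;    /* wrap around */
    else                 front = front - 1;
    deque[front] = val;
}

/* Insert at the rear end: move rear one step forwards (circularly). */
void insert_rear(int val)
{
    if (is_full()) return;                   /* overflow */
    if (is_empty())           front = rear = 0;
    else if (rear == MAX - 1) rear = 0;      /* wrap around */
    else                      rear = rear + 1;
    deque[rear] = val;
}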
Memory Representation
The deque can be implemented using two data structures, i.e., a circular array
and a doubly linked list. To implement the deque using a circular array, we
first should know what a circular array is.
Applications of Deque
The deque can be used as both a stack and a queue; consequently, it can perform
both redo and undo operations.
3. Suppose we want to insert the next element from the rear. To insert
the element from the rear end, we first need to increment the rear, i.e.,
rear = rear + 1. Now, the rear is pointing to the second element,
and the front is pointing to the first element.
4. Suppose we are again inserting the element from the rear end. To
insert the element, we will first increment the rear, and now the rear points
to the third element.
5. If we want to insert an element from the front end, then to
insert an element at the front we need to decrement the value of
front by 1. If we decrement the front by 1, then the front
points to the -1 location, which is not a valid location in an array. So, we set
the front as (n - 1), which is equal to 4 as n is 5. Once the front is set, we
will insert the value as shown in the figure below:
Deletion Operation
1. Suppose the front is pointing to the last element of the
array, and we want to perform the delete operation from the front. To delete any
element from the front, we need to set front = front + 1. At present, the
value of the front is equal to 4, and if we increment the
value of front, it becomes 5, which is not a valid index.
Therefore, we conclude that if the front points to the last element,
then the front is set to 0 in case of a delete operation.
2. If we want to delete the element from the rear end, then we need to decrement the
rear value by 1, i.e., rear = rear - 1, as shown in the below figure:
3. If the rear is pointing to the first element, and we
want to delete the element from the rear end, then we need to set rear = n - 1,
where n is the size of the array, as shown in the below figure:
Unit 4 : Chapter 8
Linked List
What is a Linked list?
In the above figure, we can observe that each node contains the data and the address
of the next node. The last node of the linked list contains the NULL value in the
address part.
How can we declare the Linked list?
The priority queue can be implemented in four ways, which include arrays, a
linked list, a heap data structure and a binary search tree. The
heap data structure is the most efficient way of implementing the priority queue,
so we will implement the priority queue using a heap data structure in this
topic. Now, first we understand the reason why the heap is the most
efficient way among all the other data structures.
The structure of a linked list can be defined as:
struct node
{
int data;
struct node *next;
};
In the above declaration, we have defined a structure named as a node consisting of
two variables: an integer variable (data), and the other one is the pointer (next), which
contains the address of the next node.
We can see in the above figure that there are three different nodes having addresses 100,
200 and 300 respectively. The first node contains the address of the next
node, i.e., 200, the second node contains the address of the last node, i.e., 300, and
the third node contains the NULL value in its address part as it does not point to
any node. The pointer that holds the address of the initial node is known as a head
pointer.
As we can see in the above figure, the node in a doubly linked list has two
address parts; one part stores the address of the next node while the other part of
the node stores the previous node's address. The initial node in the doubly linked
list has the NULL value in the address part, which holds the address of the
previous node.
Representation of the node in a doubly linked list
struct node
{
int data;
struct node *next;
struct node *prev;
};
In the above representation, we have defined a user-defined structure named
node with three members: one is data of integer type, and the other two are
pointers, i.e., next and prev, of the node type. The next pointer variable holds
the address of the next node, and the prev pointer holds the address of the previous
node. The type of both the pointers, i.e., next and prev, is struct node, as both the pointers
store the address of a node of the struct node type.
Circular linked list
A circular linked list is a variation of a singly linked list. The
only difference between the singly linked list and a circular linked
list is that the last node does not point to any node in a singly linked
list, so its link part contains a NULL value. On the other hand, the circular
linked list is a list in which the last node connects to the first node,
so the link part of the last node holds the first node's address. The circular
linked list has no starting and ending node. We can traverse in any
direction, i.e., either backward or forward. The diagrammatic representation of the circular
linked list is shown below:
struct node
{
int data;
struct node *next;
};
A circular linked list is a sequence of elements in which each node has a link to the
next node, and the last node is having a link to the first node. The representation of
the circular linked list will be similar to the singly linked list, as shown below:
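A short C sketch of traversing such a circular list (the function name print_circular is an assumption; the traversal stops when it comes back to the starting node instead of looking for NULL):

#include <stdio.h>

struct node
{
    int data;
    struct node *next;      /* in the last node this points back to the first node */
};

/* Visit every node exactly once by stopping on return to the head. */
void print_circular(struct node *head)
{
    if (head == NULL)
        return;
    struct node *p = head;
    do
    {
        printf("%d ", p->data);
        p = p->next;
    } while (p != head);
    printf("\n");
}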
Doubly Circular linked list
The doubly circular linked list has the features of both the circular linked list and
doubly linked list.
The above figure shows the representation of the doubly circular linked list. The
difference between the doubly linked list and the doubly
circular linked list is that the doubly circular linked list does not contain the NULL value
in the previous field of the first node. As the doubly circular linked list contains three parts, i.e., two
address parts and one data part, its representation is similar to the doubly linked
list.
struct node
{
int data;
struct node *next;
struct node *prev;
};
Linked List
Linked List can be defined as collection of objects called nodes that are randomly
stored in the memory.
A node contains two fields i.e. data stored at that particular address and the pointer
which contains the address of the next node in the memory.
The last node of the list contains pointer to the null.
In the above figure, the arrow represents the links. The data part of
each node contains the marks obtained by the student in a different subject. The
last node in the list is identified by the null pointer which is present in the
address part of the last node. We can have as many elements as we require in the
data part of the list.
Insertion
The insertion into a singly linked list can be performed at different positions.
Based on the position of the new node being inserted, the insertion is categorized
into the following categories.
SN Operation Description
1 Insertion at beginning: It involves inserting any element at the front of the list. We just need
a few link adjustments to make the new node the head of the list (a C sketch of insertion
at the beginning follows this table).
2 Insertion at end of the list: It involves insertion at the last of the linked list. The new node can
be inserted as the only node in the list or it can be inserted as the last
one. Different logic is implemented in each scenario.
3 Insertion after specified node: It involves insertion after the specified node of the linked list. We
need to skip the desired number of nodes in order to reach the node
after which the new node will be inserted.
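A small C sketch of insertion at the beginning (the function name insert_at_beginning is an assumption; struct node is the type defined earlier, repeated here so the sketch is self-contained):

#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

/* Insert a new node at the front and return it as the new head of the list. */
struct node *insert_at_beginning(struct node *head, int value)
{
    struct node *new_node = malloc(sizeof(struct node));
    if (new_node == NULL)
        return head;              /* allocation failed; list unchanged */
    new_node->data = value;
    new_node->next = head;        /* the old head becomes the second node */
    return new_node;              /* the new node is now the head */
}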
SN Operation Description
1 Deletion at beginning: It involves deletion of a node from the beginning of the list. This
is the simplest operation among all. It just needs a few
adjustments in the node pointers.
2 Deletion at the end of the list: It involves deleting the last node of the list. The list can either be
empty or full. Different logic is implemented for the different
scenarios.
3 Deletion after specified node: It involves deleting the node after the specified node in the list.
We need to skip the desired number of nodes to reach the node
after which the node will be deleted. This requires traversing
through the list.
4 Traversing: In traversing, we simply visit each node of the list at least once in
order to perform some specific operation on it, for example,
printing the data part of each node present in the list.
5 Searching: In searching, we match each element of the list with the given
element. If the element is found at any location, then the
location of that element is returned; otherwise null is returned (a C sketch of
traversing and searching follows this table).
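The traversing and searching rows can be sketched in C as follows (display and search are assumed function names; struct node is repeated so the sketch stands on its own):

#include <stdio.h>

struct node
{
    int data;
    struct node *next;
};

/* Traversing: visit every node once and print its data part. */
void display(struct node *head)
{
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");
}

/* Searching: return the first node whose data matches item, or NULL. */
struct node *search(struct node *head, int item)
{
    for (struct node *p = head; p != NULL; p = p->next)
        if (p->data == item)
            return p;
    return NULL;
}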
A doubly linked list containing three nodes having numbers from 1 to 3 in their data
part is shown in the following image.
In C, the structure of a node in a doubly linked list can be given as:
struct node
{
struct node *prev;
int data;
struct node *next;
};
The prev part of the first node and the next part of the last node will always contain
null indicating end in each direction.
A doubly linked list is a complex type of linked list in which a
node contains a pointer to the previous as well as the next node in the sequence.
Therefore, in a doubly linked list, a node consists of three parts: node
data, a pointer to the next node in the sequence (next pointer), and a pointer to
the previous node (prev pointer). A sample node in a doubly linked list is
shown in the figure.
SN Operation Description
1 Insertion at beginning: Adding the node into the linked list at the beginning.
2 Insertion at end: Adding the node into the linked list at the end.
3 Insertion after specified node: Adding the node into the linked list after the specified node.
5 Deletion at the end: Removing the node from the end of the list.
6 Deletion of the node having given data: Removing the node which is present just after the node
containing the given data.
7 Searching: Comparing each node's data with the item to be searched and
returning the location of the item in the list if the item is found;
else return null.
We can also have more than one linked list in the memory
with different start pointers pointing to the different start nodes of the
lists. The last node is identified by its next part, which contains the address of
the start node of the list. We must be able to identify the last
node of any linked list so that we can find the number of
iterations which need to be performed while traversing the list.
Operations on Circular Singly linked list:
Insertion
SN Operation Description
1 Insertion at beginning: Adding a node into the circular singly linked list at the beginning.
2 Insertion at the end: Adding a node into the circular singly linked list at the end.
Deletion& Traversing
SN Operation Description
1 Deletion at beginning: Removing the node from the circular singly linked list at the beginning.
2 Deletion at the end: Removing the node from the circular singly linked list at the end.
3 Searching: Compare each element of the node with the given item and return
the location at which the item is present in the list; otherwise return
null.
4 Traversing: Visiting each element of the list at least once in order to perform
some specific operation.
Circular Doubly Linked List
A circular doubly linked list is a more complex type of data
structure in which a node contains pointers to its previous node as well as the next node.
A circular doubly linked list does not contain NULL in any of the nodes. The last
node of the list contains the address of the first node of the list. The first
node of the list also contains the address of the last node in its previous pointer.
A circular doubly linked list is shown in the following figure.
Because a circular doubly linked list contains three parts in
its structure, it demands more space per node and more expensive basic operations.
However, a circular doubly linked list provides easy manipulation of the
pointers, and searching becomes twice as efficient.
Memory Management of Circular Doubly linked list
The following figure shows the way in which the memory is allocated for
a circular doubly linked list. The variable head contains the address of the
first element of the list, i.e., 1; hence the start node
of the list containing data A is stored at address 1. Since every node of
the list must have three parts, the start node of the
list contains the address of the last node, i.e., 8, and of the next node, i.e., 4.
The last node of the list, which is stored at address 8 and contains
data 6, holds the address of the first node of the list, i.e., 1, as
shown in the picture. In a circular doubly linked list,
the last node is identified by the address of the first node, which is stored in the
next part of the last node; therefore, the node which contains the address of the
first node is actually the last node of the list.
Operations on circular doubly linked list:
There are various operations which can be performed on a circular doubly linked
list. The node structure of a circular doubly linked list is similar to a doubly
linked list. However, the operations on a circular doubly linked
list are described in the following table.
SN Operation Description
1 Insertion at beginning: Adding a node in the circular doubly linked list at the beginning.
2 Insertion at end: Adding a node in the circular doubly linked list at the end.
3 Deletion at beginning: Removing a node in the circular doubly linked list from the beginning.
4 Deletion at end: Removing a node in the circular doubly linked list at the end.
Traversing and searching in circular doubly linked list is similar to that in the circular
singly linked list.