Data-Structure complete unit 3


Unit 1: Chapter 1

Introduction to Data Structures


Objective
Introduction:
Basic Concepts of Data Structures
Basic Terminology
Need for Data Structures
Goals of Data Structure
Features of Data Structure
Classification of Data Structures
Static Data Structure vs Dynamic Data Structure
Operations on Data Structures
INTRODUCTION
The study of data structures helps to understand the basic concepts involved in organizing
and storing data as well as the relationship among the data sets. This in turn helps to
determine the way information is stored, retrieved and modified in a computer’s memory.

BASIC CONCEPT OF DATA STRUCTURE

Data structure is a branch of computer science. The study of data structure helps you to
understand how data is organized and how data flow is managed to increase efficiency of
any process or program. Data structure is the structural representation of logical
relationship between data elements. This means that a data structure organizes data items
based on the relationship between the data elements.
Example:
A house can be identified by the house name, location, number of floors and so on.
This structured set of variables depends on each other to identify the exact house.
Similarly, a data structure is a structured set of variables that are linked to each other,
which forms the basic component of a system.

Basic Terminology

Data structures are the building blocks of any program or the software. Choosing the
appropriate data structure for a program is the most difficult task for a programmer.
The following terminology is used as far as data structures are concerned:

Data: Data can be defined as an elementary value or the collection of values, for
example, a student's name and ID are data about the student.

Group Items: Data items which have subordinate data items are called Group item, for
example, name of a student can have first name and the last name.
Record: Record can be defined as the collection of various data items, for example, if we
talk about the student entity, then its name, address, course and marks can be grouped
together to form the record for the student.

File: A File is a collection of various records of one type of entity, for example, if there
are 60 employees in the class, then there will be 60 records in the related file, where each
record contains the data about one employee.

Attribute and Entity: An entity represents a class of certain objects; it contains
various attributes. Each attribute represents a particular property of that entity.

Field: Field is a single elementary unit of information representing the attribute of an entity.

Need for Data Structure


 It gives different levels of organization to data.
 It tells how data can be stored and accessed at its elementary level.
 Provides operations on groups of data, such as adding an item or looking up the
highest-priority item.
 Provides a means to manage huge amounts of data efficiently.
 Provides fast searching and sorting of data.

Goals of Data Structure


Data structure basically implements two complementary goals.

Correctness: Data structure is designed such that it operates correctly for all kinds of
input, which is based on the domain of interest. In other words, correctness forms the
primary goal of data structure, which always depends on the specific problems that the
data structure is intended to solve.

Efficiency: Data structure also needs to be efficient. It should process the data at high
speed without utilizing much of the computer resources such as memory space. In a real-time
scenario, the efficiency of a data structure is an important factor that determines the
success or failure of the process.

Features of Data Structure


Some of the important features of data structures are:

Robustness: Generally, all computer programmers wish to produce software that
generates correct output for every possible input provided to it, as well as executes
efficiently on all hardware platforms. This kind of robust software must be able to
manage both valid and invalid inputs.

Adaptability: Developing software projects such as word processors, Web browsers and
Internet search engines involves large software systems that must work or execute correctly
and efficiently for many years. Moreover, software evolves due to ever-changing market
conditions or due to emerging technologies.

Reusability: Reusability and adaptability go hand-in-hand.


It is a known fact that the programmer requires many resources for developing any
software, which makes it an expensive enterprise. However, if the software is developed
in a reusable and adaptable way, then it can be implemented in most of the future
applications. Thus, by implementing quality data structures, it is possible to develop
reusable software, which tends to be cost effective and time saving.

CLASSIFICATION OF DATA STRUCTURES

A data structure provides a structured set of variables that are associated with each other
in different ways. It forms a basis of programming tool that represents the relationship
between data elements and helps programmers to process the data easily.

Data structure can be classified into two categories:


Primitive data structure
Non-primitive data structure

Figure 1.1 shows the different classifications of data structures.


Figure 1.1 Classifications of data structures.

Primitive Data Structure

Primitive data structures consist of the numbers and the characters which are
built into programs. These can be manipulated or operated on directly by the
machine-level instructions. Basic data types such as integer, real, character,
and Boolean come under primitive data structures. These data types are also
known as simple data types because they consist of values that cannot be
further divided.

Non-primitive Data Structure


Non-primitive data structures are those that are derived from primitive data structures.
These data structures cannot be operated or manipulated directly by the machine level
instructions. They focus on formation of a set of data elements that is either
homogeneous (same data type) or heterogeneous (different data type).
These are further divided into linear and non-linear data structure based on the structure
and arrangement of data.

Linear Data Structure


A data structure that maintains a linear relationship among its elements is called a linear
data structure. Here, the data is arranged in a linear fashion. But in the memory, the
arrangement may not be sequential.
Ex: Arrays, linked lists, stacks, queues.

Non-linear Data Structure


Non-linear data structure is a kind of data structure in which data elements are not
arranged in a sequential order. There is a hierarchical relationship between individual
data items. Here, the insertion and deletion of data is not possible in a linear fashion.
Trees and graphs are examples of non-linear data structures.

I) Array

Array, in general, refers to an orderly arrangement of data elements. An array is a type of
data structure that stores data elements in adjacent locations. An array is considered a
linear data structure that stores elements of the same data type. Hence, it is also called a
linear homogeneous data structure.

When we declare an array, we can assign initial values to each of its elements by
enclosing the values in braces { }.

int Num [5] = { 26, 7, 67, 50, 66 };

This declaration will create an array as shown below:


Index:   0    1    2    3    4
Num:    26    7   67   50   66
Figure 1.2 Array
The number of values inside braces { } should be equal to the number of elements that
we declare for the array inside the square brackets [ ]. In the example of array Num, we
have declared 5 elements and in the list of initial values within braces { } we have
specified 5 values, one for each element. After this declaration, array Num will have five
integers, as we have provided 5 initialization values.

Arrays can be classified as one-dimensional, two-dimensional or multidimensional arrays.
One-dimensional Array: It has only one row of elements. Its elements are stored in
ascending storage locations.
Two-dimensional Array: It consists of multiple rows and columns of data elements. It is
also called a matrix.
Multidimensional Array: Multidimensional arrays can be defined as arrays of
arrays. Multidimensional arrays are not bounded to two indices or two
dimensions. They can include as many indices as required. (A small
declaration-and-access sketch in C is given below.)
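As a small illustrative sketch (the variable names are chosen for demonstration and are not
taken from the text), one-dimensional and two-dimensional arrays can be declared and
accessed in C as follows:

#include <stdio.h>

int main(void)
{
    /* One-dimensional array: a single row of elements */
    int num[5] = {26, 7, 67, 50, 66};

    /* Two-dimensional array (matrix): rows and columns */
    int matrix[2][3] = {
        {1, 2, 3},
        {4, 5, 6}
    };

    printf("Third element of num: %d\n", num[2]);            /* prints 67 */
    printf("Row 1, column 2 of matrix: %d\n", matrix[1][2]); /* prints 6 */

    /* Traversing the one-dimensional array */
    for (int i = 0; i < 5; i++)
        printf("%d ", num[i]);
    printf("\n");

    return 0;
}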

Limitations:
 Arrays are of fixed size.
 Data elements are stored in contiguous memory locations, which may not
always be available.
 Insertion and deletion of elements can be problematic because of the shifting of
elements from their positions.
However, these limitations can be overcome by using linked lists.

Applications:
 Storing list of data elements belonging to same data type
 Auxiliary storage for other data structures
 Storage of binary tree elements of fixed count
 Storage of matrices

II) Linked List

A linked list is a data structure in which each data element contains a pointer
or link to the next element in the list. With a linked list, insertion and
deletion of data elements are possible at any place in the list. Also, in a
linked list, it is not necessary to have the data elements stored in consecutive
locations. It allocates space for each data item in its own block of memory.
Thus, a linked list is considered as a chain of data elements or records called
nodes. Each node in the list contains an information field and a pointer field. The
information field contains the actual data and the pointer field contains the address
of the subsequent node in the list.

Figure 1.3: A Linked List

Figure 1.3 represents a linked list with 4 nodes. Each node has two parts. The left part in
the node represents the information part which contains an entire record of data items and
the right part represents the pointer to the next node. The pointer of the last node contains
a null pointer.
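As a small illustration (the field names are illustrative and not taken from the figure), such
a node and a traversal of the chain can be written in C as follows:

#include <stdio.h>

struct node
{
    int data;              /* information field: the actual data       */
    struct node *next;     /* pointer field: address of the next node  */
};

/* Print every node by following the next pointers until the null pointer. */
void display(struct node *head)
{
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d -> ", p->data);
    printf("NULL\n");
}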

Advantage: Easier to insert or delete data elements


Disadvantage: Slow search operation and requires more memory space

Applications:
 Implementing stacks, queues, binary trees and graphs.
 Implementing dynamic memory management functions of an operating system.
 Polynomial implementation for mathematical operations
 Circular linked list is used to implement OS or application functions that
require round robin execution of tasks.
 Circular linked list is used in a slide show where a user wants to go back to
the first slide after last slide is displayed.
 Doubly linked list is used in the implementation of forward and backward buttons
in a browser to move backwards and forward in the opened pages of a website.
 Circular queue is used to maintain the playing sequence of multiple players in
a game.

III) Stacks

A stack is a linear data structure in which insertion and deletion of elements are done at
only one end, which is known as the top of the stack. Stack is called a last-in, first-out
(LIFO) structure because the last element which is added to the stack is the first
element which is deleted from the stack.

Figure 1.4: A Stack

In the computer’s memory, stacks can be implemented using arrays or linked
lists. Figure 1.4 is a schematic diagram of a stack. Here, element FF is the top
of the stack and element AA is the bottom of the stack. Elements are added to
the stack from the top. Since it follows the LIFO pattern, EE cannot be deleted
before FF is deleted, and similarly DD cannot be deleted before EE is deleted,
and so on.

Applications:
 Temporary storage structure for recursive operations
 Auxiliary storage structure for nested operations, function
calls, deferred/postponed functions
 Manage function calls
 Evaluation of arithmetic expressions in various programming languages
 Conversion of infix expressions into postfix expressions
 Checking syntax of expressions in a programming environment
 Matching of parenthesis
 String reversal
 In all problem solutions based on backtracking.
 Used in depth first search in graph and tree traversal.
 Operating System functions
 UNDO and REDO functions in an editor.

IV) Queues

A queue is a first-in, first-out (FIFO) data structure in which the element that is
inserted first is the first one to be taken out. The elements in a queue are added at one
end called the rear and removed from the other end called the front. Like stacks,
queues can be implemented by using either arrays or linked lists.

Figure 1.5 shows a queue with 4 elements, where 55 is the front element and 65 is the
rear element. Elements can be added from the rear and deleted from the front.

Figure 1.5: A Queue

Applications:
 It is used in breadth search operation in graphs.
 Job scheduler operations of OS like a print buffer queue, keyboard buffer queue
to store the keys pressed by users
 Job scheduling, CPU scheduling, Disk Scheduling
 Priority queues are used in file downloading operations in a browser
 Data transfer between peripheral devices and CPU.
 Interrupts generated by the user applications for CPU
 Calls handled by the customers in BPO
V) Trees
A tree is a non-linear data structure in which data is organized in branches. The data
elements in a tree are arranged in a sorted order. It imposes a hierarchical structure on
the data elements.

Figure 1.6 represents a tree which consists of 8 nodes. The root of the tree is the node
60 at the top. Nodes 29 and 44 are the successors of the node 60. The nodes 6, 4, 12
and 67 are the terminal nodes as they do not have any successors.

Figure 1.6: A Tree

Advantage: Provides quick search, insert, and delete operations


Disadvantage: Complicated deletion algorithm

Applications:
 Implementing the hierarchical structures in computer systems like directory
and file system.
 Implementing the navigation structure of a website.
 Code generation like Huffman’s code.
 Decision making in gaming applications.
 Implementation of priority queues for priority-based OS scheduling functions
 Parsing of expressions and statements in programming language compilers
 For storing data keys for DBMS for indexing
 Spanning trees for routing decisions in computer and communications networks
 Hash trees
 Path-finding algorithms in AI, robotics and video game applications

VI) Graphs

A graph is also a non-linear data structure. In a tree data structure, all data
elements are stored in a definite hierarchical structure. In other words, each
node has only one parent node. In graphs, however, each data element is called a
vertex and is connected to many other vertices through connections called
edges.

Thus, a graph is considered as a mathematical structure, which is composed of
a set of vertices and a set of edges. Figure 1.7 shows a graph with six nodes A, B,
C, D, E, F and seven edges [A, B], [A, C], [A, D], [B, C], [C, F], [D, F] and
[D, E].

Figure 1.7 Graph

Advantage: Best models real-world situations


Disadvantage: Some algorithms are slow and very complex

Applications:
 Representing networks and routes in communication, transportation and
travel applications
 Routes in GPS
 Interconnections in social networks and other network-based applications
 Mapping applications
 Ecommerce applications to present user preferences
 Utility networks to identify the problems posed to municipal or local corporations
 Resource utilization and availability in an organization
 Document link map of a website to display connectivity between pages
through hyperlinks
 Robotic motion and neural networks

STATIC DATA STRUCTURE VS DYNAMIC DATA STRUCTURE

Data structure is a way of storing and organising data efficiently such that the required
operations on them can be performed efficiently with respect to time as well as
memory. Simply put, data structures are used to reduce the complexity (mostly the time
complexity) of the code.
Data structures can be two types:
1. Static Data Structure
2. Dynamic Data Structure
What is a Static Data Structure?
In a static data structure, the size of the structure is fixed. The content of the data
structure can be modified, but without changing the memory space allocated to it.

Example of Static Data Structures: Array

What is a Dynamic Data Structure?


In a dynamic data structure, the size of the structure is not fixed and can be modified
during the operations performed on it. Dynamic data structures are designed to facilitate
changes to the data structure at run time.

Example of Dynamic Data Structures: Linked List

Static Data Structure vs Dynamic Data Structure


A static data structure has a fixed memory size, whereas in a dynamic data structure the
size can be updated during run time, which may be considered efficient with
respect to the memory complexity of the code. A static data structure provides easier
access to elements compared with a dynamic data structure. Unlike static data structures,
dynamic data structures are flexible.

OPERATIONS ON DATA STRUCTURES

This section discusses the different operations that can be performed on the various data
structures previously mentioned.

Traversing It means to access each data item exactly once so that it can be processed.
For example, to print the names of all the students in a class.
Searching It is used to find the location of one or more data items that satisfy the given
constraint. Such a data item may or may not be present in the given collection of data
items. For example, to find the names of all the students who secured 100 marks in
mathematics.

Inserting It is used to add new data items to the given list of data items. For example, to
add the details of a new student who has recently joined the course.

Deleting It means to remove (delete) a particular data item from the given collection of
data items. For example, to delete the name of a student who has left the course.

Sorting Data items can be arranged in some order like ascending order or descending
order depending on the type of application. For example, arranging the names of students
in a class in an alphabetical order, or calculating the top three winners by arranging the
participants’ scores in descending order and then extracting the top three.

Merging Lists of two sorted data items can be combined to form a single list of sorted
data items.
Unit 2

Linear Data Structures

5.0 Objective
5.1. What is a Stack?
5.2. Working of Stack
Standard Stack Operations
PUSH operation
POP operation
Applications of Stack

Objective
This chapter would make you understand the following concepts:
 What is meant by a Stack
 Different operations on a stack
 Applications of a stack
 Linked list implementation of a stack

What is a Stack?
A Stack is a linear data structure that follows the LIFO (Last-In-First-Out)
principle. A stack has only one end, whereas a queue has two ends (front and rear). It
contains only a single pointer, the top pointer, pointing to the topmost element of the
stack. Whenever an element is added to the stack, it is added on the top of the
stack, and an element can be deleted only from the top of the stack. In other words,
a stack can be defined as a container in which insertion and deletion
can be done from the one end known as the top of the stack.
Some key points related to stack
o It is called a stack since it behaves like a real-world stack, e.g. a pile of
books, and so on.
o A Stack is an abstract data type with a pre-defined capacity, which
means that it can store elements of a limited size.
o It is a data structure that follows some order to insert and delete
the elements, and that order can be LIFO or FILO.
Working of Stack
Stack works on the LIFO pattern. As we can see in the figure below, there
are five memory blocks in the stack; therefore, the size of the stack is 5.
Suppose we want to store the elements in a stack and let us assume that the
stack is empty. We have taken a stack of size 5 as shown below, in which we
are pushing the elements one by one until the stack becomes full.

Since our stack is full, as the size of the stack is 5. In the above cases, we can see that
it goes from the top to the bottom when we are entering new elements in the
stack. The stack gets filled up from the bottom to the top.
When we perform the delete operation on the stack, there is only a single
route for entry and exit as the other end is closed. It follows the LIFO pattern,
which means that the value entered first will be removed last. In the above case,
the value 5 is entered first, so it will be removed only after the deletion of all
of the other elements.

Standard Stack Operations


The following are some common operations implemented on the stack:
o push(): When we insert an element into a stack, then the operation is known as
a push. If the stack is full, then the overflow condition occurs.
o pop(): When we delete an element from the stack, the operation is known as a
pop. If the stack is empty, meaning that no element exists in the
stack, this state is known as an underflow state.
o isEmpty(): It determines whether the stack is empty or not.
o isFull(): It determines whether the stack is full or not.
o peek(): It returns the element at the given position.
o count(): It returns the total number of elements available in a stack.
o change(): It changes the element at the given position.
o display(): It prints all the elements available in the stack.

PUSH operation
The steps involved in the PUSH operation are given below:
o Before inserting an element into a stack, we check whether the stack is
full.
o If we try to insert an element into a stack that is already full, then
the overflow condition occurs.
o When we initialize a stack, we set the value of top as -1 to indicate that
the stack is empty.
o When a new element is pushed onto the stack, first, the value of top
gets incremented, i.e., top = top + 1, and the element is placed at the new position of
the top.
o Elements are inserted until we reach the maximum size of the stack (a small
array-based sketch of this operation is given below).
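As a minimal illustrative sketch (the array size MAX, the variable names and the printed
messages are assumptions, not taken from the text), a push on an array-based stack may be
written in C as follows:

#include <stdio.h>

#define MAX 5

int stack[MAX];
int top = -1;                  /* -1 indicates an empty stack */

void push(int value)
{
    if (top == MAX - 1)        /* stack is full: overflow */
    {
        printf("OVERFLOW\n");
        return;
    }
    top = top + 1;             /* increment top first */
    stack[top] = value;        /* place the element at the new top */
}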

POP operation
The steps involved in the POP operation are given below:
o Before deleting an element from the stack, we check whether the stack is
empty.
o If we try to delete an element from an empty stack, then the
underflow condition occurs.
o If the stack is not empty, we first access the element which is pointed to by
the top.
o Once the pop operation is performed, the top is decremented by 1, i.e.,
top = top - 1 (a matching pop sketch is given below).
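Continuing the same illustrative array-based sketch:

int pop(void)
{
    if (top == -1)             /* stack is empty: underflow */
    {
        printf("UNDERFLOW\n");
        return -1;             /* sentinel value used only in this sketch */
    }
    int value = stack[top];    /* access the element pointed to by top */
    top = top - 1;             /* decrement top */
    return value;
}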

Linked list implementation of stack


Instead of using an array, we can also use a linked list to implement a
stack. A linked list allocates the memory dynamically. However, the time
complexity in both cases is the same for all the operations, i.e. push, pop
and peek.
In the linked list implementation of a stack, the nodes are maintained non-contiguously in
the memory. Each node contains a pointer to its immediate successor node in the stack.
The stack is said to be overflown if the space left in the memory heap is not sufficient
to create a node.
The bottom-most (last) node in the stack always contains NULL in its address field. Let us
examine the manner in which each operation is performed in the linked list implementation
of a stack.
Adding a node to the stack (Push operation)
Adding a node to the stack is referred to as a push operation. Pushing an element onto a
stack in the linked list implementation is not quite the same as in an array implementation.
To push an element onto the stack, the following steps are involved.
1. Create a node first and allocate memory to it.
2. If the list is empty, then the item is to be pushed as the start node of
the list. This includes assigning a value to the data part of the
node and assigning NULL to the address part of the node.
3. If there are already some nodes in the list, then we have to add
the new element at the beginning of the list (so as not to violate the property of the
stack). For this purpose, assign the address of the starting element to the
address field of the new node and make the new node the starting node of the
list.
Time Complexity: O(1)
A sketch of this push operation is shown below.
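A minimal sketch of the push operation on a linked stack, assuming the usual node structure
(int data and a next pointer) and a global head pointer; the names and the overflow message
are illustrative:

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

struct node *head = NULL;          /* head of the list = top of the stack */

void push(int value)
{
    /* 1. Create a node and allocate memory to it */
    struct node *ptr = (struct node *) malloc(sizeof(struct node));
    if (ptr == NULL)
    {
        printf("STACK OVERFLOW\n");   /* not enough heap space for a node */
        return;
    }
    ptr->data = value;
    /* 2 & 3. Link the new node in front of the current head (NULL if the
       list is empty) and make it the new starting node of the list */
    ptr->next = head;
    head = ptr;
}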
Deleting a node from the stack (POP operation)
Deleting a node from the top of the stack is referred to as a pop operation. Deleting a
node from the linked list implementation of a stack is not quite the same as in the
array implementation. To pop an element from the stack, we need to follow the
following steps:
Check for the underflow condition: The underflow condition occurs when we
try to pop from an already empty stack. The stack will be empty if the head
pointer of the list points to NULL.
Change the head pointer accordingly: In a stack, the elements are popped
only from one end; therefore, the value stored in the head node must be
deleted and the node must be freed. The next node of the head node now
becomes the head node. A matching pop sketch is given below.
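Continuing the same illustrative linked-stack sketch:

int pop(void)
{
    if (head == NULL)          /* underflow: the stack is already empty */
    {
        printf("STACK UNDERFLOW\n");
        return -1;             /* sentinel value used only in this sketch */
    }
    struct node *ptr = head;   /* node to be removed */
    int value = ptr->data;
    head = head->next;         /* the next node becomes the head node */
    free(ptr);                 /* free the removed node */
    return value;
}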
Display the nodes (Traversing)
Displaying all the nodes of a stack requires traversing all the nodes of the linked
list organized as a stack. For this purpose, we need to follow the following
steps.
1. Copy the head pointer into a temporary pointer.
2. Move the temporary pointer through all the nodes of the list and print the
value field attached to each node.
Queue

6.0 Objective
6.1. Queue
6.2. Applications of Queue
6.3. Types of Queues
6.4. Operations on Queue
6.5. Implementation of Queue
Sequential allocation
Linked list allocation
6.6. What are the use cases of Queue?
6.7. Types of Queue
6.7.1. Linear Queue
6.7.2. Circular Queue
6.7.3. Priority Queue
6.7.4. Deque
6.8. Array representation of Queue

Objective
This chapter would make you understand the following concepts:
 Queue
 Definition
 Operations, Implementation of simple queue (Array and Linked list) and applications of queue - BFS
 Types of queues: Circular, Double ended, Priority

Queue
1. A Queue can be defined as an ordered list which enables insert
operations to be performed at one end called REAR and delete operations to
be performed at the other end called FRONT.
2. Queue is referred to as a First In First Out (FIFO) list.
3. For example, people waiting in line for a rail ticket form a Queue.

Applications of Queue
Because a queue performs operations on a first in first out basis, it is
quite suitable for ordering actions. There are various applications of queues,
discussed below.
1. Queues are widely used as waiting lists for a single
shared resource like a printer, disk, CPU.
2. Queues are used in the asynchronous transfer of data (where
data is not being transferred at the same rate between two processes), e.g. pipes,
file IO, sockets.
3. Queues are used as buffers in most of the applications like MP3
media players, CD players, and so on.
4. Queues are used to maintain the playlist in media players, in order to add
and remove the songs from the playlist.
5. Queues are used in operating systems for handling interrupts.
Complexity

Types of Queues
Before understanding the types of queues, we first take a look at 'what is a Queue'.
What is a Queue?
A queue in a data structure can be considered similar to a queue in the real world. A
queue is a data structure in which whatever comes first will go out first.
It follows the FIFO (First-In-First-Out) policy. In a Queue, the insertion is done
from one end known as the rear end or the tail of the queue, whereas the deletion is
done from the other end known as the front end or the head of the queue. In other
words, it can be defined as a list or a collection with the
constraint that the insertion can be performed at one end called the rear end
or tail of the queue and the deletion is performed at the other end called the front
end or the head of the queue.

Operations on Queue
o Enqueue: The enqueue operation is used to insert an element at the
rear end of the queue. It returns void.
o Dequeue: The dequeue operation performs the deletion from the front end of
the queue. It also returns the element which has been removed from the
front end. It returns an integer value. The dequeue operation can also be designed
to return void.
o Peek: This is the third operation that returns the element which is pointed to by
the front pointer in the queue but does not delete it.
o Queue overflow (isfull): When the Queue is completely full, then it shows
the overflow condition.
o Queue underflow (isempty): When the Queue is empty, i.e.,
no elements are in the Queue, then it throws the underflow condition.
A Queue can be visualized as a container open from both sides, in which an
element can be enqueued from one side and dequeued from the other side, as
shown in the figure below:
Implementation of Queue
There are two ways of implementing the Queue:
Sequential allocation: The sequential allocation in a
Queue can be implemented using an array.
Linked list allocation: The linked list allocation in a Queue can be
implemented using a linked list.
What are the use cases of Queue?
Here, we will see the real-world scenarios where we can use the Queue
data structure. The Queue data structure is mainly used where
there is a shared resource that has to serve multiple requests but can serve a
single request at a time. In such cases, we need to use the Queue data
structure for queuing up the requests. The request that arrives first in the queue
will be served first. The following are the real-world scenarios in which the
Queue concept is used:
o Suppose we have a printer shared among multiple machines in a
network, and any machine or PC in the network can send a print request
to the printer. However, the printer can serve a single request at a time, i.e., a
printer can print a single document at a time. When any print request
comes from the network, and if the printer is busy, the printer's program will
put the print request in a queue.
o If requests are available in the Queue, the printer takes a
request from the front of the Queue, and serves it.
o The processor in a PC is also used as a shared resource. There are
numerous requests that the processor should execute, but the processor
can serve a single request or execute a single process at a time. Consequently,
the processes are kept in a Queue for execution.
Types of Queue
There are four kinds of Queues:
Linear Queue
In a Linear Queue, the insertion takes place from one end while the deletion takes place from
the other end. The end at which the insertion takes place is known as the rear end, and the
end at which the deletion takes place is known as the front end. It strictly follows the FIFO
rule. The linear Queue can be represented as shown in the figure below:

The above figure shows that the elements are inserted from the rear end, and
if we insert more elements in a Queue, then the rear
value gets incremented on every insertion. If we want to show a
deletion, then it can be represented as:

In the above figure, we can see that the front pointer points to the next
element, and the element which was previously pointed to by the front pointer was
deleted.
The major drawback of using a linear Queue is that insertion is done
only from the rear end. If the first three elements are
deleted from the Queue, we cannot insert more elements even though the
space is available in a Linear Queue. In this
case, the linear Queue shows the overflow condition as the rear is pointing to the
last element of the Queue.
Circular Queue
In a Circular Queue, all the nodes are represented as circular. It is similar to the linear
Queue except that the last element of the queue is connected to the first
element. It is also known as a Ring Buffer, as all the ends are connected to
another end. The circular queue can be represented as:

The drawback that occurs in a linear queue is overcome by using the
circular queue. If empty space is available in a circular
queue, the new element can be added in the empty space by simply incrementing the
value of rear.
Priority Queue
A priority queue is another special type of Queue data structure in which
every element has some priority associated with it. Based on the priority of the
element, the elements are arranged in a priority queue. If the
elements occur with the same priority, then they are served according to the FIFO
rule.
In a priority Queue, the insertion takes place based on the arrival while the
deletion takes place based on the priority. The priority Queue can be shown as:
The above figure shows that the highest-priority element comes first and
the elements of the same priority are arranged based on FIFO structure.
Deque
Both the Linear Queue and the Deque are different, as the linear queue follows the
FIFO rule whereas the deque does not follow the FIFO rule. In a Deque, the
insertion and deletion can take place from both ends.

Array representation of Queue


We can easily represent a queue by using linear arrays. There
are two variables, i.e. front and rear, that are implemented in the case of every
queue. The front and rear variables point to the positions from where insertions and
deletions are performed in a queue. Initially, the value of front and rear is -1,
which represents an empty queue. The array representation of a queue containing 5
elements along with the respective values of front and rear is shown in the
following figure.

Queue
The above figure shows the queue of characters forming the English word "HELLO".
Since no deletion has been performed in the queue till now, the value of front
remains -1. However, the value of rear increases by one every
time an insertion is performed in the queue. After inserting an element into
the queue shown in the above figure, the queue will look something like the following. The
value of rear will become 5 while the value of front remains the same.

Queue after inserting an element


After deleting an element, the value of front will increase from -1 to 0. However, the
queue will look something like the following.
Queue after deleting an element
Algorithm to insert any element in a queue
Check if the queue is already full by comparing rear with max - 1. If so,
then return an overflow error.
If the item is to be inserted as the first element in the
list, in that case set the value of front and rear to 0 and insert
the element at the rear end.
Otherwise, keep increasing the value of rear and insert each element
one by one, having rear as the index.
Algorithm
Step 1: IF REAR = MAX - 1
Write OVERFLOW
Go to step 4
[END OF IF]
Step 2: IF FRONT = -1 and REAR = -1
SET FRONT = REAR = 0
ELSE
SET REAR = REAR + 1
[END OF IF]
Step 3: Set QUEUE[REAR] = NUM
Step 4: EXIT
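A minimal C sketch of this insertion algorithm (the array size MAX and the variable names
are illustrative assumptions):

#include <stdio.h>

#define MAX 100

int queue[MAX];
int front = -1, rear = -1;

void insert(int num)
{
    if (rear == MAX - 1)                /* Step 1: overflow check */
    {
        printf("OVERFLOW\n");
        return;
    }
    if (front == -1 && rear == -1)      /* Step 2: first element */
        front = rear = 0;
    else
        rear = rear + 1;
    queue[rear] = num;                  /* Step 3: place the element */
}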

Linked List implementation of Queue


Because of the drawbacks discussed in the previous part of this tutorial,
the array implementation cannot be used for the large-scale applications where queues
are implemented. One of the alternatives to the array implementation is the linked list
implementation of a queue.
The storage requirement of the linked representation of a queue with n elements is
O(n), while the time requirement for operations is O(1).
In a linked queue, every node of the queue consists of two parts, i.e. the
data part and the link part. Each element of the queue points to its
immediate next element in the memory.
In the linked queue, there are two pointers maintained in the memory, i.e. the front
pointer and the rear pointer. The front pointer contains the address of the starting
element of the queue while the rear pointer contains the address of the last
element of the queue.
Insertions and deletions are performed at the rear and front end respectively. If
front and rear both are NULL, it indicates that the queue is empty.
The linked representation of a queue is shown in the following figure.

Operation on Linked Queue


There are two basic operations which can be implemented on the linked queues. The
operations are Insertion and Deletion.
Insert operation
The insert operation appends the queue by adding an element to the end of the
queue. The new element will be the last element of the queue.
Firstly, allocate the memory for the new node ptr by using the
following statement.
ptr = (struct node *) malloc (sizeof(struct node));
There can be two scenarios of inserting this new node ptr into the linked
queue.
In the first scenario, we insert an element into an empty queue. In this
case, the condition front == NULL becomes true. Now, the new
element will be added as the only element of the queue, and the next
pointer of front and rear pointer both will point to NULL.
ptr -> data = item;
if(front == NULL)
{
front = ptr;
rear = ptr;
front -> next = NULL;
rear -> next = NULL;
}
In the second case, the queue contains more than one element. The condition
front == NULL becomes false. In this scenario, we need to update the rear
pointer so that the next pointer of rear will point to the new
node ptr. Since this is a linked queue, we also need to make the
rear pointer point to the newly added node ptr. We also need to make the
next pointer of rear point to NULL.
rear -> next = ptr;
rear = ptr;
rear->next = NULL;
In this way, the element is inserted into the queue. The algorithm and the C
implementation are given as follows.
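The full listing is not reproduced in this extract; a consolidated sketch combining the
fragments above might look like the following (it assumes the usual struct node with an
int data member and a next pointer, plus global front and rear pointers):

#include <stdio.h>
#include <stdlib.h>

struct node { int data; struct node *next; };
struct node *front = NULL, *rear = NULL;

void insert(int item)
{
    struct node *ptr = (struct node *) malloc(sizeof(struct node));
    if (ptr == NULL)
    {
        printf("OVERFLOW\n");      /* no heap space left for a new node */
        return;
    }
    ptr->data = item;
    ptr->next = NULL;
    if (front == NULL)             /* case 1: the queue is empty */
    {
        front = ptr;
        rear = ptr;
    }
    else                           /* case 2: the queue already has elements */
    {
        rear->next = ptr;
        rear = ptr;
    }
}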
Deletion
The deletion operation removes the element that was inserted first among all the
queue elements. Firstly, we need to check whether the list is empty
or not. The condition front == NULL becomes true if the list is empty;
in this case, we simply write underflow on the console and exit.
Otherwise, we will delete the element that is pointed to by the pointer front. For this purpose,
copy the node pointed to by the front pointer into the pointer ptr. Now, move the
front pointer to point to its next node and free the node pointed to by ptr. This is
done by using the following statements.
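The statements themselves are not reproduced here; a matching sketch of the deletion, under
the same assumptions as the insertion sketch above, could be:

void delete(void)
{
    if (front == NULL)             /* underflow: the queue is empty */
    {
        printf("UNDERFLOW\n");
        return;
    }
    struct node *ptr = front;      /* copy the node pointed to by front */
    front = front->next;           /* move front to its next node */
    free(ptr);                     /* free the removed node */
}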
Queue

Objective
This chapter would make you understand the following concepts:
 Understand the concept of Circular Queue
 Operation of Circular Queue
 Application of Circular Queue
 Implementation of Circular Queue

Circular Queue
There was one limitation in the array implementation of the Queue. If the rear reaches
the end position of the Queue, then there might be a possibility that some vacant
spaces are left at the beginning which cannot be utilized. So, to overcome such limitations,
the concept of the circular queue was introduced.

As we can see in the above image, the rear is at the last position of the Queue and
the front is pointing somewhere other than the 0th position. In the above array, there
are only two elements and the other three positions are empty. The rear is at the last
position of the Queue; if we try to insert an element, then
it will show that there are no empty spaces in the Queue. There is one solution
to avoid such wastage of memory space: shift both
the elements to the left and adjust the front and rear ends accordingly. It is
not a practically good approach, because shifting all the
elements will consume a lot of time. The efficient way to
avoid the wastage of the memory is to use the circular queue data structure.
What is a Circular Queue?
A circular queue is like a linear queue as it is also based on the FIFO (First In
First Out) principle, except that the last position is connected to the first
position in a circular queue, which forms a circle. It is also known as a Ring Buffer.
Operations on Circular Queue
The following are the operations that can be performed on a circular queue:
Front: It is used to get the front element from the Queue.
Rear: It is used to get the rear element from the Queue.
enQueue(value): This function is used to insert the new value in the Queue.
The new element is always inserted from the rear end.
deQueue(): This function deletes an element from the Queue. The deletion in a
Queue always takes place from the front end.

Uses of Circular Queue


The circular Queue can be used in the following scenarios:
Memory management: The circular queue provides memory management. As we
have already seen, in a linear queue the memory is not managed efficiently. But in
the case of a circular queue, the memory is managed
efficiently by placing the elements in a location which is unused.
CPU Scheduling: The operating system also uses the circular queue to
insert the processes and then execute them.
Traffic system: In a computer-controlled traffic system, the traffic signal is one of the
best examples of the circular queue. Each light of the traffic signal gets ON one by one
after every interval of time, e.g. the red light gets ON for a moment, then the yellow light,
and then the green light. After the green light, the red light gets ON.

Enqueue operation
The steps of enqueue operation are given below:

First, we will check whether the Queue is full or not.


Initially the front and rear are set to -1. When we insert the first element in a Queue,
front and rear both are set to 0.
When we insert a new element, the rear gets incremented, i.e., rear=rear+1.
Scenarios for inserting an element
There are two scenarios in which queue is not full:
If rear != max - 1, then rear will be incremented (modulo maxsize) and the new value
will be inserted at the rear end of the queue.
If front != 0 and rear = max - 1, it means that the queue is not full; then set the value of
rear to 0 and insert the new element there.
There are two cases in which the element cannot be inserted:
When front == 0 && rear == max - 1, which means that front is at the first position of
the Queue and rear is at the last position of the Queue.
When front == rear + 1.
Algorithm to insert an element in a circular queue
Step 1: IF (REAR+1)%MAX = FRONT
Write " OVERFLOW "
Goto step 4
[END OF IF]

Step 2: IF FRONT = -1 and REAR = -1


SET FRONT = REAR = 0
ELSE IF REAR = MAX - 1 and FRONT ! = 0
SET REAR = 0
ELSE
SET REAR = (REAR + 1) % MAX
[END OF IF]

Step 3: SET QUEUE[REAR] = VAL

Step 4: EXIT

Dequeue Operation
The steps of the dequeue operation are given below:
First, we check whether the Queue is empty. If the queue is empty,
we cannot perform the dequeue operation.
When the element is deleted, the value of front gets incremented
by 1.
If there is only a single element left which is to be deleted, then
the front and rear are reset to -1.

Algorithm to delete an element from the circular queue

Step 1: IF FRONT = -1
Write " UNDERFLOW "
Goto Step 4
[END of IF]
Step 2: SET VAL = QUEUE[FRONT]
Step 3: IF FRONT = REAR
SET FRONT = REAR = -1
ELSE
IF FRONT = MAX -1
SET FRONT = 0
ELSE
SET FRONT = FRONT + 1
[END of IF]
[END OF IF]
Step 4: EXIT
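A compact C sketch equivalent to these two algorithms (MAX, the array name and the sentinel
return value are illustrative assumptions):

#include <stdio.h>

#define MAX 5

int cqueue[MAX];
int front = -1, rear = -1;

void enqueue(int val)
{
    if ((rear + 1) % MAX == front)       /* overflow */
    {
        printf("OVERFLOW\n");
        return;
    }
    if (front == -1 && rear == -1)       /* first element */
        front = rear = 0;
    else
        rear = (rear + 1) % MAX;         /* wrap around at the end */
    cqueue[rear] = val;
}

int dequeue(void)
{
    if (front == -1)                     /* underflow */
    {
        printf("UNDERFLOW\n");
        return -1;                       /* sentinel value for this sketch */
    }
    int val = cqueue[front];
    if (front == rear)                   /* last element removed */
        front = rear = -1;
    else
        front = (front + 1) % MAX;       /* wrap around at the end */
    return val;
}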

Let's understand the enqueue and dequeue operations through the diagrammatic
representation.
Deque
Deque stands for Double Ended Queue. In a queue, the insertion takes place
from one end while the deletion takes place from the other end. The end at which the
insertion takes place is known as the rear end, while the end at which the deletion takes
place is known as the front end.

Deque is a linear data structure in which the insertion and deletion operations
are performed from both ends. We can say that deque is a generalized version of
the queue.
Let's take a look at some properties of deque.
Deque can be used both as a stack and a queue, as it allows the insertion and
deletion operations on both ends.
In deque, the insertion and deletion operation can be performed from one side. The
stack follows the LIFO rule in which both the insertion and deletion can be
performed only from one end; therefore, we conclude that deque can be
considered as a stack.
In deque, the insertion can be performed at one end, and the deletion can be
done at the other end. The queue follows the FIFO rule in which the element
is inserted at one end and deleted from the other end. Therefore, we conclude that the
deque can also be considered as a queue.

There are two types of deques, the input-restricted queue and the output-restricted queue.
Input-restricted queue: The input-restricted queue means that some restrictions are
applied to the insertion. In an input-restricted queue, the insertion is applied to one end
while the deletion is applied from both ends.

Output-restricted queue: The output-restricted queue means that some restrictions are
applied to the deletion operation. In an output-restricted queue, the deletion can be applied
only from one end, while the insertion is possible from both ends.

Operations on Deque
The following are the operations applied on deque:
Insert at front
Delete from front
Insert at rear
Delete from rear
Besides insertion and deletion, we can also perform the peek operation in
deque. Through the peek operation, we can get the front and the rear element of the
deque.
We can perform two more operations on deque:
isFull(): This function returns a true value if the deque is full; otherwise, it returns a
false value.
isEmpty(): This function returns a true value if the deque is empty; otherwise it
returns a false value.
Memory Representation
The deque can be implemented using two data structures, i.e., a circular array
and a doubly linked list. To implement the deque using a circular array, we
first should know what a circular array is.

What is a circular array?


An array is said to be circular if the last element of the array is
connected to the first element of the array. Suppose the size of the array
is 4, and the array is full but the first location of the array is empty. If
we want to insert an array element, it will not show any overflow
condition as the last element is connected to the first element. The value
which we want to insert will be added in the first location of the array.

Applications of Deque
 The deque can be used as a stack and a queue; therefore, it can perform
both redo and undo operations.

 It can be used as a palindrome checker, which means that if we
read the string from both ends, then the string would be the
same.
 It can be used for multiprocessor scheduling. Suppose we have two
processors, and each processor has one process to execute. Each
processor is assigned a process or a job, and each process contains
multiple threads. Each processor maintains a deque that contains threads that are
ready to execute. The processor executes a process, and if a
process creates a child process, then that process will be
inserted at the front of the deque of the parent process. Suppose the
processor P2 has completed the execution of all of its threads; then it takes
a thread from the rear end of the processor P1 and adds it to the front end of
the processor P2. The processor P2 will take the thread from the front end;
therefore, the deletion takes place from both ends, i.e., front and rear. This
is known as the A-steal algorithm for scheduling.

Implementation of Deque using a circular array


The following are the steps to perform the operations on the Deque:
Enqueue operation
1. Initially, we consider that the deque is empty, so both front and
rear are set to -1, i.e., f = -1 and r = -1.

2. As the deque is empty, inserting an element either from the front or the
rear end would be the same thing. Suppose we have inserted
element 1; then front is equal to 0, and the rear is also
equal to 0.

3. Suppose we want to insert the next element from the rear. To insert
the element from the rear end, we first need to increment the rear, i.e.,
rear = rear + 1. Now, the rear is pointing to the second element,
and the front is pointing to the first element.
4. Suppose we are again inserting an element from the rear end. To
insert the element, we will first increment the rear, and now the rear points
to the third element.

5. If we want to insert an element from the front end, then to insert an
element from the front, we need to decrement the value of front by 1. If we
decrement the front by 1, then the front points to the -1 location, which is
not a valid location in an array. So, we set the front as (n - 1), which is
equal to 4 as n is 5. Once the front is set, we will insert the value as shown
in the figure below:

Dequeue Operation
1. Suppose the front is pointing to the last element of the
array, and we want to perform the delete operation from the front. To delete any
element from the front, we need to set front = front + 1. At present, the
value of the front is equal to 4, and if we increment the
value of front, it becomes 5, which is not a valid index.
Therefore, we conclude that if front points to the last element,
then front is set to 0 in case of a delete operation.
2. If we want to delete the element from the rear end, then we need to decrement the
rear value by 1, i.e., rear = rear - 1, as shown in the figure below:

3. If the rear is pointing to the first element, and we
want to delete the element from the rear end, then we need to set rear = n - 1,
where n is the size of the array, as shown in the figure below:

Let's create a program of deque.

The following are the six functions that we have used in the below program:

enqueue_front(): It is used to insert the element from the front end.


enqueue_rear(): It is used to insert the element from the rear end.
dequeue_front(): It is used to delete the element from the front end.
dequeue_rear(): It is used to delete the element from the rear end.
getfront(): It is used to return the front element of the deque.
getrear(): It is used to return the rear element of the deque.
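The program itself is not reproduced in this extract; a minimal sketch of how these six
functions could look over a circular array is shown below (the array size SIZE and the index
names f and r are illustrative assumptions):

#include <stdio.h>

#define SIZE 5

int deque[SIZE];
int f = -1, r = -1;                          /* front and rear indices */

void enqueue_front(int x)
{
    if ((f == 0 && r == SIZE - 1) || f == r + 1) { printf("OVERFLOW\n"); return; }
    if (f == -1)             f = r = 0;      /* first element */
    else if (f == 0)         f = SIZE - 1;   /* wrap around */
    else                     f = f - 1;
    deque[f] = x;
}

void enqueue_rear(int x)
{
    if ((f == 0 && r == SIZE - 1) || f == r + 1) { printf("OVERFLOW\n"); return; }
    if (r == -1)             f = r = 0;      /* first element */
    else if (r == SIZE - 1)  r = 0;          /* wrap around */
    else                     r = r + 1;
    deque[r] = x;
}

void dequeue_front(void)
{
    if (f == -1) { printf("UNDERFLOW\n"); return; }
    if (f == r)              f = r = -1;     /* the only element was removed */
    else if (f == SIZE - 1)  f = 0;          /* wrap around */
    else                     f = f + 1;
}

void dequeue_rear(void)
{
    if (r == -1) { printf("UNDERFLOW\n"); return; }
    if (f == r)              f = r = -1;     /* the only element was removed */
    else if (r == 0)         r = SIZE - 1;   /* wrap around */
    else                     r = r - 1;
}

int getfront(void) { return (f == -1) ? -1 : deque[f]; }
int getrear(void)  { return (r == -1) ? -1 : deque[r]; }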
Unit 3

Linked list?

1.Advantages of using a Linked list over Array


2.Applications of Linked List
3.Types of Linked List
1.Singly Linked list
2.Doubly linked list
3.Circular linked list
4.Doubly Circular linked list
8.6.Linked List
Uses of Linked List
Why use linked list over array?
What is Linked List?
A linked list is also a collection of elements, but the elements are not stored in
consecutive locations. Suppose a programmer makes a request for storing an integer
value; then a 4-byte memory block is assigned to the integer value. The
programmer makes another request for storing 3 more integer elements; then three
different memory blocks are assigned to these three elements, but the memory blocks
are available at random locations. So, how are the elements connected?
These elements are linked to each other by providing one additional piece of information
along with an element, i.e., the address of the next element. The variable that stores
the address of the next element is known as a pointer. Therefore, we conclude that the
linked list contains two parts, i.e., the first one is the data element, and the other is the
pointer. The pointer variable, which points to the next element, will occupy 4 bytes.
A linked list can also be defined as a collection of nodes in which one node is
connected to another node, and a node consists of two parts, i.e., one is the data part
and the second one is the address part, as shown in the figure below:

In the above figure, we can observe that each node contains the data and the address
of the next node. The last node of the linked list contains the NULL value in the
address part.
How can we declare the Linked list?
(A priority queue, incidentally, can be implemented in four different ways, which include
arrays, a linked list, a heap data structure and a binary search tree; the heap data
structure is the most efficient way of implementing a priority queue.)
The structure of a linked list can be defined as:
struct node
{
int data;
struct node *next;
};
In the above declaration, we have defined a structure named as a node consisting of
two variables: an integer variable (data), and the other one is the pointer (next), which
contains the address of the next node.

Advantages of using a Linked list over Array


The following are the advantages of using a linked list over an array:
Dynamic data structure:
The size of the linked list is not fixed as it can vary according to our requirements.
Insertion and Deletion:
Insertion and deletion in linked list are easier than array as the elements in an array
are stored in a consecutive location. In contrast, in the case of a linked list, the
elements are stored in a random location. The complexity for insertion and deletion
of elements from the beginning is O(1) in the linked list, while in the case of an
array, the complexity would be O(n). If we want to insert or delete the element in an
array, then we need to shift the elements for creating the space. On the other hand, in
the linked list, we do not have to shift the elements. In the linked list, we just need to
update the address of the pointer in the node.
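For instance, inserting at the beginning of a linked list only rewires two pointers, which is
why it is O(1); a minimal sketch, assuming the struct node defined above (int data plus a
next pointer):

#include <stdlib.h>

/* Insert a new value at the beginning of the list in O(1) time and
   return the new head; no element shifting is required. */
struct node *insert_at_beginning(struct node *head, int value)
{
    struct node *ptr = (struct node *) malloc(sizeof(struct node));
    ptr->data = value;
    ptr->next = head;      /* the new node points to the old first node */
    return ptr;            /* the new node becomes the head */
}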
Memory efficient
Its memory consumption is efficient as the size of the linked list can grow or shrink
according to our requirements.
Implementation
Both the stacks and queues can be implemented using a linked list.
Disadvantages of Linked list
The following are the disadvantages of linked list:
Memory usage
The node in a linked list occupies more memory than array as each node occupies
two types of variables, i.e., one is a simple variable, and another is a pointer variable
that occupies 4 bytes in the memory.
Traversal
In a linked list, the traversal is not easy. If we want to access the element in a linked
list, we cannot access the element randomly, but in the case of an array, we can
randomly access the element by index. For example, if we want to access the 3rd
node, then we need to traverse all the nodes before it. So, the time required to access
a particular node is large.
Reverse traversing
In a linked list, backtracking or reverse traversing is difficult. In a doubly linked list,
it is easier but requires more memory to store the back pointer.

Applications of Linked List


The applications of the linked list are given below:
o With the help of a linked list, polynomials can be represented, and operations on
polynomials can be performed on that representation. A polynomial is a collection of
terms where each term contains a coefficient and an exponent. The coefficient and
exponent of each term are stored in a node whose link pointer points to the next
term, so a linked list can be used to create, delete and display a polynomial (a small
sketch of such a term node is given after this list).
o A sparse matrix is used in scientific computation and numerical analysis, and a
linked list can be used to represent a sparse matrix.
o Records such as student details, employee details or product details can be
implemented using a linked list, because a node uses the structure data type, which
can hold different data types together.
o Stacks, queues, trees and various other data structures can be implemented using
a linked list.
o A graph is a collection of edges and vertices, and a graph can be represented as an
adjacency matrix or as an adjacency list. If we want to represent the graph as an
adjacency matrix, it can be implemented as an array. If we want to represent the
graph as an adjacency list, it can be implemented as a linked list.
o To implement hashing, we require hash tables. The buckets (chains) of a hash
table are commonly implemented using linked lists.
o A linked list can be used to implement dynamic memory allocation, i.e., memory
allocation done at run time (the allocator keeps the free blocks on a linked list).
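For example, the polynomial application mentioned in the first point could use a term node such as the hypothetical poly_term sketched below (the type and member names are illustrative, not a standard API):

/* One term of a polynomial, e.g. 5x^3 is stored as coefficient = 5, exponent = 3.
   The next pointer links to the following term of the polynomial. */
struct poly_term
{
    int coefficient;
    int exponent;
    struct poly_term *next;
};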
Types of Linked List
The following are the types of linked list:
Singly Linked list
Doubly Linked list
Circular Linked list
Doubly Circular Linked list
Singly Linked list
It is the most commonly used linked list in programs. When we talk about a linked
list without qualification, we usually mean a singly linked list. A singly linked list is
a data structure in which each node contains two parts, i.e., one is the data part, and
the other one is the address part, which contains the address of the next (successor)
node. The address part in a node is also known as a pointer.
Suppose we have three nodes, and the addresses of these three nodes are 100, 200
and 300 respectively. The representation of the three nodes as a linked list is shown
below.
We can see in the figure that there are three different nodes having addresses 100,
200 and 300 respectively. The first node contains the address of the next node, i.e.,
200, the second node contains the address of the last node, i.e., 300, and the third
node contains NULL in its address part as it does not point to any node. The pointer
that holds the address of the first node is known as the head pointer.
The linked list shown in the figure above is known as a singly linked list as it
contains only a single link. In this list, only forward traversal is possible; we cannot
traverse in the backward direction as each node has only one link.
Representation of the node in a singly linked list:
struct node
{
int data;
struct node *next;
};
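Since only forward traversal is possible, a visit of the whole list simply follows the next pointers until NULL is reached. A minimal sketch (hypothetical helper name print_list, assuming <stdio.h>):

#include <stdio.h>

struct node
{
    int data;
    struct node *next;
};

/* Visit every node once, starting from the head, and print its data.
   Traversal stops when the NULL pointer of the last node is reached. */
void print_list(struct node *head)
{
    struct node *ptr = head;
    while (ptr != NULL)
    {
        printf("%d ", ptr->data);
        ptr = ptr->next;      /* move forward; backward movement is impossible */
    }
    printf("\n");
}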
Doubly linked list
As the name suggests, a doubly linked list contains two pointers. We can define the
doubly linked list as a linear data structure with three parts in each node: the data
part and two address parts. In other words, a node of a doubly linked list holds one
data part, a pointer to its previous node, and a pointer to the next node.
Suppose we have three nodes, and the addresses of these nodes are 100, 200 and 300
respectively; their representation as a doubly linked list is shown in the figure.
As we can see in the figure, a node in a doubly linked list has two address parts: one
part stores the address of the next node, while the other part stores the address of the
previous node. The first node in the doubly linked list has NULL in the address part
that would otherwise hold the address of the previous node.
Representation of the node in a doubly linked list:
struct node
{
int data;
struct node *next;
struct node *prev;
};
In the above representation, we have defined a user-defined structure named node
with three members: one is data of integer type, and the other two are pointers, next
and prev, of node type. The next pointer holds the address of the next node, and the
prev pointer holds the address of the previous node. The type of both pointers, next
and prev, is struct node, as both pointers store the address of a node of struct node
type.
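A minimal sketch, assuming the structure above, that links two nodes in both directions (the variable names are hypothetical):

#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
    struct node *prev;
};

int main()
{
    struct node *first = (struct node *)malloc(sizeof(struct node));
    struct node *second = (struct node *)malloc(sizeof(struct node));
    if (first == NULL || second == NULL)
        return 1;

    first->data = 1;
    first->prev = NULL;      /* no node before the first one */
    first->next = second;    /* forward link */

    second->data = 2;
    second->prev = first;    /* backward link */
    second->next = NULL;     /* no node after the last one */

    free(second);
    free(first);
    return 0;
}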
Circular linked list
A circular linked list is a variation of a singly linked list. The only difference between
a singly linked list and a circular linked list is that in a singly linked list the last node
does not point to any node, so its link part contains NULL, whereas in a circular
linked list the last node is connected to the first node, so the link part of the last node
holds the first node's address. A circular linked list has no fixed beginning or ending
node; we can start traversing from any node and keep moving forward around the
list. The diagrammatic representation of the circular linked list is shown below.
The node structure is the same as that of a singly linked list:
struct node
{
int data;
struct node *next;
};
A circular linked list is a sequence of elements in which each node has a link to the
next node, and the last node has a link to the first node. The representation of the
circular linked list is therefore similar to the singly linked list, as shown below:
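Because the last node points back to the first, traversal of a circular list must stop when we return to the starting node rather than on NULL. A hedged sketch (hypothetical helper name print_circular):

#include <stdio.h>

struct node
{
    int data;
    struct node *next;
};

/* Print every node of a circular singly linked list exactly once.
   The loop ends when the pointer comes back around to the head. */
void print_circular(struct node *head)
{
    if (head == NULL)
        return;               /* empty list: nothing to print */
    struct node *ptr = head;
    do
    {
        printf("%d ", ptr->data);
        ptr = ptr->next;
    } while (ptr != head);    /* stop once we are back at the start */
    printf("\n");
}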
Doubly Circular linked list
The doubly circular linked list has the features of both the circular linked list and the
doubly linked list.
The figure above shows the representation of the doubly circular linked list. The
difference between a doubly linked list and a doubly circular linked list is that the
doubly circular linked list does not contain NULL in the prev field of any node: the
next pointer of the last node points to the first node, and the prev pointer of the first
node points to the last node. As a doubly circular node contains three parts, i.e., two
address parts and one data part, its representation is similar to that of the doubly
linked list:
struct node
{
int data;
struct node *next;
struct node *prev;
};
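A minimal sketch, under the assumption of the structure above, showing how the wrap-around links are set for a doubly circular list of just two nodes (variable names are hypothetical):

#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
    struct node *prev;
};

int main()
{
    struct node *first = (struct node *)malloc(sizeof(struct node));
    struct node *last = (struct node *)malloc(sizeof(struct node));
    if (first == NULL || last == NULL)
        return 1;

    first->data = 1;
    last->data = 2;

    first->next = last;      /* forward link */
    last->next = first;      /* last wraps around to first */

    first->prev = last;      /* first wraps back to last */
    last->prev = first;      /* backward link */

    free(last);
    free(first);
    return 0;
}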
Linked List
 Linked List can be defined as a collection of objects called nodes that are stored at
arbitrary (non-contiguous) locations in memory.
 A node contains two fields, i.e., the data stored at that particular address and a
pointer which contains the address of the next node in the memory.
 The last node of the list contains a pointer to NULL.
Uses of Linked List
 The list is not required to be contiguous in memory. A node can reside anywhere
in memory, and the nodes are linked together to make a list. This achieves optimized
utilization of space.
 The list size is limited only by the memory size and need not be declared in
advance.
 An empty node cannot be present in a linked list.
 We can store values of primitive types or objects in a singly linked list.
Why use linked list over array?
Until now, we have been using the array data structure to organize a group of
elements that are to be stored individually in memory. However, an array has several
advantages and limitations which must be known in order to decide which data
structure will be used throughout the program.
An array has the following limitations:
 The size of the array must be known in advance, before using it in the program.
 Increasing the size of the array is a time-consuming process. It is almost
impossible to expand the size of an array at run time.
 All the elements in the array need to be stored contiguously in memory. Inserting
an element into the array requires shifting all of its successors.
Linked list is the data structure which can overcome all the limitations of an array.
Using linked list is useful because:
 It allocates memory dynamically. All the nodes of a linked list are stored
non-contiguously in memory and linked together with the help of pointers.
 Sizing is no longer an issue since we do not need to define the size at the time of
declaration. The list grows as per the program's demand and is limited only by the
available memory space.
Singly linked list or One way chain
A singly linked list can be defined as a collection of an ordered set of elements. The
number of elements may vary according to the needs of the program. A node in a
singly linked list consists of two parts: the data part and the link part. The data part
of the node stores the actual information represented by the node, while the link part
of the node stores the address of its immediate successor.
A one way chain, or singly linked list, can be traversed in only one direction. In other
words, each node contains only a next pointer, therefore we cannot traverse the list
in the reverse direction.
Consider an example where the marks obtained by a student in three subjects are
stored in a linked list, as shown in the figure.
In the figure, the arrows represent the links. The data part of each node contains the
marks obtained by the student in a different subject. The last node in the list is
identified by the NULL pointer present in the address part of the last node. We can
have as many elements as we require in the data part of the list.
Complexity
In a singly linked list, insertion or deletion at the beginning takes O(1) time;
insertion or deletion at the end or after a given node takes O(n) time in the worst
case, because the position must first be reached by traversal; searching also takes
O(n) time. The space used is O(n) for n nodes.
Operations on Singly Linked List
There are various operations which can be performed on a singly linked list. A list of
all such operations is given below.
Node Creation
struct node
{
int data;
struct node *next;
};
struct node *head, *ptr;
ptr = (struct node *)malloc(sizeof(struct node));
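Note that malloc must be given the size of the whole node, sizeof(struct node), not the size of a pointer. Continuing the fragment above (and assuming <stdlib.h> is included), a small hedged sketch of initializing the newly created node with illustrative values:

ptr = (struct node *)malloc(sizeof(struct node));
if (ptr != NULL)
{
    ptr->data = 10;      /* fill the data part of the new node */
    ptr->next = NULL;    /* no successor yet */
    head = ptr;          /* the new node becomes the head of the list */
}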
Insertion
The insertion into a singly linked list can be performed at different positions. Based
on the position of the new node being inserted, the insertion is categorized into the
following categories.

SN Operation Description

1 Insertion at beginning It involves inserting an element at the front of the list. We just need to make a few link adjustments so that the new node becomes the head of the list.

2 Insertion at end of the list It involves insertion at the end of the linked list. The new node can be inserted as the only node in the list, or it can be inserted as the last one. Different logic is implemented in each scenario.

3 Insertion after specified node It involves insertion after a specified node of the linked list. We need to skip the desired number of nodes in order to reach the node after which the new node will be inserted.

A hedged sketch of insertion at the end and insertion after a specified node is given after the table.
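The sketch below assumes the node structure declared above; the helper names are hypothetical:

#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

/* Insert a new node at the end of the list and return the (possibly new) head. */
struct node *insert_at_end(struct node *head, int value)
{
    struct node *new_node = (struct node *)malloc(sizeof(struct node));
    if (new_node == NULL)
        return head;                 /* allocation failed; list unchanged */
    new_node->data = value;
    new_node->next = NULL;

    if (head == NULL)                /* empty list: new node is the only node */
        return new_node;

    struct node *ptr = head;
    while (ptr->next != NULL)        /* walk to the current last node */
        ptr = ptr->next;
    ptr->next = new_node;            /* link the old last node to the new one */
    return head;
}

/* Insert a new node immediately after the given node. */
void insert_after(struct node *position, int value)
{
    if (position == NULL)
        return;
    struct node *new_node = (struct node *)malloc(sizeof(struct node));
    if (new_node == NULL)
        return;
    new_node->data = value;
    new_node->next = position->next; /* take over the successor of position */
    position->next = new_node;
}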

Deletion and Traversing
The deletion of a node from a singly linked list can be performed at different
positions. Based on the position of the node being deleted, the operation is
categorized into the following categories.

SN Operation Description

1 Deletion at beginning It involves deletion of a node from the beginning of the list. This is the simplest operation among all; it just needs a few adjustments to the node pointers.

2 Deletion at the end of the list It involves deleting the last node of the list. The list may contain a single node or many nodes; different logic is implemented for the different scenarios.

3 Deletion after specified node It involves deleting the node after a specified node in the list. We need to skip the desired number of nodes to reach the node after which the node will be deleted. This requires traversing through the list.

4 Traversing In traversing, we simply visit each node of the list at least once in order to perform some specific operation on it, for example, printing the data part of each node present in the list.

5 Searching In searching, we match each element of the list with the given element. If the element is found at any location, then the location of that element is returned; otherwise NULL is returned.

A hedged sketch of deletion at the beginning and of searching is given after the table.
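The sketch below again assumes the node structure shown above; the helper names are hypothetical:

#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

/* Remove the first node and return the new head (NULL if the list is empty). */
struct node *delete_at_beginning(struct node *head)
{
    if (head == NULL)
        return NULL;               /* nothing to delete */
    struct node *old_head = head;
    head = head->next;             /* second node becomes the new head */
    free(old_head);
    return head;
}

/* Return a pointer to the first node containing value, or NULL if not found. */
struct node *search(struct node *head, int value)
{
    for (struct node *ptr = head; ptr != NULL; ptr = ptr->next)
        if (ptr->data == value)
            return ptr;
    return NULL;
}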
Doubly linked list
A doubly linked list is a more complex type of linked list in which a node contains a
pointer to the previous as well as the next node in the sequence. Therefore, in a
doubly linked list, a node consists of three parts: the node data, a pointer to the next
node in the sequence (next pointer), and a pointer to the previous node (previous
pointer). A sample node in a doubly linked list is shown in the figure.
A doubly linked list containing three nodes having numbers from 1 to 3 in their data
part is shown in the following image.
In C, the structure of a node in a doubly linked list can be given as:
struct node
{
struct node *prev;
int data;
struct node *next;
};
The prev part of the first node and the next part of the last node will always contain
NULL, indicating the end of the list in each direction.
Memory Representation of a doubly linked list
The memory representation of a doubly linked list is shown in the following image.
Generally, a doubly linked list consumes more space per node, and therefore its
basic operations such as insertion and deletion are more expensive. However, we
can easily manipulate the elements of the list since the list maintains pointers in
both directions (forward and backward).
In the following image, the first element of the list, 13, is stored at address 1. The
head pointer points to the starting address 1. Since this is the first element added to
the list, the prev part of this node contains NULL. The next node of the list resides at
address 4, therefore the first node contains 4 in its next pointer.
We can traverse the list in this way until we find a node containing NULL (or -1) in
its next part.
Operations on doubly linked list
Node Creation
struct node
{
struct node *prev;
int data;
struct node *next;
};
struct node *head;
All the remaining operations regarding doubly linked list are described in the
following table.
SN Operation Description

1 Insertion at beginning Adding a node to the linked list at the beginning.

2 Insertion at end Adding a node to the linked list at the end.

3 Insertion after specified node Adding a node to the linked list after the specified node.

4 Deletion at beginning Removing a node from the beginning of the list.

5 Deletion at the end Removing a node from the end of the list.

6 Deletion of the node having given data Removing the node which is present just after the node containing the given data.

7 Searching Comparing each node's data with the item to be searched; return the location of the item in the list if the item is found, else return NULL.

8 Traversing Visiting each node of the list at least once in order to perform some specific operation like searching, sorting, display, etc.

A hedged sketch of insertion at the beginning of a doubly linked list is given after the table.
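The sketch below assumes the node structure shown in the Node Creation step; the helper name is hypothetical:

#include <stdlib.h>

struct node
{
    struct node *prev;
    int data;
    struct node *next;
};

/* Insert a new node before the current head and return the new head. */
struct node *insert_at_beginning(struct node *head, int value)
{
    struct node *new_node = (struct node *)malloc(sizeof(struct node));
    if (new_node == NULL)
        return head;
    new_node->data = value;
    new_node->prev = NULL;       /* new node becomes the first node */
    new_node->next = head;
    if (head != NULL)
        head->prev = new_node;   /* old head now points back to the new node */
    return new_node;
}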
Circular Singly Linked List
In a circular singly linked list, the last node of the list contains a pointer to the first
node of the list. We can have a circular singly linked list as well as a circular doubly
linked list.
We traverse a circular singly linked list until we reach the same node where we
started. The circular singly linked list has no beginning and no ending. There is no
NULL value present in the next part of any of the nodes.
The following image shows a circular singly linked list.
Circular linked lists are widely used for task maintenance in operating systems.
There are many other examples in computer science; for instance, in browser
surfing, a record of the pages visited by the user can be maintained as a circular
linked list and accessed again on clicking the back button.
Memory Representation of circular linked list:
The following image shows the memory representation of a circular linked list
containing the marks of a student in 4 subjects. The image gives a glimpse of how
the circular list is stored in memory. The start or head of the list points to the element
at index 1, which contains 13 in its data part and 4 in its next part, which means it is
linked with the node stored at the 4th index of the list.
However, since we are considering a circular linked list, the last node of the list
contains the address of the first node of the list.
We can also have more than one linked list in memory, with different start pointers
pointing to the different start nodes of the lists. The last node is identified by its next
part, which contains the address of the start node of the list. We must be able to
identify the last node of any linked list so that we can find the number of iterations
which need to be performed while traversing the list; a counting sketch is given
below.
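To illustrate how the last node bounds the traversal, here is a minimal sketch that counts the nodes of a circular singly linked list (the helper name is hypothetical):

struct node
{
    int data;
    struct node *next;
};

/* Count the nodes of a circular singly linked list.
   The last node is recognised because its next pointer holds the head's address. */
int count_nodes(struct node *head)
{
    if (head == NULL)
        return 0;
    int count = 1;
    for (struct node *ptr = head; ptr->next != head; ptr = ptr->next)
        count++;
    return count;
}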
Operations on Circular Singly linked list:
Insertion

SN Operation Description

1 Insertion at beginning Adding a node into the circular singly linked list at the beginning.

2 Insertion at the end Adding a node into the circular singly linked list at the end.

Deletion & Traversing

SN Operation Description

1 Deletion at beginning Removing the node from the circular singly linked list at the beginning.

2 Deletion at the end Removing the node from the circular singly linked list at the end.

3 Searching Compare each element of the node with the given item and return the location at which the item is present in the list; otherwise return NULL.

4 Traversing Visiting each element of the list at least once in order to perform some specific operation.

A hedged sketch of insertion at the beginning of a circular singly linked list is given after the table.
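The sketch below assumes the singly node structure from earlier; the helper name is hypothetical. Note that the last node's next pointer must also be updated so that the list remains circular:

#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

/* Insert a node at the beginning of a circular singly linked list
   and return the new head. Works for an empty list as well. */
struct node *insert_at_beginning_circular(struct node *head, int value)
{
    struct node *new_node = (struct node *)malloc(sizeof(struct node));
    if (new_node == NULL)
        return head;
    new_node->data = value;

    if (head == NULL)
    {
        new_node->next = new_node;   /* a single node points to itself */
        return new_node;
    }

    struct node *last = head;
    while (last->next != head)       /* find the last node */
        last = last->next;

    new_node->next = head;           /* new node becomes the first node */
    last->next = new_node;           /* keep the list circular */
    return new_node;
}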
Circular Doubly Linked List
A circular doubly linked list is a more complex type of data structure in which a
node contains pointers to its previous node as well as the next node. A circular
doubly linked list does not contain NULL in any of the nodes. The last node of the
list contains the address of the first node of the list, and the first node of the list also
contains the address of the last node in its previous pointer.
A circular doubly linked list is shown in the following figure.
Because a circular doubly linked list contains three parts in its structure, it demands
more space per node and more expensive basic operations. However, a circular
doubly linked list provides easy manipulation of the pointers, and searching can be
roughly twice as efficient since the list can be scanned from either direction.
Memory Management of Circular Doubly linked list
The following figure shows the manner in which memory is allocated for a circular
doubly linked list. The variable head contains the address of the first element of the
list, i.e., 1; hence the starting node of the list, containing data A, is stored at address
1. Since every node of the list must have three parts, the starting node of the list
contains the address of the last node, i.e., 8, and of the next node, i.e., 4. The last
node of the list, which is stored at address 8 and contains data 6, holds the address of
the first node of the list, i.e., 1, as shown in the image. In a circular doubly linked
list, the last node is identified by the address of the first node stored in the next part
of the last node; the node which contains the address of the first node is actually the
last node of the list.
Operations on circular doubly linked list:
There are various operations which can be performed on a circular doubly linked
list. The node structure of a circular doubly linked list is similar to that of a doubly
linked list. However, the operations on a circular doubly linked list are described in
the following table.
SN Operation Description
1 Insertion at beginning Adding a node in circular doubly linked list at the beginning.
2 Insertion at end Adding a node in circular doubly linked list at the end.
3 Deletion at beginning Removing a node in circular doubly linked list from beginning.
4 Deletion at end Removing a node in circular doubly linked list at the end.
Traversing and searching in a circular doubly linked list are similar to those in a
circular singly linked list. A minimal sketch of insertion at the beginning of a
circular doubly linked list is given below.
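The sketch assumes the three-part node structure shown earlier; the helper name is hypothetical:

#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
    struct node *prev;
};

/* Insert a node at the beginning of a circular doubly linked list
   and return the new head. Handles the empty-list case as well. */
struct node *insert_at_beginning_cdll(struct node *head, int value)
{
    struct node *new_node = (struct node *)malloc(sizeof(struct node));
    if (new_node == NULL)
        return head;
    new_node->data = value;

    if (head == NULL)
    {
        new_node->next = new_node;   /* a single node wraps to itself */
        new_node->prev = new_node;
        return new_node;
    }

    struct node *last = head->prev;  /* prev of the head is always the last node */

    new_node->next = head;
    new_node->prev = last;
    head->prev = new_node;
    last->next = new_node;
    return new_node;                 /* new node is the new first node */
}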