
Diploma CS II Year III Sem Algorithm Unit 1


Unit-1

Fundamentals
Algorithm

The word Algorithm means "a set of finite rules or instructions to be followed in calculations or
other problem-solving operations".

Or

According to its formal definition, an algorithm is a finite set of instructions carried out in a
specific order to perform a particular task.

Therefore Algorithm refers to a sequence of finite steps to solve a particular problem.

The word algorithm comes from the name of a Persian author, Abu Ja'far Mohammed ibn Musa
al-Khowarizmi (c. 825 AD), who wrote a textbook on mathematics. This word has taken on a
special significance in computer science, where "algorithm" has come to refer to a method that
can be used by a computer for the solution of a problem.

An algorithm is used to provide a solution to a particular problem in the form of well-defined
steps. Whenever we use a computer to solve a particular problem, the steps which lead to the
solution should be properly communicated to the computer. While executing an algorithm on a
computer, several operations such as additions and subtractions are combined to perform more
complex mathematical operations. Algorithms can be expressed using natural language,
flowcharts, etc.

Definition:

An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In
addition, all algorithms must satisfy the following criteria:

1. Input: - Zero or more inputs are externally supplied to the algorithm.

2. Output: - At least one output is produced by an algorithm.

3. Definiteness: - Each step in the algorithm must be clear and unambiguous.

4. Finiteness: - The algorithm must terminate after a finite number of steps in all cases.
5. Effectiveness: - Every instruction must be sufficiently basic that it can, in principle, be
carried out exactly, and its purpose must be clear to us.

To achieve definiteness, we need to write algorithms in a programming language; such languages
are designed so that each legitimate sentence has a unique meaning. A program is the expression
of an algorithm in a programming language.

Page 1 of 33
Algorithms that are definite and effective are also called computational procedures. One
important example of a computational procedure is the operating system of a digital computer.
This procedure is designed to control the execution of jobs in such a way that when no jobs are
available, it does not terminate but continues in a waiting state until a new job is entered.

Figure: Algorithm
Advantages of Algorithm:

1. Easy to understand: - Since it is a stepwise representation of a solution to a given
problem, it is easy to understand.
2. Language independent: - It is not dependent on any programming language, so it can
easily be understood by anyone.
3. Debug/Error finding: - Since every step is independent and follows a clear flow, it is easy
to spot and fix errors.
4. Sub-problems: - Since it is written as a flow of steps, the programmer can divide the
problem into sub-tasks, which makes them easier to code.

Disadvantages of Algorithms:
1. Creating efficient algorithms is time-consuming and requires good logical skills.
2. It is difficult to show branching and looping in algorithms.

Example of algorithm:

Algorithm of adding two numbers

Step1: Start the program

Step2: Read the values of a & b

Step3: Compute the sum of the entered numbers 'a' and 'b': c = a + b

Step4: Print the value of ‘c’

Step5: Stop the program
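The five steps above can be sketched directly in Python. As a minimal illustration, the input values are hard-coded here instead of being read from the user:

```python
# Step 1: Start (implicit when the script begins running)
a = 5          # Step 2: read the values of a and b (hard-coded for illustration)
b = 7
c = a + b      # Step 3: compute the sum of a and b
print(c)       # Step 4: print the value of c
# Step 5: Stop (the script ends)
```

In a real program, Step 2 would typically use `input()` to read the values from the user.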

What is the need for algorithms?

1. Algorithms are necessary for solving complex problems efficiently and effectively.

2. They help to automate processes and make them more reliable, faster, and easier to perform.

3. Algorithms also enable computers to perform tasks that would be difficult or impossible for
humans to do manually.

4. They are used in various fields such as mathematics, computer science, engineering, finance,
and many others to optimize processes, analyze data, make predictions, and provide solutions to
problems.

What are programming models?

A programming model, in the context of computer science and software development, refers to a
high-level abstraction or a set of guidelines and principles that programmers follow when
designing and implementing software applications. It defines how different components of a
software system interact with each other, how data is structured and manipulated, and how tasks
or operations are executed. Programming models provide a structured approach to writing code
and solving problems.

Programming models can encompass various aspects of software development, including:

 Data Representation: How data is structured, stored, and manipulated within the
program. This includes data types, data structures (e.g., arrays, lists, trees), and data
organization.
 Control Flow: How the program controls the sequence of execution, including decisions
(if statements), loops (for, while), and function/method calls.

 Concurrency and Parallelism: How the program handles multiple tasks or processes
running simultaneously, whether on a multi-core processor, across a network, or in a
distributed system.
 Communication: How different parts of the program or different programs communicate
with each other, exchange data, and synchronize their activities.
 Error Handling: How errors and exceptions are detected, reported, and handled within
the program.
 Abstraction: The level of abstraction and encapsulation provided by the programming
model, allowing developers to hide complexity and manage system details.
 Paradigms: The programming paradigm or style encouraged by the model, such as
imperative, declarative, functional, or object-oriented programming.
 Architectural Patterns: Guidance on how to structure software systems, including
architectural patterns like MVC (Model-View-Controller), MVVM (Model-View-
ViewModel), and others.
 Event Handling: How the program reacts to and processes events, which can include
user interactions, sensor inputs, or system notifications.
 Resource Management: How resources like memory, files, and network connections are
allocated, used, and released.
 Scalability: How the program can grow to handle increased workloads and larger
datasets.
 Security: Principles for ensuring the security of data and code within the program.

Different programming models are suited to different types of applications and problem
domains. For example, a programming model suitable for web development may differ
significantly from one used in scientific computing or embedded systems programming. The
choice of a programming model depends on factors such as the nature of the problem being
solved, performance requirements, available technologies, and developer preferences.

In practice, many programming languages and frameworks are designed to support multiple
programming models or paradigms, allowing developers to choose the most appropriate model
for their specific needs.

Types of Programming Models:

 Imperative Programming Model: This is the traditional programming model where you
write a sequence of instructions that specify how the program should perform a task. In
this paradigm, you instruct the computer on how to perform a task by specifying a series
of imperative commands or statements that describe the steps to be executed. These
statements can include assignments, loops, conditionals, and function or method calls. It
is characterized by statements that change a program's state, and examples include
languages like C, C++, and Java.

 Object-Oriented Programming (OOP) Model: The Object-Oriented Programming
(OOP) model is a programming paradigm that revolves around the concept of objects and
classes. It is designed to structure code in a way that models real-world entities and their
interactions. In OOP, software is organized into reusable objects, each representing a
specific instance of a class, which serves as a blueprint defining the object's properties
(attributes) and behaviors (methods). Key concepts and characteristics include classes, objects,
encapsulation, inheritance, polymorphism, and abstraction. Java, Python, and C++ are popular
OOP languages.
 Functional Programming Model: The Functional Programming Model is a
programming paradigm that treats computation as the evaluation of mathematical
functions and avoids changing state and mutable data. In this model, functions are treated
as first-class citizens, meaning they can be assigned to variables, passed as arguments to
other functions, and returned as values from functions. Functional programming is based
on a set of principles and concepts that encourage writing code in a declarative and
stateless manner. Functional programming languages include Haskell, Lisp (Common
Lisp and Scheme), Erlang, Clojure, and functional aspects of languages like JavaScript
and Python.
 Event-Driven Programming Model: The Event-Driven Programming Model is a
programming paradigm where the flow of a program's execution is determined by events
or signals rather than following a sequential top-down control flow. In this model,
software components or modules respond to various events or messages as they occur,
allowing for asynchronous and non-blocking behavior. Event-driven programming is
commonly used in applications that involve user interfaces, real-time processing, and
handling multiple concurrent events. Event-driven programming is well-suited for
applications that require responsiveness to user actions, real-time processing of data, and
the ability to handle multiple simultaneous events. It enables developers to create
software systems that efficiently manage and respond to a wide range of inputs and
events in a flexible and maintainable manner. Examples include languages like
JavaScript, Python, and Java.
 Parallel and Concurrent Programming Model: The parallel and concurrent
programming model refers to a programming approach that allows multiple tasks or
processes to be executed simultaneously. This model is particularly useful for improving
the performance and efficiency of computing systems by utilizing multiple processing
units or cores.

In parallel programming, tasks are divided into smaller subtasks that can be executed
simultaneously on different processing units. This allows for efficient utilization of
resources and can significantly speed up the execution of a program. Parallel
programming can be achieved using techniques such as shared memory, message
passing, or task-based parallelism. Concurrent programming, on the other hand, focuses
on managing multiple tasks that may execute independently or interleave with each other.

In concurrent programming, tasks are typically executed asynchronously, and
synchronization mechanisms are used to coordinate and communicate between tasks.
This model is particularly useful in scenarios where tasks need to respond to external
events or interact with each other.

Both parallel and concurrent programming models can be used together to achieve
efficient and responsive systems. However, it is important to note that parallel
programming requires multiple processing units or cores, while concurrent programming
can be implemented on a single processor using techniques such as multitasking or
multithreading. Examples include languages like Java, C++, Python, Go, and Rust.

 Declarative Programming Model: Declarative programming is a programming
paradigm that focuses on describing what a program should accomplish, rather than
specifying how to achieve it step by step. In this model, developers declare the desired
outcome or properties of a computation, and the underlying system or interpreter
determines how to achieve that outcome. This is in contrast to imperative programming,
where developers provide explicit instructions for how to perform tasks. Some languages
that support declarative programming model are SQL, Haskell, Prolog, etc.
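To make the contrast between the imperative and declarative/functional styles concrete, here is a small sketch of the same task (summing the even numbers in a list) written both ways in Python:

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative style: step-by-step instructions that mutate state.
total = 0
for n in numbers:
    if n % 2 == 0:
        total += n

# Declarative/functional style: describe *what* is wanted,
# with no explicit loop and no mutable accumulator.
total_fp = sum(n for n in numbers if n % 2 == 0)

print(total, total_fp)   # both compute the same sum
```

Both versions yield the same result; the difference is in how much of the "how" the programmer spells out.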

Model-View-Controller Programming Model: The Model-View-Controller (MVC) is a
popular architectural pattern used in software development to design and structure
applications, especially in the context of user interfaces and web applications. It divides
an application into three interconnected components, each with a specific responsibility:

Model:

 Responsibility: The Model represents the application's core data and business logic. It
encapsulates the data and the rules for manipulating and managing that data. It is the
part of the application that interacts with the database or other external data sources.
 Characteristics: The Model is independent of the user interface (UI) and the
presentation layer. It ensures data integrity and consistency. Changes to the Model
can trigger notifications to other parts of the application.

View:

 Responsibility: The View is responsible for presenting the data to the user and
handling user interface interactions. It displays information from the Model and
allows users to interact with the application.
 Characteristics: The View is concerned with the presentation and user experience. It
should be as passive as possible, meaning it doesn't contain application logic.
Multiple Views can exist for the same Model to offer different presentations of the
data.

Controller:

 Responsibility: The Controller acts as an intermediary between the Model and the
View. It receives user input from the View, processes it, interacts with the Model
to update data or retrieve information, and updates the View accordingly.
 Characteristics: The Controller contains application logic related to user
interactions and workflow. It interprets user actions and delegates data-related
tasks to the Model. Multiple Controllers can manage different parts of the
application.

MVC is commonly used in web development, where the Model represents the
application's data and business logic, the View corresponds to the presentation layer
(HTML/CSS), and the Controller handles HTTP requests and manages the flow of data
between the Model and View. However, MVC is not limited to web applications and can
be applied in various software development contexts to improve code organization and
maintainability.
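As an illustrative sketch of the three roles (not tied to any particular framework), a tiny MVC arrangement might look like this in Python. All class and method names here are invented for illustration:

```python
class Model:
    """Holds the data and business logic; knows nothing about the UI."""
    def __init__(self):
        self.items = []

    def add_item(self, item):
        self.items.append(item)


class View:
    """Presents data to the user; contains no application logic."""
    def render(self, items):
        return "Items: " + ", ".join(items)


class Controller:
    """Mediates between Model and View, handling user input."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def handle_add(self, item):      # e.g. triggered by a button click
        self.model.add_item(item)    # update the data
        return self.view.render(self.model.items)  # refresh the display


controller = Controller(Model(), View())
output = controller.handle_add("milk")
print(output)   # Items: milk
```

Note how the Model never imports or calls the View: the Controller is the only component that knows about both, which is what keeps the layers replaceable.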

Component-based Programming Model: Component-based programming is a software
development approach in which the application is built by composing reusable and
self-contained components or modules. Each component encapsulates a specific piece of
functionality and can interact with other components through well-defined interfaces.
This approach promotes modularity, reusability, and maintainability of software systems.

Here are key characteristics and concepts related to component-based programming:

 Reusability: Components are designed to be reusable across different projects
and applications. This reduces development time and effort since developers can
leverage existing components instead of reinventing the wheel for every project.
 Modularity: Components are self-contained and encapsulate specific
functionality. They can be easily replaced or upgraded without affecting the entire
application, promoting a modular design that simplifies maintenance and testing.
 Interoperability: Components can often work together with other components
regardless of their programming language or platform. This promotes
interoperability and allows for the integration of components from different
sources.
 Ease of Maintenance: Because components are isolated and have well-defined
interfaces, it's easier to debug, maintain, and update software. Changes to one
component generally don't affect others, reducing the risk of unintended consequences.
 Scalability: Developers can scale applications by adding or removing
components as needed. This makes it easier to adapt to changing requirements or
to extend functionality.

 Parallel Development: Teams of developers can work on different components
simultaneously, fostering parallel development and collaboration.
 Standardization: Component-based development often relies on standards and
protocols for defining interfaces and communication between components. This
standardization helps ensure consistency and compatibility.

Examples: Common examples of component-based programming include using
graphical user interface (GUI) components like buttons, text fields, and menus in desktop
application development, or using libraries and frameworks in web development.

There are various technologies and frameworks that support component-based
programming, such as COM (Component Object Model) for Windows, CORBA
(Common Object Request Broker Architecture) for distributed systems, and more modern
technologies like JavaBeans, .NET, and various JavaScript libraries and frameworks for
web development.

Overall, component-based programming simplifies software development by breaking
down complex systems into manageable, reusable components, making it easier to
develop and maintain software systems.

What is Data Abstraction?

Definition:

Data abstraction refers to providing only essential information to the outside world and hiding
their background details, i.e., to represent the needed information in program without presenting
the details.

Data abstraction is the process of hiding the unnecessary or irrelevant details of an object or a
model and only showing the essential information to the user in programming and design to
simplify the complexity and improve the efficiency of a system. Data abstraction can be
performed at different levels, such as physical, logical and view level, depending on the amount
of detail that is hidden or shown to the user.
In addition to this, some examples of data abstraction in different domains are:
In database management systems, data abstraction is used to provide different views of the same
database to different users, depending on their roles and privileges.
In object oriented programming data abstraction is used to create classes and objects that
encapsulate data and methods, and provide public interfaces for communication with other class
and objects.
In artificial intelligence, data abstraction is used to create models and representations that capture
the essential features and relationships of a problem domain, while ignoring irrelevant details.

Examples:

 Taking a picture on a smartphone: When users take a picture on their smartphone, they
don't need to know how the camera sensor captures light or how the image is processed
and saved. They only need to know how to open the camera app, focus on the subject,
and press the shutter button.
 Listening to music via headphones: When users listen to music on their smartphone
using a pair of wireless headphones, they know that they are required to enable the
Bluetooth feature on the phone to establish a connection between the two devices.
However, they don't need to know how the phone and headphones communicate with
each other or how the audio is transmitted wirelessly. This is an example of data
abstraction in action.

Benefits of Data Abstraction:

1. It reduces the complexity and increases the readability of a system.
2. It allows for data independence, which means that changes in one level of data abstraction
do not affect other levels.
3. It enhances the security and integrity of data by preventing unauthorized access to
sensitive information.
4. It facilitates modularity and reusability of code by separating the interface from the
implementation.
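The idea can be sketched with a small Python class (a hypothetical example): callers use only the public interface, and the internal representation of the data stays hidden behind it.

```python
class BankAccount:
    """Exposes deposit/get_balance; hides how the balance is stored."""
    def __init__(self):
        self._balance = 0          # internal detail, hidden from callers

    def deposit(self, amount):     # public interface
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def get_balance(self):         # public interface
        return self._balance


account = BankAccount()
account.deposit(100)
print(account.get_balance())   # 100
```

If the internal storage later changed (say, to a list of transactions), callers using `deposit` and `get_balance` would be unaffected; this is exactly the data independence described above.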

Sets

A set is a data structure that stores a collection of unique elements, with no duplicates allowed.
Sets can be implemented using a variety of data structures, including arrays, linked lists, binary
search trees, and hash tables. Basically, a set is a language-independent data structure that can be
used in different programming languages.

In computer science, a set is a fundamental data structure used to store a collection of distinct
and unordered elements. It is a mathematical concept that has been adapted for computer
programming to provide efficient ways to manage and manipulate data. Sets in computer science
have several important characteristics and operations:

 Distinct Elements: A set does not allow duplicate elements. If you attempt to add the same
element multiple times to a set, it will only be stored once.
 Unordered: Elements in a set have no specific order or sequence. Unlike arrays or lists,
where elements are accessed by an index, elements in a set cannot be accessed by position.
 Efficient Membership Testing: Sets provide efficient membership testing. You can quickly
check whether a specific element is present in a set or not.
 Set Operations: Sets support various mathematical operations, including:

 Union: Combining two sets to create a new set containing all unique elements from both
sets.

 Intersection: Finding the common elements between two sets.
 Difference: Identifying elements in one set that are not present in another set.
 Subset and Superset: Determining if one set is entirely contained within another or vice
versa.
 Adding and Removing Elements: You can add elements to a set or remove elements from
it.
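All of these operations can be tried directly with Python's built-in set type:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5}

print(a | b)        # union: {1, 2, 3, 4, 5}
print(a & b)        # intersection: {3, 4}
print(a - b)        # difference: {1, 2}
print({3, 4} <= a)  # subset test: True

a.add(5)            # adding an element
a.discard(1)        # removing an element
print(3 in a)       # efficient membership test: True
print(a.add(5))     # adding a duplicate has no effect -> prints None, a unchanged
```

Because sets are hash-based in Python, the membership test `3 in a` runs in expected constant time, unlike a linear scan over a list.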

Sets in computer science are used in various applications, including:

 Duplicate Removal: Sets are excellent for removing duplicate elements from a list or a
collection. When you add elements to a set, duplicates are automatically eliminated.
 Membership Testing: Sets are often used to check whether an element exists in a given
dataset, making them useful for searching and filtering.
 Data Deduplication: In data processing, sets are used to identify and remove duplicate
records from datasets.
 Graph Algorithms: Sets are used to keep track of visited nodes in graph algorithms like
depth-first search (DFS) and breadth-first search (BFS).
 Set Data Structures: Many programming languages, such as Python, Java, and C++,
provide built-in set data structures with methods for performing set operations efficiently.

Some examples of algorithms that use sets are:

 Kruskal’s algorithm: This algorithm finds the minimum spanning tree of a weighted
graph. It uses a set to keep track of the connected components of the graph and avoid
creating cycles.
 Dijkstra’s algorithm: This algorithm finds the shortest path from a source node to all
other nodes in a graph. It uses a set to store the nodes that have been visited and the nodes
that are yet to be visited.
 Hashing: This is a technique that maps data of any size to fixed-size values called hash
codes. It uses a set to store the hash codes and avoid collisions.

Multisets

In computer science, a multiset, also known as a bag, is a data structure that is similar to a set but
allows duplicate elements. Unlike sets, which store only distinct elements, multisets permit
multiple occurrences of the same element. Multisets are used when you need to keep track of the
frequency or multiplicity of elements in a collection.

Key characteristics of multisets include:

 Duplicates Allowed: Multisets can contain duplicate elements. This means that you can
add the same element multiple times to a multiset, and each occurrence is retained.

 Unordered: Like sets, multisets are typically considered unordered collections. Elements
have no specific order or position within the multiset.

 Counting Elements: Multisets provide a way to count the occurrences of specific
elements. You can query how many times a particular element appears in the multiset.

 Operations: Multisets support operations such as adding elements, removing elements
(including duplicates), and querying the count of elements.

 Mathematical Notation: In mathematical notation, multisets may be represented using
square brackets or other notations to indicate the presence of duplicate elements. For
example, [1, 2, 2, 3] represents a multiset containing four elements, including two
occurrences of the element 2.

Multisets find applications in various domains and algorithms, including:

 Counting and Frequency Analysis: Multisets are used to count the occurrences of
words in text documents, track the frequency of items in a dataset, and analyze
data distributions.

 Bag-of-Words Models: In natural language processing (NLP), multisets are
employed to build bag-of-words models, where each word's count is stored,
allowing for text analysis and document similarity calculations.

 Data Structures and Algorithms: Multisets can be implemented as data structures
to support operations like counting occurrences of elements, which is useful in
various algorithms and data processing tasks.

 Combinatorics: In combinatorics, multisets are used to count and generate
combinations with repetition, where you choose elements from a set allowing for
duplicates.

 Statistics: In statistics, multisets can be used to represent samples with repetitions,
such as dice rolls or survey responses.

While sets are commonly implemented as built-in data structures in programming languages like
Python, Java, and C++, multisets may require custom implementations. In some languages,
libraries or third-party packages provide multiset data structures and operations to facilitate
working with multisets efficiently.
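In Python, for example, the standard library's `collections.Counter` behaves as a ready-made multiset, storing each element together with its count:

```python
from collections import Counter

# The multiset [1, 2, 2, 3] from the notation example above.
bag = Counter([1, 2, 2, 3])

print(bag[2])                  # count of element 2: 2
bag[2] += 1                    # add another occurrence of 2
print(sum(bag.values()))       # total number of elements: 5
print(sorted(bag.elements()))  # expand back to a list: [1, 2, 2, 2, 3]
```

Counter also supports multiset-style arithmetic (`+`, `-`, `&`, `|`) between two counters, which mirrors the mathematical multiset operations.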

Stack

A Stack is a linear data structure that follows the LIFO (Last-In-First-Out) principle. A stack has
one open end, whereas a queue has two ends (front and rear). It maintains a single pointer, top,
which points to the topmost element of the stack. Whenever an element is added, it is placed on
the top of the stack, and an element can be deleted only from the top. In other words, a stack can
be defined as a container in which insertion and deletion can be done from one end, known as
the top of the stack.

Figure: A Real-life Example of Stack

The above figure represents a real-life example of a Stack where the operations are performed
from one end only, like the insertion and removal of new books from the top of the pile. It
implies that insertion and deletion in the Stack can be done only from the top of the Stack.
We can access only the top of the Stack at any given time.

The primary operations in the Stack are as follows:

1. Push: Operation to insert a new element in the Stack is termed as Push Operation.

2. Pop: Operation to remove or delete elements from the Stack is termed as Pop Operation.

Figure: Stack Data Structure

Some applications of the stack data structure are:

1. Function Call and Return: Stacks are used in programming languages to manage
function call and return operations. Each function call pushes a new frame onto the call
stack, and when a function returns, the frame is popped from the stack.
2. Expression Evaluation: Stacks are used to evaluate arithmetic expressions, including
infix, postfix, and prefix expressions. They help in maintaining the correct order of
operations and handling parentheses.
3. Undo Functionality: Many applications implement undo functionality using stacks.
Every user action is pushed onto a stack, and the last action can be undone by popping it
from the stack.
4. Backtracking Algorithms: Algorithms like depth-first search (DFS) and backtracking
rely on stacks to keep track of the state of exploration. When moving forward, a state is
pushed onto the stack, and when backtracking, it's popped.
5. Memory Management: Stacks are used for memory management in systems and
languages that use a stack-based memory allocation scheme, such as the call stack for
function execution.
6. Expression Parsing: Stacks are essential for parsing expressions, including parsing
XML, HTML, and other markup languages. They help keep track of open and closing
tags.
7. Balancing Symbols: Stacks are used to check for the balanced use of symbols such as
parentheses, brackets, and braces in code, ensuring proper syntax.

8. Task Scheduling: In operating systems, stacks can be used to manage task scheduling
and process execution. The operating system maintains a stack of processes or threads to
determine which one should execute next.
9. Browser History: Web browsers use stacks to implement forward and backward
navigation in the browsing history. Each visited page is pushed onto the stack.
10. Text Editors' Undo/Redo: Text editors often use stacks to implement undo and redo
functionality for text editing operations.
11. Algorithmic Problems: Some algorithmic problems can be solved efficiently using
stacks, such as the Tower of Hanoi problem and finding the next greater element in an
array.
12. Expression Conversion: Stacks are used to convert between different types of
expressions, such as converting infix expressions to postfix (or reverse Polish notation).
13. Memory Allocation and Deallocation: Stacks can be used in memory management to
allocate and deallocate memory in a last-in, first-out (LIFO) manner.
14. Evaluation of Postfix Expressions: Stacks are used to evaluate postfix expressions
efficiently.
15. Parsing and Syntax Analysis: In compilers and parsers, stacks are used for parsing and
syntax analysis of source code.
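As a concrete example of application 7 (balancing symbols), a stack-based checker can be sketched in Python: openers are pushed, and each closer must match the most recently pushed opener.

```python
def is_balanced(text):
    """Return True if (), [], {} in text are properly nested."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)            # push every opening symbol
        elif ch in pairs:
            # a closer must match the most recent unmatched opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                    # leftover openers mean unbalanced

print(is_balanced("a[b(c)d]{e}"))  # True
print(is_balanced("(]"))           # False
```

The LIFO discipline is exactly what nesting requires: the last bracket opened must be the first one closed.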

Basic Operations on Stack:


There are some basic operations that allow us to perform different actions on a stack.

 Push: Add an element to the top of a stack


 Pop: Remove an element from the top of a stack
 IsEmpty: Check if the stack is empty
 IsFull: Check if the stack is full
 Peek: Get the value of the top element without removing it

Working of Stack Data Structure

1. A pointer called TOP is used to keep track of the top element in the stack.
2. When initializing the stack, we set its value to -1 so that we can check if the stack is
empty by comparing TOP == -1.

3. On pushing an element, we increase the value of TOP and place the new element in the
position pointed to by TOP.

4. On popping an element, we return the element pointed to by TOP and reduce its value.
5. Before pushing, we check if the stack is already full

6. Before popping, we check if the stack is already empty

Figure: Working of Stack
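The working described above can be sketched as a small fixed-size stack in Python, using an array plus a top index (a hypothetical sketch for illustration, not a standard library class):

```python
class Stack:
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.data = [None] * maxsize
        self.top = -1                    # TOP = -1 means the stack is empty

    def is_empty(self):
        return self.top == -1

    def is_full(self):
        return self.top == self.maxsize - 1

    def push(self, item):
        if self.is_full():               # check before pushing
            raise OverflowError("stack overflow")
        self.top += 1                    # increase TOP, then place the element
        self.data[self.top] = item

    def pop(self):
        if self.is_empty():              # check before popping
            raise IndexError("stack underflow")
        item = self.data[self.top]       # return element pointed to by TOP
        self.top -= 1                    # then reduce TOP
        return item

    def peek(self):
        if self.is_empty():
            raise IndexError("stack is empty")
        return self.data[self.top]       # top element without removing it


s = Stack(3)
s.push(10)
s.push(20)
print(s.peek())      # 20
print(s.pop())       # 20
print(s.pop())       # 10
print(s.is_empty())  # True
```

This one sketch covers all five basic operations (push, pop, peek, isFull, isEmpty) listed earlier; the pseudocode in the following subsections corresponds line by line to these methods.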

Push Operation
Push operation involves inserting a new element into the stack. Since the stack has only one
open end, the new element is always inserted at the top of the stack.

Figure: Push Operation

Algorithm:
1. Check if the stack is full.
2. If the stack is full, produce an error and exit.
3. If the stack is not full, increment top to point to the next empty space.
4. Add the data element to the stack location where top is pointing.
5. Return success.

Pseudocode of Push operation:

begin push(stack, item)

    if stack is full
        return error
    end if

    top <- top + 1

    stack[top] <- item

end

Pop Operation
Pop operation refers to removing an element from the stack. Since insertion and deletion happen
at the same single end, the element removed is always the one at the top of the stack; removing
an element from the top of the stack is termed the pop operation.

Figure: Pop Operation

Algorithm:
1. Check if the stack is empty.
2. If the stack is empty, produce an error and exit.
3. If the stack is not empty, access the data element at which top is pointing.
4. Decrease the value of top by 1.
5. Return success.

Pseudocode of Pop operation:

begin pop(stack)

    if stack is empty
        return error
    end if

    item <- stack[top]

    top <- top - 1

    return item

end

Peek Operation
Peek operation refers to retrieving the topmost element in the stack without removing it from the
collection of data elements.

Algorithm:
1. START
2. Return the element at the top of the stack.
3. END

Pseudocode of the peek operation:

begin peek(stack)

    return stack[top]

end

isFull Operation:
isFull operation checks whether the stack is full. This operation is used to check the status of the
stack with the help of the top pointer.

Algorithm:

1. START

2. If the size of the stack is equal to the top position of the stack, the stack is full. Return 1.
3. Otherwise, return 0.

4. END

Pseudocode of an isFull operation:

begin

   if top equals maxsize
      return true
   else
      return false
   end if

end

isEmpty Operation:
The isEmpty operation verifies whether the stack is empty. This operation is used to check the
status of the stack with the help of top pointer.

Algorithm:
1. START

2. If the top value is -1, the stack is empty. Return 1.


3. Otherwise, return 0.

4. END

Pseudocode of an isEmpty operation:

begin
   if top equals -1
      return true
   else
      return false
   end if
end
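The five stack operations above can be sketched together in Python as a minimal array-based stack with a fixed capacity. The class and method names below are illustrative, not from the text:

```python
class Stack:
    """Array-based stack with a fixed capacity (LIFO)."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.items = [None] * maxsize
        self.top = -1                     # -1 means the stack is empty

    def is_empty(self):
        return self.top == -1

    def is_full(self):
        return self.top == self.maxsize - 1

    def push(self, item):
        if self.is_full():
            raise OverflowError("stack overflow")
        self.top += 1                     # move top to the next empty slot
        self.items[self.top] = item

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        item = self.items[self.top]       # element pointed to by top
        self.top -= 1                     # shrink the stack
        return item

    def peek(self):
        if self.is_empty():
            raise IndexError("stack is empty")
        return self.items[self.top]       # topmost element, not removed


s = Stack(3)
s.push(10)
s.push(20)
print(s.peek())      # 20
print(s.pop())       # 20
print(s.pop())       # 10
print(s.is_empty())  # True
```

Note how push and pop mirror the pseudocode: push increments top before writing, pop reads before decrementing top.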

Advantages of Stack:

The advantages of using stack are listed below:

1. Efficient data management: Stack helps you manage the data in a LIFO (last in, first
out) method, an ordering that plain arrays and linked lists do not enforce by themselves.
2. Efficient management of functions: When a function is called, the local variables are
stored in a stack, and it is automatically destroyed once returned.
3. Control over memory: Stack allows you to control how memory is allocated and
deallocated.
4. Smart memory management: Stack automatically cleans up the object.
5. Not easily corrupted: Stack does not get corrupted easily; hence it is more secure and
reliable.
6. Does not allow resizing of variables: Variables cannot be resized.

Disadvantages of Stack:
The disadvantages of using stack are listed below:

1. Limited memory size: Stack memory is very limited.


2. Chances of stack overflow: Creating too many objects on the stack can increase the risk
of stack overflow.
3. Random access is not possible: In a stack, random accessing the data is not possible.
4. Unreliable: If variable storage on the stack gets overwritten, it can sometimes lead to
undefined behaviour of the function or program.
5. Undesired termination: If the stack grows outside its memory area, it might lead to an
abnormal termination.

Queue
Queue is a type of linear data structure that follows the First-In-First-Out (FIFO) principle. It
means that the elements added first to the queue are the first to be removed. A queue can be
visualized as a line where elements are placed at the end of the line and removed from the
front of the line. By convention, the end where insertion is performed is called Rear, and the
end at which deletion takes place is known as the Front.

Before dealing with the representation of a queue, examine a real-life example of the queue
to understand it better. The movie ticket counter is an excellent example of a queue: the
customer that came first will be served first. Also, the barricades at the ticket counter
prevent in-between disruption, so insertion and removal happen only at the two ends.

Figure: Real Life Example of Queue

The following diagram tries to explain queue representation as a data structure:

Figure: Queue Data Structure

Basic Operations of Queue


Queue operations also include initialization of a queue, usage and permanently deleting the
data from the memory.

The most fundamental operations in the queue ADT include: enqueue(), dequeue(), peek(),
isFull(), isEmpty(). These are all built-in operations to carry out data manipulation and to
check the status of the queue.

Queue uses two pointers − front and rear. The rear pointer is used to insert data at the rear
end (helping in enqueueing), while the front pointer is used to remove data from the front
end (helping in dequeuing).

Enqueue(): By this, we add the element to the rear end of the queue. The elements will be
added after the previous element and this process will continue.

Figure: Enqueue Operation

Algorithm:

1 − START

2 – Check if the queue is full.

3 − If the queue is full, produce overflow error and exit.


4 − If the queue is not full, increment rear pointer to point the next empty space.

5 − Add data element to the queue location, where the rear is pointing.

6 − return success.

7 – END
Dequeue(): It is the opposite of enqueue. It is used to remove the value from the front end
of the queue, and it also returns the removed value.

Figure: Dequeue Operation

Algorithm
1 – START

2 − Check if the queue is empty.


3 − If the queue is empty, produce underflow error and exit.

4 − If the queue is not empty, access the data where front is pointing.

5 − Increment front pointer to point to the next available data element.

6 − Return success.

7 – END

isEmpty(): This is a boolean operation that checks whether the queue is empty or not. If the
queue is empty it will return true, and if the queue is not empty it will return false.
Algorithm

1 – START

2 – If the count of queue elements equals zero, return true

3 – Otherwise, return false


4 – END

isFull(): This is also a boolean operation. It returns true if the queue is full and false
otherwise.
Algorithm

1 – START

2 – If the count of queue elements equals the queue size, return true

3 – Otherwise, return false


4 – END

peek(): It returns the value of the element at the front of the queue without removing it.
Algorithm

1 – START
2 – Return the element at the front of the queue

3 – END
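The queue operations above can be sketched in Python as a minimal array-based linear queue. The class and field names are illustrative:

```python
class Queue:
    """Array-based linear queue with a fixed capacity (FIFO)."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.items = [None] * maxsize
        self.front = 0    # index of the next element to dequeue
        self.rear = -1    # index of the last enqueued element
        self.count = 0    # number of elements currently stored

    def is_empty(self):
        return self.count == 0

    def is_full(self):
        return self.count == self.maxsize

    def enqueue(self, item):
        if self.is_full():
            raise OverflowError("queue overflow")
        self.rear += 1                    # rear moves on insertion
        self.items[self.rear] = item
        self.count += 1

    def dequeue(self):
        if self.is_empty():
            raise IndexError("queue underflow")
        item = self.items[self.front]     # element at the front
        self.front += 1                   # front moves on removal
        self.count -= 1
        return item

    def peek(self):
        if self.is_empty():
            raise IndexError("queue is empty")
        return self.items[self.front]


q = Queue(3)
q.enqueue("a")
q.enqueue("b")
print(q.peek())     # "a" — first in, first out
print(q.dequeue())  # "a"
print(q.dequeue())  # "b"
```

Because this is a linear queue, slots freed at the front are never reused; the circular queue described below removes that limitation.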

Applications of Queue:

 Queues can be used to schedule jobs and ensure that they are executed in the correct
order.
 Queues can be used to manage the order in which print jobs are sent to the printer.
 Queues can be used to manage the order in which processes are executed on the CPU.
 Queues can be used for buffering data while it is being transferred between two
systems. When data is received, it is added to the back of the queue, and when data is
sent, it is removed from the front of the queue
 It is also used for CPU Scheduling.
 Queues are also used in Memory Management.
Advantages and Disadvantages:

Advantages of Queue:

 A large amount of data can be managed efficiently with ease.

 Operations such as insertion and deletion can be performed with ease as it follows the
first in first out rule.

 Queues are useful when a particular service is used by multiple consumers.

 Queues are fast in speed for data inter-process communication.

 Queues can be used in the implementation of other data structures.

Disadvantages of Queue:
 The operations such as insertion and deletion of elements from the middle are time
consuming.

 Limited Space.

 In a classical queue, a new element can only be inserted when the existing elements are
deleted from the queue.

 Searching an element takes O(N) time.


 Maximum size of a queue must be defined in advance.

Types of Queue:

 Simple Queue: Simple queue also known as a linear queue is the most basic version
of a queue. Here, insertion of an element i.e. the Enqueue operation takes place at the
rear end and removal of an element i.e. the Dequeue operation takes place at the front
end.

 Circular Queue: In a circular queue, the elements of the queue act as a circular ring.
The working of a circular queue is similar to the linear queue except for the fact that
the last element is connected to the first element. Its advantage is that memory is
utilized in a better way: if there is an empty space, i.e. if no element is present at a
certain position in the queue, then an element can be easily added at that position.

 Priority Queue: This queue is a special type of queue. Its specialty is that it arranges
the elements in a queue based on some priority. The priority can be something where
the element with the highest value has the priority so it creates a queue with
decreasing order of values. The priority can also be such that the element with the
lowest value gets the highest priority so in turn it creates a queue with increasing
order of values.

 Deque: Deque is also known as Double Ended Queue. As the name double ended suggests,
an element can be inserted into or removed from both ends of the queue, unlike the other
queues in which it can be done only at one end. Because of this property it may not obey
the First In First Out property.
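The wrap-around behaviour of a circular queue can be illustrated with modular index arithmetic. This is only a sketch; the class name and fields are assumptions:

```python
class CircularQueue:
    """Fixed-capacity queue whose indices wrap around using modulo."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.items = [None] * maxsize
        self.front = 0
        self.count = 0

    def enqueue(self, item):
        if self.count == self.maxsize:
            raise OverflowError("queue is full")
        rear = (self.front + self.count) % self.maxsize  # wrap around
        self.items[rear] = item
        self.count += 1

    def dequeue(self):
        if self.count == 0:
            raise IndexError("queue is empty")
        item = self.items[self.front]
        self.front = (self.front + 1) % self.maxsize     # wrap around
        self.count -= 1
        return item


cq = CircularQueue(2)
cq.enqueue(1)
cq.enqueue(2)
cq.dequeue()    # frees a slot at the start of the array
cq.enqueue(3)   # reuses the freed slot thanks to the modulo
print(cq.dequeue(), cq.dequeue())  # 2 3
```

The last enqueue would overflow a linear queue of the same size, which is exactly the better memory utilization the text describes.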

What are the Characteristics of an Algorithm?
The main characteristics of an algorithm are following:
1. Clear and Unambiguous: The algorithm should be clear and unambiguous. Each of its
steps should be clear in all aspects and must lead to only one meaning.
2. Well-Defined Inputs: If an algorithm takes inputs, those inputs should be well-defined.
An algorithm may or may not take input.
3. Well-Defined Outputs: The algorithm must clearly define what output will be yielded,
and the output should be well-defined as well. It should produce at least 1 output.
4. Finite-ness: The algorithm must be finite, i.e. it should terminate after a finite time.
5. Feasible: The algorithm must be simple, generic, and practical, such that it can be
executed with the available resources. It must not depend on some future technology.
6. Language Independent: The Algorithm designed must be language-independent, i.e. it
must be just plain instructions that can be implemented in any language, and yet the
output will be the same, as expected.
7. Input: An algorithm has zero or more inputs. Every instruction that contains a
fundamental operator must accept zero or more inputs.
8. Output: An algorithm produces at least one output. Every instruction that contains a
fundamental operator must produce one or more outputs.
9. Definiteness: All instructions in an algorithm must be unambiguous, precise, and easy to
interpret. By referring to any of the instructions in an algorithm one can clearly
understand what is to be done. Every fundamental operator in instruction must be defined
without any ambiguity.
10. Finiteness: An algorithm must terminate after a finite number of steps in all test cases.
Every instruction which contains a fundamental operator must be terminated within a
finite amount of time. Infinite loops or recursive functions without base conditions do not
possess finiteness.
11. Effectiveness: An algorithm must be developed by using very basic, simple, and feasible
operations so that one can trace it out by using just paper and pencil.

What are the properties of an Algorithm?


Properties of an algorithm are following:
● It should terminate after a finite time.

● It should produce at least one output.


● It should take zero or more input.

● It should be deterministic means giving the same output for the same input case.
● Every step in the algorithm must be effective i.e. every step should do some work.

How to express an Algorithm?

 Natural Language :- Here we express the algorithm in natural English language. However,
it is hard to understand the algorithm precisely from it.
 Flow Chart :- Here we express the Algorithm by making graphical/pictorial
representation of it. It is easier to understand than Natural Language.
 Pseudo Code :- Here we express the Algorithm in the form of annotations and
informative text written in plain English which is very much similar to the real code but
as it has no syntax like any of the programming languages, it can’t be compiled or
interpreted by the computer. It is the best way to express an algorithm because it can be
understood by even a layman with some school level programming knowledge.

What is meant by Algorithm Analysis?


Algorithm analysis is an important part of computational complexity theory, which provides
theoretical estimation for the required resources of an algorithm to solve a specific computational
problem. It involves determining the amount of time and space resources required to execute an
algorithm.
The analysis of algorithms helps us predict the behavior of an algorithm without implementing it
on a specific computer. By analyzing different algorithms, we can compare them to determine
the best one for our purpose.
The analysis of algorithms is based on asymptotic notation, which provides a simple measure for
the efficiency of an algorithm. It allows us to express the time and space complexity of an
algorithm in terms of its input size. Some commonly used asymptotic notations include Big O,
Big Omega, and Big Theta.
To analyze an algorithm, we consider various factors such as the best case, worst case, and
average case scenarios. We also analyze loops and recurrence relations to determine the
complexity of an algorithm.

Why Analysis of Algorithms is important?

 To predict the behavior of an algorithm without implementing it on a specific computer.


 It is much more convenient to have simple measures for the efficiency of an algorithm
than to implement the algorithm and test the efficiency every time a certain parameter in
the underlying computer system changes.
 It is impossible to predict the exact behavior of an algorithm. There are too many
influencing factors.
 The analysis is thus only an approximation; it is not perfect.
 More importantly, by analyzing different algorithms, we can compare them to determine
the best one for our purpose.

Types of Algorithm Analysis:

1. Best case
2. Worst case
3. Average case

 Best case: Define the input for which the algorithm takes the least or minimum time. In
the best case we calculate the lower bound of an algorithm. Example: In linear search,
the best case occurs when the search data is present at the first location of large data.
 Worst case: Define the input for which the algorithm takes a long or maximum time. In
the worst case we calculate the upper bound of an algorithm. Example: In linear search,
the worst case occurs when the search data is not present at all.
 Average case: In the average case, take all random inputs and calculate the computation
time for all inputs, and then divide it by the total number of inputs.

Average case = (sum of running times over all inputs) / (total number of inputs)

How to analyze an Algorithm?


For a standard algorithm to be good, it must be efficient. Hence the efficiency of an algorithm
must be checked and maintained. It can be in two stages:

1. Priori Analysis:
“Priori” means “before”. Hence priori analysis means checking the algorithm before its
implementation. In this, the algorithm is checked when it is written in the form of theoretical
steps. The efficiency of the algorithm is measured by assuming that all other factors, for
example, processor speed, are constant and have no effect on the implementation. This is done
usually by the algorithm designer. This analysis is independent of the type of hardware and the
language of the compiler. It gives an approximate answer for the complexity of the program.
2. Posterior Analysis:

“Posterior” means “after”. Hence Posterior analysis means checking the algorithm after its
implementation. In this, the algorithm is checked by implementing it in any programming
language and executing it. This analysis helps to get the actual and real analysis report about
correctness (for every possible input/s if it shows/returns correct output or not), space required,
time consumed, etc. That is, it is dependent on the language of the compiler and the type of
hardware used.

What is Algorithm complexity and how to find it?


An algorithm is defined as complex based on the amount of Space and Time it consumes. Hence
the Complexity of an algorithm refers to the measure of the time that it will need to execute and
get the expected output, and the Space it will need to store all the data (input, temporary data,
and output). Hence these two factors define the efficiency of an algorithm.
The two factors of Algorithm Complexity are:

 Time Factor: Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
 Space Factor: Space is measured by counting the maximum memory space required by
the algorithm to run/execute.

Therefore the complexity of an algorithm can be divided into two types:

1. Space Complexity: The space complexity of an algorithm refers to the amount of


memory required by the algorithm to store the variables and get the result. This can be for
inputs, temporary operations, or outputs.

How to calculate Space Complexity?


The space complexity of an algorithm is calculated by determining the following 2
components:

Fixed Part: This refers to the space that is required by the algorithm. For example, input
variables, output variables, program size, etc.
Variable Part: This refers to the space that can be different based on the implementation of
the algorithm. For example, temporary variables, dynamic memory allocation, recursion
stack space, etc.

Therefore Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed
part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.

Example: Consider the below algorithm for Linear Search

Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in x

Step 3: Start from the leftmost element of arr[] and one by one compare x with each element
of arr[]
Step 4: If x matches with an element, Print True.

Step 5: If x doesn’t match with any of the elements, Print False.


Step 6: END

Here, There are 2 variables arr[], and x, where the arr[] is the variable part of n elements and
x is the fixed part. Hence S(P) = 1+n. So, the space complexity depends on n(number of
elements). Now, space depends on data types of given variables and constant types and it will
be multiplied accordingly.
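The linear-search steps above translate directly into Python. The function name is illustrative:

```python
def linear_search(arr, x):
    """Return True if x occurs in arr, scanning left to right."""
    for element in arr:       # Step 3: compare x with each element
        if element == x:
            return True       # Step 4: match found
    return False              # Step 5: no match


print(linear_search([4, 8, 15, 16], 15))  # True
print(linear_search([4, 8, 15, 16], 23))  # False
```

Besides the n elements of arr, only x and the loop variable need storage, matching S(P) = 1 + n.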

2. Time Complexity: The time complexity of an algorithm refers to the amount of time
required by the algorithm to execute and get the result. This can be for normal operations,
conditional if-else statements, loop statements, etc.

How to Calculate Time Complexity?
The time complexity of an algorithm is also calculated by determining the following 2
components:

Constant time part: Any instruction that is executed just once comes in this part. For
example, input, output, if-else, switch, arithmetic operations, etc.

Variable Time Part: Any instruction that is executed more than once, say n times, comes in
this part. For example, loops, recursion, etc.
Therefore Time complexity T(P) of any algorithm P is T(P) = C + TP(I),
where C is the constant time part and TP(I) is the variable part of the algorithm, which
depends on the instance characteristic I.
Example: In the algorithm of Linear Search above, the time complexity is calculated as
follows:
Step 1: –Constant Time

Step 2: — Variable Time (Taking n inputs)

Step 3: –Variable Time (Till the length of the Array (n) or the index of the found element)
Step 4: –Constant Time

Step 5: –Constant Time

Step 6: –Constant Time

Hence, T(P) = 1 + n + n(1 + 1) + 1 + 1 + 1 = 4 + 3n, which can be written as T(n); asymptotically it grows linearly with n.
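The linear growth of T(n) can also be observed by counting the key operation, the comparison. This is a rough empirical sketch; the counter function is illustrative:

```python
def count_comparisons(arr, x):
    """Count how many comparisons linear search makes on arr for x."""
    comparisons = 0
    for element in arr:
        comparisons += 1      # one comparison per visited element
        if element == x:
            break             # found: stop early
    return comparisons


# Worst case: x is absent, so every element is compared
print(count_comparisons(list(range(10)), -1))   # 10
print(count_comparisons(list(range(100)), -1))  # 100
```

Doubling the input size doubles the count, which is what a T(n) proportional to n predicts.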

Asymptotic Notations

The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that
don’t depend on machine-specific constants and don’t require algorithms to be implemented
and time taken by programs to be compared. Asymptotic notations are mathematical tools to
represent the time complexity of algorithms for asymptotic analysis.
There are mainly three asymptotic notations:

1. Big-O Notation (O-notation)


2. Omega Notation (Ω-notation)

3. Theta Notation (Θ-notation)

Big-O Notation (O-notation): Big-O notation represents the upper bound of the running
time of an algorithm. It is defined as the condition that allows an algorithm to complete
statement execution in the longest amount of time possible. Therefore, it gives the worst-case
complexity of an algorithm. If f(n) describes the running time of an algorithm, f(n) is O(g(n))
if there exist a positive constant C and n0 such that, 0 ≤ f(n) ≤ cg(n) for all n ≥ n0
It returns the highest possible output value (big-O)for a given input.
The execution time serves as an upper bound on the algorithm’s time complexity.

Graphical Representation:

Mathematical Representation of Big-O Notation:


O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

For example, consider the case of Insertion Sort. It takes linear time in the best case and
quadratic time in the worst case. We can safely say that the time complexity of Insertion
Sort is O(n^2).

Note: O(n^2) also covers linear time.


The Big-O notation is useful when we only have an upper bound on the time complexity of
an algorithm.

Examples :
{100, log(2000), 10^4} belongs to O(1)

U {(n/4), (2n+3), (n/100 + log(n))} belongs to O(n)

U {(n^2+n), (2n^2), (n^2+log(n))} belongs to O(n^2)

Note: Here, U represents union; we can write it in this manner because O provides exact or
upper bounds.
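The Big-O definition can be checked numerically for a concrete pair of functions. The sketch below verifies that f(n) = 2n + 3 is O(n), with c = 5 and n0 = 1 as hand-picked witness constants:

```python
def f(n):
    """A concrete running-time function: f(n) = 2n + 3."""
    return 2 * n + 3


c, n0 = 5, 1  # witnesses for the Big-O definition (chosen by hand)

# The definition requires 0 <= f(n) <= c*g(n) for all n >= n0, with g(n) = n
ok = all(0 <= f(n) <= c * n for n in range(n0, 1000))
print(ok)  # True
```

Any larger c (or n0) also works; the definition only asks that some pair of witnesses exists.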

Omega Notation (Ω-Notation): Omega notation represents the lower bound of the running
time of an algorithm. Thus, it provides the best case complexity of an algorithm.

The execution time serves as a lower bound on the algorithm’s time complexity.
It is defined as the condition that allows an algorithm to complete statement execution in the
shortest amount of time.

Let g and f be the function from the set of natural numbers to itself. The function f is said to
be Ω(g), if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥
n0

Graphical Representation:

Mathematical Representation of Omega notation:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
Let us consider the same Insertion sort example here. The time complexity of Insertion Sort
can be written as Ω(n), but it is not very useful information about insertion sort, as we are
generally interested in worst-case and sometimes in the average case.

Theta Notation (Θ-Notation): Theta notation encloses the function from above and below.
Since it represents the upper and the lower bound of the running time of an algorithm, it is
used for analyzing the average case complexity of an algorithm. Let g and f be the function
from the set of natural numbers to itself. The function f is said to be Θ(g), if there are
constants c1, c2 > 0 and a natural number n0 such that c1* g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥
n0.

Graphical Representation:

Mathematical Representation of Theta notation:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0 }
Note: Θ(g) is a set

The above expression can be described as: if f(n) is theta of g(n), then the value f(n) is always
between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta also
requires that f(n) must be non-negative for values of n greater than n0.

The execution time serves as both a lower and upper bound on the algorithm’s time
complexity.

It exists as both the upper and the lower bound for a given input value.
A simple way to get the Theta notation of an expression is to drop the low-order terms and
ignore the leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 = Θ(n^3);
dropping the lower-order terms is always fine because there will always be a number n after
which Θ(n^3) has higher values than Θ(n^2), irrespective of the constants involved. For a given
function g(n), we denote by Θ(g(n)) the above set of functions.


***

