
COLLEGE OF ENGINEERING

BACHELOR OF SCIENCE IN COMPUTER ENGINEERING


1st Semester – A.Y. 2024-2025

CPE-PC 215

INTRODUCTION TO DATA
STRUCTURE & ALGORITHMS
SECOND YEAR
ALGORITHM
An algorithm is a step-by-step process to solve a problem,
where each step indicates an intermediate task. An algorithm
contains a finite number of steps that lead to the solution of
the problem.
Properties / Characteristics of an Algorithm
 Input-Output: An algorithm takes zero or more inputs and produces the
required output. This is the basic characteristic of an algorithm.
 Finiteness: An algorithm must terminate in a finite number of
steps.
 Definiteness: Each step of an algorithm must be stated clearly
and unambiguously.
 Effectiveness: Each and every step in an algorithm can be
converted into a programming language statement.
 Generality: An algorithm is general: it works on all valid sets of
inputs and provides the required output. In other words, it is not
restricted to a single input value.
CATEGORIES OF
ALGORITHM
Based on the different types of steps it contains, an algorithm can be divided into three categories, namely

 Sequence
 Selection
 Iteration
SEQUENCE
The steps described in a sequence algorithm are performed
successively, one by one, without skipping any step. The
sequence of steps defined in the algorithm should be simple
and easy to understand. Every instruction of such an
algorithm is executed, because no selection procedure or
conditional branching exists in a sequence algorithm.
SEQUENCE
EXAMPLE:
// adding two numbers
Step 1: start
Step 2: read a,b
Step 3: Sum=a+b
Step 4: write Sum
Step 5: stop
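A minimal C sketch of this sequence algorithm (an illustration added here, not part of the original slides); each statement executes exactly once, in order:

#include <stdio.h>

int main(void) {
    int a, b, sum;

    scanf("%d %d", &a, &b);   /* Step 2: read a, b   */
    sum = a + b;              /* Step 3: Sum = a + b */
    printf("%d\n", sum);      /* Step 4: write Sum   */

    return 0;                 /* Step 5: stop        */
}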
SELECTION
Sequence-type algorithms alone are not
sufficient to solve problems that involve
decisions and conditions. To solve a problem
that involves decision making or option
selection, we use the Selection type of
algorithm.
SELECTION
The general format of a Selection type of statement is as shown below:
if(condition)
Statement-1;
else
Statement-2;
The above syntax specifies that if the condition is true, Statement-1 will be executed; otherwise,
Statement-2 will be executed. If the operation is unsuccessful, the sequence of steps in the algorithm
should be changed/corrected so that the system re-executes them until the operation is
successful.
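A short C sketch of this Selection pattern (an illustrative example, assuming we want to report whether a number is even or odd):

#include <stdio.h>

int main(void) {
    int n;
    scanf("%d", &n);

    if (n % 2 == 0)            /* condition   */
        printf("even\n");      /* Statement-1 */
    else
        printf("odd\n");       /* Statement-2 */

    return 0;
}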
ITERATION
Iteration-type algorithms are used to solve
problems that involve the repetition of
statements. In this type of algorithm, a
particular set of statements is repeated
'n' number of times.
ITERATION
Example 1 (sum of the digits of a number)
Step 1 : start
Step 2 : read n
Step 3 : s = 0
Step 4 : repeat step 5 while n > 0
Step 5 : (a) r = n mod 10
(b) s = s + r
(c) n = n / 10
Step 6 : write s
Step 7 : stop
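The same iteration can be written as a C loop; this is a minimal sketch of the pseudocode above (added for illustration):

#include <stdio.h>

int main(void) {
    int n, r, s = 0;          /* s = 0                     */

    scanf("%d", &n);          /* read n                    */

    while (n > 0) {           /* repeat while n > 0        */
        r = n % 10;           /* (a) r = n mod 10          */
        s = s + r;            /* (b) s = s + r             */
        n = n / 10;           /* (c) n = n / 10            */
    }

    printf("%d\n", s);        /* write s                   */
    return 0;
}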
Performance Analysis of an Algorithm:
The efficiency of an algorithm can be measured by the
following metrics:

1. Time Complexity
2. Space Complexity
TIME COMPLEXITY
The amount of time required for an algorithm
to complete its execution is its time
complexity. An algorithm is said to be
efficient if it takes the minimum (reasonable)
amount of time to complete its execution.
SPACE COMPLEXITY
The amount of space occupied by an
algorithm is known as its Space Complexity. An
algorithm is said to be efficient if it occupies
less space and requires the minimum amount
of time to complete its execution.
ACTIVITY
1. Write an algorithm to find the factorial of a number entered by the user.
2. Write an algorithm to find the Simple Interest for a given principal
amount, time, and rate of interest.
3. Write an algorithm to add two numbers.
4. Write an algorithm to check if a person is eligible to vote.
5. Write an algorithm to calculate the average of a list of numbers.
6. Write an algorithm to find the sum of the digits of a given
number.
STACKS & HEAP
MEMORY
STACKS
- The Stack is a region of memory used to
manage function calls and local variables. It's a
Last-In-First-Out (LIFO) structure, which makes
it fast but limited in size.
STACKS
- A place in computer memory where all
variables that are declared and initialized
before runtime are stored.
- A temporary storage area: once execution leaves
the program or function, the memory for the variable
is no longer there.
- Any data on the stack for a function will
automatically be deleted.
STACKS
- Data is added or removed in Last-in-first-out
manner (LIFO)
STACKS
- The stack has a fixed size.
- Both the stack and the heap are stored in RAM.
- If there is not enough space on the stack to
handle the memory being assigned to it, a
stack overflow occurs.
STACKS
Key Characteristics of Stack
- Automatic memory management: When a function is called,
local variables are automatically pushed onto the stack. When
the function exits, the memory is automatically reclaimed.
Key Characteristics of Stack
- Fixed size: The stack has a limited size, and if it exceeds the
available memory, it leads to a stack overflow.

- Fast allocation and deallocation: Since it operates in a LIFO
manner, pushing and popping from the stack are constant-time
operations (O(1)).
USE IN DATA STRUCTURES

- Recursion: Stack memory is used heavily in recursion. Each recursive
call creates a new stack frame (a set of function parameters, local
variables, and return addresses). Recursive algorithms such as those
for tree traversal or divide-and-conquer strategies (like quicksort) rely
on this structure.
USE IN DATA STRUCTURES

- Explicit Stack Data Structure: In many algorithms (such as depth-first
search (DFS) on graphs or expression evaluation), an explicit stack
data structure (implemented using arrays or linked lists) is used. This
mirrors the stack memory's LIFO behavior.
When computing the factorial of a number using
recursion, the function call for each recursive step is
stored in stack memory until the base case is reached.
Once the base case is reached, the function calls return
in reverse order, using the LIFO structure of the stack.

EXAMPLE
int factorial(int n) {
    if (n == 0) return 1;              // base case: 0! = 1
    else return n * factorial(n - 1);  // recursive case: n * (n - 1)!
}
EXAMPLE

int factorial(int n) {
    if (n == 0) return 1;
    else return n * factorial(n - 1);
}

In this example, each recursive call to factorial is stored in
stack memory.
EXAMPLE
Here’s a step-by-step visualization of the call stack as factorial(4) runs:

1. factorial(4) returns 4 * factorial(3)
2. factorial(3) returns 3 * factorial(2)
3. factorial(2) returns 2 * factorial(1)
4. factorial(1) returns 1 * factorial(0)
5. factorial(0) returns 1 (base case)
EXAMPLE
Here’s a step-by-step visualization of the call stack as factorial(4) runs:

1. Call factorial(4) → Stack: factorial(4)
2. Call factorial(3) → Stack: factorial(4), factorial(3)
3. Call factorial(2) → Stack: factorial(4), factorial(3), factorial(2)
4. Call factorial(1) → Stack: factorial(4), factorial(3), factorial(2), factorial(1)
5. Call factorial(0) (base case) → Stack: factorial(4), factorial(3), factorial(2), factorial(1), factorial(0)
EXAMPLE
After factorial(0) returns 1, functions are popped off the stack:

1. factorial(1) returns 1 * 1 = 1 → Stack: factorial(4), factorial(3), factorial(2)
2. factorial(2) returns 2 * 1 = 2 → Stack: factorial(4), factorial(3)
3. factorial(3) returns 3 * 2 = 6 → Stack: factorial(4)
4. factorial(4) returns 4 * 6 = 24 → Stack is empty
HEAP
The Heap is a region of memory used for dynamic memory
allocation. Unlike the stack, memory on the heap does not
follow a strict LIFO structure and is more flexible but slower
to allocate and deallocate. It's used to manage memory for
objects or data structures that need to persist beyond the
scope of a function.
Key Characteristics of Heap
- Dynamic memory management: Memory on the heap is
allocated manually (e.g., using malloc() in C or new in
C++/Java) and must be freed when no longer needed to avoid
memory leaks.
Key Characteristics of Heap
- Flexible size: The heap can grow and shrink during
program execution, based on the memory needs.
- Slower access: Allocating and freeing memory on the heap
takes more time compared to the stack because the system
must search for available memory blocks, and memory
fragmentation can occur.
USE IN DATA STRUCTURES
Dynamic Data Structures: Heap memory is crucial for dynamic data
structures like linked lists, binary trees, graphs, and hash tables.
These data structures require memory to be allocated dynamically
because their size can change during program execution.

- Linked Lists: Each node in a linked list is allocated on the heap.
Since nodes are created and destroyed dynamically, the heap
provides the flexibility needed for this structure.

- Binary Trees: Nodes in trees (e.g., binary search trees, AVL trees,
heaps) are dynamically allocated on the heap to allow insertion and
deletion of elements during runtime.
USE IN DATA STRUCTURES
Dynamic Arrays: Unlike static arrays (which are allocated on the
stack), dynamic arrays (like ArrayList in Java or std::vector in C++)
allocate memory from the heap to allow resizing.
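As a rough C sketch (an illustration added here, not from the slides), a dynamic array can be grown on the heap with malloc()/realloc(); the variable names are chosen only for this example:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int capacity = 4;                                    /* initial capacity        */
    int size = 0;
    int *arr = malloc(capacity * sizeof(int));           /* heap allocation         */
    if (arr == NULL) return 1;

    for (int i = 0; i < 10; i++) {
        if (size == capacity) {                          /* array is full: grow it  */
            capacity *= 2;
            int *tmp = realloc(arr, capacity * sizeof(int));
            if (tmp == NULL) { free(arr); return 1; }
            arr = tmp;
        }
        arr[size++] = i;                                 /* append an element       */
    }

    printf("stored %d elements, capacity %d\n", size, capacity);
    free(arr);                                           /* manual deallocation     */
    return 0;
}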
EXAMPLE
Consider the creation of a binary tree node, where each node is
dynamically allocated on the heap.

#include <stdlib.h>   // for malloc() and NULL

struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

struct Node* createNode(int data) {
    struct Node* newNode = (struct Node*) malloc(sizeof(struct Node)); // Heap allocation
    newNode->data = data;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}
EXAMPLE
 malloc(sizeof(struct Node)): The function malloc() allocates a block
of memory on the heap large enough to hold a Node structure. The
size of this block is determined by sizeof(struct Node), which gives the
total size of the structure in bytes.
 Why the heap is used: The node is created using malloc so that it
persists in memory even after the function createNode returns. If the
memory for the node were allocated on the stack, it would be
automatically deallocated when the function exits, and the pointer
would become invalid.
 Manual deallocation: Since malloc() allocates memory on the heap,
you will need to call free(newNode) at some point to avoid memory
leaks when you're done using the node.
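To make the manual deallocation concrete, here is a small sketch (an addition that assumes the struct Node and createNode from the example above) which builds a three-node tree and later frees every node in post-order:

#include <stdlib.h>

/* Frees a whole tree: children first, then the node itself (post-order). */
void freeTree(struct Node* root) {
    if (root == NULL) return;
    freeTree(root->left);
    freeTree(root->right);
    free(root);                           /* releases the heap block from malloc() */
}

int main(void) {
    struct Node* root = createNode(10);   /* nodes live on the heap */
    root->left  = createNode(5);
    root->right = createNode(20);

    /* ... use the tree ... */

    freeTree(root);                       /* manual deallocation    */
    return 0;
}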
ABSTRACT DATA
TYPE
ABSTRACT DATA TYPE

An Abstract Data Type (ADT) is a theoretical
concept in computer science that defines a data
structure by its behavior (what operations can be
performed on it and how) rather than its
implementation. It abstracts away the details of how
the data is organized and focuses on the operations
that can be performed on the data.
Key Characteristics of ADT
- Encapsulation: ADTs encapsulate the data and the operations that
can be performed on the data.
- Interface vs. Implementation: The user of an ADT knows what
operations are available but does not need to know how those
operations are implemented.
- Modularity: ADTs help separate the interface from the implementation,
making the code modular and easier to maintain or change.
Common ADTs
 Stack:
- Operations: push(), pop(), top(), isEmpty()
- Behavior: Follows the Last-In-First-Out (LIFO) principle.

 Queue:
- Operations: enqueue(), dequeue(), front(), isEmpty()
- Behavior: Follows the First-In-First-Out (FIFO) principle.
Common ADTs
 Graph:
- Operations: addVertex(), addEdge(), removeVertex(), traverse()
- Behavior: A collection of vertices connected by edges, representing
relationships between objects.
 Set:
- Operations: add(), remove(), contains(), union(), intersection()
- Behavior: A collection of distinct elements with no particular order.
EXAMPLE

int main() {
    Stack s;           // Declare a Stack 's' (we assume 'Stack' is a predefined struct)
    initStack(&s);     // Initialize the stack

    push(&s, 10);      // Push the integer 10 onto the stack
    push(&s, 20);      // Push the integer 20 onto the stack
    push(&s, 30);      // Push the integer 30 onto the stack

    printf("Top element: %d\n", peek(&s));        // Peek at the top element of the stack

    printf("Popped: %d\n", pop(&s));              // Pop the top element (30) and print it
    printf("Popped: %d\n", pop(&s));              // Pop the next top element (20) and print it

    printf("Is stack empty? %d\n", isEmpty(&s));  // Check if the stack is empty (0 = false, 1 = true)

    return 0;          // Return 0 to indicate successful execution
}
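The example above assumes that Stack and its operations already exist. One possible minimal array-based sketch of those assumed functions (an illustration, not the course's official implementation); placed before main() together with #include <stdio.h>, it would make the example compile:

#define MAX_SIZE 100

typedef struct {
    int items[MAX_SIZE];
    int top;                       /* index of the top element, -1 when empty */
} Stack;

void initStack(Stack* s) { s->top = -1; }

int isEmpty(Stack* s) { return s->top == -1; }

void push(Stack* s, int value) {
    if (s->top < MAX_SIZE - 1)     /* guard against stack overflow */
        s->items[++s->top] = value;
}

int pop(Stack* s) {
    if (isEmpty(s)) return -1;     /* simplistic error value       */
    return s->items[s->top--];
}

int peek(Stack* s) {
    if (isEmpty(s)) return -1;
    return s->items[s->top];
}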
Run Time Analysis
Algorithm run time analysis is a fundamental concept in
computer science, specifically in Data Structures and
Algorithms (DSA). It involves measuring the efficiency of an
algorithm in terms of the time it takes to run relative to the
size of the input. This is critical for evaluating which
algorithm is best suited for a problem, especially when
dealing with large datasets.
Why Analyze Run Time?
 Predict performance: How will the algorithm behave as
the input size grows?
 Optimize resource use: It’s essential to minimize time
(run time) and space (memory) for efficient programs.
 Compare algorithms: It allows us to objectively
compare multiple algorithms to solve the same problem.
Types of Run Time
Analysis
 Worst-case analysis: The maximum time an algorithm will take
on any input of size n. This is the most common type of analysis
as it guarantees a time bound.
 Best-case analysis: The minimum time an algorithm will take
on any input of size n. While not very practical, it gives an idea
of the best possible scenario.
 Average-case analysis: The expected time an algorithm will
take over all possible inputs of size n. This is harder to calculate
but gives a more realistic expectation.
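As a small illustration (an assumed example, not from the slides), linear search on an array of size n shows all three cases:

/* Linear search: returns the index of key in arr[0..n-1], or -1 if absent.
   Best case:    key is at index 0          -> 1 comparison     (O(1))
   Worst case:   key is last or not present -> n comparisons    (O(n))
   Average case: key equally likely at any
                 position                   -> about n/2 checks (O(n)) */
int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key) return i;
    }
    return -1;
}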
Big-O Notation
In run time analysis, we use Big-O Notation to express the
asymptotic behavior of an algorithm, which describes how the run
time grows relative to the input size. Big-O ignores constants and
lower-order terms, focusing on the dominant factor as the input
grows larger.
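For instance (an illustrative example, not from the slides), an algorithm that performs 3n² + 5n + 20 basic operations on an input of size n is O(n²): as n grows, the 3n² term dominates, and the constant factor 3 and the lower-order terms 5n and 20 are dropped.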
Common Time
Complexities
 O(1): Constant time - The algorithm runs in constant time, regardless of input size.
Example: Accessing an element in an array.
 O(log n): Logarithmic time - The algorithm reduces the problem size logarithmically at each step.
Example: Binary search in a sorted array.
 O(n): Linear time - The algorithm's run time grows linearly with the input size.
Example: Traversing an array.
 O(n log n): Linearithmic time - The run time grows in proportion to n times log n.
Example: Merge sort, Quick sort (average case).
 O(n²): Quadratic time - The run time grows quadratically with input size.
Example: Bubble sort, Selection sort.
 O(2^n): Exponential time - The run time doubles with each addition to the input size.
Example: Solving the traveling salesman problem using brute force.
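To make two of these concrete, here is a brief C sketch (added for illustration) contrasting an O(n) linear traversal with an O(log n) binary search on a sorted array:

#include <stdio.h>

/* O(n): visits every element once. */
int sumArray(const int arr[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += arr[i];
    return sum;
}

/* O(log n): halves the search range at each step (array must be sorted). */
int binarySearch(const int arr[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (arr[mid] == key) return mid;
        else if (arr[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {2, 4, 7, 9, 15, 21};
    int n = sizeof(a) / sizeof(a[0]);
    printf("sum = %d\n", sumArray(a, n));                /* O(n)     */
    printf("index of 9 = %d\n", binarySearch(a, n, 9));  /* O(log n) */
    return 0;
}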
Common Time
Complexities
 O(1): Constant Time
Example: Checking if the first element of a list is greater than 10. Regardless of the list size,
this operation takes the same amount of time.
 O(log n): Logarithmic Time
Example: Finding a name in a phone book that's been sorted alphabetically. Each step
halves the number of names you need to consider.
 O(n): Linear Time
Example: Counting the number of apples in a basket. You have to check each apple
individually, so the time it takes grows with the number of apples.
 O(n log n): Linearithmic Time
Example: Organizing a list of names using a sorting algorithm like merge sort. The process
involves repeatedly dividing the list and sorting the smaller parts, which takes more time than
just scanning through the list.
 O(n²): Quadratic Time
Example: Checking every possible pair of students in a class to see if they are friends. If you
have a list of students, you compare each student with every other student.
 O(2^n): Exponential Time
Example: Solving a puzzle like the traveling salesman problem using brute force, where you
try every possible route. As the number of cities increases, the number of possible routes grows
exponentially.