data_structures_part-A

The document covers various topics in computer science, including algorithm analysis, data structures, and tree traversal techniques. It discusses growth rates of functions, properties of linked lists, stack operations, and the construction of binary trees from traversals. Additionally, it addresses hashing methods and their applications in efficient data retrieval.

UNIT 1

PART A

1. Order the functions by growth rate:

• Same growth rate: N log N and N log(N^2), since log(N^2) = 2 log N differs from log N only by a constant factor.

• Ascending order: log N, N, N log log N, N log N, N log^2 N, N^1.5, N^2, N^3, 2^N.

2. Analyze the growth relationship, given T1(N) = O(f(N)) and T2(N) = O(f(N)):

• T1(N) + T2(N) = O(f(N)): True. The sum of two functions in O(f(N)) is also in O(f(N)).

• T1(N) - T2(N) = o(f(N)): False. The difference is not guaranteed to grow more slowly than f(N); for example, T1(N) = 2f(N) and T2(N) = f(N) leave a difference of f(N).

3. Prove log^k N = o(N):

As N → ∞, (log^k N) / N → 0 (apply L'Hôpital's rule k times), showing that log^k N grows much more slowly than N for any constant k.

4. Find f(N) and g(N) such that neither is O of the other:

Choose f(N) = N^2 sin(N) and g(N) = N log N. Because sin(N) comes arbitrarily close to both 0 and 1 infinitely often, f(N) alternates between far below and far above g(N), so neither function dominates the other.

5. Algorithm to check whether A[i] = i for some i:

• Use binary search on the sorted array A[1] < A[2] < ... < A[N].

• Compare A[mid] with mid: on a match, report mid; if A[mid] < mid, search the right half, otherwise the left half.

• Running time: O(log N). A sketch follows.
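
A minimal Python sketch of this search, assuming a 0-indexed array of distinct integers (so A[i] - i is nondecreasing and binary search is valid):

def find_fixed_point(A):
    # Return an index i with A[i] == i, or -1 if none exists. O(log N).
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if A[mid] == mid:
            return mid
        elif A[mid] < mid:
            lo = mid + 1   # any fixed point must lie to the right
        else:
            hi = mid - 1   # A[mid] > mid: look to the left
    return -1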

6. Check if N is prime:

def is_prime(N):
    if N <= 1: return False
    # trial division by every candidate up to sqrt(N)
    for i in range(2, int(N**0.5) + 1):
        if N % i == 0: return False
    return True

7. Recursive Fibonacci Algorithm:

def fibonacci(n):
    if n <= 1: return n
    # exponential-time recursion: each call spawns two smaller calls
    return fibonacci(n-1) + fibonacci(n-2)

8. Fast Exponentiation without recursion:

def fast_exp(base, exp):
    result = 1
    while exp > 0:
        if exp % 2 == 1:      # low bit set: fold current power into result
            result *= base
        base *= base          # square for the next bit
        exp //= 2
    return result

9. Multiplications in fast exponentiation:

• The binary representation of N has ⌊log2 N⌋ + 1 bits.

• Multiplications: at least ⌊log2 N⌋ (one squaring per bit) and at most 2⌊log2 N⌋ (an extra multiplication for each 1 bit).

10. Fixed-size integers in computation:

Assuming fixed-size integers keeps every arithmetic operation constant time, which makes running-time analysis simple and predictable; arbitrarily large integers would make basic operations cost more than O(1).

11. Euclid's Algorithm:

Compute gcd(m, n):

def gcd(m, n):
    while n != 0:
        m, n = n, m % n   # replace (m, n) with (n, m mod n)
    return m

12. Mathematical definitions for algorithms:


Definitions include Big-O notation, asymptotic growth, logarithms, and modular arithmetic.

13. Define model:


A model is a simplified representation of a system to analyze or predict its behavior.

14. Rules for algorithm analysis:

• Dominant term determines complexity.

• Constant factors are ignored.

• Worst-case input defines upper bounds.

15. Binary search with one comparison per iteration:

At each step, compute the midpoint and make a single < comparison against A[mid], deferring the equality test until the interval shrinks to one element. Still O(log N) comparisons. A sketch follows.
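
A minimal Python sketch of the one-comparison-per-iteration variant (a lower-bound search with the equality test deferred to the end):

def binary_search_one_compare(A, target):
    lo, hi = 0, len(A) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if A[mid] < target:     # the only comparison in the loop
            lo = mid + 1
        else:
            hi = mid
    if lo == hi and A[lo] == target:   # single deferred equality test
        return lo
    return -1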

16. Area and Circumference of Circle:

import math

def circle_geometry(radius):
    area = math.pi * radius**2               # pi * r^2
    circumference = 2 * math.pi * radius     # 2 * pi * r
    return area, circumference

17. Measure running time of an algorithm:

Count the number of basic operations (e.g., additions, comparisons) and express the count as a function of the input size N.

18. Define algorithm with example:


An algorithm is a step-by-step procedure to solve a problem.
Example: Sorting a list using bubble sort.

19. Running time for nested loops:

If there are k nested loops, each iterating N times, the running time is O(N^k).
20. Running time for if-else statements:
The cost is the condition test plus the larger of the two branches. For a single check: O(1).
Example:

if x > 10:
    print("Big")    # O(1)
else:
    print("Small")  # O(1)

UNIT 2

PART A

1. Differentiate overflow and underflow in linear lists:

• Overflow: Adding to a full list exceeds memory limits.

• Underflow: Removing from an empty list causes failure.


Example: Array size is 5; inserting the 6th element causes overflow.

2. Why do we need FindPrevious() in a linked list?


FindPrevious() helps locate the predecessor node for operations like deletion, as linked lists lack
direct backward navigation (except in doubly linked lists).

3. Abstract Data Type (ADT):


ADT defines data and operations abstractly without implementation details. Example: Stack supports
push, pop, and peek operations irrespective of internal data storage.

4. Deleting 97 in an array-based list:

After deleting 97 from [27, 12, 19, 97, 77], the later element shifts left:

• 4th location (0-based): undefined (empty).

• 3rd location: 77, as cur_pos decrements to 3.

Array: [27, 12, 19, 77, _].

5. Polynomial representation using a linked list:

Each node stores a term's coefficient and exponent. Example for 6x^3 + 9x^2 + 7x + 1:

[6, 3] -> [9, 2] -> [7, 1] -> [1, 0]
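
A minimal Python sketch of this representation, using a hypothetical TermNode class:

class TermNode:
    # one term of the polynomial: coefficient, exponent, next term
    def __init__(self, coeff, exp, next=None):
        self.coeff = coeff
        self.exp = exp
        self.next = next

def evaluate(poly, x):
    # walk the chain, summing coeff * x^exp for each term
    total, node = 0, poly
    while node is not None:
        total += node.coeff * x ** node.exp
        node = node.next
    return total

# 6x^3 + 9x^2 + 7x + 1
p = TermNode(6, 3, TermNode(9, 2, TermNode(7, 1, TermNode(1, 0))))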


6. Differentiate Linear Array and Linked List:

Feature Linear Array Linked List

Memory allocation Fixed (static). Dynamic (nodes).

Size flexibility Inflexible. Flexible.

Insertion/Deletion Expensive (shift). Easier (adjust pointers).

7. Advantages of ADT:

• Abstracts implementation details.

• Promotes code reusability and modularity.

• Simplifies debugging and optimization.


Example: Using stack ADT simplifies implementing DFS.

8. Primitive vs Non-Primitive Data Structures:

Feature Primitive Non-Primitive

Nature Basic (int, char). Derived (list, graph).

Complexity Simple. Complex relationships.

9. Circular Linked List:


In a circular linked list, the last node points back to the first, forming a loop. It efficiently supports
operations requiring circular traversal.

10. Basic linked list operations:

• Insertion: Add nodes at head, tail, or position.

• Deletion: Remove specified nodes.

• Traversal: Visit all nodes sequentially.

• Search: Find a specific value.

11. Polynomial p(x) = 4x^3 + 6x^2 + 7x + 9:

[4, 3] -> [6, 2] -> [7, 1] -> [9, 0]

12. Differentiate calloc() and realloc():


Function Purpose Use in Linked List

calloc() Allocates zeroed memory. Initialize empty nodes.

realloc() Resizes existing memory. Expand list dynamically.

13. Linear vs Circular Linked List:

Feature Linear Circular

End pointer NULL. Points to head node.

Use case Sequential traversal. Repeated traversal.

14. Structure of singly and doubly linked lists:

• Singly Linked List:

[data | next] -> [data | next] -> NULL

• Doubly Linked List:

[prev | data | next] <-> [prev | data | next]
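
As a minimal Python sketch, the two node shapes differ only in the extra backward link:

class SinglyNode:
    def __init__(self, data):
        self.data = data
        self.next = None      # forward link only

class DoublyNode:
    def __init__(self, data):
        self.data = data
        self.prev = None      # backward link
        self.next = None      # forward link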

15. Advantages and disadvantages of linked lists:


Advantages: Dynamic memory, efficient insert/delete.
Disadvantages: Higher memory (pointers), slower search.
Example: Linked lists excel in implementing stacks/queues.

16. Operations on data structures:

• Insertion: Add elements.

• Deletion: Remove elements.

• Search: Locate values.

• Traversal: Visit all nodes.

• Sort: Rearrange elements.

17. Abstract Data Type (ADT):


ADT defines operations and properties of data without focusing on implementation. Example: Queue
ADT supports enqueue, dequeue, and peek.
18. Why is doubly linked list more useful?
Allows traversal in both directions, simplifies deletion, and supports bidirectional operations
efficiently. Example: Browser history navigation.

19. Advantages of circular linked list:

• Eliminates NULL pointers, simplifying traversal.

• Efficient for circular tasks (e.g., buffer management).

• Dynamically resizable, unlike arrays.

20. Linked list for student details:


Each node stores student information (e.g., name, ID):

[Name, ID, next] -> [Name, ID, next] -> NULL

UNIT 3

PART A

1. Stacks for symbol balancing:

A stack pushes opening symbols ((, [) and pops them when the matching closing symbol arrives. If a closing symbol does not match the top of the stack, an error is flagged. Example: ((A+B)+[C-D]] is unbalanced; the corrected form is ((A+B)+[C-D]). A sketch follows.
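
A minimal Python sketch of the balancing check:

def is_balanced(expr):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)               # push opening symbols
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False               # unmatched closing symbol
    return not stack                       # leftover openers = unbalanced

print(is_balanced("((A+B)+[C-D]]"))  # False
print(is_balanced("((A+B)+[C-D])"))  # True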

2. Applications of stacks:

• Expression evaluation.

• Backtracking (e.g., maze solving).

• Syntax parsing.

• Undo operations in text editors.

3. MaxHeap level order after inserting 11 and 7:

Initially (level order): [10, 8, 5, 3, 2].
Inserting 11: it is appended and percolates up past 5 and then 10, giving [11, 8, 10, 3, 2, 5].
Inserting 7: it is appended as a child of 10 and stays put, giving [11, 8, 10, 3, 2, 5, 7].

4. Define Push, Pop, Top in Stack ADT:

• Push: Adds an element to the stack.

• Pop: Removes the top element.

• Top: Retrieves the top element without removing.


5. FIFO vs LIFO:

Feature FIFO (Queue) LIFO (Stack)

Order First In, First Out. Last In, First Out.

Use case Scheduling tasks. Undo operations.

6. ENQUEUE vs DEQUEUE:

Operation ENQUEUE DEQUEUE

Definition Insert into queue. Remove from queue.

End involved Rear. Front.

7. Advantage of circular queue:


Circular queues reuse space by connecting the rear to the front, making them efficient for fixed-size
buffers.

8. Prefix to postfix:

• ++A*BCD → ABC*+D+

• +*AB*CD → AB*CD*+

9. Infix to postfix conversion:

Infix: a + b * (c^d - e)^(f + g*h) - i
Postfix: abcd^e-fgh*+^*+i-

10. Applications of stack:

• Parentheses matching.

• Function call tracking.

• Depth-First Search.

• Expression conversion (infix to postfix).

11. Purpose of Top and Pop:

• Top: Retrieves stack's top element for viewing.

• Pop: Removes the top element after use.


12. Causes of stack underflow:
Occurs when Pop is called on an empty stack. Avoid by checking if the stack is empty before removal.

13. Rules for infix to postfix conversion (sketched in code below):

• Operands: Append directly to the output.

• Operators: Pop higher-precedence (and, for left-associative operators, equal-precedence) operators off the stack, then push.

• Parentheses: Push the opening parenthesis; on a closing one, pop until the opening parenthesis is removed.
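
A minimal Python sketch of these rules (shunting-yard, single-character operands, with ^ treated as right-associative):

def infix_to_postfix(expr):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
    out, stack = [], []
    for t in expr:
        if t.isalnum():                        # operand: append directly
            out.append(t)
        elif t == '(':
            stack.append(t)
        elif t == ')':
            while stack[-1] != '(':            # pop until the opening '('
                out.append(stack.pop())
            stack.pop()
        else:                                  # operator
            while (stack and stack[-1] != '(' and
                   (prec[stack[-1]] > prec[t] or
                    (prec[stack[-1]] == prec[t] and t != '^'))):
                out.append(stack.pop())
            stack.append(t)
    while stack:
        out.append(stack.pop())
    return ''.join(out)

print(infix_to_postfix("a+b*(c^d-e)^(f+g*h)-i"))  # abcd^e-fgh*+^*+i-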

14. Define double-ended queue:


A deque allows insertion and deletion at both ends, supporting flexible access compared to a
standard queue.

15. Applications of a queue:

• Process scheduling.

• Printing tasks.

• Message queues.

• Simulation of real-world queues.

16. Array vs Stack:

Feature Array Stack

Access Random access. Sequential (LIFO).

Use case General data storage. Temporary storage.

17. Circular queue vs Linear queue:


Circular queue uses space efficiently by reusing front-rear linkage, while a linear queue wastes space
after deletions.

18. Priority queue and applications:


A priority queue organizes elements by priority. Applications: Task scheduling, Dijkstra's algorithm,
and Huffman coding.

19. Algorithm to delete from a circular queue:

• Check for underflow (empty queue).

• Remove the front element.

• Update the front pointer: front = (front + 1) % size. (Sketch below.)
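
A minimal Python sketch of a circular queue with the deletion step above (the class layout here is just one possible representation):

class CircularQueue:
    def __init__(self, size):
        self.data = [None] * size
        self.front = 0
        self.count = 0

    def dequeue(self):
        if self.count == 0:                # underflow check
            raise IndexError("queue underflow")
        item = self.data[self.front]
        self.front = (self.front + 1) % len(self.data)
        self.count -= 1
        return item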

20. Queue classification:

• Linear queue: Basic FIFO.

• Circular queue: Efficient space reuse.

• Priority queue: Organized by priority.

• Deque: Double-ended queue.

UNIT 4

PART A

1. Binary tree from traversals (Inorder: 42513, Preorder: 12453):

Constructed tree:

        1
       / \
      2   3
     / \
    4   5

A reconstruction sketch follows.
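
A minimal Python sketch of the reconstruction (the nested-tuple tree shape is just for illustration):

def build_tree(preorder, inorder):
    # the first preorder symbol is the root; it splits the inorder
    # sequence into left-subtree and right-subtree symbols
    if not preorder:
        return None
    root = preorder[0]
    k = inorder.index(root)
    left = build_tree(preorder[1:k+1], inorder[:k])
    right = build_tree(preorder[k+1:], inorder[k+1:])
    return (root, left, right)

print(build_tree("12453", "42513"))
# ('1', ('2', ('4', None, None), ('5', None, None)), ('3', None, None))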

2. Expression tree for (2x + y)(5a - b)^3:

          (*)
         /   \
       (+)   (^)
       / \   / \
     2x   y (-)  3
            / \
          5a   b

3. Full binary trees among 8, 15, 13, 14 nodes:
Only 15 nodes can form a full binary tree, because such a tree always has 2^h - 1 nodes (e.g., 15 for h = 4).

4. Number of different trees with 10 nodes:

Catalan number for n = 10:
C_10 = (1/11) * C(20, 10) = 16796

5. Tree traversals and types:

• Inorder: Left, Root, Right.

• Preorder: Root, Left, Right.

• Postorder: Left, Right, Root.

• Level-order: Breadth-first traversal.

6. Binary tree (Inorder: DBEAFGC, Preorder: ABDECFG):

Constructed tree:

        A
       / \
      B   C
     / \  /
    D   E F
           \
            G

7. Hashing:
Hashing maps keys to values using a hash function for efficient data retrieval, often used in
hash tables.

8. Rehashing with quadratic probing:

Quadratic probing handles collisions by trying indices (hash + i^2) % size for i = 1, 2, .... Example: a sketch of inserting colliding keys follows.
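
A minimal Python sketch, assuming open addressing with h(k) = k mod size and None marking an empty slot:

def insert_quadratic(table, key):
    size = len(table)
    h = key % size
    for i in range(size):                  # i = 0 is the home slot
        idx = (h + i * i) % size
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("no free slot found along the probe sequence")

table = [None] * 10
for k in (5, 15, 25):                      # all hash to index 5
    print(insert_quadratic(table, k))      # 5, 6, 9
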
9. Extendible hashing:
Uses a directory to manage buckets dynamically. It prevents overflow by splitting buckets
based on keys' binary hash prefixes.

10. Types of collision resolution:

• Open addressing.

• Separate chaining.

• Quadratic probing.

• Double hashing.

11. Best hashing technique:

Separate chaining handles collisions simply and with predictable memory behavior. Example: with h(k) = k mod 10 and table size 10, keys 5 and 15 both hash to index 5 and are stored in the same chain.

12. Advantages of open addressing:


Efficient memory usage, no need for linked structures, and better cache performance.

13. Separate chaining vs linear probing:

Method Advantage Disadvantage

Separate Chaining Unlimited bucket capacity. Extra memory for chains.

Linear Probing Compact storage. Clustering issues.

14. Binary search complexity:

• Best case: O(1).

• Average/worst case: O(log n).

15. Tree depth:


The depth of a tree is the number of edges in the longest path from the root to a leaf.

16. Define tree and applications:


A tree is a hierarchical data structure with nodes connected by edges. Applications: File
systems, routing, decision trees.
17. Binary tree memory representation:

1. Array: Index-based.

2. Linked list: Pointers for child nodes.


Prefer linked list for dynamic size management.

18. Operations on binary search trees:

• Insertion.

• Deletion.

• Search.

• Traversal (inorder, preorder, postorder).

19. Searching in a hash table:

1. Compute the hash value.

2. Check the corresponding bucket.

3. If collision, resolve (e.g., probing or chaining).

20. Define hash table and hashing:

• Hash table: Data structure for storing key-value pairs.

• Hashing: Technique to convert a key into an index for quick access.

UNIT 5

PART A

1. Complexity of Binary Search:

• Best case: O(1) (key found at the middle).

• Worst/average case: O(log n) (halving the array repeatedly).

2. Types of Searching:

1. Linear search.

2. Binary search.

3. Interpolation search.

4. Exponential search.
5. Hash-based search.

3. Fastest searching algorithm:

Binary search is the fastest for sorted data (O(log n)) due to repeated halving, making it ideal for large datasets.

4. Linear vs Binary Search:

Aspect            Linear Search            Binary Search

Data requirement  Unsorted data allowed.   Requires sorted data.

Complexity        O(n).                    O(log n).

5. Types of Sorting Methods:

1. Comparison-based: Bubble sort, Quick sort, Merge sort.

2. Non-comparison-based: Radix sort, Counting sort.

6. Quick Sort (3, 1, 4, 1, 5, 9, 2, 6, 5):

Sorted array: [1, 1, 2, 3, 4, 5, 5, 6, 9]. The pivot chosen at each step partitions the data recursively (sketch below).
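
A minimal Python sketch (first element as pivot; the partition scheme is chosen for clarity, not the in-place textbook version):

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot, rest = arr[0], arr[1:]
    smaller = [x for x in rest if x <= pivot]   # partition step
    larger = [x for x in rest if x > pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6, 5]))
# [1, 1, 2, 3, 4, 5, 5, 6, 9]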

7. Insertion Sort (3, 1, 4, 1, 5, 9, 2, 6, 5):


Sorted array: [1, 1, 2, 3, 4, 5, 5, 6, 9]. Elements are shifted to insert in the correct position.

8. Merge Sort (3, 1, 4, 1, 5, 9, 2, 6, 5):

Sorted array: [1, 1, 2, 3, 4, 5, 5, 6, 9]. The array is split recursively and merged in sorted order (sketch below).
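
A minimal Python sketch of the split-and-merge idea:

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append the leftover tail

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6, 5]))
# [1, 1, 2, 3, 4, 5, 5, 6, 9]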

9. Worst-case complexity of QuickSort:

The worst case occurs when the pivot repeatedly splits the array into sizes n - 1 and 0, leading to O(n^2).

10. Quick Sort on (7, 11, 14, 6, 9, 4, 3, 12):


Sorted array: [3, 4, 6, 7, 9, 11, 12, 14]. Pivot divides and sorts recursively.

11. Define Searching:


Searching is the process of finding the position of a specific element in a collection of elements.
12. Heapsort elements in O(log n):
No; heapsort cannot sort in O(log n) time. Sorting n elements with heapsort always takes O(n log n).

13. Worst-case complexity of HeapSort:

The worst-case time complexity is O(n log n) due to repeated heapification (sketch below).
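
A minimal Python sketch using the standard-library heap (a min-heap built in O(n), followed by n pops at O(log n) each, matching the O(n log n) bound):

import heapq

def heap_sort(arr):
    heap = list(arr)
    heapq.heapify(heap)               # O(n) build-heap phase
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_sort([7, 11, 14, 6, 9, 4, 3, 12]))
# [3, 4, 6, 7, 9, 11, 12, 14]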

14. Insertion Sort Function (C):

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i], j = i - 1;
        /* shift larger elements one slot to the right */
        while (j >= 0 && arr[j] > key)
            arr[j + 1] = arr[j--];
        arr[j + 1] = key;   /* drop key into the gap */
    }
}

15. Factors for choosing sorting algorithms:

• Data size.

• Memory availability.

• Stability requirement.

• Complexity constraints.

• Data pre-sorting.

16. Running time of Insertion Sort (all keys equal):

O(n), as no elements are shifted; the keys are already in position.

17. Running time of HeapSort (presorted input):

Always O(n log n), as it builds a heap and performs n deletions regardless of presorted input.
18. Pivot element in QuickSort:
An element chosen to partition the array into smaller and larger elements.

19. Define MergeSort:

A divide-and-conquer algorithm that recursively divides the array and merges the sorted subarrays. Complexity: O(n log n).

20. Heapsort vs Insertion Sort:

Aspect            HeapSort                  Insertion Sort

Complexity        O(n log n).               O(n^2).

Data requirement  Suitable for large data.  Efficient for small data.
