DS CH 1 PDF
2. Display the maximum element of an array of n elements, e.g.:
   how many elements? 5
   enter elements -> 20 10 100 50 5 => 100 is the maximum
3. Addition of two matrices ([2][2], [3][3]); accept the number of rows and
   columns, e.g. 3 => [3][3].
4. Sort the elements of a 1-D array => 5 10 20 ....
5. Accept n elements from the user and display them; accept m x n elements from
   the user and display them, using a function.
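A minimal C sketch of exercise 2, using the sample values from the prompt above (the function name is my own):

```c
#include <stdio.h>

/* Exercise 2: return the maximum of the n elements of a[]. */
int max_element(const int a[], int n)
{
    int max = a[0];                 /* assume the first element is largest */
    for (int i = 1; i < n; i++)
        if (a[i] > max)             /* found a larger element: remember it */
            max = a[i];
    return max;
}
```

For the sample input 20 10 100 50 5 this returns 100; reading the elements with scanf and calling the function also covers exercise 5's "using function" requirement.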
As applications are getting more complex and the amount of data is increasing day
by day, the following problems may arise:
Processor speed: Handling a very large amount of data requires high-speed
processing, but as data grows day by day into billions of records per entity, the
processor may fail to deal with that much data.
Data Search: Consider an inventory of 10^6 items in a store. If our application
needs to search for a particular item, it may have to traverse all 10^6 items
every time, which slows down the search process.
To solve the above problems, data structures are used. Data is organized into a
data structure in such a way that not all items need to be searched and the
required data can be found quickly.
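For instance, if the inventory is kept sorted, a binary search inspects only about log2(10^6) ≈ 20 items instead of up to 10^6 (a minimal sketch; the item values are illustrative):

```c
/* Binary search on a sorted array: O(log n) probes instead of O(n). */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint without (lo+hi) overflow */
        if (a[mid] == key)
            return mid;                 /* found: return its index */
        else if (a[mid] < key)
            lo = mid + 1;               /* key lies in the upper half */
        else
            hi = mid - 1;               /* key lies in the lower half */
    }
    return -1;                          /* not found */
}
```

Each probe halves the remaining search range, which is exactly why organizing the data (here: keeping it sorted) avoids traversing every item.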
Data: Data can be defined as an elementary value or a collection of values; for
example, a student's name and ID are data about the student.
Data Structure: A data structure is a particular way of organizing data so that it
can be used efficiently.
Linear Data Structures: The data elements form a sequence, e.g. 10 20 40 50 100 30
Non-Linear Data Structures: This data structure does not form a sequence, i.e.
each item or element may be connected with two or more other items in a non-linear
arrangement; the data elements are not arranged in a sequential structure.
Characteristics of Algorithms
Time Complexity: The amount of time required by the algorithm is called time
complexity.
Space Complexity: The amount of space / memory required by the algorithm is
called space complexity.
In this function, the n^2 term dominates the function when n gets sufficiently
large.
N    2^N    N!
1    2      1
2    4      2
3    8      6
4    16     24
5    32     120
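The table values can be checked with two small helper functions (a sketch; the function names are my own):

```c
/* 2 raised to the power n, for small n. */
long pow2(int n)
{
    long p = 1;
    while (n-- > 0)
        p *= 2;
    return p;
}

/* n factorial, for small n. */
long fact(int n)
{
    long f = 1;
    for (int i = 2; i <= n; i++)
        f *= i;
    return f;
}
```

pow2(5) gives 32 and fact(5) gives 120, matching the last row; note that the factorial column overtakes the power column from N = 4 onward.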
Asymptotic notation:
The word Asymptotic means approaching a value or curve arbitrarily closely (i.e., as
some sort of limit is taken).
Asymptotic analysis
Classification of Algorithms
o Constant Complexity:
It imposes a complexity of O(1). It undergoes an execution of a constant
number of steps like 1, 5, 10, etc. for solving a given problem. The count of
operations is independent of the input data size.
o Logarithmic Complexity:
It imposes a complexity of O(log(N)). It undergoes the execution of the order
of log(N) steps. To perform operations on N elements, it often takes the
logarithmic base as 2.
For N = 1,000,000, an algorithm that has a complexity of O(log(N)) would undergo
about 20 steps, since log2(1,000,000) ≈ 20. The choice of logarithmic base does
not change the order of the operation count (bases differ only by a constant
factor), so it is usually omitted.
o Linear Complexity:
It imposes a complexity of O(N): the number of steps grows in direct proportion
to the input size N.
These classes are ordered by growth rate:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)
Constant <= Logarithmic <= Polynomial <= Exponential <= Factorial
1. Big-oh notation: Big-oh is the formal method of expressing the upper bound of
an algorithm's running time. It is a measure of the longest amount of time the
algorithm can take. The function f(n) = O(g(n)) [read as "f of n is big-oh of g of
n"] if and only if there exist positive constants c and n0 such that
f(n) <= c * g(n) for all n >= n0.
Hence, the function g(n) is an upper bound for the function f(n), as c * g(n)
grows at least as fast as f(n).
For Example:
1. 3n+2 = O(n), as 3n+2 <= 4n for all n >= 2
2. 3n+3 = O(n), as 3n+3 <= 4n for all n >= 3
For Example (a lower bound, f(n) = Ω(g(n))):
f(n) = 8n^2 + 2n - 3 >= 8n^2 - 3
     = 7n^2 + (n^2 - 3) >= 7n^2 for all n >= 2, with g(n) = n^2
Thus, k1 = 7 and n0 = 2, so f(n) = Ω(n^2).
For Example:
3n+2 = θ(n), as 3n <= 3n+2 <= 4n for all n >= 2;
k1 = 3, k2 = 4, and n0 = 2
Example1:
In the first example, we have an integer i and a for loop running from i equals 1 to n.
Now the question arises, how many times does the name get printed?
A()
{
  int i;
  for (i = 1 to n)
    printf("Edward");
}
Since i runs from 1 to n, the above program will print "Edward" n times. Thus,
the complexity will be O(n).
Example2:
A()
{
  int i, j;
  for (i = 1 to n)
    for (j = 1 to n)
      printf("hello");
}
In this case, the outer loop runs n times, and for each of those iterations the
inner loop also runs n times. Thus, the time complexity will be O(n^2).
Example3:
A()
{
  i = 1; S = 1;
  while (S <= n)
  {
    i++;
    S = S + i;
    printf("Edward");
  }
}
As we can see from the above example, we have two variables, i and S, and then we
have while (S <= n), which means S starts at 1 and the loop stops as soon as the
value of S becomes greater than n.
Here i increments in steps of one, while S increments by the current value of i;
that is, the growth of i is linear, but the growth of S depends on i.
Initially:
i = 1, S = 1
i = 2, S = 3
Since we don't know the value of n, let us suppose the loop runs until i reaches
some value k. The value of S keeps increasing: for i=1, S=1; i=2, S=3; i=3, S=6;
i=4, S=10; ...
Thus, S is nothing but the sum of the first i natural numbers, so by the time i
reaches k, the value of S will be k(k+1)/2.
To stop the loop, k(k+1)/2 has to be greater than n; solving k(k+1)/2 > n gives
k ≈ √(2n). Hence, it can be concluded that we get a complexity of O(√n) in this
case.
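The analysis can be confirmed by counting iterations directly (a sketch; the function name is my own):

```c
/* Count how many times the loop body of Example 3 executes for a given n. */
int count_steps(int n)
{
    int i = 1, S = 1, steps = 0;
    while (S <= n) {
        i++;
        S = S + i;   /* S runs through the triangular numbers 1, 3, 6, 10, ... */
        steps++;
    }
    return steps;
}
```

count_steps(100) returns 13, and √(2·100) ≈ 14.1, so the iteration count indeed grows like √n rather than n.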
Cost of each statement (including the increment of the loop counter):
1 max = A[0]                            2
2 for i = 1 to n-1 do                   n
3   if (A[i] > max) then max = A[i]     2(n-1)
4 return max                            1
Total: 2 + n + 2(n-1) + 1 = 3 + n + 2n - 2 = 3n + 1
Dropping the constant +1, the count grows as 3n, i.e. the running time is O(n).
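An instrumented version that tallies the same costs (my own sketch of the accounting above):

```c
/* Tally the elementary-operation counts for finding the maximum:
   2 for the initialization, n loop-condition tests,
   2(n-1) for the body (comparison + counter increment), 1 for the return. */
int count_ops(const int A[], int n)
{
    int ops = 0;
    int max = A[0];
    ops += 2;                         /* line 1: index + assign            */
    for (int i = 1; i <= n - 1; i++) {
        ops += 1;                     /* loop-condition test (true)        */
        ops += 2;                     /* line 3: compare + counter increment */
        if (A[i] > max)
            max = A[i];
    }
    ops += 1;                         /* final loop-condition test (false) */
    ops += 1;                         /* line 4: return                    */
    return ops;
}
```

For any n this totals 2 + 3(n-1) + 1 + 1 = 3n + 1, matching the hand count.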
Example1:
A(n)
{
  if (n > 1)
    return (A(n-1));
}
Solution:
Here we will use the simple back substitution method to solve the above problem.
Each call does a constant amount of work and makes one recursive call, giving the
recurrence T(n) = 1 + T(n-1) for n > 1 ... (1)
According to Eqn. (1), the algorithm keeps calling itself as long as n > 1.
Basically, n starts from a very large number and decreases by one on each call.
When n reaches 1 the recursion stops; such a terminating condition is called the
anchor condition, base condition, or stopping condition.
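The back substitution itself, assuming T(1) = 1, can be written out as:

```latex
\begin{aligned}
T(n) &= 1 + T(n-1) \\
     &= 1 + [\,1 + T(n-2)\,] = 2 + T(n-2) \\
     &= 3 + T(n-3) \\
     &\;\;\vdots \\
     &= k + T(n-k).
\end{aligned}
```

Choosing k = n - 1 so that T(n - k) = T(1) = 1 gives T(n) = (n - 1) + 1 = n, i.e. T(n) = O(n).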
TOH(n, x, y, z)
{
  if (n >= 1)
  {
    // move (n-1) disks from x to z by using y
    TOH(n-1, x, z, y)
    // move the nth disk from x to y
    // move (n-1) disks from z to y by using x
    TOH(n-1, z, y, x)
  }
}
After Generalization:
T(n) = 2T(n-1) + 1, which solves to T(n) = 2^n - 1, i.e. O(2^n).
https://www.baeldung.com/cs/towers-of-hanoi-complexity
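A runnable version that also counts the moves (a sketch; peg parameter names follow the pseudocode):

```c
#include <stdio.h>

/* Move n disks from peg x to peg y using peg z as auxiliary.
   Returns the number of moves; the recurrence T(n) = 2T(n-1) + 1
   solves to 2^n - 1. */
long toh(int n, char x, char y, char z)
{
    if (n < 1)
        return 0;
    long moves = toh(n - 1, x, z, y);          /* (n-1) disks: x -> z via y */
    printf("move disk %d from %c to %c\n", n, x, y);
    moves += 1;                                /* nth disk: x -> y */
    moves += toh(n - 1, z, y, x);              /* (n-1) disks: z -> y via x */
    return moves;
}
```

toh(3, 'x', 'y', 'z') prints the 7 moves for three disks, and 7 = 2^3 - 1 as the generalization predicts.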
Space Complexity:
The space complexity of a program is the amount of memory it needs to run to
completion. The space needed by a program has the following components:
Instruction space: Instruction space is the space needed to store the compiled
version of the program instructions.
Data space: Data space is the space needed to store all constant and variable
values. Data space has two components:
• Space needed by constants and simple variables.
• Space needed by component variables whose size depends on the particular problem
instances.
Environment stack space: The environment stack is used to save information
needed to resume execution of partially completed functions.
Instruction Space: The amount of instruction space that is needed depends on
factors such as:
• The compiler used to compile the program into machine code.
• Algorithm abc(a, b, c)
{
  return a + b + b*c + (a + b - c)/(a + b) + 4.0;
}
The space needed by each of these algorithms is seen to be the sum of the
following components:
1. A fixed part that is independent of the characteristics (e.g. number, size) of
the inputs and outputs. This part typically includes the instruction space (i.e.
space for the code), space for simple variables and fixed-size component variables
(also called aggregates), space for constants, and so on.
2. A variable part that consists of the space needed by component variables whose
size depends on the particular problem instance being solved, the space needed by
referenced variables (to the extent that it depends on instance characteristics),
and the recursion stack space.
The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics), where 'c' is a constant.
Example 2:
Algorithm sum(a, n)
{
  s = 0.0;
  for i = 1 to n do
    s = s + a[i];
  return s;
}
The problem instances for this algorithm are characterized by n, the number of
elements to be summed.
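The algorithm as C, with the space accounting from the text recorded in a comment (a sketch; the one-word-per-variable model is the usual textbook assumption):

```c
/* Algorithm sum: add the n elements of a[].
   Space: n words for a[], plus one word each for s, i and n,
   so Ssum(n) >= n + 3. */
float sum(const float a[], int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s = s + a[i];
    return s;
}
```

Only the array's n words vary with the instance, which is exactly the Sp(instance characteristics) part of S(P) = c + Sp.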