AnshulPadiyar ML Lab File
B. Tech. VI Semester
Submitted to: Prof. Nidhi Nigam, CSIT Dept.
Submitted by: Aayush Ojha (0827CI201004)
ACROPOLIS INSTITUTE OF TECHNOLOGY & RESEARCH, INDORE
Certificate
This is to certify that the experimental work entered in this journal, as per the
B.Tech. III year syllabus prescribed by RGPV, was done by Mr. Aayush Ojha,
B.Tech. VI semester (CI), in the Machine Learning Laboratory of this institute during
the academic year Jan-June 2023.
Signature of Faculty
INDEX PAGE
EXPERIMENT-1
Aim: Python basic programming, including Python data structures such as List, Tuple, Strings,
Dictionary and Lambda Functions; Python classes and objects; and Python libraries such as NumPy,
Pandas, Matplotlib etc.
What is Python?
Python is a high-level, general-purpose, interpreted, object-oriented programming language. The biggest
strength of Python is its huge collection of standard libraries, which can be used for the following:
Machine Learning
GUI Applications (like Kivy, Tkinter, PyQt etc. )
Web frameworks like Django (used by YouTube, Instagram, Dropbox)
Image processing (like OpenCV, Pillow)
Web scraping (like Scrapy, BeautifulSoup, Selenium)
Test frameworks
Multimedia
Scientific computing
Text processing
Python Keywords
Keywords are the reserved words in Python. We cannot use a keyword as a variable name, function name
or any other identifier.
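The full list of keywords can be printed from the standard keyword module; a minimal sketch:
import keyword

# Print all reserved words in the current Python version
print(keyword.kwlist)
print(len(keyword.kwlist), 'keywords')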
Indentation in Python
Whitespace is used for indentation in Python. Unlike many other programming languages, in which
whitespace serves only to make the code easier to read, indentation in Python is mandatory. One can
understand it better by looking at an example of indentation in Python. A block is a combination of
statements; it can be regarded as the grouping of statements for a specific purpose. Most programming
languages like C, C++, and Java use braces { } to define a block of code. One of the distinctive
features of Python is its use of indentation to mark blocks of code.
branch = 'CSIT'
if branch == 'CSIT':
    print('Welcome to CSIT')
else:
    print('Other branch')
print('All set !')
Comments in Python
Python comments start with the hash symbol # and continue to the end of the line. Comments in
Python are useful information that developers provide to help the reader understand the source
code. They explain the logic, or a part of it, used in the code. Comments are usually helpful to
someone maintaining or enhancing your code when you are no longer around to answer questions about
it. There are two types of comments:
1. Single-line comment
2. Multi-line comment
# Aayush Ojha
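A short sketch of both styles:
# This is a single-line comment

"""
This is a multi-line string; when it is not assigned
to a variable, it is commonly used as a
multi-line comment.
"""
x = 10  # inline comment after a statement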
In the Python programming language, the types of conditional (control flow) statements are as follows:
1. The if statement
2. The if-else statement
3. The nested-if statement
4. The if-elif-else ladder
# Aayush Ojha
# if-else example
num = 100
if num == 100:
    print('num is 100')
else:
    print('num is not 100')

# nested if-else
if num <= 100:
    if num <= 50:
        print("number is between 0 to 50")
    else:
        print("number is between 50 to 100")
else:
    print('number is above 100')

# if-elif-else
if num == 100:
    print("number is 100")
elif num == 200:
    print('number is 200')
elif num == 300:
    print('number is 300')
else:
    print('number is something else')
LIST
Python lists are just like the dynamically sized arrays declared in other languages (vector in C++ and
ArrayList in Java). In simple language, a list is a collection of things, enclosed in [ ] and separated by
commas.
fruits = ['mango', 'apple', 1000, 'banana', 'orange', True, 58.50]
print(fruits)                              # print list
print('element at index 3 ' + fruits[3])   # access element using index value
print('length of list ', len(fruits))      # print length of list
fruits.append(2020.55)                     # append to the list
fruits.insert(2, False)                    # insert at index 2
fruits.extend([8, 'potato'])               # insert at end
fruits.reverse()
fruits.remove('apple')
fruits.pop(3)                              # remove at index 3
fruits.clear()
# fruits.sort()  # only works on same type of values in list
print(fruits)
Tuple
Tuple is a collection of objects separated by commas. In some ways, a tuple is similar to a list in terms
of indexing, nested objects, and repetition but a tuple is immutable, unlike lists which are mutable.
myTuple = ('mango', 'apple', 'banana', 'orange')  # assumed definition; the original was omitted
print(myTuple)
print('length of tuple : ', len(myTuple))  # length of tuple
print('element at 2 ', myTuple[2])         # access element using indexing
print(myTuple[1:3])                        # slicing from index 1 to 2
print(myTuple.count('banana'))             # count occurrences of banana
print('banana' in myTuple)                 # prints True if present
mylist = ['banana', 'apple', 'orange', 'pineapple']
convertedTuple = tuple(mylist)             # convert list into tuple
Strings in Python
A string is a data structure in Python that represents a sequence of characters. It is an immutable data
type, meaning that once you have created a string, you cannot change it.
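A short sketch of common string operations:
name = 'Machine Learning'
print(len(name))                          # length of the string
print(name[0:7])                          # slicing: 'Machine'
print(name.upper())                       # 'MACHINE LEARNING'
print(name.replace('Learning', 'Lab'))    # strings are immutable; this returns a new string
print('Learn' in name)                    # membership test: True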
Dictionary in Python
A dictionary in Python is a collection of key-value pairs, used to store data values like a map. Unlike
other data types, which hold only a single value as an element, a dictionary holds a key: value pair.
myDict = {100: 'kuldeep', 200: 'jaydeep', 300: 'kunal',
          400: 'nikhilesh', 500: 'himanshu', 600: 'mahendra'}
print(myDict)
print(myDict.get(400))         # get value using key
print(myDict.values())         # prints only values
print(myDict.keys())           # prints only keys
print(myDict.pop(400))         # pop item which has key 400
print(myDict.popitem())        # pop last item
del(myDict[100])               # delete item which has key 100
myDict.update({10000: "ram"})  # add new element at last
print(myDict.__sizeof__())
Lambda Functions
Python lambda functions are anonymous functions, meaning the function is without a name. As we
already know, the def keyword is used to define a normal function in Python. Similarly,
the lambda keyword is used to define an anonymous function in Python.
square = lambda x: x * x   # anonymous function assigned to a name
List = [1, 2, 3, 4, 5, 6, 7, 8]
newList = list(map(square, List))
print(newList)   # [1, 4, 9, 16, 25, 36, 49, 64]
A class is a user-defined blueprint or prototype from which objects are created. Classes provide a means
of bundling data and functionality together.
Syntax:
class ClassName:
# Statement
An Object is an instance of a Class. A class is like a blueprint while an instance is a copy of the class
with actual values.
Syntax:
Object = className()
class Student:
    def __init__(self, name, age, grade):
        self.name = name
        self.age = age
        self.grade = grade

    def details(self):
        print('name is : ', self.name)
        print('age is : ', self.age)
        print('grade is : ', self.grade)

    def get_name(self):
        return self.name

    def get_age(self):
        return self.age

    def get_grade(self):
        return self.grade

    def set_name(self, name):
        self.name = name

    def set_age(self, age):
        self.age = age

    def set_grade(self, grade):
        self.grade = grade

# initializing object s1
s1 = Student('kuldeep', 20, 92)
s1.details()
Python Libraries
A Python library is a collection of related modules. It contains bundles of code that can be reused
in different programs, which makes Python programming simpler and more convenient for
programmers. Some of the commonly used libraries:
1. TensorFlow: This library was developed by Google in collaboration with the Brain Team. It
is an open-source library used for high-level computations. It is also used in machine
learning and deep learning algorithms. It contains a large number of tensor operations.
Researchers also use this Python library to solve complex computations in Mathematics and
Physics.
2. Matplotlib: This library is responsible for plotting numerical data. And that’s why it is used
in data analysis. It is also an open-source library and plots high-defined figures like pie
charts, histograms, scatterplots, graphs, etc.
3. Numpy: The name “Numpy” stands for “Numerical Python”. It is a commonly used
library. It is a popular machine learning library that supports large matrices and multi-
dimensional data. It consists of in-built mathematical functions for easy computations. Even
libraries like TensorFlow use Numpy internally to perform several operations on tensors.
Array Interface is one of the key features of this library.
4. SciPy: The name “SciPy” stands for “Scientific Python”. It is an open-source library used
for high-level scientific computations. This library is built over an extension of Numpy. It
works with Numpy to handle complex computations. While Numpy allows sorting and
indexing of array data, the numerical data code is stored in SciPy. It is also widely used by
application developers and engineers.
5. Scrapy: It is an open-source library that is used for extracting data from websites. It
provides very fast web crawling and high-level screen scraping. It can also be used for data
mining and automated testing of data.
6. PyGame: This library provides an easy interface to the Simple DirectMedia Layer (SDL)
platform-independent graphics, audio, and input libraries. It is used for developing video
games using computer graphics and audio libraries along with Python programming
language.
7. PyBrain: The name “PyBrain” stands for Python-Based Reinforcement Learning, Artificial
Intelligence, and Neural Networks library. It is an open-source library built for beginners in
the field of Machine Learning. It provides fast and easy-to-use algorithms for machine
learning tasks. It is flexible and easy to understand, which is why it is really helpful for
developers who are new to research fields.
EXPERIMENT-2
A Python list comprehension consists of brackets containing an expression, which is executed for each
element, along with a for loop to iterate over each element in the Python list. List
comprehension provides a much shorter syntax for creating a new list based on the values of an
existing list.
Syntax:
newList = [ expression(element) for element in oldList if condition ]
Note:
List comprehension is an elegant way to define and create lists based on existing lists.
List comprehension is generally more compact and faster than normal functions and loops for
creating lists.
However, we should avoid writing very long list comprehensions in one line to ensure that code is
user-friendly.
Remember, every list comprehension can be rewritten as a for loop, but not every for loop can be
rewritten as a list comprehension.
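For example:
oldList = [1, 2, 3, 4, 5, 6, 7, 8]
# keep only the even numbers and square them
newList = [x * x for x in oldList if x % 2 == 0]
print(newList)   # [4, 16, 36, 64]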
Numpy:
NumPy is a general-purpose array-processing package. It provides a high-performance multidimensional
array object, and tools for working with these arrays. It is the fundamental package for scientific
computing with Python. It is open-source software. It contains various features including these
important ones:
A powerful N-dimensional array object
Sophisticated (broadcasting) functions
Tools for integrating C/C++ and Fortran code
Useful linear algebra, Fourier transform, and random number capabilities
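A minimal sketch of these features:
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # N-dimensional array object
print(a.shape)                         # (2, 3)
print(a * 2)                           # broadcasting: multiply every element
print(a.T @ a)                         # linear algebra: matrix product
print(np.random.rand(2, 2))            # random number capabilities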
Pandas
Pandas is an open-source library that is made mainly for working with relational or labeled data both
easily and intuitively. It provides various data structures and operations for manipulating numerical data
and time series. This library is built on top of the NumPy library. Pandas is fast and it has high
performance & productivity for users.
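A minimal sketch of a labeled DataFrame (toy data for illustration):
import pandas as pd

# build a small labeled dataset
df = pd.DataFrame({'name': ['Aayush', 'Kuldeep', 'Kunal'],
                   'marks': [92, 85, 78]})
print(df.head())             # first rows
print(df['marks'].mean())    # column statistics
print(df[df['marks'] > 80])  # boolean filtering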
Matplotlib
Matplotlib is an amazing visualization library in Python for 2D plots of arrays. Matplotlib is a multi-
platform data visualization library built on NumPy arrays and designed to work with the broader SciPy
stack. It was introduced by John Hunter in the year 2002. One of the greatest benefits of visualization is
that it allows us visual access to huge amounts of data in easily digestible visuals. Matplotlib consists of
several plots like line, bar, scatter, histogram etc.
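A minimal sketch of a line and scatter plot:
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0, 10, 0.5)
plt.plot(x, x ** 2, label='x squared')  # line plot
plt.scatter(x, x * 3, label='3x')       # scatter plot
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()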
Aim: Brief study of machine learning frameworks such as OpenCV, Scikit-learn, Keras, TensorFlow
etc.
What is an ML framework?
Machine learning relies on algorithms. Unless you're a data scientist or ML expert, these algorithms are
very complicated to understand and work with. A machine learning framework, then, simplifies machine
learning algorithms. An ML framework is any tool, interface, or library that lets you develop ML models
easily, without understanding the underlying algorithms. There are a variety of machine learning
frameworks, geared at different purposes. Nearly all the ML frameworks, those we discuss here and those
we don't, are written in Python. Python is the predominant machine learning programming language.
Open CV
OpenCV is a huge open-source library for computer vision, machine learning, and image
processing, and it now plays a major role in the real-time operation that is so important in today's
systems. By using it, one can process images and videos to identify objects, faces, or even human
handwriting. When integrated with various libraries, such as NumPy, Python is capable of processing
the OpenCV array structure for analysis. To identify image patterns and their various features, we use
vector space and perform mathematical operations on these features.
OpenCV Functionality
Image/video I/O, processing, display (core, imgproc, highgui)
Object/feature detection (objdetect, features2d, nonfree)
Geometry-based monocular or stereo computer vision (calib3d, stitching, videostab)
Computational photography (photo, video, superres)
Machine learning & clustering (ml, flann)
CUDA acceleration (gpu)
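A minimal sketch of this functionality (the image path is a placeholder):
import cv2

# 'image.jpg' is a placeholder path; any local image works
img = cv2.imread('image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # imgproc: convert to grayscale
edges = cv2.Canny(gray, 100, 200)             # feature detection: Canny edges
cv2.imshow('edges', edges)                    # highgui: display the result
cv2.waitKey(0)
cv2.destroyAllWindows()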
Scikit Learn
Scikit-learn (Sklearn) is the most useful and robust library for machine learning in Python. It provides a
selection of efficient tools for machine learning and statistical modeling, including classification,
regression, clustering and dimensionality reduction, via a consistent interface in Python. This library,
which is largely written in Python, is built upon NumPy, SciPy and Matplotlib.
Uses:
Linear regression
Decision tree regressions
Random Forest regressions
K-Nearest neighbor
SVMs
Stochastic Gradient Descent models
Scikit provides model analysis tools like the confusion matrix for assessing how well a model performed.
Many times, you can start an ML job in scikit-learn and then move to another framework. For example,
scikit-learn has excellent data pre-processing tools for one-hot encoding categorical data. Once the data is
pre-processed through Scikit, you can move it into TensorFlow or PyTorch.
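A minimal sketch of this workflow, one-hot encoding a categorical feature and fitting a classifier (the
toy data is made up for illustration):
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

# one-hot encode a small categorical feature (a typical pre-processing step)
cities = [['Indore'], ['Bhopal'], ['Indore']]
X = OneHotEncoder().fit_transform(cities).toarray()
y = [1, 0, 1]

# fit a simple classifier on the encoded data
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))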
Keras:
Keras is an open-source high-level neural network library, written in Python, which is capable of
running on Theano, TensorFlow, or CNTK. It was developed by one of the Google engineers, Francois
Chollet. It is made user-friendly, extensible, and modular to facilitate faster experimentation with deep
neural networks. It not only supports Convolutional Networks and Recurrent Networks individually but
also their combination.
It cannot handle low-level computations itself, so it makes use of a backend library to resolve them.
Keras acts as a high-level API wrapper for the low-level API, which lets it run on TensorFlow, CNTK, or
Theano.
Advantages of Keras
It is very easy to understand and enables faster deployment of network models.
It has huge community support in the market, as most AI companies are keen on using it.
It supports multiple backends, which means you can use any one of TensorFlow, CNTK,
and Theano as a backend with Keras, according to your requirement.
Since it has easy deployment, it also holds support for cross-platform use.
Tensorflow
TensorFlow was developed at Google Brain and then made into an open-source project.
TensorFlow is among the de facto machine learning frameworks used today, and it is free. (Google thinks
the library can be free, but ML models use significant resources in production, so they capitalize
on selling the resources to run their tools.)
TensorFlow is a full-blown, ML research and production tool. It can be very complex—but it doesn’t have
to be. Like an Excel spreadsheet, TensorFlow can be used simply or more expertly:
TF is simple enough for the basic user who wants to return a prediction on a given set of data
TF can also work for the advanced user who wishes to set up multiple data pipelines,
transform the data to fit their model, customize all layers and parameters of their model, and
train on multiple machines while maintaining privacy of the user.
TensorFlow has a rich set of tools. For example, the activation functions for neural networks can do all the
hard work of statistics. If we define deep learning as the ability to do neural networks, then TensorFlow
does that. But it can also handle more everyday problems, like regression.
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
EXPERIMENT-5
Aim: For a given set of training data examples stored in a .CSV file, implement and demonstrate a
from-scratch implementation of the Linear Regression algorithm.
# Split the data into features (X) and target variable (y)
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
We will create an instance of the Linear Regression model and call the fit method to train it on the training
data.
Final code:
import numpy as np

class LinearRegression:
    def __init__(self):
        self.coefficients = None

    def fit(self, X, y):
        # Add a column of ones to X for the bias term
        X = np.hstack((np.ones((X.shape[0], 1)), X))
        # Calculate the coefficients using the normal equation:
        # b = (X^T X)^(-1) X^T y
        self.coefficients = np.linalg.inv(X.T @ X) @ X.T @ y

    def predict(self, X):
        X = np.hstack((np.ones((X.shape[0], 1)), X))
        return X @ self.coefficients
Result:
EXPERIMENT-6
Aim: For a given set of training data examples stored in a .CSV file, implement and demonstrate
Linear Regression using a Python library (for any given CSV dataset).
Linear Regression
Simple linear regression is an approach for predicting a response using a single feature. It is assumed
that the two variables are linearly related. Hence, we try to find a linear function that predicts the
response value(y) as accurately as possible as a function of the feature or independent variable(x).
Let us consider a dataset where we have a value of response y for every feature x. The regression line
is modeled as:
h(x_i) = b_0 + b_1 * x_i
Here,
h(x_i) represents the predicted response value for the ith observation.
b_0 and b_1 are regression coefficients and represent the y-intercept and slope of the regression line
respectively.
To create our model, we must “learn” or estimate the values of regression coefficients b_0 and b_1. And
once we’ve estimated these coefficients, we can use the model to predict responses!
In this article, we are going to use the principle of Least Squares.
Here, e_i = y_i - h(x_i) is the residual error in the ith observation.
So, our aim is to minimize the total residual error.
We define the squared error or cost function, J, as:
J(b_0, b_1) = (1 / 2n) * sum over i of e_i^2
and our task is to find the values of b_0 and b_1 for which J(b_0, b_1) is minimum!
Without going into the mathematical details, we present the result here:
b_1 = SS_xy / SS_xx
b_0 = m_y - b_1 * m_x
where SS_xy is the sum of cross-deviations of y and x:
SS_xy = sum((x_i - m_x) * (y_i - m_y)) = sum(y_i * x_i) - n * m_x * m_y
and SS_xx is the sum of squared deviations of x:
SS_xx = sum((x_i - m_x)^2) = sum(x_i^2) - n * m_x^2
(m_x and m_y denote the means of x and y, and n the number of observations.)
# Aayush Ojha 0827CI201004
import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
    # number of observations/points
    n = np.size(x)
    # mean of x and y vector
    m_x = np.mean(x)
    m_y = np.mean(y)
    # calculating cross-deviation and deviation about x
    SS_xy = np.sum(y*x) - n*m_y*m_x
    SS_xx = np.sum(x*x) - n*m_x*m_x
    # calculating regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1*m_x
    return (b_0, b_1)

def plot_regression_line(x, y, b):
    # plotting the actual points as scatter plot
    plt.scatter(x, y, color="m", marker="o", s=30)
    # predicted response vector
    y_pred = b[0] + b[1]*x
    # plotting the regression line
    plt.plot(x, y_pred, color="g")
    # putting labels
    plt.xlabel('x')
    plt.ylabel('y')
    # function to show plot
    plt.show()

def main():
    # observations / data
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {}\nb_1 = {}".format(b[0], b[1]))
    # plotting regression line
    plot_regression_line(x, y, b)

if __name__ == "__main__":
    main()
EXPERIMENT-7
Aim: For a given set of training data examples stored in a .CSV file, implement and demonstrate a
from-scratch implementation of binary classification using the Logistic Regression algorithm.
Logistic regression is one of the most popular machine learning algorithms and comes under the
supervised learning technique. It is used for predicting a categorical dependent variable from a
given set of independent variables. Logistic regression predicts the output of a categorical dependent
variable; therefore, the outcome must be a categorical or discrete value. It can be either Yes or No, 0
or 1, True or False, etc., but instead of giving exact values 0 and 1, it gives probabilistic values
which lie between 0 and 1. Logistic Regression is very similar to Linear Regression except in
how they are used: Linear Regression is used for solving regression problems, whereas logistic
regression is used for solving classification problems.
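Since the aim asks for a from-scratch implementation, a minimal sketch of logistic regression trained
with gradient descent is given below (the CSV layout, features plus a 0/1 label, is assumed):
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=1000):
    # X: (n_samples, n_features), y: 0/1 labels
    w = np.zeros(X.shape[1])
    b = 0.0
    n = X.shape[0]
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # predicted probabilities
        w -= lr * (X.T @ (p - y)) / n   # gradient of the log-loss w.r.t. w
        b -= lr * np.mean(p - y)        # gradient w.r.t. the bias
    return w, b

def predict(X, w, b):
    return (sigmoid(X @ w + b) >= 0.5).astype(int)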
Dataset:
Import Libraries:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Read Data
dataset = pd.read_csv("User_Data.csv")
It is very important to perform feature scaling here because Age and Estimated Salary values lie in
different ranges. If we don't scale the features, the Estimated Salary feature will dominate the Age
feature during optimization, since gradient descent converges poorly when features have very
different scales.
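A minimal sketch of scaling followed by the library classifier (the column names Age, EstimatedSalary
and Purchased are assumed from the usual User_Data.csv layout):
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# 'Age' and 'EstimatedSalary' as features, 'Purchased' as label (assumed columns)
X = dataset[['Age', 'EstimatedSalary']].values
y = dataset['Purchased'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)  # fit on train data, then transform
X_test = sc.transform(X_test)        # reuse the train statistics on test data

clf = LogisticRegression().fit(X_train, y_train)
print(clf.score(X_test, y_test))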
Evaluation Metrics
OUTPUT:
EXPERIMENT-8
Aim: Build an Artificial Neural Network (ANN) by implementing the Backpropagation algorithm and
test the same using the MNIST Handwritten Digit multiclass classification dataset, with the use of batch
normalization, early stopping and dropout.
Backpropagation:
The Backpropagation algorithm is a supervised learning method for multilayer feed-forward networks
from the field of Artificial Neural Networks. The principle of the backpropagation approach is to model a
given function by modifying internal weightings of input signals to produce an expected output signal. The
system is trained using a supervised learning method, where the error between the system’s output and a
known expected output is presented to the system and used to modify its internal state. Backpropagation
can be used for both classification and regression problems.
Implementation:
Import Libraries
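The import cell is not reproduced here; the following imports are inferred from the code used later in
this experiment:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error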
Load Dataset
Prepare Dataset
y[:3]
Split dataset
#Aayush Ojha 0827CI201004
#Split data into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=20, random_state=4)
Helper function
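The helper functions themselves are not shown; below are minimal sketches consistent with how they
are called in the training loop (the accuracy definition is an assumption):
def sigmoid(z):
    # squashing activation used on both layers
    return 1 / (1 + np.exp(-z))

def accuracy(predictions, labels):
    # fraction of correct predictions (assumes one column per class)
    return np.mean(np.argmax(predictions, axis=1) == np.argmax(labels, axis=1))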
Backpropagation
# forward pass on hidden layer (assumed; this step precedes the snippet shown)
Z1 = np.dot(X_train, W1)
A1 = sigmoid(Z1)
# on output layer
Z2 = np.dot(A1, W2)
A2 = sigmoid(Z2)
# Calculating error
mse = mean_squared_error(A2, y_train)
acc = accuracy(A2, y_train)
results = pd.concat([results, pd.DataFrame([{"mse": mse, "accuracy": acc}])],
                    ignore_index=True)
# backpropagation
E1 = A2 - y_train
dW1 = E1 * A2 * (1 - A2)
E2 = np.dot(dW1, W2.T)
dW2 = E2 * A1 * (1 - A1)
# weight updates
W2_update = np.dot(A1.T, dW1) / N
W1_update = np.dot(X_train.T, dW2) / N
W2 = W2 - learning_rate * W2_update
W1 = W1 - learning_rate * W1_update
Plot MSE
Plot Accuracy
# forward pass to generate predictions
Z2 = np.dot(A1, W2)
A2 = sigmoid(Z2)
output:
Accuracy: 0.8
EXPERIMENT-9
Aim: Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the
same using the CIFAR-100 multiclass classification dataset, with the use of batch normalization, early
stopping and dropout.
Backpropagation:
The Backpropagation algorithm is a supervised learning method for multilayer feed-forward networks
from the field of Artificial Neural Networks. The principle of the backpropagation approach is to model a
given function by modifying internal weightings of input signals to produce an expected output signal. The
system is trained using a supervised learning method, where the error between the system’s output and a
known expected output is presented to the system and used to modify its internal state. Backpropagation
can be used for both classification and regression problems.
Implementation:
Import Libraries
Load Dataset
Prepare Dataset
y[:3]
Split dataset
Backpropagation
# forward pass on hidden layer (assumed; this step precedes the snippet shown)
Z1 = np.dot(X_train, W1)
A1 = sigmoid(Z1)
# on output layer
Z2 = np.dot(A1, W2)
A2 = sigmoid(Z2)
# Calculating error
mse = mean_squared_error(A2, y_train)
acc = accuracy(A2, y_train)
results = pd.concat([results, pd.DataFrame([{"mse": mse, "accuracy": acc}])],
                    ignore_index=True)
# backpropagation
E1 = A2 - y_train
dW1 = E1 * A2 * (1 - A2)
E2 = np.dot(dW1, W2.T)
dW2 = E2 * A1 * (1 - A1)
# weight updates
W2_update = np.dot(A1.T, dW1) / N
W1_update = np.dot(X_train.T, dW2) / N
W2 = W2 - learning_rate * W2_update
W1 = W1 - learning_rate * W1_update
Plot MSE
EXPERIMENT-10
Aim: ANN implementation with the use of batch normalization, early stopping and dropout (for an
image dataset such as a Covid dataset).
Batch Normalization
Batch Normalization is a technique that normalizes the inputs to each layer in the network to have zero
mean and unit variance. It is used to speed up the training process and improve the performance of ANNs.
By reducing the internal covariate shift, batch normalization helps to stabilize the distribution of the output
of each layer, making the training process more efficient and reducing the risk of overfitting.
Early stopping
Early stopping is a technique that monitors the validation error during training and stops the training
process when the validation error stops improving. This helps to prevent overfitting by stopping the
training process before the model starts to fit the noise in the training data.
Drop Out
Dropout is a technique that randomly drops out some of the neurons in the network during training. This
helps to prevent overfitting by forcing the remaining neurons to learn more robust features.
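The model below demonstrates dropout and batch normalization. Early stopping is added through a
Keras callback; a minimal sketch:
from tensorflow import keras

# stop when validation loss has not improved for 20 epochs,
# and restore the best weights seen during training
early_stopping = keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=20,
    restore_best_weights=True,
)
# passed later as: model.fit(..., callbacks=[early_stopping])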
Loading Images
Read image
model = keras.Sequential([
layers.Dense(1024, activation='relu', input_shape=[11]),
layers.Dropout(0.3),
layers.BatchNormalization(),
layers.Dense(1024, activation='relu'),
layers.Dropout(0.3),
layers.BatchNormalization(),
layers.Dense(1024, activation='relu'),
layers.Dropout(0.3),
layers.BatchNormalization(),
layers.Dense(1),
])
model.compile(
optimizer='adam',
loss='mae',
)
history = model.fit(
X_train, y_train,
validation_data=(X_valid, y_valid),
batch_size=256,
epochs=100,
verbose=0,
)
EXPERIMENT-11
Aim: Build a Convolutional Neural Network by implementing the Backpropagation algorithm and test
the same using the MNIST Handwritten Digit multiclass classification dataset.
Pre-processing Dataset
#Aayush Ojha 0827CI201004
import tensorflow as tf
# load MNIST (assumed; the loading cell is not shown) and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0
x_test = x_test / 255.0
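The model-definition cell is not reproduced in this listing; a minimal CNN sketch consistent with the
evaluation call below (the architecture is an assumption):
# assumed architecture; the original model is not shown
model = tf.keras.models.Sequential([
    tf.keras.layers.Reshape((28, 28, 1), input_shape=(28, 28)),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, tf.keras.utils.to_categorical(y_train), epochs=5)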
Testing Performance
#Aayush Ojha 0827CI201004
# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(x_test, tf.keras.utils.to_categorical(y_test))
print('Test accuracy:', test_acc)
Output:
EXPERIMENT-12
Aim: Build a Convolutional Neural Network by implementing the Backpropagation algorithm and test
the same using the CIFAR-100 multiclass classification dataset.
A Convolutional Neural Network is one of the main categories for doing image classification and image
recognition in neural networks. Scene labeling, object detection, and face recognition, etc., are some of
the areas where convolutional neural networks are widely used. A CNN takes an image as input, which is
classified and processed under a certain category such as dog, cat, lion, tiger, etc. The computer sees an
image as an array of pixels whose size depends on the resolution of the image. Based on image
resolution, it will see h * w * d, where h = height, w = width and d = dimension (number of channels).
Data preprocessing: Load and preprocess the CIFAR 100 dataset. This may involve tasks such as
splitting the data into training, validation, and test sets, normalization, and data augmentation.
Model architecture: Define the CNN architecture, including the number of layers, the type of layers
(convolutional, pooling, dense), activation functions, and the number of filters.
Forward propagation: Implement the forward propagation algorithm to compute the output of the CNN
for a given input.
Loss function: Define a suitable loss function to measure the error between the predicted output and the
actual output.
Backpropagation: Implement the backpropagation algorithm to compute the gradients of the loss
function with respect to the parameters of the CNN.
Update parameters: Use the computed gradients to update the parameters of the CNN using an
optimization algorithm such as stochastic gradient descent.
Training: Train the CNN on the training dataset by iterating over the training examples, computing the
loss, and updating the parameters using backpropagation.
Evaluation: Evaluate the performance of the trained CNN on the validation and test sets.
Fine-tuning: Fine-tune the CNN by adjusting the hyperparameters, such as the learning rate, batch size,
and regularization, based on the performance on the validation set.
Test: Test the final CNN on the test set and report the performance metrics, such as accuracy, precision,
and recall.
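A minimal Keras sketch covering the steps above (an assumed implementation; the original code is not
shown, and the built-in CIFAR-100 loader is used):
import tensorflow as tf

# Load and normalize CIFAR-100 (100 classes)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar100.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(100, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_split=0.1)
model.evaluate(x_test, y_test)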
Output:
EXPERIMENT-13
Output:
EXPERIMENT-14
What is RNN?
RNN stands for Recurrent Neural Network. It is a type of artificial neural network that is designed to work
with sequential data, such as time-series data or natural language processing (NLP) data. RNNs can be
used for a variety of tasks such as language modeling, speech recognition, machine translation, and image
captioning. The key feature of RNNs is that they have a recurrent connection that allows information to be
passed from one step of the sequence to the next. This allows the network to maintain a memory of what it
has seen earlier in the sequence and use it to make predictions at each step. The basic building block of an
RNN is a cell, which takes an input and a hidden state as input and produces an output and a new hidden
state as output.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
model.summary()

model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# A recurrent layer is assumed here; an RNN model needs at least one
# recurrent layer (LSTM, GRU, or SimpleRNN) between Embedding and Dense.
model.add(layers.LSTM(128))
model.add(layers.Dense(10))
model.summary()
encoder_vocab = 1000
decoder_vocab = 2000

encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(encoder_input)
# Return the LSTM states along with the output (restored to match the summary below)
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(encoder_embedded)
encoder_state = [state_h, state_c]

decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(decoder_input)
# Use the two encoder states as the decoder's initial state
decoder_output = layers.LSTM(64, name="decoder")(decoder_embedded, initial_state=encoder_state)
output = layers.Dense(10)(decoder_output)

model = keras.Model([encoder_input, decoder_input], output)
model.summary()
Model: "model"
____________________________________________________________________________________
Layer (type)              Output Shape          Param #   Connected to
====================================================================================
input_1 (InputLayer)      [(None, None)]        0
____________________________________________________________________________________
input_2 (InputLayer)      [(None, None)]        0
____________________________________________________________________________________
embedding_2 (Embedding)   (None, None, 64)      64000     input_1[0][0]
____________________________________________________________________________________
embedding_3 (Embedding)   (None, None, 64)      128000    input_2[0][0]
____________________________________________________________________________________
encoder (LSTM)            [(None, 64), (None,   33024     embedding_2[0][0]
____________________________________________________________________________________
decoder (LSTM)            (None, 64)            33024     embedding_3[0][0]
                                                          encoder[0][1]
                                                          encoder[0][2]
____________________________________________________________________________________
dense_2 (Dense)           (None, 10)            650       decoder[0][0]
====================================================================================
Total params: 258,698
Trainable params: 258,698
Non-trainable params: 0
____________________________________________________________________________________
------------Completed--------------