
PRINCE SHRI VENKATESHWARA
PADMAVATHY ENGINEERING COLLEGE
(An Autonomous Institution)

Mambakkam-Medavakkam Main Road,
Ponmar, Chennai - 600127

DEPARTMENT OF INFORMATION TECHNOLOGY

CCS355 - NEURAL NETWORKS AND DEEP LEARNING LABORATORY


(B.TECH IT - VI SEMESTER)

Academic Year: 2023-2024

Name of the Student :

Register Number :

Year/ Semester :
PRINCE SHRI VENKATESHWARA
PADMAVATHY ENGINEERING COLLEGE
(An Autonomous Institution)

BONAFIDE CERTIFICATE

Name : ………………………………………………
Register No : ………………………………………………
Semester : ………………………………………………
Branch : ………………………………………………

Certified that this is a Bonafide Record of the work done by the above student in the
CCS355 - Neural Networks and Deep Learning Laboratory during the year 2023-2024.

Signature of Faculty In-Charge Signature of Principal

Submitted for Practical Examination held on……………………….

Internal Examiner External Examiner


VISION OF THE INSTITUTE

To be a prominent institution for technical education and research that meets global
challenges and societal needs.

MISSION OF THE INSTITUTE


 To develop the needed resources and infrastructure, and to establish a conducive
ambience for the teaching-learning process.

 To nurture in the students professional and ethical values, and to instil in them a spirit
of innovation and entrepreneurship.

 To encourage in the students a desire for higher learning and research, and to equip them
to face global challenges.

 To provide opportunities for students to acquire the additional skills needed to make them
industry ready.

 To interact with industries and other organizations to facilitate the transfer of knowledge
and know-how.

VISION OF THE DEPARTMENT

To produce competent graduates suited to industry, organizations, and research at the global
level by providing quality technical education and imparting human values for a globalized
technological society.

MISSION OF THE DEPARTMENT

M1. To encourage students to become problem-solving individuals by providing an efficient
teaching and learning environment with essential resources.

M2. To produce competent graduates suited to industry, organizations, and research at the global
level by providing quality technical education and imparting human values for a globalized
technological society.

M3. To promote higher education and research, and to develop a culture of innovation-driven
entrepreneurship by inculcating professional and moral values in the students.
INSTRUCTIONS TO STUDENTS

Before entering the lab, the student should carry the following items (MANDATORY):
 Identity card issued by the college.
 Class notes
 Lab observation book
 Lab Manual
 Lab Record
 Students must sign in and sign out in the register provided when attending the lab session,
without fail.
 Come to the laboratory on time. Students who are late by more than 15 minutes will not be
allowed to attend the lab.
 Students need to maintain 100% attendance in the lab; otherwise, strict action will be taken.
 All students must follow a dress code while in the laboratory.
 Food and drinks are NOT allowed.
 All bags must be left at the indicated place.
 Refer to the lab staff if you need any help in using the lab.
 Respect the laboratory and its other users.
 The workspace must be kept clean and tidy after the experiment is completed.
 Read the manual carefully before coming to the laboratory and be sure about what you
are supposed to do.
 Do the experiments as per the instructions given in the manual.
 Copy all the programs taught in class into the observation book before attending the lab
session.
 Students are not supposed to use floppy disks or pen drives without the permission of the
lab in-charge.
 Lab records need to be submitted on or before the date of submission.
Syllabus

PRACTICAL EXPERIMENTS                                                    30 PERIODS

1. Implement simple vector addition in TensorFlow.
2. Implement a regression model in Keras.
3. Implement a perceptron in TensorFlow/Keras environment.
4. Implement a Feed-Forward Network in TensorFlow/Keras.
5. Implement an Image Classifier using CNN in TensorFlow/Keras.
6. Improve the deep learning model by fine-tuning hyperparameters.
7. Implement a Transfer Learning concept in Image Classification.
8. Using a pre-trained model on Keras for Transfer Learning.
9. Perform Sentiment Analysis using RNN.
10. Implement an LSTM-based Autoencoder in TensorFlow/Keras.
11. Image generation using GAN.

COURSE OUTCOMES: TOTAL: 60 PERIODS

At the end of this course, the students will be able to:


CO1: Apply Convolutional Neural Networks for image processing.
CO2: Understand the basics of associative memory and unsupervised learning networks.
CO3: Apply CNN and its variants for suitable applications.
CO4: Analyze the key computations underlying deep learning and use them to build and train
deep neural networks for various tasks.
CO5: Apply autoencoders and generative models for suitable applications.

COs    Course Outcomes                                                       Experiments List

CO1    Implement the basic artificial neural network model.                  1, 2, 7
CO2    Understand the basics of associative memory and unsupervised          1, 2, 3
       learning networks.
CO3    Apply CNN and its variants for suitable applications.                 4, 5
CO4    Analyse the effectiveness of different regularization methods and     8, 6
       batch normalization techniques on deep learning models.
CO5    Apply RNNs and autoencoders to real-world datasets.                   10, 11
Mapping of Course Outcomes with the POs and PSOs

CO/PO  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1    3   2   2   2   -   -   1   1   -   1    1
CO2    3   2   2   2   -   -   1   1   1   -    1
CO3    3   2   2   2   -   -   1   1   -   -    1
CO4    2   2   2   2   -   -   1   1   1   -    1
CO5    3   2   2   2   -   -   1   1   1   1    1

1 - low, 2 - medium, 3 - high, '-' - no correlation


Relevance of PO’s /PSO’s
Exp Title of Type CO’s PO’s
. Experiments
No.
1 Implement simple vector Implementation and CO1,CO2 PO3,PO5
addition in TensorFlow. Installation
2 Implement a regression Implementation and CO1,CO2 PO3,PO5
model in Keras Installation
3 Implement a perceptron in Implementation and CO3,CO4 PO1,PO2
TensorFlow/Keras Installation
Environment.
4 Implement a Feed-Forward Analysis and CO3,CO4 PO1,PO2
Network in Implementaion
TensorFlow/Keras.
5 Implement an Image Design & Development CO2,CO4 PO3,PO5
Classifier using CNN in
TensorFlow/Keras.
6 Improve the Deep learning Design & Development CO2,CO4 PO4,PO3
model by fine tuning hyper
parameters.
7 Implement a Transfer Implementation & CO2,CO5, CO3 PO3,PO5
Learning concept in Image Installation
Classification.
8 Using a pre trained model Development &Execution CO3,CO4 PO1,PO2
on Keras for Transfer
Learning
9 Perform Sentiment Analysis Implementation & CO3,CO4 PO3 & PSO1
using RNN Installation
10 Implement an LSTM based Design & Development CO1,CO2, CO4 PO3 & PSO1
Autoencoder in
TensorFlow/Keras.
11 Image generation using Modern Tools, Analysis CO1,CO2, CO5 PO5,PO2,PO4 &
GAN & Investigations PSO1
12 Implement any simple Implementation & CO3,CO4 PO3 & PSO1
Reinforcement Algorithm Installation
for an NLP problem
13 Implement Object Detection Design & Development CO1,CO2 CO4 PO1,PO2
using CNN.
14 Recommendation system Design & Development CO1,CO2, CO4 PO1,PO2
from sales data using Deep
Learning

Exp Title of Experiments Type CO’s PO’s


.
No.
15 Train a Deep learning Design, Implementation & CO1,CO2, PO1,PO2,PO3
model to classify a given Installation CO4,C03
image using pre trained
model
TABLE OF CONTENTS

S.NO  DATE  EXPERIMENT TITLE                                                     PAGE NO  MARKS  SIGN
1           Implement simple vector addition in TensorFlow.
2           Implement a regression model in Keras.
3           Implement a perceptron in TensorFlow/Keras Environment.
4           Implement a Feed-Forward Network in TensorFlow/Keras.
5           Implement an Image Classifier using CNN in TensorFlow/Keras.
6           Improve the deep learning model by fine-tuning hyperparameters.
7           Implement a Transfer Learning concept in Image Classification.
8           Using a pre-trained model on Keras for Transfer Learning.
9           Perform Sentiment Analysis using RNN.
10          Implement an LSTM-based Autoencoder in TensorFlow/Keras.
11          Image generation using GAN.

TOPIC BEYOND THE SYLLABUS

S.NO  DATE  EXPERIMENT TITLE                                                     PAGE NO  MARKS  SIGN
12          Train a Deep Learning model to classify a given image using a pre-trained model.
13          Recommendation system from sales data using Deep Learning.
14          Implement Object Detection using CNN.
15          Implement any simple Reinforcement Algorithm for an NLP problem.

EX NO: 1
DATE:
IMPLEMENT SIMPLE VECTOR ADDITION IN TENSORFLOW

AIM:
The aim of implementing simple vector addition in TensorFlow is to demonstrate the
basic syntax and functionality of the framework while performing a fundamental
mathematical operation.

ALGORITHM:

Step 1: Import the TensorFlow library. Importing the TensorFlow library allows us to
use all of the functions and classes that TensorFlow provides.
Step 2: Create two vectors, x and y. Creating two vectors, x and y, is done using the
tf.constant() function. This function takes a list of values as input and creates a
TensorFlow tensor from it.
Step 3: Add the two vectors together. Adding the two vectors together is done using the
tf.add() function. This function takes two tensors as input and returns a
tensor that is the sum of the two input tensors.
Step 4: Print the result. Printing the result is done using the print() function. This
function prints the value of the input tensor to the console.


PROGRAM:

import tensorflow as tf

# tf.Session belongs to the TensorFlow 1.x API; disabling eager execution
# lets the session-based code below also run under TensorFlow 2.x.
tf.compat.v1.disable_eager_execution()

# Define two vectors as constants
vector1 = tf.constant([1.0, 2.0, 3.0])
vector2 = tf.constant([4.0, 5.0, 6.0])

# Add the vectors using the add operation
result = tf.add(vector1, vector2)

# Create a TensorFlow session
with tf.compat.v1.Session() as session:
    # Run the computation and fetch the operands and the result
    v1, v2, addition_result = session.run([vector1, vector2, result])
    # Print the result
    print("Vector 1:", v1)
    print("Vector 2:", v2)
    print("Vector Sum:", addition_result)


OUTPUT:


VIVA QUESTIONS:

1. What is TensorFlow?
TensorFlow is an open-source platform and framework for machine learning, which
includes libraries and tools, with APIs primarily in Python (and bindings for other
languages such as C++ and Java), designed with the objective of training machine
learning and deep learning models on data.

2. What is a Neural Network?


A neural network is a machine learning program, or model, that makes decisions
in a manner similar to the human brain, by using processes that mimic the way
biological neurons work together to identify phenomena, weigh options and arrive at
conclusions.

3.Can you explain the concept of a neural network and its basic architecture?
The architecture of a neural network is made up of an input layer, one or more hidden layers,
and an output layer. Neural networks themselves, or artificial neural networks (ANNs), are a
subset of machine learning designed to mimic the processing power of the human brain.

4. What is the difference between a perceptron, a feed-forward neural network and a
recurrent neural network?

A perceptron is the simplest neural network: a single layer of weights feeding one output unit.
Feedforward neural networks pass the data forward from input to output, while recurrent
networks have a feedback loop where data can be fed back into the input at some point
before it is fed forward again for further processing and final output.

5. How do you define activation functions in the context of neural networks? Can you
provide examples of commonly used activation functions and explain their purposes?
An activation function decides how a neuron's weighted input is transformed into its output
and introduces non-linearity. Common examples include:
• Linear or Identity activation function.
• Non-linear activation functions such as ReLU and tanh.
• Sigmoid or Logistic activation function.
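A small NumPy sketch (illustrative only, not part of the prescribed experiments) showing how
the sigmoid, tanh, and ReLU activation functions transform the same inputs:

import numpy as np

def sigmoid(x):
    # Squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive values through unchanged and clips negatives to 0
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print("sigmoid:", sigmoid(x))
print("tanh:   ", np.tanh(x))
print("relu:   ", relu(x))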


RESULT:
Thus the implementation of simple vector addition in TensorFlow has been successfully
executed and the codes are generated.


EX NO: 2
DATE:
IMPLEMENT A REGRESSION MODEL IN KERAS

AIM:
The aim of implementing a regression model in Keras is to develop a neural network that
can accurately predict continuous values based on input features.

ALGORITHM:

Step 1: Import libraries: Import the necessary libraries, including Keras, NumPy, and
any other required libraries.
Step 2: Build the model: Initialize the model by creating a Sequential model in Keras.
Step 3: Evaluate the model: Use the evaluate method to evaluate the model's
performance on the test data.
Step 4: Make predictions: Use the trained model to make predictions on new/unseen
data.
Step 5: Save the model: Serialize the trained model to disk for future use
without retraining.
Step 6: Deploy the model: Deploy the trained model in a production environment
for real-world use.
Step 7: Monitor model performance: Continuously monitor the model's performance
and iterate as needed to improve it.


PROGRAM:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Generate some sample data for regression
X = np.random.rand(100, 1)                       # Input feature
y = 2 * X + 1 + 0.1 * np.random.rand(100, 1)     # Simulated linear relationship with noise

# Define the model architecture
model = keras.Sequential([
    layers.Input(shape=(1,)),   # Input layer
    layers.Dense(1)             # Output layer with 1 unit for regression
])

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
model.fit(X, y, epochs=100, verbose=0)   # You can adjust the number of epochs

# Evaluate the model (optional)
loss = model.evaluate(X, y)
print("Mean Squared Error:", loss)

# Make predictions
new_data = np.array([[0.5], [0.8], [1.0]])   # New data for prediction
predictions = model.predict(new_data)
print("Predictions:", predictions)

OUTPUT:


VIVA QUESTIONS:

1. What is Keras?
Keras is a high-level deep learning API developed by Google for implementing
neural networks. It is written in Python and is used to make the implementation
of neural networks easy. It also supports multiple backends for neural network
computation.

2. What is meant by Regression?


Regression is a statistical method used in finance, investing, and other disciplines
that attempts to determine the strength and character of the relationship between
one dependent variable (usually denoted by Y) and a series of other variables
(known as independent variables).

3. What is the role of activation functions in a neural network?


The activation function decides whether a neuron should be activated or not by
calculating the weighted sum and further adding bias to it. The purpose of the
activation function is to introduce non-linearity into the output of a neuron.

4. How do you initialize the weights in a neural network?


Use heuristics for weight initialization. The most common heuristics are as follows:
(a) For the ReLU activation function: this heuristic is called He-et-al initialization.
(b) For the tanh activation function: this heuristic is known as Xavier initialization.
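A minimal Keras sketch of these heuristics (illustrative layer sizes), selecting the initializer
through the kernel_initializer argument of Dense:

from tensorflow.keras.layers import Dense

# He initialization is commonly paired with ReLU activations,
# Glorot (Xavier) initialization with tanh or sigmoid activations
relu_layer = Dense(64, activation='relu', kernel_initializer='he_normal')
tanh_layer = Dense(64, activation='tanh', kernel_initializer='glorot_uniform')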

5. What is the purpose of the bias term in a neural network?


In simple words, neural network bias can be defined as the constant which is
added to the product of features and weights. It is used to offset the result.

RESULT:
Thus the implementation of a regression model in Keras has been successfully
executed and the codes are generated.

EX NO: 3
DATE:
IMPLEMENT A PERCEPTRON IN TENSORFLOW/KERAS ENVIRONMENT

AIM:
The aim of this program is to implement a perceptron model using
TensorFlow/Keras, a powerful library for building and training neural networks.

ALGORITHM:

Step 1: Import necessary libraries


Step 2: Prepare the dataset
Step 3: Define the perceptron model
Step 4: Compile the model
Step 5: Train the model
Step 6: Evaluate the model
Step 7: Make predictions
Step 8: Fine-tune the model
Step 9: Save or deploy the model
Step 10: Iterate and optimize


PROGRAM:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
import numpy as np

# Generate some sample data for a logical OR operation
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # Input features
y = np.array([0, 1, 1, 1])                       # Output labels (OR gate)

# Define a simple perceptron model
model = keras.Sequential([
    Dense(units=1, input_dim=2, activation='sigmoid')
])

# Compile the model
model.compile(optimizer=SGD(learning_rate=0.1), loss='mean_squared_error',
              metrics=['accuracy'])

# Train the model
model.fit(X, y, epochs=1000, verbose=0)   # You can adjust the number of epochs

# Evaluate the model
loss, accuracy = model.evaluate(X, y)
print("Loss:", loss)
print("Accuracy:", accuracy)

# Make predictions
predictions = model.predict(X)
print("Predictions:")
print(predictions)


OUTPUT:


VIVA QUESTIONS:

1. What is meant by a perceptron?

A perceptron is the simplest type of artificial neural network, consisting of a single layer of
weights, and is a fundamental concept in machine learning.

2. Define the terms input layer, hidden layer, and output layer in the context of a neural
network.
A neural network is constructed from 3 types of layers:
Input layer — initial data for the neural network.
Hidden layers — intermediate layer between input and output layer and place where all the
computation is done.
Output layer — produce the result for given inputs.

3. How do you prevent overfitting in a neural network?


Method 1: Data augmentation.
Method 2: Simplifying neural network.
Method 3: Weight regularization.
Method 4: Dropouts.
Method 5: Early stopping.
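A minimal sketch of methods 4 and 5 in Keras (the training data names X_train and y_train
are placeholders, hence the commented fit call):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dropout(0.5),                     # randomly drops 50% of the units on each training step
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Early stopping halts training once the validation loss stops improving
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.2, epochs=100, callbacks=[early_stop])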

4. Discuss the role of regularization techniques such as L1 and L2 regularization.


L1 and L2 regularization are techniques used to prevent overfitting in machine learning
models by introducing a penalty for model complexity. L1 Regularization(LASSO):
Penalizes the absolute value of the weight coefficients. Minimizes the sum of the absolute
weights of the coefficients.
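A minimal Keras sketch (illustrative penalty strength of 0.01) showing how L1 and L2 penalties
are attached to a layer's weights:

from tensorflow.keras.layers import Dense
from tensorflow.keras import regularizers

# L2 (ridge) penalty on the layer's weights
l2_layer = Dense(64, activation='relu', kernel_regularizer=regularizers.l2(0.01))
# L1 (LASSO) penalty, which tends to drive some weights exactly to zero
l1_layer = Dense(64, activation='relu', kernel_regularizer=regularizers.l1(0.01))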

5. Explain the concept of gradient descent and how it is used to optimize neural
network parameters.
Gradient Descent is an optimization algorithm for finding a local minimum of a differentiable
function. Gradient descent in machine learning is simply used to find the values of a
function's parameters (coefficients) that minimize a cost function as far as possible.
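A worked numerical sketch (illustrative function and learning rate) of the gradient descent
update rule w = w - learning_rate * gradient:

# Gradient descent on f(w) = (w - 3)^2, whose gradient is 2 * (w - 3);
# the minimum is at w = 3, so w should converge towards 3
w = 0.0
learning_rate = 0.1
for step in range(25):
    gradient = 2 * (w - 3)
    w = w - learning_rate * gradient
print("w after 25 steps:", round(w, 4))   # close to 3.0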


RESULT:
Thus the implementation of a perceptron in the TensorFlow/Keras environment has been
successfully executed and the codes are generated.


EX NO: 4
DATE:
IMPLEMENT A FEED-FORWARD NETWORK IN TENSORFLOW/KERAS ENVIRONMENT

AIM:
The aim of this program is to implement a feed-forward neural network using
TensorFlow/Keras.

ALGORITHM:

Step 1: Import necessary libraries.


Step 2: Prepare the dataset.
Step 3: Define the FFN model architecture.
Step 4: Define the output layer.
Step 5: Compile the model.
Step 6: Train the model.
Step 7: Evaluate the model.
Step 8: Make predictions.
Step 9: Fine-tune the model.
Step 10: Save or deploy the model.
Step 11: Iterate and optimize.


PROGRAM:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
import numpy as np

# Generate some sample data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # Input features
y = np.array([0, 1, 1, 0])                       # Output labels (XOR gate)

# Define a feedforward neural network model
model = keras.Sequential([
    Dense(units=4, input_dim=2, activation='relu'),   # 2 input features, 4 hidden units with ReLU activation
    Dense(units=1, activation='sigmoid')              # 1 output unit with sigmoid activation
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(X, y, epochs=1000, verbose=0)   # You can adjust the number of epochs

# Evaluate the model
loss, accuracy = model.evaluate(X, y)
print("Loss:", loss)
print("Accuracy:", accuracy)

# Make predictions
predictions = model.predict(X)
print("Predictions:")
print(predictions)


OUTPUT:


VIVA QUESTIONS:

1. What is meant by a Feed-Forward Network?


A feedforward neural network is one of the simplest types of artificial neural
networks devised. In this network, the information moves in only one direction—
forward—from the input nodes, through the hidden nodes (if any), and to the
output nodes. There are no cycles or loops in the network.

2. What is a neural network and how does it mimic the human brain?
A neural network is a method in artificial intelligence that teaches computers to
process data in a way that is inspired by the human brain. It is a type of machine
learning process, called deep learning, that uses interconnected nodes or neurons
in a layered structure that resembles the human brain.

3. Discuss the challenges associated with training RNNs on long sequences.


RNNs suffer from the problem of vanishing gradients. The gradients carry
information used in the RNN, and when the gradient becomes too small, the
parameter updates become insignificant.

4. How do you interpret the weights and activations of a trained neural network
model?
The weights are usually initialized randomly while the bias is initialized at 0. The behaviour
of a neuron is also influenced by its activation function which, parallel to the
action potential for a natural neuron, defines the activation conditions and relative
values of the final output.

5. What are some common methods for handling missing data in neural
training?
• Deleting Rows with missing values.
• Impute missing values for continuous variable.
• Impute missing values for categorical variable.
• Other Imputation Methods
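A minimal sketch of these imputation options using pandas (pandas is not used elsewhere in
this manual; the toy DataFrame below is purely illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 40, 35],
                   "city": ["Chennai", "Mumbai", None, "Delhi"]})

dropped = df.dropna()                                   # delete rows with missing values
df["age"] = df["age"].fillna(df["age"].mean())          # impute continuous variable with the mean
df["city"] = df["city"].fillna(df["city"].mode()[0])    # impute categorical variable with the mode
print(df)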


RESULT:
Thus the implementation of a Feed-Forward Network in TensorFlow/Keras has been
successfully executed and the codes are generated.


EX NO: 5
DATE:
IMPLEMENT AN IMAGE CLASSIFIER USING CNN IN TENSORFLOW/KERAS

AIM:
The aim of this project is to implement an Image Classifier using Convolutional
Neural Networks (CNNs) in TensorFlow/Keras.

ALGORITHM:

Step 1: Prepare the data: Gather and preprocess your dataset. This involves loading
images, resizing them to a uniform size, and possibly normalizing pixel values.
Step 2: Build the CNN model: Import necessary libraries such as TensorFlow and Keras.
Step 3: Compile the model: Specify metrics to evaluate the model's performance during
training.
Step 4: Train the model: Feed the training data into the model.
Step 5: Evaluate the model: Once training is complete, evaluate the model's
performance on the test set.
Step 6: Fine-tuning: Consider using pre-trained models or transfer learning if you have
limited data.
Step 7: Predictions: Convert prediction probabilities into class labels (as shown after the
program below).
Step 8: Save or deploy the model: Save the trained model for future use.


PROGRAM:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

# Load and preprocess the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # Normalize pixel values to the range [0, 1]
y_train = to_categorical(y_train, 10)               # One-hot encode the labels
y_test = to_categorical(y_test, 10)

# Define the CNN model
model = keras.Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    Flatten(),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)


OUTPUT:


VIVA QUESTIONS:
1. What is meant by a CNN?
A convolutional neural network (CNN) is a type of artificial neural network used
primarily for image recognition and processing, due to its ability to recognize
patterns in images.

2. What is meant by an Image Classifier?


Image classification is the process of categorizing and labeling groups of pixels or
vectors within an image based on specific rules.

3. What are some common optimization algorithms used in training neural


networks?
• Gradient Descent
• Stochastic Gradient Descent (SGD)
• Mini-Batch Stochastic Gradient Descent (MB-SGD)
• SGD with momentum
• Nesterov Accelerated Gradient (NAG)

4. Explain the concept of adaptive learning rate methods such as AdaGrad and
RMSProp.
By adopting adaptive learning rate methodologies like AdaGrad and RMSprop,
we let these optimizers tune the learning rate by learning the characteristics of the
underlying data.
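A minimal sketch of how these optimizers are selected in Keras (the learning rates shown are
only illustrative starting points):

from tensorflow.keras.optimizers import SGD, Adagrad, RMSprop, Adam

sgd = SGD(learning_rate=0.01, momentum=0.9)    # SGD with momentum
adagrad = Adagrad(learning_rate=0.01)          # per-parameter adaptive learning rate
rmsprop = RMSprop(learning_rate=0.001)         # decaying average of squared gradients
adam = Adam(learning_rate=0.001)               # combines momentum and RMSprop ideas
# model.compile(optimizer=rmsprop, loss='categorical_crossentropy', metrics=['accuracy'])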

5. What is the role of momentum in gradient descent optimization?


With the addition of momentum, the gradient descent optimization process can
overcome the oscillations of noisy gradients and ride through areas of the search
space that are flat.


RESULT:
Thus the implementation of an image classifier using Convolutional Neural Networks
(CNNs) in TensorFlow/Keras has been successfully executed and the codes are
generated.


EX NO: 6
DATE:
IMPROVE THE DEEP LEARNING MODEL BY FINE-TUNING HYPERPARAMETERS

AIM:
The aim of this program is to improve the performance of deep learning models by fine-tuning
their hyperparameters.

ALGORITHM:

Step 1: Define hyperparameters: Identify the hyperparameters of your deep learning model that
you want to fine-tune.
Step 2: Choose an evaluation metric: Determine the evaluation metric(s) you want to optimize for,
such as accuracy, precision, recall, F1 score, etc.
Step 3: Split data: Split your dataset into training, validation, and test sets.
Step 4: Set the hyperparameter search space: Define the range or distribution for each hyperparameter
you want to tune.
Step 5: Hyperparameter optimization loop.
Step 6: Select the best hyperparameters.
Step 7: Train the final model: Train the final model using the selected hyperparameters.
Step 8: Evaluate on the test set: Evaluate the performance of the final model on the test set.
Step 9: Iterate: Depending on the results, you might want to iterate on the process by adjusting the
search space or trying different hyperparameter optimization techniques.
Step 10: Deploy the model: Once satisfied with the performance, deploy the model for real-world use.


PROGRAM:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

# Load and preprocess the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # Normalize pixel values to the range [0, 1]
y_train = to_categorical(y_train, 10)               # One-hot encode the labels
y_test = to_categorical(y_test, 10)

# Define a function to create and compile the CNN model
def create_cnn_model(learning_rate=0.001, dropout_rate=0.25):
    model = keras.Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        Flatten(),
        Dense(64, activation='relu'),
        Dropout(dropout_rate),
        Dense(10, activation='softmax')
    ])
    optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
    return model

# Define hyperparameters to search
learning_rates = [0.001, 0.01, 0.0001]
dropout_rates = [0.25, 0.5]

best_accuracy = 0
best_model = None

# Hyperparameter tuning loop
for lr in learning_rates:
    for dr in dropout_rates:
        print(f"Training model with learning rate = {lr} and dropout rate = {dr}")
        model = create_cnn_model(learning_rate=lr, dropout_rate=dr)
        model.fit(x_train, y_train, epochs=10, verbose=0)
        _, accuracy = model.evaluate(x_test, y_test)
        if accuracy > best_accuracy:
            best_accuracy = accuracy
            best_model = model

print(f"Best Test Accuracy: {best_accuracy}")


OUTPUT:


VIVA QUESTIONS:

1. What is the purpose of hyperparameter tuning in deep learning models?


Hyperparameter tuning aims to optimize the configuration of parameters that
are not learned during the training process but significantly influence the model's
performance. This optimization process helps to enhance the model's
performance on unseen data and improve its generalization capabilities.

2. Can you name a few hyperparameters commonly tuned in deep learning


models?
Some commonly tuned hyperparameters include:
• Learning rate
• Batch size
• Number of layers
• Number of units per layer
• Activation functions
• Dropout rate
• Regularization strength

3. How do grid search and random search differ in hyperparameter

optimization?
Grid search exhaustively searches through all combinations of specified
hyperparameter values, making it more thorough but computationally expensive,
especially in high-dimensional search spaces. Random search instead samples a
fixed number of configurations at random, which is cheaper and often finds good
settings faster when only a few hyperparameters really matter.
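A minimal random-search sketch over the same two hyperparameters, assuming the
create_cnn_model function and the CIFAR-10 arrays from the program above; the number of
trials and candidate values are illustrative:

import random

search_space = {"learning_rate": [0.01, 0.001, 0.0001], "dropout_rate": [0.25, 0.4, 0.5]}
best_accuracy, best_config = 0.0, None
for trial in range(4):   # sample a fixed number of random configurations
    config = {name: random.choice(values) for name, values in search_space.items()}
    model = create_cnn_model(**config)
    model.fit(x_train, y_train, epochs=5, verbose=0)
    _, accuracy = model.evaluate(x_test, y_test, verbose=0)
    if accuracy > best_accuracy:
        best_accuracy, best_config = accuracy, config
print("Best configuration:", best_config, "accuracy:", best_accuracy)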

4. What is the role of cross-validation in hyperparameter tuning?


Cross-validation is essential for evaluating the performance of different
hyperparameter configurations while guarding against overfitting to the validation
set. By splitting the data into multiple folds, training and validating the model on
different subsets, we can obtain more reliable estimates of the model's
performance and better generalize to unseen data.

5. How does regularization contribute to hyperparameter tuning?

Regularization techniques such as L1/L2 regularization and dropout are
hyperparameters themselves that help control overfitting.


RESULT:
Thus the improvement of deep learning model performance by fine-tuning
hyperparameters has been successfully executed and the codes are generated.


EX NO: 7
DATE:
IMPLEMENT A TRANSFER LEARNING CONCEPT IN IMAGE CLASSIFICATION

AIM:
The aim of implementing transfer learning in an image classification program is to
harness the knowledge learned by pre-trained models on large datasets and apply it to
similar tasks with limited data availability.

ALGORITHM:

Step 1: Select a pre-trained model: Choose a pre-trained convolutional neural network
(CNN) model that has been trained on a large dataset such as ImageNet.
Step 2: Load the pre-trained model: Load the pre-trained model along with its
weights, excluding the classification layers.
Step 3: Freeze the base layers: Optionally freeze the weights of the convolutional base
layers to prevent them from being updated during training.
Step 4: Modify the architecture: Replace or add new fully connected layers.
Step 5: Data preprocessing: Preprocess the input images to match the input
format expected by the pre-trained model.
Step 6: Data augmentation.
Step 7: Compile the model: Compile the modified model with an appropriate loss
function, optimizer, and evaluation metric for your classification task.
Step 8: Train the model: Train the modified model on your new dataset.
Step 9: Fine-tune: Optionally unfreeze some of the top layers of the pre-trained model
and continue training.
Step 10: Deploy the model: Once satisfied with the model's performance, deploy it
for inference on new images in real-world applications.


PROGRAM:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.optimizers import Adam

# Load a pre-trained model (MobileNetV2) excluding the top classification layers
base_model = MobileNetV2(weights='imagenet', include_top=False)

# Create a new model on top
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
model = keras.Model(inputs=base_model.input, outputs=predictions)

# Freeze the layers of the pre-trained model
for layer in base_model.layers:
    layer.trainable = False

# Compile the model
model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy',
              metrics=['accuracy'])

# Load and preprocess your dataset
# You can use your own dataset or a built-in dataset like CIFAR-10
# Example of using CIFAR-10
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # Normalize pixel values to the range [0, 1]
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Use data augmentation for better performance
data_generator = ImageDataGenerator(
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True
)

# Fine-tune the model
model.fit(data_generator.flow(x_train, y_train, batch_size=32), epochs=10,
          validation_data=(x_test, y_test))

# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)


OUTPUT:


VIVA QUESTIONS:

1. What is transfer learning in the context of deep learning?


Transfer learning involves leveraging knowledge from a pre-trained model on a related task
and applying it to a new task. In deep learning, this often means using the learned features
of a pre-trained model as a starting point for a new model, thereby reducing the need for
large amounts of labeled data and computational resources.
2. Why do we typically freeze the layers of the pre-trained model in transfer learning?
Freezing the layers of the pre-trained model prevents them from being updated during
training, ensuring that the learned features are retained and not overwritten. This is crucial,
especially when the pre-trained model has been trained on a large dataset and we want to
preserve its feature extraction capabilities.
3. How do we adapt a pre-trained model to a new task in transfer learning?
We adapt a pre-trained model to a new task by adding custom classification layers on top of
the pre-trained layers. These custom layers are trained specifically for the new task while
keeping the pre-trained layers frozen to retain their learned features.
4. What are the advantages of using transfer learning?
Transfer learning allows us to:
 Train models with less labeled data.
 Utilize pre-trained models trained on large datasets, saving computational resources.
 Benefit from the generalization power of features learned from diverse datasets.
 Speed up the training process.
5. How do we choose a pre-trained model for transfer learning?
The choice of pre-trained model depends on factors such as the similarity of the pre-trained
model's task to the new task, the availability of pre-trained models for the desired architecture,
and computational constraints. Models like VGG, ResNet, Inception, and MobileNet
are popular choices for transfer learning due to their effectiveness and availability in
libraries like TensorFlow and PyTorch.


RESULT:
Thus the implementation of the transfer learning concept in image classification
has been successfully executed and the codes are generated.


EX NO: 8
DATE:
USING A PRE-TRAINED MODEL ON KERAS FOR TRANSFER LEARNING

AIM:
The aim of using a pre-trained model for transfer learning in Keras is to expedite the
process of developing highly accurate image classification models, even when faced
with limited computational resources or labeled data.

ALGORITHM:
Step 1: Select a pre-trained model: Choose a pre-trained convolutional neural
network (CNN) model such as VGG, ResNet, Inception, or MobileNet.
Step 2: Load the pre-trained model: Load the pre-trained model along with its
weights, excluding the classification layers.
Step 3: Freeze the base layers: Optionally freeze the weights of the convolutional base
layers to prevent them from being updated during training.
Step 4: Modify the architecture: Replace or add new classification layers.
Step 5: Compile the model: Compile the modified model with an appropriate loss
function, optimizer, and evaluation metric for your classification task.
Step 6: Train the model: Train the modified model on your new dataset.
Step 7: Fine-tune: Optionally unfreeze some of the top layers of the pre-trained model
and continue training on the new dataset with a lower learning rate to fine-tune
the model further.
Step 8: Evaluate the model: Evaluate the trained model on a separate validation set to
assess its performance.
Step 9: Test the model: Test the trained model on a separate test set to assess its
generalization performance on unseen data.
Step 10: Deploy the model: Once satisfied with the model's performance, deploy it
for inference on new images in real-world applications.


PROGRAM:
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load pre-trained VGG16 model without top layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze base model layers
for layer in base_model.layers:
    layer.trainable = False

# Custom output layers
model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(optimizer=Adam(learning_rate=0.0001), loss='binary_crossentropy', metrics=['accuracy'])

# Print model summary
model.summary()

# Define data generators
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory('train',
                                                    target_size=(224, 224),
                                                    batch_size=32,
                                                    class_mode='binary')
validation_generator = test_datagen.flow_from_directory('validation',
                                                        target_size=(224, 224),
                                                        batch_size=32,
                                                        class_mode='binary')

# Train the model
history = model.fit(train_generator,
                    steps_per_epoch=len(train_generator),
                    epochs=10,
                    validation_data=validation_generator,
                    validation_steps=len(validation_generator))

print("Training and validation accuracy over epochs:")
print(history.history['accuracy'])
print(history.history['val_accuracy'])


OUTPUT:


VIVA QUESTIONS:
1. What is a pre-trained model in the context of deep learning?
A pre-trained model is a model that has been trained on a large dataset, typically
for a specific task such as image classification, object detection, or natural
language processing.

2. How does transfer learning leverage pre-trained models?


Transfer learning involves taking a pre-trained model and adapting it to a new,
possibly related task. Instead of training a model from scratch, we reuse the
learned features of the pre-trained model and only train additional layers or
fine-tune certain layers to fit the new task.

3. Why do we freeze the layers of the pre-trained model during transfer


learning?
Freezing the layers of the pre-trained model prevents them from being updated
during training, preserving the learned features. Since these layers have already
been trained on a large dataset, they contain valuable information that is often
useful for the new task. By freezing these layers, we ensure that this information
is retained and not overwritten during training.

4. What are some popular pre-trained models available in Keras?


Keras provides access to several pre-trained models, including VGG, ResNet,
Inception, MobileNet, and more. These models are trained on large datasets such
as ImageNet and are available through the Keras Applications module. Users can
easily load these models with pre-trained weights and use them for transfer
learning or other tasks.

5. When would you choose to fine-tune the layers of a pre-trained model during
transfer learning?

PSVPEC/IT/CCS355/ NEURAL NETWORK AND DEEP LEARNING LABORATORY 56


Reg-411721205051

Fine-tuning the layers of a pre-trained model is typically done when the new
task is similar to the task the model was originally trained on, but the dataset for
the new task is significantly different. In such cases, fine-tuning allows the model
to adapt its learned features to the nuances of the new dataset, potentially leading
to better performance. However, fine-tuning requires caution as it can lead to
overfitting, especially when the new dataset is small.

RESULT:
Thus the use of a pre-trained model on Keras for transfer learning has been
successfully executed and the codes are generated.

EX NO: 9
DATE:
PERFORM SENTIMENT ANALYSIS USING RNN

AIM:
To train an RNN model on a dataset containing labeled examples of text with
corresponding sentiment labels.

ALGORITHM:

Step 1: Data collection: Gather a dataset of text documents labeled with sentiment.
Step 2: Data preprocessing: Tokenization; split each text document into individual
words or tokens.
Step 3: Embedding layer: Convert each word index into a dense vector representation.
Step 4: Model architecture: Define an RNN architecture such as LSTM
(Long Short-Term Memory) or GRU (Gated Recurrent Unit).
Step 5: Inference: Use the trained model to predict sentiment labels for new text data.
Step 6: Fine-tuning: Fine-tune the model hyperparameters or architecture to
improve performance.
Step 7: Deployment: Deploy the trained model in a production environment where it
can process new text inputs and provide sentiment predictions.


PROGRAM:

import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
from tensorflow.keras.optimizers import Adam

# Load and preprocess the IMDb dataset
max_features = 10000   # Maximum number of words in the vocabulary
maxlen = 100           # Maximum sequence length
batch_size = 32

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# Build the RNN model
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=Adam(learning_rate=0.001), loss='binary_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=5, batch_size=batch_size, validation_split=0.2)

# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)

OUTPUT:


VIVA QUESTIONS:

1. What is sentiment analysis?


Sentiment analysis, also known as opinion mining, is a natural language
processing (NLP) task that involves determining the sentiment expressed in a
piece of text, such as positive, negative, or neutral. It is commonly used to
analyze customer feedback, social media posts, product reviews, and more.

2. What is an RNN, and how does it differ from other types of neural networks?
A Recurrent Neural Network (RNN) is a type of neural network architecture
designed to handle sequential data by maintaining a hidden state that captures
information from previous time steps. Unlike feedforward neural networks, which
process input data independently, RNNs are capable of capturing temporal
dependencies in sequences.

3. How does the embedding layer work in the context of sentiment analysis?
The embedding layer converts input word indices into dense vectors of fixed size,
allowing the model to learn meaningful representations of words in a continuous
space. This dense representation helps capture semantic relationships between
words and improves the model's ability to generalize to unseen data.

4. Why do we use padding in sequence data for sentiment analysis?


Padding is used to ensure that all input sequences have the same length, which is
necessary for batch processing and efficient computation in neural networks. By
padding shorter sequences with zeros or other placeholders, we create uniformity
in the input data, enabling the model to process multiple sequences
simultaneously.
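A minimal sketch of pad_sequences (the word indices are made up) showing how shorter
sequences are zero-padded to a common length:

from tensorflow.keras.preprocessing.sequence import pad_sequences

# Two reviews of different lengths, already converted to word indices
sequences = [[17, 4, 256], [9, 33, 41, 7, 102]]
padded = pad_sequences(sequences, maxlen=5)
print(padded)
# [[  0   0  17   4 256]
#  [  9  33  41   7 102]]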

5. What is the significance of the activation function 'sigmoid' in the output layer
of the sentiment analysis model?


The sigmoid activation function squashes the output of the neural network to a
range between 0 and 1, making it suitable for binary classification tasks like
sentiment analysis.

RESULT:
Thus the sentiment analysis using RNN has been successfully performed and
the codes are generated.


EX NO: 10
DATE:
IMPLEMENT AN LSTM-BASED AUTOENCODER IN TENSORFLOW/KERAS

AIM:
The aim of this project is to implement an LSTM-based autoencoder using
TensorFlow/Keras.

ALGORITHM:
Step 1: Import necessary libraries: Import TensorFlow and other required libraries
like Keras.
Step 2: Prepare the data: Prepare your data for training the autoencoder. It could be
time-series data or any sequential data.
Step 3: Combine encoder and decoder: Create an instance of the Sequential or
Functional API model.
Step 4: Compile the model: Define the loss function, typically mean squared
error (MSE), since it is a reconstruction task.
Step 5: Train the model: Fit the model to your training data.
Step 6: Evaluate the model: Evaluate the performance of the autoencoder on
your validation set.
Step 7: Use the autoencoder: Once trained, you can use the encoder part to
extract meaningful representations from your data.
Step 8: Fine-tuning and optimization: Experiment with different architectures,
hyperparameters, and training strategies to improve performance.
Step 9: Save and deploy: Save the trained model for future use or deployment in
production systems.


PROGRAM:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

# Prepare a simple sequential dataset (overlapping windows of a sine wave)
timesteps = 10     # length of each input sequence
n_features = 1     # one value per time step
t = np.linspace(0, 100, 1000)
series = np.sin(t)
X = np.array([series[i:i + timesteps] for i in range(len(series) - timesteps)])
X = X.reshape((X.shape[0], timesteps, n_features))

# Define the LSTM-based autoencoder: the encoder compresses each sequence into a
# fixed-size vector and the decoder reconstructs the original sequence from it
model = Sequential([
    LSTM(64, activation='relu', input_shape=(timesteps, n_features)),   # encoder
    RepeatVector(timesteps),              # repeat the encoding for each output time step
    LSTM(64, activation='relu', return_sequences=True),                 # decoder
    TimeDistributed(Dense(n_features))    # reconstruct one value per time step
])

# Compile with mean squared error, since this is a reconstruction task
model.compile(optimizer='adam', loss='mse')
model.summary()

# Train the autoencoder to reconstruct its own input
model.fit(X, X, epochs=20, batch_size=32, verbose=0)

# Evaluate the reconstruction error and inspect one reconstructed sequence
loss = model.evaluate(X, X, verbose=0)
print("Reconstruction MSE:", loss)
reconstructed = model.predict(X[:1])
print("Original sequence:     ", X[0].flatten())
print("Reconstructed sequence:", reconstructed[0].flatten())


OUTPUT:


VIVA QUESTIONS:

1. What is an autoencoder?
An autoencoder is an unsupervised learning neural network architecture that
aims to learn efficient representations of input data by training the model to
reconstruct its input.

2. How does the LSTM architecture differ from traditional feedforward neural
networks?
Long Short-Term Memory (LSTM) networks are a type of recurrent neural
network (RNN) designed to process sequential data while addressing the
vanishing gradient problem.

3. What is the purpose of the RepeatVector layer in an LSTM-based


autoencoder?
The RepeatVector layer repeats the output of the encoder for each time step in
the decoder, allowing the decoder to reconstruct the entire sequence from the
encoded representation.

4. How is the loss function defined in an autoencoder model?


In an autoencoder, the loss function is typically defined as the reconstruction
error, which measures the difference between the input data and the reconstructed
output. Common loss functions used include mean squared error (MSE) for
continuous data and binary cross-entropy for binary data.

5. What are some applications of autoencoders?


Autoencoders have various applications, including:
 Dimensionality reduction and feature learning
 Anomaly detection and outlier detection
 Data denoising and reconstruction
 Image compression and generation
 Collaborative filtering and recommendation systems

RESULT:
Thus the implementation of an LSTM-based autoencoder in TensorFlow/Keras has
been successfully executed and the codes are generated.


EX NO: 11
DATE:
IMAGE GENERATION USING GAN

AIM:
The aim of an Image Generation using GANs program is to harness the power of
deep learning to generate images that exhibit realistic characteristics.

ALGORITHM:
Step 1: Import necessary libraries: Import TensorFlow (or any other deep learning
framework) along with other required libraries like NumPy for numerical
operations and Matplotlib for visualization.
Step 2: Build the generator and the discriminator: Define the generator and
discriminator architectures.
Step 3: Compile the discriminator: Compile the discriminator with an appropriate loss
function (binary cross-entropy) and optimizer (e.g., Adam).
Step 4: Compile the GAN model: Compile the GAN model with binary cross-entropy loss
and an optimizer (e.g., Adam).
Step 5: Training loop: Iterate over a fixed number of epochs.
Step 6: Evaluate and visualize: Periodically evaluate the performance of the generator
by generating sample images.
Step 7: Tune and optimize: Experiment with different architectures, hyperparameters,
and training strategies to improve the quality of generated images.
Step 8: Save and deploy: Save the trained generator model for future use or deployment
in production systems.
Step 9: Iterate and improve: Iterate on the GAN architecture and training process
to achieve better image generation results.


PROGRAM:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Reshape, Flatten
from tensorflow.keras.optimizers import Adam

# Define the generator
generator = keras.Sequential([
    Dense(128, input_shape=(100,), activation='relu'),
    Dense(784, activation='sigmoid'),
    Reshape((28, 28))
])

# Define the discriminator
discriminator = keras.Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(1, activation='sigmoid')
])

# Combine the generator and discriminator to form a GAN
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))
discriminator.trainable = False   # Freeze the discriminator while training the generator through the GAN
gan_input = keras.Input(shape=(100,))
x = generator(gan_input)
gan_output = discriminator(x)
gan = keras.Model(gan_input, gan_output)
gan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))

# Load the MNIST dataset
mnist = keras.datasets.mnist
(X_train, _), (_, _) = mnist.load_data()
X_train = X_train / 255.0   # Scale pixels to [0, 1] to match the generator's sigmoid output

# Training the GAN
batch_size = 32
half_batch = batch_size // 2
epochs = 10000

for epoch in range(epochs):
    # Train the discriminator on half a batch of real and half a batch of generated images
    idx = np.random.randint(0, X_train.shape[0], half_batch)
    real_images = X_train[idx]
    noise = np.random.normal(0, 1, (half_batch, 100))
    generated_images = generator.predict(noise, verbose=0)
    real_labels = np.ones((half_batch, 1))
    fake_labels = np.zeros((half_batch, 1))
    d_loss_real = discriminator.train_on_batch(real_images, real_labels)
    d_loss_fake = discriminator.train_on_batch(generated_images, fake_labels)
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # Train the generator to fool the discriminator
    noise = np.random.normal(0, 1, (batch_size, 100))
    valid_labels = np.ones((batch_size, 1))
    g_loss = gan.train_on_batch(noise, valid_labels)

    # Print the progress
    if epoch % 100 == 0:
        print(f"Epoch: {epoch}  D loss: {d_loss}  G loss: {g_loss}")


OUTPUT:


VIVA QUESTIONS:

1. What is a Generative Adversarial Network (GAN)?


A Generative Adversarial Network (GAN) is a type of neural network
architecture consisting of two networks: a generator and a discriminator. The
generator aims to generate realistic data samples, such as images, while the
discriminator aims to distinguish between real and fake samples.

2. How does a GAN generate realistic images?


The generator in a GAN generates realistic images by learning to map random
noise vectors from a latent space to the space of real images. Through training,
the generator learns to generate images that are increasingly indistinguishable
from real images by iteratively adjusting its parameters to minimize the
discriminator's ability to differentiate between real and generated images.

3. What is the loss function used in training a GAN?


The loss function in training a GAN consists of two components: the generator
loss and the discriminator loss. The generator loss measures how well the
generator is fooling the discriminator, typically calculated as the cross-entropy
between the discriminator's predictions on generated images and a label indicating
that those images are real. The discriminator loss is the cross-entropy of its
predictions on both real and generated batches against their true labels.
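
As a sketch of how the two losses fit together, using Keras's built-in binary cross-entropy
(the function and tensor names here are illustrative, not part of the program above):

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

# Discriminator loss: real images should be classified as 1, generated images as 0
def discriminator_loss(real_preds, fake_preds):
    real_loss = bce(tf.ones_like(real_preds), real_preds)
    fake_loss = bce(tf.zeros_like(fake_preds), fake_preds)
    return real_loss + fake_loss

# Generator loss: the generator "wins" when the discriminator labels its samples as real (1)
def generator_loss(fake_preds):
    return bce(tf.ones_like(fake_preds), fake_preds)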

4. How does the training process of a GAN work?


During training, the generator and discriminator are trained iteratively in
alternating steps. In each step, the generator generates fake images from random
noise vectors and passes them to the discriminator, which evaluates their realism.
The discriminator's feedback indicates how well the generator is fooling it; the
generator's weights are updated to increase that success, while the discriminator's
weights are updated to better separate real from fake images.

5. What are some challenges associated with training GANs?


 Mode collapse: The generator may learn to produce only a limited set of samples,
ignoring the diversity of the true data distribution.
 Instability: GAN training is sensitive to hyperparameters and architecture
choices and may oscillate or fail to converge.
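
One partial remedy that is often cited for such instability is one-sided label smoothing:
training the discriminator against targets slightly below 1 for real images. A two-line
sketch in the style of the program above (the value 0.9 is an illustrative choice):

real_labels = np.full((half_batch, 1), 0.9)   # smoothed "real" targets instead of np.ones
fake_labels = np.zeros((half_batch, 1))       # fake targets stay at 0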

RESULT:
Thus the Image generation using GAN has been successfully executed and codes are
generated.


EX NO:12 TRAIN A DEEP LEARNING MODEL TO CLASSIFY A GIVEN IMAGE USING A PRE-TRAINED MODEL


DATE:

AIM:
The aim of this project is to train a deep learning model to classify given images into
multiple classes using a pre-trained model as a base.

ALGORITHM:
Step 1: Import Necessary Libraries: Import TensorFlow (or any other deep learning
framework), along with other required libraries like Keras, NumPy, and
Matplotlib.
Step 2: Load Pre-trained Model: Load a pre-trained convolutional neural
network (CNN) model that has been trained on a large dataset (e.g., ImageNet).
Step 3: Data Augmentation: Optionally, apply data augmentation techniques such as
random rotations, flips, shifts, or zooms to increase the diversity of training
examples and improve generalization.
Step 4: Train and Evaluate the Model: Add custom classification layers on top of the
frozen base model, compile the model, train it on the training data, and then
evaluate it on the testing data using the evaluate() method. Calculate metrics
such as accuracy, precision, recall, and F1-score to assess classification
performance.
Step 5: Visualize Results: Visualize the model's predictions on sample images from the
testing set to gain insights into its performance and identify any
misclassifications.
Step 6: Fine-tune and Optimize: Experiment with different architectures,
hyperparameters, and training strategies to improve the model's performance.
Fine-tune based on evaluation results to achieve better classification accuracy.
Step 7: Save and Deploy: Save the trained model weights and architecture for future use
or deployment in production systems.


PROGRAM:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Input
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
# Load and preprocess the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Normalize pixel values to the range [0, 1]
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
# Load a pre-trained model (MobileNetV2) excluding the top classification layers
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
# Add custom classification layers on top
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x) # 10 classes in CIFAR-10
model = Model(inputs=base_model.input, outputs=predictions)
# Freeze the layers of the pre-trained model
for layer in base_model.layers:
    layer.trainable = False
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# Train the model


model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))


# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)


OUTPUT:


VIVA QUESTIONS:

1. What is the purpose of using a pre-trained model in transfer learning?


Pre-trained models, such as ResNet50, have been trained on large datasets,
typically on tasks like image classification. By leveraging the features learned
by these models, we can significantly reduce the amount of labeled data and
computational resources required to train a new model for a specific task,
making transfer learning an efficient approach.

2. Why do we freeze the layers of the pre-trained model during transfer learning?

Freezing the layers of the pre-trained model prevents them from being updated
during training, preserving the learned features. Since these layers have already
learned meaningful representations from a large dataset, freezing them allows
us to focus on training the new classification layers on top, which are specific
to our task, without disturbing the pre-learned features.
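
Once the new classification head has converged, a common follow-up (not part of the program
above) is to unfreeze a few of the top layers of the base model and continue training with a
much smaller learning rate. A sketch, assuming the base_model, model, and data arrays from the
program above; the number of unfrozen layers and the learning rate are illustrative:

# Unfreeze the last 20 layers of the pre-trained base for fine-tuning (layer count is an assumption)
for layer in base_model.layers[-20:]:
    layer.trainable = True

# Re-compile with a small learning rate so the pre-learned features are not destroyed
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))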

3. How does data augmentation contribute to the training process in image classification?

Data augmentation involves applying a variety of transformations to the
training images, such as rotation, shifting, shearing, and flipping, to artificially
increase the diversity of the training data.
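
A minimal sketch of such augmentation with Keras's ImageDataGenerator (the transformation
ranges are illustrative assumptions):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation matching the transformations mentioned above
datagen = ImageDataGenerator(
    rotation_range=15,        # random rotations up to 15 degrees
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    shear_range=0.1,          # shearing
    horizontal_flip=True)     # flipping

# Typically used as: model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)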

4. What are some common activation functions used in the classification layers of
deep learning models?
Common activation functions used in classification layers include ReLU (Rectified
Linear Unit) for hidden layers and softmax for the output layer in multi-class
classification tasks.


5. How do you evaluate the performance of a deep learning model trained for
image classification?

The performance of an image classification model can be evaluated using metrics such
as accuracy, precision, recall, and F1-score on a separate validation or test dataset.
Additionally, visual inspection of the model's predictions on sample images can provide
insights into its performance and potential areas for improvement.
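
A sketch of computing such metrics with scikit-learn, assuming the model, x_test, and one-hot
encoded y_test from the program above and that scikit-learn is installed:

import numpy as np
from sklearn.metrics import classification_report

y_pred = np.argmax(model.predict(x_test), axis=1)        # predicted class indices
y_true = np.argmax(y_test, axis=1)                       # undo the one-hot encoding
print(classification_report(y_true, y_pred, digits=3))   # per-class precision, recall, F1-score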

RESULT:
Thus the training of a deep learning model to classify a given image using a pre-trained
model has been successfully executed and codes are generated.

EX NO:13 RECOMMENDATION SYSTEM FROM SALES DATA USING DEEP LEARNING


DATE:

AIM:
The aim of this experiment is to build a recommendation system from sales data using deep
learning that provides personalized product recommendations.

ALGORITHM:
Step 1: Data Collection and Preprocessing: Collect historical sales data, including
information about customers, products, and transactions.

Step 2: Build and Train the Model: Define a deep learning model (e.g., an embedding-based
collaborative filtering network) and train it using the training data.

Step 3: Evaluate the Model: Evaluate the performance of the trained model on the
testing data.

Step 4: Tune and Optimize: Experiment with different model architectures,
hyperparameters, and training strategies to improve performance.

Step 5: Deploy the Model: Deploy the trained model into a production environment
where it can generate recommendations in real-time or in batch mode.

Step 6: Monitor and Update: Continuously monitor the performance of the
recommendation system in production.


PROGRAM:
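
A minimal sketch of an embedding-based recommender in Keras is given below. It learns user and
product embeddings from purchase interactions and scores how likely a user is to buy a product;
the synthetic interaction data, embedding size, layer widths, and training settings are
illustrative assumptions rather than a prescribed implementation.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# --- Synthetic sales interactions (assumption: real transaction data would be loaded here) ---
num_users, num_items = 1000, 500
rng = np.random.default_rng(42)
user_ids = rng.integers(0, num_users, size=20000)
item_ids = rng.integers(0, num_items, size=20000)
# Label 1 = the user bought the item, 0 = a randomly sampled "negative" item
labels = rng.integers(0, 2, size=20000).astype("float32")

# --- Embedding-based scoring model (neural collaborative filtering style) ---
embedding_dim = 32
user_in = layers.Input(shape=(1,), name="user_id")
item_in = layers.Input(shape=(1,), name="item_id")
user_vec = layers.Flatten()(layers.Embedding(num_users, embedding_dim)(user_in))
item_vec = layers.Flatten()(layers.Embedding(num_items, embedding_dim)(item_in))
x = layers.Concatenate()([user_vec, item_vec])
x = layers.Dense(64, activation="relu")(x)
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)   # purchase probability

model = Model([user_in, item_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# --- Train and evaluate ---
model.fit([user_ids, item_ids], labels, epochs=5, batch_size=256, validation_split=0.1)

# --- Recommend: score every item for one user and take the top 5 ---
user = 0
all_items = np.arange(num_items)
scores = model.predict([np.full(num_items, user), all_items], verbose=0).ravel()
top5 = all_items[np.argsort(scores)[::-1][:5]]
print("Top-5 recommended item ids for user 0:", top5)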



OUTPUT:


VIVA QUESTIONS:

1. What is a recommendation system, and how does it benefit businesses?


A recommendation system is a type of information filtering system that predicts a
user's preferences or interests in items such as products, movies, or music and
suggests relevant items to the user.

2. How can deep learning be applied to build recommendation systems from sales
data?
Deep learning techniques, such as neural collaborative filtering (NCF) and deep
autoencoders, can be applied to learn complex patterns and representations from
sales data to generate accurate recommendations.

3. What types of sales data are typically used to build recommendation systems?
Various types of sales data can be used, including transactional data
(e.g., purchase history), user behavior data (e.g., browsing history,
clickstream data), user profile data (e.g., demographic information, preferences),
and item attributes (e.g., product descriptions, features).

4. What are some evaluation metrics used to assess the performance of recommendation systems?

Common evaluation metrics for recommendation systems include precision, recall,
F1-score, mean average precision (MAP), normalized discounted cumulative gain
(NDCG), and area under the receiver operating characteristic curve (AUC-ROC).
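
As a small illustration of one such metric, precision@k for a single user's recommendation
list can be computed directly (the item ids below are made-up values):

# Sketch: precision@k for one user's recommendation list (illustrative item ids)
def precision_at_k(recommended, relevant, k=5):
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

recommended = [12, 7, 99, 3, 41]     # items the system ranked highest (assumed)
relevant = {7, 3, 55}                # items the user actually bought (assumed)
print(precision_at_k(recommended, relevant, k=5))   # -> 0.4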

5. How can businesses leverage recommendation systems to optimize sales and marketing strategies?

Businesses can leverage recommendation systems to personalize marketing
campaigns, cross-sell and upsell related products, increase customer retention and
loyalty, optimize inventory management, and enhance customer satisfaction.


RESULT:
Thus the Recommendation system from sales data using Deep Learning has been
successfully executed and codes are generated.


EX NO:14 IMPLEMENT OBJECT DETECTION USING CNN


DATE:

AIM:
The aim of implementing object detection using CNN is to develop a model capable
of accurately detecting and localizing objects within images.

ALGORITHM:
Step 1: Data Collection and Annotation: Gather a dataset of images containing the
objects you want to detect.
Step 2: Preprocess the Data: Resize the images to a consistent size suitable for input to
the CNN.
Step 3: Choose a CNN Architecture: Select a CNN architecture suitable for object
detection tasks.
Step 4: Split Data into Training and Testing Sets: Divide the dataset into training and
testing sets to evaluate the model's performance.
Step 5: Build the Model: Implement the chosen CNN architecture using a deep learning
framework like TensorFlow or PyTorch.
Step 6: Train the Model: Train the model on the training data using the fit() or train()
method.
Step 7: Evaluate the Model: Evaluate the performance of the trained model on the
testing data.
Step 8: Deploy the Model: Deploy the trained model in a production environment
where it can detect objects in new images or videos.


PROGRAM:
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
# Define the paths to the model and label map
PATH_TO_FROZEN_GRAPH = 'path/to/frozen_inference_graph.pb'
PATH_TO_LABELS = 'path/to/label_map.pbtxt'
# Load a frozen model and label map
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

category_index = label_map_util.create_category_index_from_labelmap(
    PATH_TO_LABELS, use_display_name=True)
# Function to run object detection
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                    'num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks'
            ]:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
            if 'detection_masks' in tensor_dict:
                # Keep only the boxes/masks that correspond to actual detections
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
            else:
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
            # Input tensor of the frozen graph
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Run inference (the model expects a batch, so add a leading batch dimension)
            output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)})
            # All outputs are float32 numpy arrays, so convert types as appropriate
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.int64)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
            return output_dict
# Load an image for object detection
image_path = 'path/to/image.jpg'

image = Image.open(image_path)
image_np = np.array(image)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    output_dict['detection_boxes'],
    output_dict['detection_classes'],
    output_dict['detection_scores'],
    category_index,
    instance_masks=output_dict.get('detection_masks'),
    use_normalized_coordinates=True,
    line_thickness=8)
plt.figure(figsize=(12, 8))
plt.imshow(image_np)
plt.show()


OUTPUT:


VIVA QUESTIONS:

1. What is object detection, and how does it differ from image classification?
Object detection is the process of identifying and localizing objects within an
image, often by drawing bounding boxes around them. Unlike image
classification, which assigns a single label to the entire image, object detection
requires identifying multiple objects and their locations within the image.

2. How does a Convolutional Neural Network (CNN) contribute to object detection?

CNNs are well-suited for object detection tasks due to their ability to
automatically learn hierarchical features from images. In object detection
systems, CNNs are typically used as feature extractors to analyze image regions
and extract relevant features that help in detecting and localizing objects.
3. What is the purpose of the VGG16 model in the object detection
implementation?
The VGG16 model serves as a pre-trained feature extractor in the object detection
implementation. By using a pre-trained model like VGG16, we can leverage the
learned features from a large dataset (e.g., ImageNet) to extract meaningful
features from images, which can then be used as input for subsequent layers in
the object detection model.

4. How is the output layer of the object detection model configured?


In the given example, the output layer consists of a single neuron with a sigmoid
activation function. This configuration is suitable for binary object detection tasks,
where the model predicts whether an object is present (1) or not (0) within the
image.
5. How can the performance of the object detection model be evaluated?
The performance of the object detection model can be evaluated using metrics
such as precision, recall, F1-score, and mean Average Precision (mAP). These
metrics assess the accuracy, completeness, and localization quality of the detected
objects compared to ground truth annotations.
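
Metrics such as mAP are built on the Intersection-over-Union (IoU) between predicted and
ground-truth boxes. A small sketch with boxes in [x_min, y_min, x_max, y_max] format (the
sample coordinates are made up):

# Sketch: Intersection-over-Union between two boxes in [x_min, y_min, x_max, y_max] format
def iou(box_a, box_b):
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou([10, 10, 50, 50], [30, 30, 70, 70]))   # partially overlapping boxes -> about 0.14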

RESULT:
Thus the Implement Object Detection using CNN has been successfully executed
and codes are generated.


EX NO:15 IMPLEMENT ANY SIMPLE REINFORCEMENT ALGORITHM FOR AN NLP PROBLEM


DATE:

AIM:
The aim of this project is to implement a simple reinforcement learning algorithm for
solving an NLP problem.

ALGORITHM:
Step 1: Import Necessary Libraries: Import libraries such as TensorFlow, Keras, or
PyTorch for deep learning, along with any other required libraries for text
processing and RL.
Step 2: Preprocess Text Data: Prepare your text data for training. This may involve
tokenization, padding, and any other necessary preprocessing steps.
Step 3: Define the Environment: Define the RL environment for the text generation task.
Step 4: Define the Reward Function: Define a reward function that evaluates the
quality of generated text sequences.
Step 5: Training Loop: Iterate over a fixed number of episodes or until convergence.
Step 6: Fine-tune and Optimize: Experiment with different architectures,
hyperparameters, and training strategies to improve text generation quality.
Step 7: Generate Text: Use the trained text generation model to generate text samples.
Step 8: Deploy the Model: Deploy the trained text generation model in a production
environment where it can generate text on-demand.


PROGRAM:
import random
# Define a simple environment with a chatbot
class ChatbotEnvironment:
    def __init__(self):
        self.state = "Hello, how can I assist you?"
        self.conversation_history = []
        self.reward = 0

    def step(self, action):
        user_query = action
        self.conversation_history.append((self.state, user_query))
        # Simulate chatbot's response (you can replace this with a more advanced model)
        if "help" in user_query:
            self.state = "Sure, I can help you with that. What do you need?"
            self.reward += 1
        elif "thank you" in user_query:
            self.state = "You're welcome! Let me know if you need anything else."
            self.reward += 1
        else:
            self.state = "I'm sorry, I didn't understand. Can you rephrase your question?"
            self.reward -= 1
        return self.state, self.reward
# Define a simple Q-learning agent
class QLearningAgent:
    def __init__(self, actions, learning_rate=0.1, discount_factor=0.9,
                 exploration_prob=0.2):
        self.actions = actions
        self.learning_rate = learning_rate
        self.discount_factor = discount_factor
        self.exploration_prob = exploration_prob
        self.q_table = {}

    def choose_action(self, state):
        # Epsilon-greedy: explore with a small probability, otherwise pick the best known action
        if random.uniform(0, 1) < self.exploration_prob:
            return random.choice(self.actions)
        else:
            q_values = [self.q_table.get((state, a), 0) for a in self.actions]
            return self.actions[q_values.index(max(q_values))]

    def learn(self, state, action, reward, next_state):
        current_q = self.q_table.get((state, action), 0)
        best_next_q = max([self.q_table.get((next_state, a), 0) for a in self.actions])
        updated_q = current_q + self.learning_rate * (reward + self.discount_factor * best_next_q - current_q)
        self.q_table[(state, action)] = updated_q
# Train the chatbot using Q-learning
env = ChatbotEnvironment()
# Actions are candidate user utterances; they match the keywords the environment rewards
agent = QLearningAgent(actions=["help", "thank you"])
epochs = 1000
for _ in range(epochs):
    state = env.state
    action = agent.choose_action(state)
    next_state, reward = env.step(action)
    agent.learn(state, action, reward, next_state)
# Test the trained chatbot
state = env.state
print("Chatbot's response:", state)


OUTPUT:


VIVA QUESTIONS:

1. What is reinforcement learning, and how does it differ from other machine
learning paradigms?
Reinforcement learning is a type of machine learning paradigm where an agent
learns to make decisions by interacting with an environment to maximize cumulative
rewards.

2. What is the Q-learning algorithm, and how does it work?


Q-learning is a model-free reinforcement learning algorithm that learns to make
optimal decisions by estimating the value of taking specific actions in particular
states.
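
The core of Q-learning is the update rule used in the learn() method of the program above:
Q(s, a) <- Q(s, a) + alpha * [r + gamma * max_a' Q(s', a') - Q(s, a)]. A short sketch of that
update with illustrative values:

alpha, gamma = 0.1, 0.9                      # learning rate and discount factor
q_sa, reward, best_next_q = 0.0, 1.0, 0.5    # illustrative current estimate, reward, best next-state value
q_sa = q_sa + alpha * (reward + gamma * best_next_q - q_sa)
print(q_sa)                                  # -> 0.145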

3. How is the concept of states, actions, and rewards applied in the context of
NLP problems?
In NLP problems, states represent the current context or partial output, actions
correspond to possible words or tokens that can be added to the output, and
rewards can be based on the quality or fluency of the generated text.

4. What are the key components of the Q-learning algorithm, and how do they
interact?
The key components of the Q-learning algorithm include the Q-table, which stores
the expected cumulative rewards for each action-state pair, the environment,
which defines the state transitions and rewards, and the learning parameters such
as learning rate (alpha), discount factor (gamma), and exploration rate (epsilon).

5. How can the performance of a reinforcement learning algorithm for NLP tasks be evaluated?

The performance of a reinforcement learning algorithm for NLP tasks can be
evaluated based on various metrics such as text fluency, coherence,
grammaticality, semantic relevance, and similarity to human-generated text.

RESULT:
Thus the Implement any simple Reinforcement Algorithm for an NLP problem has been
successfully executed and codes are generated.
