Deep Learning Record
RECORD
NAME:
REG. NO:
SLOT:
FACULTY NAME:
CHENNAI – 600127
TAMILNADU
FACULTY INCHARGE
EXPERIMENT NO: 1
INSTALLATION OF PYTHON
AIM
To install Python through the Anaconda distribution and to explore basic functions of the keras and scikit-learn packages.
The Anaconda installer will open. Click "Next" on the welcome screen, click "I Agree",
choose "Just Me", and select the installation location.
Click the "Install" button. Anaconda will be installed in a few minutes. Then click "Finish".
Step 5: Jupyter Notebook
Open Anaconda Navigator and click "Launch" under "Jupyter Notebook". Jupyter
Notebook will open in the browser. Open a new file and start coding.
1. Loading a dataset.
Keras datasets
from tensorflow.keras.datasets import boston_housing, mnist, cifar10, imdb
(x_train,y_train),(x_test,y_test) = mnist.load_data()
(x_train2,y_train2),(x_test2,y_test2) = boston_housing.load_data()
(x_train3,y_train3),(x_test3,y_test3) = cifar10.load_data()
(x_train4,y_train4),(x_test4,y_test4) = imdb.load_data(num_words=20000)
num_classes = 10
Other datasets
import numpy as np
from urllib.request import urlopen
data = np.loadtxt(urlopen("http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"), delimiter=",")
X = data[:, 0:8]
y = data[:, 8]
General syntax
from keras.datasets import your_dataset_module
(x_train, y_train), (x_test, y_test) = your_dataset_module.load_data()
In this example, replace your_dataset_module with the specific dataset module you want
to use. For example, common modules include mnist, cifar10, imdb, etc.
2. Preprocessing
Sequence padding
from tensorflow.keras.preprocessing import sequence
x_train4 = sequence.pad_sequences(x_train4,maxlen=80)
x_test4 = sequence.pad_sequences(x_test4,maxlen=80)
General syntax
from keras.preprocessing.sequence import pad_sequences
sequences = [
[1, 2, 3, 4],
[5, 6, 7],
[8, 9]
]
maxlen = 10
padded_sequences = pad_sequences(sequences, maxlen=maxlen, padding='pre',
truncating='pre', value=0.0)
In this example, sequences is a list of lists, where each inner list represents a sequence.
The pad_sequences function is then used to pad or truncate these sequences to a specified
length (maxlen). The padding and truncating parameters determine whether padding or
truncation occurs at the beginning or end of each sequence. Adjust the maxlen parameter
according to the desired length for your sequences. The value parameter allows you to
specify the value used for padding.
One-Hot encoding
from tensorflow.keras.utils import to_categorical
Y_train = to_categorical(y_train, num_classes)
Y_test = to_categorical(y_test, num_classes)
Y_train3 = to_categorical(y_train3, num_classes)
Y_test3 = to_categorical(y_test3, num_classes)
General syntax
from keras.utils import to_categorical
labels = [0, 1, 2, 1, 0, 2]
one_hot_labels = to_categorical(labels, num_classes=num_classes)
In this example, labels is a list of categorical labels. The to_categorical function is then
used to convert these labels into one-hot encoded format. The num_classes parameter
specifies the total number of classes in your dataset. Make sure to replace num_classes
with the actual number of classes in your dataset. The resulting one_hot_labels will be a
NumPy array representing the one-hot encoded labels.
Standardization / Normalization
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(x_train2)
standardized_X = scaler.transform(x_train2)
standardized_X_test = scaler.transform(x_test2)
General syntax
from sklearn.preprocessing import StandardScaler, MinMaxScaler
# Standardization
scaler_standard = StandardScaler()
standardized_features = scaler_standard.fit_transform(features)
# Normalization (MinMax Scaling)
scaler_minmax = MinMaxScaler()
normalized_features = scaler_minmax.fit_transform(features)
In this example, you replace features with your actual dataset, whether it's a NumPy array
or a Pandas DataFrame. The StandardScaler is used for standardization (z-score
normalization), and the MinMaxScaler is used for normalization to a specific range
(usually [0, 1]).
3. Model architecture
Sequential model
from tensorflow.keras.models import Sequential
model = Sequential()
CNN
from tensorflow.keras.layers import Activation, Conv2D, MaxPooling2D, Flatten, Dense, Dropout
model2 = Sequential()
model2.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model2.add(Activation('relu'))
model2.add(Conv2D(32, (3, 3)))
model2.add(Activation('relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(Dropout(0.25))
model2.add(Conv2D(64, (3, 3), padding='same'))
model2.add(Activation('relu'))
model2.add(Conv2D(64, (3, 3)))
model2.add(Activation('relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(Dropout(0.25))
model2.add(Flatten())
model2.add(Dense(512))
model2.add(Activation('relu'))
model2.add(Dropout(0.5))
model2.add(Dense(num_classes))
model2.add(Activation('softmax'))
General syntax
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
# Create a Sequential model
model = Sequential()
# Add Convolutional layers
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu',
input_shape=(img_height, img_width, channels)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Flatten the feature maps
model.add(Flatten())
# Add Fully Connected layers
model.add(Dense(units=128, activation='relu'))
model.add(Dense(units=num_classes, activation='softmax'))
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Print the model summary
model.summary()
In this example,
Sequential: This is the Keras sequential model, a linear stack of layers.
Conv2D: Convolutional layer for 2D spatial convolution.
MaxPooling2D: Max pooling layer for spatial data.
Flatten: Flattens the input. Does not affect the batch size.
Dense: Fully connected layer.
input_shape: Specifies the shape of the input data (height, width, channels) for the
first layer.
activation: Activation function for the layer.
pool_size: Size of the pooling window for max pooling.
units: Number of neurons in the Dense layer.
optimizer, loss, and metrics: Compilation parameters for the model.
Make sure to adjust the parameters, such as filters, kernel_size, pool_size, units, etc., based
on your specific requirements and the nature of your dataset.
RNN
from tensorflow.keras.layers import Embedding, LSTM
model3 = Sequential()
model3.add(Embedding(20000, 128))
model3.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model3.add(Dense(1, activation='sigmoid'))
General syntax
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense
# Create a Sequential model
model = Sequential()
# Add SimpleRNN layer
model.add(SimpleRNN(units=50, activation='relu', input_shape=(timesteps, features)))
# Add Dense layer for output
model.add(Dense(units=num_classes, activation='softmax'))
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Print the model summary
model.summary()
In this example,
Sequential: The Keras sequential model, a linear stack of layers.
SimpleRNN: Simple recurrent layer.
Dense: Fully connected layer.
units: Number of units/neurons in the layer.
activation: Activation function for the layer.
input_shape: Shape of the input data (timesteps, features) for the first layer.
num_classes: Number of output classes in the final Dense layer.
optimizer, loss, and metrics: Compilation parameters for the model.
Make sure to adjust the parameters such as units, input_shape, num_classes, etc., based on
your specific requirements and the nature of your dataset.
4. Prediction
model3.predict(x_test4, batch_size=32)
model3.predict_classes(x_test4,batch_size=32)
In this example:
When compiling a model, you can replace 'adam', 'categorical_crossentropy', and
['accuracy'] with your preferred optimizer, loss function, and metrics, respectively.
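A minimal compile-train-predict sketch along those lines for model3 (the loss is binary_crossentropy because model3 ends in a single sigmoid unit; the epoch count and batch size are example values):
# example compilation; swap the optimizer, loss, and metrics as needed
model3.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model3.fit(x_train4, y_train4, batch_size=32, epochs=3, validation_data=(x_test4, y_test4))
# probabilities and hard class labels for the padded IMDB test data
probabilities = model3.predict(x_test4, batch_size=32)
predicted_classes = (probabilities > 0.5).astype('int32')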
7. Model training
A Keras model is trained with model.fit(), as shown in the sketch above. A few of the
functions from the scikit-learn package are given below:
1. Importing packages
from sklearn import datasets
iris = datasets.load_iris()
General syntax
In this example, replace "XXXX" with the specific dataset you want to load, such as
load_iris, load_digits, load_boston, etc.
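For instance, a minimal loading sketch using load_iris as the stand-in for XXXX:
from sklearn import datasets
# swap load_iris for load_digits, load_wine, etc. as needed
dataset = datasets.load_iris()
X, y = dataset.data, dataset.target
print(X.shape, y.shape)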
General syntax
In this example, adjust the test_size and other parameters based on your specific needs.
The train_test_split function returns the training and testing sets for both features and
target variables.
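A typical sketch of the split (the 80/20 split and random_state here are example choices):
from sklearn.model_selection import train_test_split
# split the feature matrix X and target y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)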
4. Model fitting
lr.fit(X, y)
k_means.fit(X_train)
5. Prediction
y_pred = lr.predict(X_test)
y_pred = k_means.predict(X_test)
In this example, adjust the normalization technique and other parameters based on your
specific requirements. The Normalizer class rescales each sample (row) individually to
unit norm.
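A short sketch of the Normalizer (the 'l2' norm shown here is simply the default choice):
from sklearn.preprocessing import Normalizer
# rescale each sample (row) to unit norm; norm can be 'l1', 'l2', or 'max'
normalizer = Normalizer(norm='l2')
normalized_X = normalizer.fit_transform(X_train)
normalized_X_test = normalizer.transform(X_test)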
7. Creating model
kNN
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
General syntax
from sklearn.model_selection import train_test_split
from sklearn.YourModelModule import YourModelClass
# Replace YourModelModule and YourModelClass with the actual module and class for
# your chosen algorithm
# Assuming X and y are your feature matrix and target variable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Create an instance of your model
model = YourModelClass()
In this example,
Replace YourModelModule and YourModelClass with the actual module and class for the
algorithm you want to use. For example, if you want to use a Support Vector Machine
(SVM), you would replace YourModelModule with svm and YourModelClass with SVC
for classification or SVR for regression.
Classification report
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
Confusion matrix
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_pred))
R2 score
from sklearn.metrics import r2_score
r2_score(y_true, y_pred)
V-measure
from sklearn.metrics import v_measure_score
v_measure_score(y_true, y_pred)
Cross-Validation
from sklearn.model_selection import cross_val_score
print(cross_val_score(knn, X_train, y_train, cv=4))
print(cross_val_score(lr, X, y, cv=2))
RESULT
Thus, in this experiment, Python was installed and a few packages, namely keras and
scikit-learn, were explored successfully.
EXPERIMENT NO: 2
AIM
To implement the various basic logic gates using the McCulloch-Pitts neuron model.
REQUIREMENTS
Python, Numpy
PROCEDURE
STEP 1: Import the necessary library of NumPy to handle arrays and randomness.
STEP 2: Set a random seed to ensure reproducibility of random weight vector generation.
STEP 3: Create an input table that represents the input combinations for the logic gate you want
to implement (e.g., AND, OR, etc.).
STEP 4: Display the input table to see the input combinations.
STEP 5: Create a weight vector with random values sampled from {-1, 1}. The length of the
weight vector should match the number of inputs.
STEP 6: Display the weight vector, which will determine the behavior of the neuron.
STEP 7: Calculate the dot product of the input table and the weight vector, resulting in a vector
of dot products for each input combination.
STEP 8: Create a function that applies a threshold to determine the binary output. If the dot product
is greater than or equal to a specified threshold, the function returns 1; otherwise, it returns 0.
STEP 9: Define the threshold value based on the logic gate you want to implement (e.g., T=2 for
an AND gate, T=1 for an OR gate, T=-1 for a NOR gate).
STEP 10: Iterate through the dot product vector and apply the threshold function to each element.
Print the activations for each input combination, which represent the gate's behavior.
By adjusting the threshold value (T) in step 9, you can easily implement different logic gates using
the same framework. The code outputs the results for the specified gate based on the chosen
threshold value.
PROGRAM
import numpy as np
# Compute the dot product between the input vector and weight vector
dot = input_table @ W
print(f'Dot product: {dot}')
#FOR OR GATE:
T = 1 # Threshold for OR gate
for i in range(4):
    activation = linear_threshold_gate(dot[i], T)
    print(f'Activation (OR): {activation}')
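The listing above assumes that input_table, the weight vector W, and the linear_threshold_gate function have already been defined. A complete minimal sketch, with both weights fixed at 1 so that T=1 realises the OR gate and T=2 realises the AND gate, is:
import numpy as np

# All combinations of two binary inputs
input_table = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(f'Input table:\n{input_table}')

# Weight vector; both weights are 1 here, so the threshold alone selects the gate
W = np.array([1, 1])
print(f'Weights: {W}')

def linear_threshold_gate(dot, T):
    # Fire (return 1) only when the weighted sum reaches the threshold T
    return 1 if dot >= T else 0

dot = input_table @ W
print(f'Dot product: {dot}')

T = 1  # Threshold for OR gate
for i in range(4):
    activation = linear_threshold_gate(dot[i], T)
    print(f'Activation (OR): {activation}')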
RESULT
The McCulloch-Pitts neuron model has been successfully implemented for the AND, OR, NOR
and NAND functions.
EXPERIMENT NO:3
AIM
To implement and train a single-layer perceptron for the AND logic gate using Python.
PREREQUISITES
Python 3.9
PROCEDURE
1. Define the predict function that computes the activation of a perceptron and
returns 1 if the activation is greater than or equal to 0; otherwise, it returns 0.
2. Implement the train_perceptron function to train a perceptron on a given dataset.
Specify the initial weights, learning rate, and the number of training epochs.
3. Create training data for the AND gate, where each item is a tuple containing input
values and the corresponding target output (0 or 1).
4. Set the input size, learning rate, and the number of training epochs.
5. Call the train_perceptron function with the AND gate training data, input size,
learning rate, and epochs to obtain the final weights.
6. In the train_perceptron function, use a stopping criterion to check if the weights
have not changed between epochs. If the weights remain the same, stop
training.
7. Print the final weights after training.
8. Verify the AND gate's behavior by using the predict function with the final
weights to obtain the outputs for specific input combinations.
PROGRAM
def predict(inputs, weights):
    # weights[0] is the bias; the remaining weights multiply the inputs
    activation = weights[0]
    for i in range(len(inputs)):
        activation += weights[i + 1] * inputs[i]
    return 1 if activation >= 0 else 0

def train_perceptron(training_data, input_size, learning_rate, epochs):
    weights = [0.0] * (input_size + 1)
    print("Initial Weights:", weights)
    for epoch in range(epochs):
        previous_weights = weights.copy()
        print("Epoch:", epoch + 1)
        for inputs, target in training_data:
            error = target - predict(inputs, weights)
            weights[0] += learning_rate * error
            for i in range(len(inputs)):
                weights[i + 1] += learning_rate * error * inputs[i]
        # Stopping criterion: weights unchanged over a full epoch
        if weights == previous_weights:
            break
    return weights

training_data_and = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
input_size = 2
learning_rate = 0.1
epochs = 10

final_weights = train_perceptron(training_data_and, input_size, learning_rate, epochs)
print("Final Weights:", final_weights)

# Verify the AND gate's behaviour with the trained weights
for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    result = predict(inputs, final_weights)
    print(f"{inputs} => {result}")
RESULT
Thus, the perceptron was trained on the AND gate data and its predictions for all input
combinations were verified successfully.
EXPERIMENT NO: 4
AIM
To create neural network models using Keras and to train them on array input as well as
the diabetes dataset.
PREREQUISITES
PROCEDURE
Program1
1. Import Sequential from keras.models, Dense and Activation from keras.layers, and the
numpy library.
3. Define the neural network model and its arguments, and set the number of neurons for
each layer.
4. Compile the model and calculate its accuracy and print the Summary of the model.
PROGRAM
Program 1:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation

batch_size = 8
epochs = 50

model = Sequential()
model.add(Dense(2, input_shape=(2,)))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
# loss and optimizer chosen as typical defaults for a single sigmoid output
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
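Program 1 stops at the model summary; to actually train on array input, a hedged continuation could look like the following (the XOR truth table is used purely as placeholder data):
# placeholder array data (XOR truth table) for illustration only
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
model.fit(X, y, batch_size=batch_size, epochs=epochs, verbose=0)
print(model.predict(X))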
OUTPUT
PROCEDURE
Program 2
1. Import Sequential from tensorflow.keras.models, Dense from tensorflow.keras.layers,
train_test_split from sklearn.model_selection, and pandas.
2 Import the data and split it into input(x) and output(y) variables.
5. Fit the model on the dataset and make predictions with model and print the output.
PROGRAM
PROGRAM 2
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

dataset = pd.read_csv('diabetes.csv')
X = dataset.iloc[:, :-1].values
y = dataset["Outcome"].values

model = Sequential()
model.add(Dense(8, activation='relu', input_shape=(X.shape[1],)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# epoch count and batch size are example training settings
model.fit(X, y, epochs=50, batch_size=10, verbose=0)

predictions = (model.predict(X) > 0.5).astype(int)
for i in range(5):
    print('%s => %d (expected %d)' % (X[i].tolist(), predictions[i], y[i]))
OUTPUT
RESULT
Thus, the programs for creating neural network models and training them on array input
as well as the diabetes dataset were implemented successfully.
EXPERIMENT NO: 5
AIM
To create a convolutional neural network and apply it to the MNIST digit recognition dataset.
PREREQUISITES
DATASET
MNIST Dataset
PROCEDURE
PROGRAM
import tensorflow as tf
from tensorflow.keras import layers, models
import matplotlib.pyplot as plt

# Load and normalise the MNIST digit data
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.reshape(-1, 28, 28, 1) / 255.0
X_test = X_test.reshape(-1, 28, 28, 1) / 255.0

accuracy_scores = []
loss_scores = []

def train_model(optimizer_name):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(10)
    ])
    model.compile(optimizer=optimizer_name,
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    history = model.fit(X_train, y_train, epochs=5,  # example epoch count
                        validation_data=(X_test, y_test))
    # Evaluate the model and record accuracy and loss
    loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
    accuracy_scores.append(accuracy)
    loss_scores.append(loss)
    return history
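The accuracy_scores and loss_scores lists suggest that train_model is called once per optimizer; a sketch of such a comparison loop (the optimizer names are illustrative) is:
# assumed optimizer comparison; the list of optimizers is an example
optimizers = ['adam', 'sgd', 'rmsprop']
histories = {}
for name in optimizers:
    histories[name] = train_model(name)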
plt.figure(figsize=(12, 6))
for name, history in histories.items():
    plt.plot(history.history['val_accuracy'], label=name)
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

plt.figure(figsize=(12, 6))
for name, history in histories.items():
    plt.plot(history.history['val_loss'], label=name)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
OUTPUT
RESULT
Thus, a convolutional neural network was created and applied to the MNIST digit
recognition dataset successfully.
EXPERIMENT NO: 6
AIM
To implement a Simple Recurrent Neural Network (RNN) for forecasting monthly sunspot
activity.
PREREQUISITES
PROCEDURE
The provided code is for a Simple Recurrent Neural Network (RNN) to forecast sunspot activity.
1. Data Loading and Preprocessing:
- The code begins by loading monthly sunspot activity data from a CSV file using the `read_csv`
function. The data is scaled using `MinMaxScaler` and then split into training and testing sets.
The split is determined by the `split_percent` parameter (default is 80% training).
2. Data Sequencing:
- The time series data is prepared for training by creating sequences of data for input (`trainX`
and `testX`) and corresponding target values (`trainY` and `testY`). The sequences are created
with a fixed number of time steps (defined by `time_steps`).
3. Model Creation and Training:
- A Simple RNN model is created using the `create_RNN` function. You can customize the
number of hidden units, dense units, and activation functions. The model is compiled with the
mean squared error loss and the Adam optimizer.
- The model is then trained using the training data (`trainX` and `trainY`) for a specified number
of epochs (in this case, 20) and a batch size of 1.
4. Prediction:
- After training, the model makes predictions on both the training and testing data
(`train_predict` and `test_predict`).
5. Error Calculation:
- The code calculates and prints the root mean squared error (RMSE) for both the training and
testing data. RMSE is a common metric for regression tasks.
6. Plotting:
- The code visualizes the actual sunspot activity and the model's predictions. The red vertical
line in the plot separates the training and testing examples.
The code demonstrates a simple time series forecasting task using a Simple RNN model and
provides a visualization of the model's performance. You can adjust the model's architecture and
training parameters to experiment with different settings for your specific forecasting task.
PROGRAM
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import read_csv
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN

def get_train_test(url, split_percent=0.8):
    df = read_csv(url, usecols=[1], engine='python')
    data = np.array(df.values.astype('float32'))
    scaler = MinMaxScaler(feature_range=(0, 1))
    data = scaler.fit_transform(data).flatten()
    n = len(data)
    split = int(n*split_percent)
    train_data = data[:split]
    test_data = data[split:]
    return train_data, test_data, data

# Prepare the input X and target Y arrays from a series
def get_XY(dat, time_steps):
    Y_ind = np.arange(time_steps, len(dat), time_steps)
    Y = dat[Y_ind]
    rows_x = len(Y)
    X = dat[:time_steps*rows_x]
    X = np.reshape(X, (rows_x, time_steps, 1))
    return X, Y

def create_RNN(hidden_units, dense_units, input_shape, activation):
    model = Sequential()
    model.add(SimpleRNN(hidden_units, input_shape=input_shape, activation=activation[0]))
    model.add(Dense(units=dense_units, activation=activation[1]))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

def print_error(trainY, testY, train_predict, test_predict):
    # Error of predictions
    train_rmse = math.sqrt(mean_squared_error(trainY, train_predict))
    test_rmse = math.sqrt(mean_squared_error(testY, test_predict))
    # Print RMSE
    print('Train RMSE: %.3f RMSE' % (train_rmse))
    print('Test RMSE: %.3f RMSE' % (test_rmse))

def plot_result(trainY, testY, train_predict, test_predict):
    actual = np.append(trainY, testY)
    predictions = np.append(train_predict, test_predict)
    rows = len(actual)
    plt.figure(figsize=(15, 6), dpi=80)
    plt.plot(range(rows), actual)
    plt.plot(range(rows), predictions)
    plt.axvline(x=len(trainY), color='r')
    plt.legend(['Actual', 'Predictions'])
    plt.ylabel('Sunspots scaled')
    plt.title('Actual and Predicted Values. The Red Line Separates The Training And Test Examples')
    plt.show()

sunspots_url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/monthly-sunspots.csv'
time_steps = 12
train_data, test_data, data = get_train_test(sunspots_url)
trainX, trainY = get_XY(train_data, time_steps)
testX, testY = get_XY(test_data, time_steps)
model = create_RNN(hidden_units=3, dense_units=1, input_shape=(time_steps, 1),
                   activation=['tanh', 'tanh'])
model.fit(trainX, trainY, epochs=20, batch_size=1, verbose=2)
# make predictions
train_predict = model.predict(trainX)
test_predict = model.predict(testX)
# Print error
print_error(trainY, testY, train_predict, test_predict)
# Plot result
plot_result(trainY, testY, train_predict, test_predict)
OUTPUT
Epoch 9/20
187/187 - 0s - loss: 0.0042 - 405ms/epoch - 2ms/step
Epoch 10/20
187/187 - 0s - loss: 0.0043 - 369ms/epoch - 2ms/step
Epoch 11/20
187/187 - 0s - loss: 0.0043 - 415ms/epoch - 2ms/step
Epoch 12/20
187/187 - 0s - loss: 0.0043 - 322ms/epoch - 2ms/step
Epoch 13/20
187/187 - 0s - loss: 0.0043 - 336ms/epoch - 2ms/step
Epoch 14/20
187/187 - 0s - loss: 0.0041 - 350ms/epoch - 2ms/step
Epoch 15/20
187/187 - 0s - loss: 0.0042 - 402ms/epoch - 2ms/step
Epoch 16/20
187/187 - 0s - loss: 0.0041 - 390ms/epoch - 2ms/step
Epoch 17/20
187/187 - 0s - loss: 0.0042 - 355ms/epoch - 2ms/step
Epoch 18/20
187/187 - 0s - loss: 0.0041 - 437ms/epoch - 2ms/step
Epoch 19/20
187/187 - 0s - loss: 0.0041 - 431ms/epoch - 2ms/step
Epoch 20/20
187/187 - 0s - loss: 0.0041 - 412ms/epoch - 2ms/step
6/6 [==============================] - 0s 3ms/step
2/2 [==============================] - 0s 3ms/step
Train RMSE: 0.063 RMSE
Test RMSE: 0.091 RMSE
RESULT
Overall, the code showcases a basic time series forecasting example using a Simple RNN model.
The RMSE values and the visual plot offer insights into the model's ability to capture patterns and
make predictions on the sunspot activity data. Further model optimization and experimentation
may be required for more accurate and robust forecasting in real-world scenarios.
EXPERIMENT NO: 7
AIM
To demonstrate a simple example of an LSTM neural network for univariate time series
forecasting.
PREREQUISITES
1. Python
2. Numpy
3. Keras
4. Matplotlib
PROCEDURE
1. Data Preparation
- Create the `split_sequence` function to partition the time series into input (X) and output (y)
pairs.
- Iterate through the data, generating input sequences of length `n_steps` and corresponding
output values.
- Result in two NumPy arrays: `X` for input sequences and `y` for output values.
3. Data Reshaping
- Reshape the `X` array to the required 3D format `[samples, timesteps, features]`.
- Set `n_features` to 1, indicating each time step in the univariate time series has one feature.
4. Model Definition and Training
- Define a Sequential model with an LSTM layer followed by a Dense output layer, and compile it.
- Train the model with historical data, using input sequences `X` and their corresponding output
values `y`, over 200 training epochs.
6. Prediction
- Define a new input sequence, `x_input`, representing the most recent `n_steps` values in the
time series.
- Reshape `x_input` to match the model's input requirements.
- Utilize the model to predict the next value in the time series, storing the prediction in `yhat`.
PROGRAM
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
import matplotlib.pyplot as plt
# Demonstrate prediction
x_input = np.array([70, 80, 90])
x_input = x_input.reshape((1, n_steps, n_features))
yhat = model.predict(x_input, verbose=0)
print(yhat)
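The listing shows only the prediction step; a minimal end-to-end sketch, assuming the usual toy series 10, 20, ..., 90 with n_steps = 3, would be:
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# split a univariate sequence into input/output pairs
def split_sequence(sequence, n_steps):
    X, y = [], []
    for i in range(len(sequence) - n_steps):
        X.append(sequence[i:i + n_steps])
        y.append(sequence[i + n_steps])
    return np.array(X), np.array(y)

raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]   # assumed sample series
n_steps = 3
X, y = split_sequence(raw_seq, n_steps)

# reshape to [samples, timesteps, features]; a univariate series has one feature
n_features = 1
X = X.reshape((X.shape[0], X.shape[1], n_features))

model = Sequential()
model.add(LSTM(50, activation='relu', input_shape=(n_steps, n_features)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=200, verbose=0)

# Demonstrate prediction
x_input = np.array([70, 80, 90]).reshape((1, n_steps, n_features))
yhat = model.predict(x_input, verbose=0)
print(yhat)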
OUTPUT
RESULT
Thus, a Long Short-Term Memory (LSTM) neural network has been successfully implemented for
univariate time series forecasting.
EXPERIMENT NO: 8
AIM
To implement an LSTM for per-timestep sequence classification and to plot the model
accuracy and loss during training.
PREREQUISITES
1. Python
2. Keras
3. Matplotlib
PROCEDURE
4. Training:
- Initialize lists to store accuracy and loss history during training.
- Train the LSTM model for 1000 epochs.
- For each epoch:
a. Get a new random sequence using `get_sequence`.
b. Fit the model with this sequence for one epoch and batch size 1.
c. Append the accuracy and loss from the training history to the respective lists.
6. Display Results:
- Append the expected and predicted labels to respective lists.
- Print the expected and predicted values for each timestep.
7. Classification Report
- Generate a classification report to evaluate the model's performance.
- Print the classification report, including metrics like precision, recall, and F1-score.
PROGRAM
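The listing below relies on a random-sequence generator and some setup that are not shown; a sketch of that missing preamble, assuming the usual cumulative-sum labelling rule and n_timesteps = 10, is:
from random import random
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed
import matplotlib.pyplot as plt

def get_sequence(n_timesteps):
    # random values in [0, 1] and one class label per timestep
    X = np.array([random() for _ in range(n_timesteps)])
    limit = n_timesteps / 4.0
    # class is 0 until the cumulative sum exceeds the limit, then 1
    y = np.array([0 if x < limit else 1 for x in np.cumsum(X)])
    # reshape to the [samples, timesteps, features] layout the LSTM expects
    return X.reshape(1, n_timesteps, 1), y.reshape(1, n_timesteps, 1)

n_timesteps = 10
accuracy_history = []
loss_history = []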
# define LSTM
model = Sequential()
model.add(LSTM(20, input_shape=(n_timesteps, 1), return_sequences=True))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# train LSTM
for epoch in range(1000):
    X, y = get_sequence(n_timesteps)
    history = model.fit(X, y, epochs=1, batch_size=1, verbose=2)
    accuracy_history.append(history.history['accuracy'][0])
    loss_history.append(history.history['loss'][0])
plt.subplot(1, 2, 1)
plt.plot(range(1, len(accuracy_history) + 1), accuracy_history)
plt.title('Model Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.subplot(1, 2, 2)
plt.plot(range(1, len(loss_history) + 1), loss_history)
plt.title('Model Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
OUTPUT
Expected: [0] Predicted [0]
Expected: [0] Predicted [0]
Expected: [0] Predicted [0]
Expected: [0] Predicted [0]
Expected: [1] Predicted [0]
Expected: [1] Predicted [1]
Expected: [1] Predicted [1]
Expected: [1] Predicted [1]
Expected: [1] Predicted [1]
Expected: [1] Predicted [1]
Classification Report:
precision recall f1-score support
accuracy 0.85 20
macro avg 0.85 0.85 0.85 20
weighted avg 0.85 0.85 0.85 20
RESULT
Thus, bidirectional LSTM has been successfully implemented with model accuracy and loss
graphs.
EXPERIMENT NO: 9
AIM
The aim of this code is to build and train a recurrent neural network (RNN) model
using the Gated Recurrent Unit (GRU) architecture for sentiment analysis on the
IMDB movie reviews dataset. The model is trained to predict whether a movie
review is positive or negative based on the provided dataset.
PREREQUISITES
PROCEDURE
1. Download the IMDB movie reviews dataset using the imdb.load_data function
from Keras by limiting the dataset to only 5000 words and pad the sequences to
a fixed length of 500 words.
2. Build a sequential neural network model.
3. Add an embedding layer to convert integer indices to dense vectors.
4. Add a GRU layer with 100 units to capture sequential dependencies.
5. Add a dense output layer with a sigmoid activation function for binary
classification.
6. Compile the model using binary crossentropy loss and the Adam optimizer.
7. Fit the model to the training data with a batch size of 64 and over 3 epochs.
8. Use 20% of the training data for validation.
9. Plot the training and validation accuracy over epochs.
10.Plot the training and validation loss over epochs.
11.Use the trained model to predict the sentiment (positive or negative) for each
review in the test set.
12.Evaluate the model on the test set and print the accuracy.
PROGRAM
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, Dense

word_count = 5000
word_max = 500

# Load the IMDB reviews (5000 most frequent words) and pad each review to 500 words
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=word_count)
x_train = sequence.pad_sequences(x_train, maxlen=word_max)
x_test = sequence.pad_sequences(x_test, maxlen=word_max)

model = Sequential()
model.add(Embedding(word_count, 32, input_length=word_max))  # 32 is an example embedding size
model.add(GRU(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

# Model fitting
history = model.fit(x_train, y_train, batch_size=64, epochs=3, validation_split=0.2)

plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Train')
plt.plot(history.history['val_accuracy'], label='Validation')
plt.ylabel('Accuracy')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Train')
plt.plot(history.history['val_loss'], label='Validation')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.tight_layout()
plt.show()

y_predict = model.predict(x_test)
print(y_predict)
RESULT
Thus, the code shows the implementation of GRU on IMDB review dataset. The
plots show the training and validation accuracy as well as the training and
validation loss over epochs. It also prints the predicted sentiments for the test set
and evaluates the overall accuracy of the model on the test data.
EXPERIMENT NO:10
AIM
To create and implement a bidirectional RNN for IMDB dataset using python.
PREREQUISITES
PROCEDURE
The provided code is a Python script for sentiment analysis on the IMDB movie reviews dataset
using a Bidirectional SimpleRNN-based neural network. The procedure for this code is as follows:
2. Define Parameters:
- Specify parameters, such as the number of features, maximum sequence length (maxlen),
embedding size, and hidden layer size for the model.
- Load the IMDB dataset using `imdb.load_data` and preprocess it using `pad_sequences` to
ensure all sequences have the same length.
- Compile the model with 'adam' optimizer and binary cross-entropy loss.
- Use the training data (`X_train` and `y_train`) to train the model.
- Monitor the training process and save the training history in the `history` variable.
7. Plot Training and Validation Metrics:
- Generate plots for training and validation loss as well as accuracy over epochs.
8. Generate Predictions:
- Use the trained model to generate predictions on the test data (`X_test`).
- Calculate the classification report, which includes metrics like precision, recall, F1-score, and
support, to evaluate the model's performance on sentiment classification.
- Print the generated classification report to assess the model's performance on positive and
negative sentiment classification.
The code provides a comprehensive analysis of the model's performance, including visualizations
of the training process and detailed classification metrics.
PROGRAM
import warnings
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Bidirectional, Dense
from sklearn.metrics import classification_report
# Ignore warnings
warnings.filterwarnings('ignore')
# Define parameters
features = 2000
maxlen = 50
embedding = 128
hidden = 64
# Load the IMDB data and pad every review to the same length
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=features)
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(features, embedding, input_length=maxlen))
model.add(Bidirectional(SimpleRNN(hidden)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
batch_size = 32
epochs = 5
history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,
                    validation_data=(X_test, y_test))
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'], label='Train')
plt.plot(history.history['val_loss'], label='Validation')
plt.legend()
plt.title('Loss')
plt.subplot(1, 2, 2)
plt.plot(history.history['accuracy'], label='Train')
plt.plot(history.history['val_accuracy'], label='Validation')
plt.legend()
plt.title('Accuracy')
plt.show()
# Generate predictions
y_pred_prob = model.predict(X_test)
y_pred = (y_pred_prob > 0.5).astype(int)
print("Classification Report:")
print(classification_report(y_test, y_pred))
OUTPUT
Epoch 1/5
782/782 [==============================] - 15s 17ms/step - loss: 0.5313 - accuracy: 0.7237 -
val_loss: 0.4514 - val_accuracy: 0.7898
Epoch 2/5
782/782 [==============================] - 12s 16ms/step - loss: 0.4008 - accuracy: 0.8236 -
val_loss: 0.4717 - val_accuracy: 0.7882
Epoch 3/5
782/782 [==============================] - 13s 16ms/step - loss: 0.3102 - accuracy: 0.8678 -
val_loss: 0.5151 - val_accuracy: 0.7655
Epoch 4/5
782/782 [==============================] - 13s 16ms/step - loss: 0.2016 - accuracy: 0.9215 -
val_loss: 0.6243 - val_accuracy: 0.7590
Epoch 5/5
782/782 [==============================] - 12s 16ms/step - loss: 0.1125 - accuracy: 0.9589 -
val_loss: 0.8345 - val_accuracy: 0.7458
782/782 [==============================] - 3s 3ms/step
Classification Report:
precision recall f1-score support
RESULT
The code helps assess how well the Bidirectional SimpleRNN model performs in
sentiment analysis, offering a comprehensive view of its accuracy and the quality of
its predictions. The conclusion will depend on the specific results and metrics
generated, but it allows you to make informed decisions about the model's
effectiveness in the sentiment analysis task.
EXPERIMENT NO:11
IMPLEMENTATION OF AUTOENCODERS
AIM
To employ Autoencoder model in python, train it, plot evaluation metrics such as loss and
predict values.
PREREQUISITES
Python 3.9
Keras
Numpy
Matplotlib.pyplot
PROCEDURE
1. Import libraries and dependencies, including deep learning framework (keras).
2. Import dataset and preprocess it. If it is image data, normalize pixel values.
3. Define input layer, hidden layers, activation function for autoencoder model and
compile it.
4. While defining encoding and decoding layer, add bottleneck to it. Here, the encoded
and decoded layers give an output of dimensionality 128 units and the bottleneck layer gives
an output of 400 units (20x20 pixels).
5. Specify loss function and optimizer. Here, they are mean squared error and adam
respectively.
6. Fit autoencoder model to the training data for both input and target data.
7. Evaluate the performance of autoencoder using loss and accuracy metrics.
8. Plot accuracy and metrics.
9. Display original values and predicted values.
PROGRAM
from keras.layers import Dense, Input
from keras.models import Model

# Dimensions: input_dim is an example value for flattened image input;
# encoding_dim follows the 400-unit (20x20) bottleneck from the procedure
input_dim = 784
encoding_dim = 400

input_img = Input(shape=(input_dim,))

# Encoding layers
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(encoding_dim, activation='relu')(encoded)  # This is the bottleneck layer

# Decoding layers
decoded = Dense(128, activation='relu')(encoded)
decoded = Dense(input_dim, activation='sigmoid')(decoded)

# Autoencoder model
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')

# Encoder model
encoder = Model(input_img, encoded)

# Decoder model
encoded_input = Input(shape=(encoding_dim,))
decoder_layer1 = autoencoder.layers[-2]
decoder_layer2 = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer2(decoder_layer1(encoded_input)))
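Steps 6 to 9 of the procedure (fitting, evaluation and plotting) are not shown in the listing; a hedged sketch, assuming flattened image arrays already normalised to [0, 1] are available in x_train and x_test, is:
import matplotlib.pyplot as plt

# the autoencoder learns to reproduce its own input, so input and target are both x_train
history = autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
                          shuffle=True, validation_data=(x_test, x_test))

plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()

# Display original values alongside the reconstructed (predicted) values
reconstructed = autoencoder.predict(x_test)
print(x_test[0][:10])
print(reconstructed[0][:10])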
OUTPUT
RESULT
Thus, an autoencoder model was built, trained, and evaluated, and it was implemented
successfully in this experiment.
EXPERIMENT NO: 12
AIM
To implement a Generative Adversarial Network (GAN) and train it on an image dataset.
REQUIREMENTS
Python
PyTorch
Torchvision
PROCEDURE
PROGRAM
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
import numpy as np

# Set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

latent_dim = 100  # size of the generator's noise input (example value)
lr = 0.0002
beta1 = 0.5
beta2 = 0.999
num_epochs = 10
# Generator network
class Generator(nn.Module):
    def __init__(self, latent_dim):
        super(Generator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128, momentum=0.78),
            nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64, momentum=0.78),
            nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh()
        )

    def forward(self, z):
        return self.model(z)
# Discriminator network
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.25),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ZeroPad2d((0, 1, 0, 1)),
            nn.BatchNorm2d(64, momentum=0.82),
            nn.LeakyReLU(0.25),
            nn.Dropout(0.25),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(128, momentum=0.82),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.25),
            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(256, momentum=0.8),
            nn.LeakyReLU(0.25),
            nn.Dropout(0.25),
            nn.Flatten(),
            nn.Linear(256 * 5 * 5, 1),
            nn.Sigmoid()
        )

    def forward(self, img):
        return self.model(img)

# Instantiate the networks
generator = Generator(latent_dim).to(device)
discriminator = Discriminator().to(device)
# Loss function
adversarial_loss = nn.BCELoss()
# Optimizers
optimizer_G = optim.Adam(generator.parameters(), lr=lr, betas=(beta1, beta2))
optimizer_D = optim.Adam(discriminator.parameters(), lr=lr, betas=(beta1, beta2))
# Training data (CIFAR-10 is used here; its 32x32 RGB images match the network shapes above)
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                             download=True, transform=transform)
dataloader = DataLoader(train_dataset, batch_size=32, shuffle=True)

for epoch in range(num_epochs):
    for i, (real_images, _) in enumerate(dataloader):
        batch_size = real_images.size(0)
        # Configure input
        real_images = real_images.to(device)
        valid = torch.ones(batch_size, 1, device=device)
        fake = torch.zeros(batch_size, 1, device=device)
        # ---------------------
        # Train Discriminator
        # ---------------------
        optimizer_D.zero_grad()
        z = torch.randn(batch_size, latent_dim, device=device)
        fake_images = generator(z)
        real_loss = adversarial_loss(discriminator(real_images), valid)
        fake_loss = adversarial_loss(discriminator(fake_images.detach()), fake)
        d_loss = (real_loss + fake_loss) / 2
        d_loss.backward()
        optimizer_D.step()
        # -----------------
        # Train Generator
        # -----------------
        optimizer_G.zero_grad()
        gen_images = generator(z)
        # Adversarial loss
        g_loss = adversarial_loss(discriminator(gen_images), valid)
        g_loss.backward()
        optimizer_G.step()
        # ---------------------
        # Progress Monitoring
        # ---------------------
        if (i + 1) % 100 == 0:
            print(
                f"Epoch [{epoch+1}/{num_epochs}] "
                f"Batch {i+1}/{len(dataloader)} "
                f"Discriminator Loss: {d_loss.item():.4f} "
                f"Generator Loss: {g_loss.item():.4f}"
            )
OUTPUT
Epoch [10/10] Batch 1500/1563 Discriminator Loss: 0.5253 Generator Loss: 1.3269
RESULT
The implementation of Generative Adversarial Networks was performed successfully and
verified.
EXPERIMENT NO: 13
AIM
The aim is to develop a robust deep learning model for the classification of German traffic
signs from the GTSRB dataset.
REQUIREMENTS
Python >3.9
PROCEDURE
1. Import the required libraries (numpy, pandas, OpenCV, TensorFlow/Keras, matplotlib,
scikit-learn, etc.).
2. Load the GTSRB dataset and set the paths for the training and test datasets.
3. Resize the images to a consistent size (30x30 pixels) for model compatibility and find
the number of classes in the dataset.
5. Shuffle the data and split it into training and validation sets.
8. Compile the model with categorical crossentropy loss, Adam optimizer, and accuracy
as a metric.
9. Apply data augmentation to the training images to improve generalization.
10. Train the model using the augmented training dataset and monitor performance on the
validation set.
PROGRAM
#Importing Required Libraries
import os
import random
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
from matplotlib.image import imread
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from PIL import Image
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
np.random.seed(42)
from matplotlib import style
style.use('fivethirtyeight')

# Dataset locations (placeholder paths; point these at the GTSRB folder)
data_dir = 'GTSRB'
train_path = data_dir + '/Train'
test_path = data_dir + '/Test'

IMG_HEIGHT = 30
IMG_WIDTH = 30
channels = 3
NUM_CATEGORIES = len(os.listdir(train_path))
NUM_CATEGORIES
# Label Overview (only part of the 43-class label map is reproduced here)
classes = {
11:'Right-of-way at intersection',
12:'Priority road',
13:'Yield',
14:'Stop',
15:'No vehicles',
17:'No entry',
18:'General caution',
21:'Double curve',
22:'Bumpy road',
23:'Slippery road',
25:'Road work',
26:'Traffic signals',
27:'Pedestrians',
28:'Children crossing',
29:'Bicycles crossing',
30:'Beware of ice/snow',
35:'Ahead only',
38:'Keep right',
39:'Keep left',
40:'Roundabout mandatory',
41:'End of no passing',
}
folders = os.listdir(train_path)
train_number = []
class_num = []
for folder in folders:
    train_files = os.listdir(train_path + '/' + folder)
    train_number.append(len(train_files))
    class_num.append(classes[int(folder)])

# Sort the classes by the number of training images
zipped_lists = zip(train_number, class_num)
sorted_pairs = sorted(zipped_lists)
tuples = zip(*sorted_pairs)
train_number, class_num = [list(t) for t in tuples]

plt.figure(figsize=(21, 10))
plt.bar(class_num, train_number)
plt.xticks(class_num, rotation='vertical')
plt.show()
# Visualising 25 random images from the test data
test = pd.read_csv(data_dir + '/Test.csv')
imgs = test["Path"].values
plt.figure(figsize=(25, 25))
for i in range(1, 26):
    plt.subplot(5, 5, i)
    random_img_path = data_dir + '/' + random.choice(imgs)
    rand_img = imread(random_img_path)
    plt.imshow(rand_img)
    plt.grid(b=None)
plt.show()
image_data = []
image_labels = []
for i in range(NUM_CATEGORIES):
    path = train_path + '/' + str(i)
    images = os.listdir(path)
    for img in images:
        try:
            image = cv2.imread(path + '/' + img)
            image_fromarray = Image.fromarray(image, 'RGB')
            resize_image = image_fromarray.resize((IMG_HEIGHT, IMG_WIDTH))
            image_data.append(np.array(resize_image))
            image_labels.append(i)
        except:
            print("Error in " + img)
image_data = np.array(image_data)
image_labels = np.array(image_labels)
print(image_data.shape, image_labels.shape)
#Shuffling the data
shuffle_indexes = np.arange(image_data.shape[0])
np.random.shuffle(shuffle_indexes)
image_data = image_data[shuffle_indexes]
image_labels = image_labels[shuffle_indexes]
#Splitting the dataset (a 70/30 train/validation split is used here as an example)
X_train, X_val, y_train, y_val = train_test_split(image_data, image_labels,
                                                  test_size=0.3, random_state=42, shuffle=True)
X_train = X_train/255
X_val = X_val/255
print("X_train.shape", X_train.shape)
print("X_valid.shape", X_val.shape)
print("y_train.shape", y_train.shape)
print("y_valid.shape", y_val.shape)
#One hot encoding
y_train = keras.utils.to_categorical(y_train, NUM_CATEGORIES)
y_val = keras.utils.to_categorical(y_val, NUM_CATEGORIES)
print(y_train.shape)
print(y_val.shape)
#model creation
model = keras.models.Sequential([
keras.layers.Conv2D(filters=16, kernel_size=(3,3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, channels)),
keras.layers.MaxPool2D(pool_size=(2, 2)),
keras.layers.BatchNormalization(axis=-1),
keras.layers.MaxPool2D(pool_size=(2, 2)),
keras.layers.BatchNormalization(axis=-1),
keras.layers.Flatten(),
keras.layers.Dense(512, activation='relu'),
keras.layers.BatchNormalization(),
keras.layers.Dropout(rate=0.5),
keras.layers.Dense(43, activation='softmax')
])
lr = 0.001
epochs = 10
aug = ImageDataGenerator(
rotation_range=10,
zoom_range=0.15,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.15,
horizontal_flip=False,
vertical_flip=False,
fill_mode="nearest")
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
#loading the dataset and running the predictions
test = pd.read_csv(data_dir + '/Test.csv')
labels = test["ClassId"].values
imgs = test["Path"].values
data = []
for img in imgs:
    try:
        image = cv2.imread(data_dir + '/' + img)
        image_fromarray = Image.fromarray(image, 'RGB')
        resize_image = image_fromarray.resize((IMG_HEIGHT, IMG_WIDTH))
        data.append(np.array(resize_image))
    except:
        print("Error in " + img)

X_test = np.array(data)
X_test = X_test/255
pred = model.predict_classes(X_test)
#Confusion matrix
cf = confusion_matrix(labels, pred)
df_cm = pd.DataFrame(cf)
plt.figure(figsize = (20,20))
sns.heatmap(df_cm, annot=True)
plt.show()
#classification report
print(classification_report(labels, pred))
#Predictions on a sample of the test set
plt.figure(figsize = (25, 25))
start_index = 0
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    prediction = pred[start_index + i]
    actual = labels[start_index + i]
    col = 'g'
    if prediction != actual:
        col = 'r'
    plt.xlabel('Actual={} || Pred={}'.format(actual, prediction), color=col)
    plt.imshow(X_test[start_index + i])
plt.show()
OUTPUT
RESULT
Thus, traffic sign recognition model for the GTSRB dataset was implemented
successfully.