Deep Learning Lab Manual

Artificial Intelligence and Data Science


Ex.no: 1 A XOR logic using DNN


Date:

AIM:
Write a Python program to solve the XOR logic function using a deep neural network (DNN).

ALGORITHM:
Step 1 : Import the required modules
Step 2 : Define X (input) and y (output)
Step 3 : Create the model with the Sequential() function
Step 4 : Add the hidden and output layers with the Dense() function
Step 5 : Compile and fit the model
Step 6 : Calculate the loss and accuracy
Step 7 : Print the predicted output
Program:

# packages
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# input and target
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# building the model
model = Sequential()
# adding the hidden and output layers
model.add(Dense(8, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compiling the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fitting the input and target into the model
model.fit(X, y, epochs=1000)

# calculating the loss and accuracy
loss, accuracy = model.evaluate(X, y)
print(f"Loss: {loss:.4f}, Accuracy: {accuracy:.4f}")

# predicting the values
predictions = model.predict(X)
print("Predictions:")
for i in range(len(X)):
    print(f"Input: {X[i]}, Predicted Output: {predictions[i][0]:.4f}")
Output :

RESULT:
Thus the above Python program to solve XOR logic using a DNN has been executed
successfully and the output is displayed.
Ex.no: 1 B AND logic using DNN
Date:

AIM:
Write a Python program to solve the AND logic function using a deep neural network (DNN).

ALGORITHM:
Step 1 : Import the required modules
Step 2 : Define X (input) and y (output)
Step 3 : Create the model with the Sequential() function
Step 4 : Add the hidden and output layers with the Dense() function
Step 5 : Compile and fit the model
Step 6 : Calculate the loss and accuracy
Step 7 : Print the predicted output
Program:

# packages
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# input and target
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [0], [0], [1]])

# building the model
model = Sequential()
# adding the hidden and output layers
model.add(Dense(8, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compiling the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fitting the input and target into the model
model.fit(X, y, epochs=1000)

# calculating the loss and accuracy
loss, accuracy = model.evaluate(X, y)
print(f"Loss: {loss:.4f}, Accuracy: {accuracy:.4f}")

# predicting the values
predictions = model.predict(X)
print("Predictions:")
for i in range(len(X)):
    print(f"Input: {X[i]}, Predicted Output: {predictions[i][0]:.4f}")
Output :

RESULT:
Thus the above Python program to solve AND logic using a DNN has been executed
successfully and the output is displayed.
Ex.no: 1 C OR logic using DNN
Date:

AIM:
Write a Python program to solve the OR logic function using a deep neural network (DNN).

ALGORITHM:
Step 1 : Import the required modules
Step 2 : Define X (input) and y (output)
Step 3 : Create the model with the Sequential() function
Step 4 : Add the hidden and output layers with the Dense() function
Step 5 : Compile and fit the model
Step 6 : Calculate the loss and accuracy
Step 7 : Print the predicted output
Program:

# packages
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# input and target
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [1]])

# building the model
model = Sequential()
# adding the hidden and output layers
model.add(Dense(8, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# compiling the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# fitting the input and target into the model
model.fit(X, y, epochs=1000)

# calculating the loss and accuracy
loss, accuracy = model.evaluate(X, y)
print(f"Loss: {loss:.4f}, Accuracy: {accuracy:.4f}")

# predicting the values
predictions = model.predict(X)
print("Predictions:")
for i in range(len(X)):
    print(f"Input: {X[i]}, Predicted Output: {predictions[i][0]:.4f}")
Output :

RESULT:
Thus the above Python program to solve OR logic using a DNN has been executed
successfully and the output is displayed.
Ex.no: 2 Character Recognition using CNN
Date:

AIM:
Write a Python program for character recognition using a CNN.

ALGORITHM:
Step 1 : Import the required modules
Step 2 : Load the training images
Step 3 : Create the model with the Sequential() function
Step 4 : Add the convolutional, pooling, and dense layers
Step 5 : Compile and fit the model
Step 6 : Calculate the loss and accuracy
Step 7 : Print the predicted output
Program:

# packages
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
import numpy as np

# Load and preprocess the MNIST dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# Build the CNN model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')  # 10 output classes (digits 0-9)
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

# Reshape images for CNN input (add the channel dimension)
train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))

# Train the model
model.fit(train_images, train_labels, epochs=5, batch_size=64, validation_split=0.1)

# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f"Test accuracy: {test_acc}")

# Make predictions
predictions = model.predict(test_images)
predicted_labels = np.argmax(predictions, axis=1)

# Display a few predicted and actual labels
for i in range(10):
    print(f"Predicted: {predicted_labels[i]}, Actual: {test_labels[i]}")
Output :

RESULT:
Thus the above Python program for character recognition using a CNN has been executed
successfully and the output is displayed.
Ex.no: 3 Face Recognition using CNN
Date:

AIM:
Write a Python program for face recognition using a CNN.

ALGORITHM:
Step 1 : Prepare the data
Step 2 : Define the model architecture
Step 3 : Compile the model
Step 4 : Train the model
Step 5 : Evaluate the model and make predictions
Program:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from keras.optimizers import Adam
from keras.callbacks import TensorBoard
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import accuracy_score
from keras.utils import to_categorical
import itertools

!pip install keras_vggface

# load the train images
x_train = np.load('/content/trainX.npy')
# normalize every image
x_train = np.array(x_train, dtype='float32') / 255
x_test = np.load('/content/testX.npy')
x_test = np.array(x_test, dtype='float32') / 255
# load the labels of the images
y_train = np.load('/content/trainY.npy')
y_test = np.load('/content/testY.npy')

# show the train and test data shapes
print('x_train shape: {}'.format(x_train.shape))
print('y_train shape: {}'.format(y_train.shape))
print('x_test shape: {}'.format(x_test.shape))

# hold out 5% of the training data for validation
x_train, x_valid, y_train, y_valid = train_test_split(
    x_train, y_train, test_size=0.05, random_state=1234)

im_rows = 112
im_cols = 92
batch_size = 512
im_shape = (im_rows, im_cols, 1)

# reshape the images to add the channel dimension
x_train = x_train.reshape(x_train.shape[0], *im_shape)
x_test = x_test.reshape(x_test.shape[0], *im_shape)
x_valid = x_valid.reshape(x_valid.shape[0], *im_shape)

print('x_train shape: {}'.format(x_train.shape))
print('y_test shape: {}'.format(y_test.shape))

cnn_model = Sequential([
    Conv2D(filters=36, kernel_size=7, activation='relu', input_shape=im_shape),
    MaxPooling2D(pool_size=2),
    Conv2D(filters=54, kernel_size=5, activation='relu'),
    MaxPooling2D(pool_size=2),
    Flatten(),
    Dense(2024, activation='relu'),
    Dropout(0.5),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(512, activation='relu'),
    Dropout(0.5),
    # 20 is the number of output classes
    Dense(20, activation='softmax')
])

cnn_model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=Adam(learning_rate=0.0001),
    metrics=['accuracy']
)
cnn_model.summary()

history = cnn_model.fit(
    np.array(x_train), np.array(y_train), batch_size=512,
    epochs=250, verbose=2,
    validation_data=(np.array(x_valid), np.array(y_valid)),
)

score = cnn_model.evaluate(np.array(x_test), np.array(y_test), verbose=0)
print('test loss {:.4f}'.format(score[0]))
print('test accuracy {:.4f}'.format(score[1]))
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# predict class labels for the test set
ynew = np.argmax(cnn_model.predict(x_test), axis=-1)
print(ynew)
print(y_test)

Acc = accuracy_score(y_test, ynew)
print("accuracy : ")
print(Acc)
cnf_matrix = confusion_matrix(np.array(y_test), ynew)

y_test1 = to_categorical(y_test, 20)

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    else:
        print('Confusion matrix, without normalization')
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()
print('Confusion matrix, without normalization')
print(cnf_matrix)
plt.figure()
plot_confusion_matrix(cnf_matrix[0:10, 0:10], classes=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
                      title='Confusion matrix, without normalization')
plt.figure()
plot_confusion_matrix(cnf_matrix[10:20, 10:20], classes=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
                      title='Confusion matrix, without normalization')
print("Confusion matrix:\n%s" % confusion_matrix(np.array(y_test), ynew))
print(classification_report(np.array(y_test), ynew))
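
As a small follow-up sketch (reusing cnn_model, x_test, and y_test from above; index 0 is just an example), a single test face can be classified like this:

# classify one test image with the trained model
sample = x_test[0:1]  # shape (1, 112, 92, 1)
probs = cnn_model.predict(sample)
print("Predicted class:", np.argmax(probs, axis=-1)[0])
print("Actual class:", y_test[0])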
Output :
RESULT:
Thus the above Python program for face recognition using a CNN has been executed
successfully and the output is displayed.
Ex.no: 4 Language Modelling using RNN
Date:

AIM:
Write a Python program for language modelling using an RNN.

ALGORITHM:
Step 1 : Import Libraries
Step 2 : Load and Preprocess Data
Step 3 : Build RNN Model
Step 4 : Train the Model
Step 5 : Generate Text
Program:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
import numpy as np

# Example text data
text_data = [
    "The quick brown fox jumps over the lazy dog.",
    "A journey of a thousand miles begins with a single step.",
    "To be or not to be, that is the question.",
    "All that glitters is not gold.",
    "In the end, we will remember not the words of our enemies, but the silence of our friends."
]

# Preprocess the data
tokenizer = Tokenizer()
tokenizer.fit_on_texts(text_data)
total_words = len(tokenizer.word_index) + 1

# build n-gram prefixes from every line
input_sequences = []
for line in text_data:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        n_gram_sequence = token_list[:i+1]
        input_sequences.append(n_gram_sequence)

max_sequence_length = max([len(seq) for seq in input_sequences])
input_sequences = pad_sequences(input_sequences, maxlen=max_sequence_length, padding='pre')

# split each sequence into predictors (X) and label (y)
X, y = input_sequences[:, :-1], input_sequences[:, -1]
y = to_categorical(y, num_classes=total_words)

# Build RNN Model
model = Sequential()
model.add(Embedding(total_words, 100, input_length=max_sequence_length-1))
model.add(LSTM(100))
model.add(Dense(total_words, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the Model
model.fit(X, y, epochs=100, verbose=2)

# Generate Text
seed_text = "The quick brown fox"
next_words = 10

for _ in range(next_words):
    token_list = tokenizer.texts_to_sequences([seed_text])[0]
    token_list = pad_sequences([token_list], maxlen=max_sequence_length-1, padding='pre')
    predicted_probs = model.predict(token_list, verbose=0)
    predicted_index = np.argmax(predicted_probs)
    output_word = ""
    # look up the word that corresponds to the predicted index
    for word, index in tokenizer.word_index.items():
        if index == predicted_index:
            output_word = word
            break
    seed_text += " " + output_word

print(seed_text)
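
The inner loop above scans the entire vocabulary on every generation step. As an optional sketch (reusing tokenizer and predicted_index from the program), a reverse index built once turns the lookup into a single dictionary access:

# build the index-to-word mapping once, outside the generation loop
index_to_word = {index: word for word, index in tokenizer.word_index.items()}
output_word = index_to_word.get(int(predicted_index), "")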

Output :

RESULT:
Thus the above Python program for language modelling using an RNN has been executed
successfully and the output is displayed.
Ex.no: 5 Sentiment Analysis using LSTM
Date:

AIM:
Write a Python program for sentiment analysis using an LSTM.

ALGORITHM:
Step 1 : Import Libraries
Step 2 : Load and Preprocess Data
Step 3 : Tokenize and Pad Sequences
Step 4 : Build LSTM Model
Step 5 : Split Data and Train Model
Step 6 : Evaluate Model
Step 7 : Predict Sentiment on New Data
Program:

import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
import re

data = pd.read_csv('/content/Sentiment.csv')
data = data[['text', 'sentiment']]
data = data[data.sentiment != "Neutral"]
data['text'] = data['text'].apply(lambda x: x.lower())
data['text'] = data['text'].apply(lambda x: re.sub(r'[^a-zA-Z0-9\s]', '', x))
print(data[data['sentiment'] == 'Positive'].size)
print(data[data['sentiment'] == 'Negative'].size)

# strip retweet markers from the text
data['text'] = data['text'].str.replace('rt', ' ')

max_features = 2000
tokenizer = Tokenizer(num_words=max_features, split=' ')
tokenizer.fit_on_texts(data['text'].values)
X = tokenizer.texts_to_sequences(data['text'].values)
X = pad_sequences(X)
embed_dim = 128
lstm_out = 196

model = Sequential()
model.add(Embedding(max_features, embed_dim, input_length=X.shape[1]))
model.add(SpatialDropout1D(0.4))
model.add(LSTM(lstm_out, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

Y = pd.get_dummies(data['sentiment']).values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
print(X_train.shape, Y_train.shape)
print(X_test.shape, Y_test.shape)

batch_size = 32
model.fit(X_train, Y_train, epochs=7, batch_size=batch_size, verbose=2)

validation_size = 1500
X_validate = X_test[-validation_size:]
Y_validate = Y_test[-validation_size:]
X_test = X_test[:-validation_size]
Y_test = Y_test[:-validation_size]
score, acc = model.evaluate(X_test, Y_test, verbose=2, batch_size=batch_size)
print("score: %.2f" % (score))
print("acc: %.2f" % (acc))
pos_cnt, neg_cnt, pos_correct, neg_correct = 0, 0, 0, 0
for x in range(len(X_validate)):
    result = model.predict(X_validate[x].reshape(1, X_test.shape[1]), batch_size=1, verbose=2)[0]
    if np.argmax(result) == np.argmax(Y_validate[x]):
        if np.argmax(Y_validate[x]) == 0:
            neg_correct += 1
        else:
            pos_correct += 1
    if np.argmax(Y_validate[x]) == 0:
        neg_cnt += 1
    else:
        pos_cnt += 1

print("pos_acc", pos_correct/pos_cnt*100, "%")
print("neg_acc", neg_correct/neg_cnt*100, "%")
twt = ['Chris Christie is really standing out at the #GOPdebate']
# vectorizing the tweet with the pre-fitted tokenizer instance
twt = tokenizer.texts_to_sequences(twt)
# padding the tweet to exactly the same length as the training sequences
twt = pad_sequences(twt, maxlen=X.shape[1], dtype='int32', value=0)
print(twt)
sentiment = model.predict(twt, batch_size=1, verbose=2)[0]
if np.argmax(sentiment) == 0:
    print("negative")
elif np.argmax(sentiment) == 1:
    print("positive")
Output :

RESULT:
Thus the above Python program for sentiment analysis using an LSTM has been executed
successfully and the output is displayed.
Ex.no: 6 Speech Tagging using sequence-to-sequence architecture
Date:

AIM:
Write a Python program for speech tagging using a sequence-to-sequence architecture.

ALGORITHM:
Step 1 : Import the required libraries
Step 2 : Define Example Data
Step 3 : Preprocess Data
Step 4 : Build Seq2Seq Model
Step 5 : Train the Model
Step 6 : Save the Trained Model
Step 7 : Use the Trained Model for Prediction
Program:

from keras.models import Model
from keras.layers import Input, LSTM, Embedding, Dense
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.callbacks import EarlyStopping
from keras.models import load_model
import numpy as np

# toy dataset: sentences paired with one part-of-speech tag per word
# (the tokenizer strips punctuation, so the "." tags are dropped to keep
# words and tags aligned)
speech_data = [
    ("The quick brown fox jumps over the lazy dog.",
     ["DT", "JJ", "JJ", "NN", "VBZ", "IN", "DT", "JJ", "NN"]),
    ("A journey of a thousand miles begins with a single step.",
     ["DT", "NN", "IN", "DT", "NN", "NNS", "VBZ", "IN", "DT", "JJ", "NN"]),
]

input_texts, target_texts = zip(*speech_data)

tokenizer_input = Tokenizer()
tokenizer_input.fit_on_texts(input_texts)
tokenizer_output = Tokenizer()
tokenizer_output.fit_on_texts(target_texts)

num_encoder_tokens = len(tokenizer_input.word_index) + 1
num_decoder_tokens = len(tokenizer_output.word_index) + 1
max_encoder_seq_length = max([len(seq) for seq in tokenizer_input.texts_to_sequences(input_texts)])
max_decoder_seq_length = max([len(seq) for seq in tokenizer_output.texts_to_sequences(target_texts)])

encoder_input_data = pad_sequences(tokenizer_input.texts_to_sequences(input_texts),
                                   maxlen=max_encoder_seq_length)
decoder_input_data = pad_sequences(tokenizer_output.texts_to_sequences(target_texts),
                                   maxlen=max_decoder_seq_length, padding='post')
decoder_target_data = pad_sequences(tokenizer_output.texts_to_sequences(target_texts),
                                    maxlen=max_decoder_seq_length, padding='post')

embedding_dim = 50
latent_dim = 256

# encoder
encoder_inputs = Input(shape=(None,))
encoder_embedding = Embedding(num_encoder_tokens, embedding_dim)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_state=True)
_, state_h, state_c = encoder_lstm(encoder_embedding)
encoder_states = [state_h, state_c]

# decoder, initialised with the encoder states
decoder_inputs = Input(shape=(None,))
decoder_embedding = Embedding(num_decoder_tokens, embedding_dim)(decoder_inputs)
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

epochs = 50
batch_size = 32
decoder_target_one_hot = to_categorical(decoder_target_data, num_decoder_tokens)
early_stopping = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

model.fit(
    [encoder_input_data, decoder_input_data],
    decoder_target_one_hot,
    epochs=epochs,
    batch_size=batch_size,
    validation_split=0.2,
    callbacks=[early_stopping]
)

model.save("speech_tagging_model.h5")
loaded_model = load_model("speech_tagging_model.h5")

new_input_sequence = tokenizer_input.texts_to_sequences(["Another example sentence."])
new_encoder_input_data = pad_sequences(new_input_sequence, maxlen=max_encoder_seq_length)
decoder_input_data = np.zeros((1, max_decoder_seq_length))

# greedily predict one tag at a time, feeding each prediction back in
for i in range(max_decoder_seq_length):
    predicted_probs = loaded_model.predict([new_encoder_input_data, decoder_input_data])
    predicted_index = np.argmax(predicted_probs[0, i, :])
    decoder_input_data[0, i] = predicted_index

index_to_pos = {index: pos for pos, index in tokenizer_output.word_index.items()}
predicted_tags = [index_to_pos.get(int(index), '<PAD>') for index in decoder_input_data[0] if index != 0]

print("Predicted Part-of-Speech Tags:", predicted_tags)


Output :

RESULT:
Thus the above Python program for speech tagging using a sequence-to-sequence architecture
has been executed successfully and the output is displayed.
Ex.no: 7 Machine Translation using Encoder-Decoder model
Date:

AIM:
Write a Python program for machine translation using an encoder-decoder model.

ALGORITHM:
Step 1 : Import libraries
Step 2 : Data Preparation
Step 3 : Build the Encoder Model and Decoder Model
Step 4 : Add Attention Mechanism
Step 5 : Generate Output
Step 6 : Define the Model
Step 7 : Train the Model
Program:

import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Attention, Concatenate

# Example Input Data
english_sentences = ['I love machine learning', 'Deep learning is fascinating', 'TensorFlow is a powerful tool']
french_sentences = ["J'aime l'apprentissage automatique", "L'apprentissage profond est fascinant", "TensorFlow est un outil puissant"]

# Tokenize and pad the sequences
tokenizer_eng = Tokenizer()
tokenizer_eng.fit_on_texts(english_sentences)
eng_sequences = tokenizer_eng.texts_to_sequences(english_sentences)
eng_sequences_padded = pad_sequences(eng_sequences, padding='post')

tokenizer_frn = Tokenizer()
tokenizer_frn.fit_on_texts(french_sentences)
frn_sequences = tokenizer_frn.texts_to_sequences(french_sentences)
frn_sequences_padded = pad_sequences(frn_sequences, padding='post')

# Define vocabulary sizes
vocab_size_eng = len(tokenizer_eng.word_index) + 1
vocab_size_frn = len(tokenizer_frn.word_index) + 1

# Define model parameters
embedding_dim = 256
hidden_units = 512
num_epochs = 10
batch_size = 64

# Build the Encoder model
encoder_inputs = Input(shape=(None,))
encoder_embedding = Embedding(input_dim=vocab_size_eng, output_dim=embedding_dim)(encoder_inputs)
encoder_outputs, state_h, state_c = LSTM(hidden_units, return_sequences=True, return_state=True)(encoder_embedding)
encoder_states = [state_h, state_c]

# Build the Decoder model
decoder_inputs = Input(shape=(None,))
decoder_embedding = Embedding(input_dim=vocab_size_frn, output_dim=embedding_dim)(decoder_inputs)
decoder_lstm = LSTM(hidden_units, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)

# Add Attention Mechanism: one context vector per decoder time step,
# concatenated with the decoder output before the softmax layer
attention_layer = Attention()([decoder_outputs, encoder_outputs])
decoder_attention = Concatenate(axis=-1)([attention_layer, decoder_outputs])

# Generate Output
decoder_dense = Dense(vocab_size_frn, activation='softmax')
decoder_outputs = decoder_dense(decoder_attention)

# Define the Model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model (teacher forcing: targets are the French sequences shifted by one)
model.fit([eng_sequences_padded, frn_sequences_padded[:, :-1]], frn_sequences_padded[:, 1:],
          epochs=num_epochs, batch_size=batch_size)
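
Once trained, the model can be used for inference with a simple greedy decoder. A sketch (it reuses the tokenizers, padded sequences, and model from above; each predicted word is fed back into the decoder input):

import numpy as np

def translate(sentence):
    eng = pad_sequences(tokenizer_eng.texts_to_sequences([sentence]),
                        maxlen=eng_sequences_padded.shape[1], padding='post')
    dec = np.zeros((1, frn_sequences_padded.shape[1] - 1))
    index_to_word = {i: w for w, i in tokenizer_frn.word_index.items()}
    words = []
    for t in range(dec.shape[1]):
        probs = model.predict([eng, dec], verbose=0)
        ix = int(np.argmax(probs[0, t, :]))
        dec[0, t] = ix
        if ix != 0:
            words.append(index_to_word.get(ix, ''))
    return ' '.join(words)

print(translate('I love machine learning'))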
Output :

RESULT:
Thus the above Python program for machine translation using an encoder-decoder model has
been executed successfully and the output is displayed.
Ex.no: 8 Image Augmentation using GANs
Date:

AIM:
Write a Python program for image augmentation using GANs.

ALGORITHM:
Step 1 : Import Libraries
Step 2 : Define GAN Architecture
Step 3 : Load and Preprocess Dataset
Step 4 : Train the GAN
Step 5 : Generate Augmented Images
Step 6 : Display Generated Images
Step 7 : Adjust Hyperparameters and Iterate
Program:

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten
from tensorflow.keras.layers import BatchNormalization, Activation, LeakyReLU, UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
import matplotlib.pyplot as plt

# Define the GAN architecture
def build_generator(latent_dim, img_shape):
    model = Sequential()
    model.add(Dense(256, input_dim=latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(1024))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(np.prod(img_shape), activation='tanh'))
    model.add(Reshape(img_shape))
    return model

def build_discriminator(img_shape):
    model = Sequential()
    model.add(Flatten(input_shape=img_shape))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(1, activation='sigmoid'))
    return model

def build_gan(generator, discriminator):
    # freeze the discriminator while training the stacked generator
    discriminator.trainable = False
    model = Sequential()
    model.add(generator)
    model.add(discriminator)
    return model

# Define parameters
img_rows, img_cols, channels = 28, 28, 1
img_shape = (img_rows, img_cols, channels)
latent_dim = 100

# Build and compile the discriminator
discriminator = build_discriminator(img_shape)
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5), metrics=['accuracy'])

# Build the generator
generator = build_generator(latent_dim, img_shape)

# Build and compile the GAN
discriminator.trainable = False
gan = build_gan(generator, discriminator)
gan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))

# For demonstration purposes, let's use the MNIST dataset
(x_train, _), (_, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train / 127.5 - 1.0  # scale pixels to [-1, 1] to match the tanh output
x_train = np.expand_dims(x_train, axis=3)

# Training the GAN
epochs = 30000
batch_size = 64
half_batch = int(batch_size / 2)
for epoch in range(epochs):
    idx = np.random.randint(0, x_train.shape[0], half_batch)
    imgs = x_train[idx]
    noise = np.random.normal(0, 1, (half_batch, latent_dim))
    gen_imgs = generator.predict(noise)
    d_loss_real = discriminator.train_on_batch(imgs, np.ones((half_batch, 1)))
    d_loss_fake = discriminator.train_on_batch(gen_imgs, np.zeros((half_batch, 1)))
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    valid_y = np.array([1] * batch_size)
    g_loss = gan.train_on_batch(noise, valid_y)
    if epoch % 1000 == 0:
        print(f"{epoch} [D loss: {d_loss[0]} | D accuracy: {100 * d_loss[1]}] [G loss: {g_loss}]")
# Generate augmented images
num_generated_images = 10
noise = np.random.normal(0, 1, (num_generated_images, latent_dim))
generated_images = generator.predict(noise)

# Display the generated images
for i in range(num_generated_images):
    plt.imshow(generated_images[i, :, :, 0], cmap='gray')
    plt.axis('off')
    plt.show()
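
To actually use the outputs for augmentation, the generated images can be rescaled from the tanh range [-1, 1] back to [0, 255] and written to disk. A sketch (the file names are assumed; it reuses generated_images and num_generated_images from above):

# rescale to [0, 255] and save each generated image as a PNG
augmented = ((generated_images + 1.0) * 127.5).astype(np.uint8)
for i in range(num_generated_images):
    plt.imsave(f"augmented_{i}.png", augmented[i, :, :, 0], cmap='gray')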
Output :
RESULT:
Thus the above Python program for image augmentation using GANs has been executed
successfully and the output is displayed.
