LSTMImplementation.ipynb - Colaboratory

The document outlines a program for implementing an LSTM model using TensorFlow to classify movie reviews from the IMDB dataset. It details the steps of importing libraries, loading and preparing the dataset, defining and compiling the model, training it over 10 epochs, and finally evaluating its performance. The model achieved a test accuracy of approximately 84.32% after training.

Uploaded by cookiesntacos

2/18/24, 3:28 PM  LSTMImplementation.ipynb - Colaboratory

The following steps are taken in the program:

1) Import necessary libraries.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Embedding
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

2) Load and prepare the IMDB dataset.

max_features = 10000  # Number of words to consider as features
maxlen = 500  # Cut texts after this number of words

print("Loading data...")
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), "train sequences")
print(len(input_test), "test sequences")

print("Pad sequences (samples x time)")
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print("input_train shape:", input_train.shape)
print("input_test shape:", input_test.shape)

Loading data...
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17464789/17464789 [==============================] - 1s 0us/step
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
input_train shape: (25000, 500)
input_test shape: (25000, 500)
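A note on what pad_sequences does here: with the Keras defaults (padding='pre', truncating='pre'), each review longer than maxlen is cut down to its last maxlen tokens, and shorter reviews are left-padded with zeros, which is how every row ends up with length 500. A pure-Python sketch of that behavior (the helper name is illustrative, not part of Keras):

```python
def pad_pre(seqs, maxlen, value=0):
    # Mirrors the Keras defaults: truncating='pre' keeps the LAST maxlen
    # tokens; padding='pre' prepends the pad value to shorter sequences.
    out = []
    for s in seqs:
        s = list(s)[-maxlen:]
        out.append([value] * (maxlen - len(s)) + s)
    return out

print(pad_pre([[1, 2, 3], [4, 5, 6, 7, 8]], maxlen=4))  # [[0, 1, 2, 3], [5, 6, 7, 8]]
```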

3) Define the model.

model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
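Given these layer sizes, the parameter count can be checked by hand: the Embedding layer stores one 32-dimensional vector per word index, the LSTM has four gates each with input weights, recurrent weights, and a bias, and the Dense layer adds 32 weights plus a bias. A quick sanity check, whose totals should match what model.summary() would report:

```python
embedding_params = 10000 * 32                  # one 32-d vector per word index
lstm_params = 4 * (32 * 32 + 32 * 32 + 32)     # 4 gates x (input + recurrent + bias)
dense_params = 32 * 1 + 1                      # weights + bias for the sigmoid unit
total = embedding_params + lstm_params + dense_params
print(embedding_params, lstm_params, dense_params, total)  # 320000 8320 33 328353
```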

4) Compile the model.

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
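binary_crossentropy is the natural loss for a single sigmoid output: for one example with true label y (0 or 1) and predicted probability p, the loss is -(y*log(p) + (1-y)*log(1-p)), so confident wrong predictions are penalized much more than hesitant ones. A minimal sketch of the per-example formula:

```python
import math

def binary_crossentropy(y, p):
    # Per-example loss for true label y (0 or 1) and predicted probability p.
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(round(binary_crossentropy(1, 0.9), 4))  # 0.1054 (confident and correct: small loss)
print(round(binary_crossentropy(1, 0.1), 4))  # 2.3026 (confident and wrong: large loss)
```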

5) Train the model.

history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
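With validation_split=0.2, Keras holds out 20% of the training data for validation, so only 20,000 of the 25,000 samples are used for gradient updates; at batch_size=128 that works out to the 157 steps per epoch seen in the training log:

```python
import math

train_samples = 25000
val_split = 0.2
batch_size = 128

train_used = int(train_samples * (1 - val_split))      # 20000 samples for updates
steps_per_epoch = math.ceil(train_used / batch_size)   # last partial batch still counts
print(train_used, steps_per_epoch)  # 20000 157
```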

Output:

Epoch 1/10
157/157 [==============================] - 57s 347ms/step - loss: 0.6067 - acc: 0.6564 - val_loss: 0.5497 - val_acc: 0.7200
Epoch 2/10
157/157 [==============================] - 51s 327ms/step - loss: 0.3575 - acc: 0.8517 - val_loss: 0.4184 - val_acc: 0.8042
Epoch 3/10
157/157 [==============================] - 52s 332ms/step - loss: 0.2714 - acc: 0.8957 - val_loss: 0.4682 - val_acc: 0.8214
Epoch 4/10
157/157 [==============================] - 52s 332ms/step - loss: 0.2313 - acc: 0.9123 - val_loss: 0.2921 - val_acc: 0.8858
Epoch 5/10
157/157 [==============================] - 52s 332ms/step - loss: 0.2017 - acc: 0.9245 - val_loss: 0.3899 - val_acc: 0.8634
Epoch 6/10
157/157 [==============================] - 50s 321ms/step - loss: 0.1854 - acc: 0.9326 - val_loss: 0.3171 - val_acc: 0.8656
Epoch 7/10

157/157 [==============================] - 52s 329ms/step - loss: 0.1660 - acc: 0.9398 - val_loss: 0.3038 - val_acc: 0.8798
Epoch 8/10
157/157 [==============================] - 51s 323ms/step - loss: 0.1494 - acc: 0.9483 - val_loss: 0.3052 - val_acc: 0.8770
Epoch 9/10
157/157 [==============================] - 51s 328ms/step - loss: 0.1390 - acc: 0.9518 - val_loss: 0.4212 - val_acc: 0.8686
Epoch 10/10
157/157 [==============================] - 52s 334ms/step - loss: 0.1275 - acc: 0.9557 - val_loss: 0.3880 - val_acc: 0.8438
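The log shows validation accuracy peaking at epoch 4 (0.8858) and then drifting down while training accuracy keeps climbing, a classic sign of overfitting. A small sketch that recovers the best epoch from the logged values (in practice, Keras's EarlyStopping or ModelCheckpoint callbacks automate this during training):

```python
# val_acc values copied from the training log above; epochs are 1-based.
val_acc = [0.7200, 0.8042, 0.8214, 0.8858, 0.8634,
           0.8656, 0.8798, 0.8770, 0.8686, 0.8438]

best_epoch = max(range(len(val_acc)), key=val_acc.__getitem__) + 1
print(best_epoch, val_acc[best_epoch - 1])  # 4 0.8858
```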

6) Evaluate the model.

evaluation = model.evaluate(input_test, y_test)
print(f'Test Loss: {evaluation[0]} - Test Accuracy: {evaluation[1]}')

782/782 [==============================] - 34s 43ms/step - loss: 0.4023 - acc: 0.8432

Test Loss: 0.40225595235824585 - Test Accuracy: 0.8432000279426575
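To put the final number in context: a test accuracy of about 0.8432 means roughly 21,080 of the 25,000 test reviews were classified correctly, where each sigmoid output is a probability thresholded at 0.5. A quick check (the label helper is illustrative, not part of the notebook):

```python
test_acc = 0.8432000279426575
correct = round(test_acc * 25000)  # number of test reviews classified correctly
print(correct)  # 21080

# The model's sigmoid output is a probability; 0.5 is the usual decision threshold.
def label(p, threshold=0.5):
    return "positive" if p >= threshold else "negative"

print(label(0.84))  # positive
print(label(0.12))  # negative
```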

Source: https://colab.research.google.com/drive/1WAKC0xvBPIn5czAhLmLi5d_al0YTJV3w
