Python Deep Learning Lab Programs
EXERCISE - 1
Source Code:
import tensorflow as tf
from tensorflow.keras import layers, models
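# Steps 1 and 2 were elided from the listing. A typical reconstruction
# follows; the exact convolutional base is an assumption, chosen to fit
# CIFAR-10's 32x32x3 inputs:
# Step 1: Load and normalize the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Step 2: Build the convolutional base
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))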
# Step 3: Flatten the 3D output to 1D and add Dense layers for classification
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax')) # Assuming 10 classes for CIFAR-10
# Compile with sparse categorical cross-entropy (the labels are integers) and train
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_test, y_test))
# Evaluate on the held-out test set (prints the accuracy shown below)
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
Result:
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170498071/170498071 [==============================] - 7s 0us/step
Epoch 1/10
1563/1563 [==============================] - 56s 34ms/step - loss: 1.7659 - accuracy: 0.3827 - val_loss: 1.4559 - val_accuracy: 0.4655
Epoch 2/10
1563/1563 [==============================] - 52s 33ms/step - loss: 1.3744 - accuracy: 0.5078 - val_loss: 1.3300 - val_accuracy: 0.5203
Epoch 3/10
1563/1563 [==============================] - 52s 33ms/step - loss: 1.2385 - accuracy: 0.5607 - val_loss: 1.2420 - val_accuracy: 0.5709
Epoch 4/10
1563/1563 [==============================] - 50s 32ms/step - loss: 1.1355 - accuracy: 0.6019 - val_loss: 1.1483 - val_accuracy: 0.5969
Epoch 5/10
1563/1563 [==============================] - 51s 33ms/step - loss: 1.0459 - accuracy: 0.6322 - val_loss: 1.0826 - val_accuracy: 0.6224
Epoch 6/10
1563/1563 [==============================] - 52s 33ms/step - loss: 0.9758 - accuracy: 0.6607 - val_loss: 1.1183 - val_accuracy: 0.6212
Epoch 7/10
1563/1563 [==============================] - 60s 38ms/step - loss: 0.9119 - accuracy: 0.6821 - val_loss: 1.0722 - val_accuracy: 0.6469
Epoch 8/10
1563/1563 [==============================] - 57s 37ms/step - loss: 0.8683 - accuracy: 0.6982 - val_loss: 1.0293 - val_accuracy: 0.6566
Epoch 9/10
1563/1563 [==============================] - 60s 38ms/step - loss: 0.8196 - accuracy: 0.7165 - val_loss: 1.0136 - val_accuracy: 0.6623
Epoch 10/10
1563/1563 [==============================] - 59s 38ms/step - loss: 0.7767 - accuracy: 0.7316 - val_loss: 1.0620 - val_accuracy: 0.6629
313/313 [==============================] - 3s 10ms/step - loss: 1.0620 - accuracy: 0.6629
Test accuracy: 0.6628999710083008
EXERCISE - 2
Aim: Design Artificial Neural Networks for identifying and classifying an actor using a Kaggle dataset.
Source Code:
# Importing necessary libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, InputLayer
# np_utils was removed from modern Keras; to_categorical lives here instead
from tensorflow.keras.utils import to_categorical
import imageio.v2 as imageio # To read images (the v2 API silences the DeprecationWarning in the results)
from PIL import Image # For image resizing
# Upload the zipped train and test datasets to the content folder, then
# unzip them as shown below:
!unzip /content/agedetectiontrain.zip
!unzip /content/agedetectiontest.zip
# Reading the data
train = pd.read_csv('/content/train.csv')
test = pd.read_csv('/content/test.csv')
# Resize each image to 32x32 and stack into one array. The loop headers were
# elided from the original listing, and test_x wrongly reused the training
# list (a copy-paste bug); a typical reconstruction:
def load_images(df, folder):
    temp = []
    for img_name in df.ID:
        img = imageio.imread(os.path.join(folder, img_name))
        img = np.array(Image.fromarray(img).resize((32, 32))).astype('float32')
        temp.append(img)
    return np.stack(temp)
train_x = load_images(train, '/content/Train')
test_x = load_images(test, '/content/Test')
epochs = 5
batch_size = 128
# Layer sizes (elided in the original; the values below are consistent with
# the model summary in the results: 3072*500 + 500 + 500*3 + 3 = 1,538,003)
input_num_units = (32, 32, 3)
hidden_num_units = 500
output_num_units = 3  # YOUNG / MIDDLE / OLD
model = Sequential([
    InputLayer(input_shape=input_num_units),
    Flatten(),
    Dense(units=hidden_num_units, activation='relu'),
    Dense(units=output_num_units, activation='softmax'),
])
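# Label preparation, compilation, and training were elided from the listing;
# a minimal sketch consistent with the logs below (5 epochs at batch size 128,
# then a second fit with validation_split=0.2, which yields the 156- and
# 125-step epochs shown):
lb = LabelEncoder()
train_y = to_categorical(lb.fit_transform(train.Class))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_x / 255.0, train_y, epochs=epochs, batch_size=batch_size)
model.fit(train_x / 255.0, train_y, epochs=epochs, batch_size=batch_size, validation_split=0.2)
# Predict classes for the test images and map indices back to label names
pred = lb.inverse_transform(np.argmax(model.predict(test_x / 255.0), axis=1))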
# Save the predicted classes alongside the test IDs
test['Class'] = pred
test.to_csv('out.csv', index=False)
Result:
<ipython-input-1-b5e04c5b5b2f>:27: DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning disappear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.
  img = imageio.imread(img_path)
<ipython-input-1-b5e04c5b5b2f>:37: DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning disappear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.
  img = imageio.imread(img_path)
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 flatten (Flatten)           (None, 3072)              0
 dense (Dense)               (None, 500)               1536500
 dense_1 (Dense)             (None, 3)                 1503
=================================================================
Total params: 1,538,003
Trainable params: 1,538,003
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
156/156 [==============================] - 4s 24ms/step - loss: 0.8982 - accuracy: 0.5728
Epoch 2/5
156/156 [==============================] - 4s 24ms/step - loss: 0.8481 - accuracy: 0.6041
Epoch 3/5
156/156 [==============================] - 5s 31ms/step - loss: 0.8273 - accuracy: 0.6135
Epoch 4/5
156/156 [==============================] - 4s 23ms/step - loss: 0.8143 - accuracy: 0.6223
Epoch 5/5
156/156 [==============================] - 4s 23ms/step - loss: 0.8069 - accuracy: 0.6277
Epoch 1/5
125/125 [==============================] - 5s 38ms/step - loss: 0.8016 - accuracy: 0.6314 - val_loss: 0.8577 - val_accuracy: 0.5881
Epoch 2/5
125/125 [==============================] - 4s 29ms/step - loss: 0.7955 - accuracy: 0.6341 - val_loss: 0.7877 - val_accuracy: 0.6459
Epoch 3/5
125/125 [==============================] - 4s 29ms/step - loss: 0.7909 - accuracy: 0.6393 - val_loss: 0.7802 - val_accuracy: 0.6429
Epoch 4/5
125/125 [==============================] - 4s 31ms/step - loss: 0.7857 - accuracy: 0.6436 - val_loss: 0.7741 - val_accuracy: 0.6509
Epoch 5/5
125/125 [==============================] - 4s 35ms/step - loss: 0.7860 - accuracy: 0.6410 - val_loss: 0.7810 - val_accuracy: 0.6494
208/208 [==============================] - 1s 6ms/step
1/208 [..............................] - ETA: 4s
<ipython-input-1-b5e04c5b5b2f>:90: DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning disappear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.
  img = imageio.imread(os.path.join('/content/Test', img_name))
208/208 [==============================] - 1s 6ms/step
Original: MIDDLE
Predicted: ['YOUNG']
EXERCISE - 3
Aim: Design a CNN for Image Recognition which includes hyperparameter tuning.
Source Code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
# Deprecated wrapper (note the warning in the results); SciKeras is the modern replacement
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import accuracy_score
# Load MNIST and reshape to (num_samples, 28, 28, 1); -1 in the first
# dimension lets NumPy infer the sample count from the other dimensions.
# The training-split load and reshape were elided from the original listing:
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype(np.float32) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype(np.float32) / 255.0
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
    # tail of create_model(); the rest of its body was elided from the listing
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model
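Only the tail of create_model survives in the listing above. Below is a minimal sketch of the full factory and the randomized search, consistent with the log that follows (10 candidates, 3 folds, and the four searched hyperparameters); the single-conv-block architecture itself is an assumption:
def create_model(filters=32, kernel_size=(3, 3), pool_size=(2, 2), dense_units=128):
    model = Sequential([
        Conv2D(filters, kernel_size, activation='relu', input_shape=(28, 28, 1)),
        MaxPooling2D(pool_size),
        Flatten(),
        Dense(dense_units, activation='relu'),
        Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, verbose=0)
param_dist = {
    'filters': [16, 32, 64],
    'kernel_size': [(3, 3), (5, 5)],
    'pool_size': [(2, 2), (3, 3)],
    'dense_units': [64, 128, 256],
}
search = RandomizedSearchCV(model, param_distributions=param_dist, n_iter=10, cv=3, verbose=2)
search.fit(x_train, y_train)
print('Best Parameters:', search.best_params_)
# Evaluate the best configuration on the held-out test set
y_pred = search.best_estimator_.predict(x_test)
print('Test Accuracy:', accuracy_score(np.argmax(y_test, axis=1), y_pred))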
Result:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s 0us/step
<ipython-input-1-cb2fd6eb48c0>:36: DeprecationWarning: KerasClassifier is deprecated, use Sci-Keras (https://github.com/adriangb/scikeras) instead. See https://www.adriangb.com/scikeras/stable/migration.html for help migrating.
  model = KerasClassifier(build_fn=create_model, verbose=0)
Fitting 3 folds for each of 10 candidates, totalling 30 fits
[CV] END dense_units=128, filters=64, kernel_size=(3, 3), pool_size=(2, 2); total time= 1.2min
[CV] END dense_units=128, filters=64, kernel_size=(3, 3), pool_size=(2, 2); total time= 1.5min
[CV] END dense_units=128, filters=64, kernel_size=(3, 3), pool_size=(2, 2); total time= 1.5min
[CV] END dense_units=64, filters=64, kernel_size=(3, 3), pool_size=(2, 2); total time= 53.2s
[CV] END dense_units=64, filters=64, kernel_size=(3, 3), pool_size=(2, 2); total time= 53.2s
[CV] END dense_units=64, filters=64, kernel_size=(3, 3), pool_size=(2, 2); total time= 43.1s
[CV] END dense_units=128, filters=64, kernel_size=(3, 3), pool_size=(3, 3); total time= 46.6s
[CV] END dense_units=128, filters=64, kernel_size=(3, 3), pool_size=(3, 3); total time= 47.6s
[CV] END dense_units=128, filters=64, kernel_size=(3, 3), pool_size=(3, 3); total time= 37.8s
[CV] END dense_units=128, filters=16, kernel_size=(5, 5), pool_size=(3, 3); total time= 27.1s
[CV] END dense_units=128, filters=16, kernel_size=(5, 5), pool_size=(3, 3); total time= 27.3s
[CV] END dense_units=128, filters=16, kernel_size=(5, 5), pool_size=(3, 3); total time= 27.3s
[CV] END dense_units=256, filters=16, kernel_size=(3, 3), pool_size=(2, 2); total time= 47.8s
[CV] END dense_units=256, filters=16, kernel_size=(3, 3), pool_size=(2, 2); total time= 46.0s
[CV] END dense_units=256, filters=16, kernel_size=(3, 3), pool_size=(2, 2); total time= 31.1s
[CV] END dense_units=64, filters=16, kernel_size=(5, 5), pool_size=(2, 2); total time= 26.9s
[CV] END dense_units=64, filters=16, kernel_size=(5, 5), pool_size=(2, 2); total time= 25.0s
[CV] END dense_units=64, filters=16, kernel_size=(5, 5), pool_size=(2, 2); total time= 23.8s
[CV] END dense_units=256, filters=32, kernel_size=(3, 3), pool_size=(3, 3); total time= 46.1s
[CV] END dense_units=256, filters=32, kernel_size=(3, 3), pool_size=(3, 3); total time= 46.5s
[CV] END dense_units=256, filters=32, kernel_size=(3, 3), pool_size=(3, 3); total time= 46.6s
[CV] END dense_units=64, filters=64, kernel_size=(5, 5), pool_size=(3, 3); total time= 48.2s
[CV] END dense_units=64, filters=64, kernel_size=(5, 5), pool_size=(3, 3); total time= 53.3s
[CV] END dense_units=64, filters=64, kernel_size=(5, 5), pool_size=(3, 3); total time= 40.0s
[CV] END dense_units=64, filters=64, kernel_size=(3, 3), pool_size=(3, 3); total time= 47.8s
[CV] END dense_units=64, filters=64, kernel_size=(3, 3), pool_size=(3, 3); total time= 47.8s
[CV] END dense_units=64, filters=64, kernel_size=(3, 3), pool_size=(3, 3); total time= 47.9s
[CV] END dense_units=64, filters=16, kernel_size=(3, 3), pool_size=(2, 2); total time= 21.0s
[CV] END dense_units=64, filters=16, kernel_size=(3, 3), pool_size=(2, 2); total time= 25.2s
[CV] END dense_units=64, filters=16, kernel_size=(3, 3), pool_size=(2, 2); total time= 25.0s
Best Parameters: {'pool_size': (3, 3), 'kernel_size': (5, 5), 'filters': 64, 'dense_units': 64}
313/313 [==============================] - 2s 7ms/step
Test Accuracy: 0.9834
EXERCISE - 4
Source Code:
Result:
EXERCISE - 5
Source Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt  # needed for the plots at the end
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import mnist
from tensorflow.keras.losses import MeanSquaredError
# Load the MNIST dataset
(x_train, _), (x_test, _) = mnist.load_data()
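# The head of build_mlp was elided from the listing; a hypothetical
# reconstruction of a denoising MLP autoencoder (hidden sizes are assumptions):
def build_mlp(input_shape):
    model = Sequential([
        Flatten(input_shape=input_shape),
        Dense(128, activation='relu'),
        Dense(64, activation='relu'),
        Dense(128, activation='relu'),
        Dense(784, activation='sigmoid'),  # reconstruct all 784 pixels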
    ])
    return model
input_shape = (784,)
model = build_mlp(input_shape)
model.compile(optimizer=Adam(learning_rate=0.001), loss=MeanSquaredError())
batch_size = 128
epochs = 20
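# The noisy inputs and the training/prediction steps used below were elided
# from the listing. A minimal sketch, assuming Gaussian pixel noise with
# factor 0.5; validation_split=0.1 matches the 422 steps per epoch in the
# log below (54000 / 128 ~ 422):
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0
noise_factor = 0.5
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0.0, 1.0)
model.fit(x_train_noisy, x_train, epochs=epochs, batch_size=batch_size, validation_split=0.1)
decoded_images = model.predict(x_test_noisy)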
num_images = 10
plt.figure(figsize=(18, 4))
for i in range(num_images):
    # Noisy image (top row)
    ax = plt.subplot(3, num_images, i + 1)
    plt.imshow(x_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # Denoised image (middle row)
    ax = plt.subplot(3, num_images, i + 1 + num_images)
    plt.imshow(decoded_images[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # Original image (bottom row)
    ax = plt.subplot(3, num_images, i + 1 + 2*num_images)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
Result:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s 0us/step
Epoch 1/20
422/422 [==============================] - 5s 10ms/step - loss: 0.0479 - val_loss: 0.0292
Epoch 2/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0253 - val_loss: 0.0225
Epoch 3/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0211 - val_loss: 0.0197
Epoch 4/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0189 - val_loss: 0.0183
Epoch 5/20
422/422 [==============================] - 4s 9ms/step - loss: 0.0177 - val_loss: 0.0173
Epoch 6/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0167 - val_loss: 0.0167
Epoch 7/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0160 - val_loss: 0.0160
Epoch 8/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0155 - val_loss: 0.0155
Epoch 9/20
422/422 [==============================] - 4s 9ms/step - loss: 0.0151 - val_loss: 0.0153
Epoch 10/20
422/422 [==============================] - 4s 9ms/step - loss: 0.0148 - val_loss: 0.0149
Epoch 11/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0144 - val_loss: 0.0146
Epoch 12/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0142 - val_loss: 0.0143
Epoch 13/20
422/422 [==============================] - 4s 9ms/step - loss: 0.0139 - val_loss: 0.0144
Epoch 14/20
422/422 [==============================] - 4s 9ms/step - loss: 0.0137 - val_loss: 0.0139
Epoch 15/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0135 - val_loss: 0.0138
Epoch 16/20
422/422 [==============================] - 4s 10ms/step - loss: 0.0133 - val_loss: 0.0136
Epoch 17/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0132 - val_loss: 0.0135
Epoch 18/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0130 - val_loss: 0.0134
Epoch 19/20
422/422 [==============================] - 3s 7ms/step - loss: 0.0129 - val_loss: 0.0132
Epoch 20/20
422/422 [==============================] - 4s 9ms/step - loss: 0.0128 - val_loss: 0.0132
313/313 [==============================] - 1s 2ms/step
EXERCISE - 6
Source Code:
Result:
EXERCISE - 7
Aim: Design a Deep learning Network for Robust Bi-Tempered Logistic Loss.
Source Code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import matplotlib.pyplot as plt
# Define the Robust Bi-Tempered Logistic Loss function with fixed t1 and t2
def create_robust_bi_tempered_loss(t1, t2, label_smoothing=0.0):
    # The @tf.function decorator converts a regular Python function into a
    # TensorFlow computation graph.
    @tf.function
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        y_true = tf.cast(y_true, y_pred.dtype)
        term1 = (1 - y_true) * (tf.math.pow(1 - y_pred, t2 - 1))
        term2 = y_true * (tf.math.pow(y_pred, t1 - 1))
        loss_value = -(term1 + term2)
        if label_smoothing > 0.0:
            # Small epsilon guards against log(0) when y_true contains zeros
            loss_value += label_smoothing * y_true * tf.math.log((y_true + 1e-7) / y_pred)
        return tf.reduce_mean(loss_value)
    return loss
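# The training data and model between the loss definition and the compile
# call were elided from the listing; a hypothetical stand-in (a small binary
# classifier with a sigmoid output, matching the thresholded predictions below):
X = np.random.randn(1000, 20).astype('float32')  # synthetic features
y = (X.sum(axis=1) > 0).astype('float32')        # synthetic binary labels
model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(1, activation='sigmoid'),
])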
# Compile the Model with RBL Loss
t1 = 0.8 # Temperature parameter 1
t2 = 1.2 # Temperature parameter 2
label_smoothing = 0.1 # Optional label smoothing
rbl_loss = create_robust_bi_tempered_loss(t1, t2, label_smoothing)
model.compile(optimizer='adam', loss=rbl_loss, metrics=['accuracy'])
# Make predictions (`loaded_model` and `new_data` come from elided steps,
# presumably a saved-and-reloaded model and five new samples)
predictions = loaded_model.predict(new_data)
predicted_classes = (predictions > 0.5).astype(int)
print(predictions)
print("Predicted Classes:", predicted_classes)
Result:
1/1 [==============================] - 0s 175ms/step
[[0.63754183]
[0.6558753 ]
[0.6071969 ]
[0.5628828 ]
[0.67136395]]
Predicted Classes: [[1]
[1]
[1]
[1]
[1]]
EXERCISE - 8
Source Code:
import tensorflow as tf
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     BatchNormalization, Flatten, Dense, Dropout)
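# The front half of the model builder was elided from the listing. A
# hypothetical reconstruction follows (the layer sizes are assumptions, so
# the parameter counts may not exactly match the summary below):
def build_model(input_shape, num_classes):
    input_layer = Input(shape=input_shape)
    x = Conv2D(64, (3, 3), activation='relu', padding='same')(input_layer)
    x = BatchNormalization()(x)
    x = MaxPooling2D((2, 2))(x)  # 32x32 -> 16x16
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2))(x)  # 16x16 -> 8x8
    x = BatchNormalization()(x)  # (None, 8, 8, 128), as in the summary
    x = Flatten()(x)
    fc1 = Dense(256, activation='relu')(x)
    dropout1 = Dropout(0.5)(fc1)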
    fc2 = Dense(256, activation='relu')(dropout1)
    dropout2 = Dropout(0.5)(fc2)
    output_layer = Dense(num_classes, activation='softmax')(dropout2)
    model = tf.keras.Model(inputs=input_layer, outputs=output_layer)
    return model
# Set hyperparameters
input_shape = (32, 32, 3)  # Input shape for CIFAR-10
num_classes = 10  # Number of classes in the CIFAR-10 dataset
# Build and summarize the model (these calls were elided from the listing)
model = build_model(input_shape, num_classes)
model.summary()
Result:
Model: "model"
_________________________________________________________________________
 Layer (type)                                Output Shape          Param #
=================================================================
 input_1 (InputLayer)                        [(None, 32, 32, 3)]   0
 ... (intermediate layer rows were lost from the captured output) ...
 batch_normalization_1 (BatchNormalization)  (None, 8, 8, 128)     512
=================================================================
Total params: 2,440,202
Trainable params: 2,439,562
Non-trainable params: 640
_________________________________________________________________________
EXERCISE - 9
Source Code:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model
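# The encoder and the first half of the decoder were elided from the listing;
# a minimal sketch in the style of the classic Keras denoising autoencoder
# (the filter count of 32 is an assumption):
input_img = Input(shape=(28, 28, 1))
x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)  # 7x7x32 bottleneck
x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)  # back to 14x14
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)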
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
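# Data loading, noising, training, and prediction were elided here; a minimal
# sketch, assuming Gaussian noise with factor 0.5 (10 epochs and batch size
# 128 match the log below: 469 steps = 60000 / 128):
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0
noise_factor = 0.5
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0.0, 1.0)
autoencoder.fit(x_train_noisy, x_train, epochs=10, batch_size=128,
                validation_data=(x_test_noisy, x_test))
decoded_imgs = autoencoder.predict(x_test_noisy)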
# Plot the results (the loop header and the top-row subplots of the original
# digits were elided from the listing):
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # (the top-row subplots of the original digits were elided here)
    # Noisy image
    ax = plt.subplot(3, n, i + 1 + n)
    plt.imshow(x_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # Denoised image
    ax = plt.subplot(3, n, i + 1 + 2*n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
Result:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s 0us/step
Epoch 1/10
469/469 [==============================] - 181s 383ms/step - loss: 0.1718 - val_loss: 0.1195
Epoch 2/10
469/469 [==============================] - 172s 367ms/step - loss: 0.1150 - val_loss: 0.1101
Epoch 3/10
469/469 [==============================] - 172s 366ms/step - loss: 0.1089 - val_loss: 0.1061
Epoch 4/10
469/469 [==============================] - 173s 369ms/step - loss: 0.1058 - val_loss: 0.1035
Epoch 5/10
469/469 [==============================] - 173s 369ms/step - loss: 0.1036 - val_loss: 0.1018
Epoch 6/10
469/469 [==============================] - 175s 372ms/step - loss: 0.1020 - val_loss: 0.1005
Epoch 7/10
469/469 [==============================] - 172s 366ms/step - loss: 0.1009 - val_loss: 0.0995
Epoch 8/10
469/469 [==============================] - 172s 367ms/step - loss: 0.1000 - val_loss: 0.0987
Epoch 9/10
469/469 [==============================] - 175s 374ms/step - loss: 0.0993 - val_loss: 0.0980
Epoch 10/10
469/469 [==============================] - 171s 365ms/step - loss: 0.0988 - val_loss: 0.0983
313/313 [==============================] - 8s 25ms/step
EXERCISE - 10
Source Code:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
# Generator model
generator = Sequential([
Dense(128, input_shape=(100,), activation='relu'),
Dense(784, activation='sigmoid')
])
# Discriminator model
discriminator = Sequential([
Dense(128, input_shape=(784,), activation='relu'),
Dense(1, activation='sigmoid')
])
# Compile discriminator
discriminator.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
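# The stacked GAN, the data loading, and the discriminator-training step were
# elided from the listing; a minimal sketch (the helper name
# train_discriminator_step is hypothetical, used by the loop below):
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0

discriminator.trainable = False  # freeze D while training G through the stack
combined = Sequential([generator, discriminator])
combined.compile(optimizer='adam', loss='binary_crossentropy')

def train_discriminator_step(batch_size):
    # Half real digits (label 1), half generated fakes (label 0)
    real = x_train[np.random.randint(0, x_train.shape[0], batch_size)]
    fake = generator.predict(np.random.randn(batch_size, 100))
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))])
    return discriminator.train_on_batch(x, y)  # returns [loss, accuracy]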
# Training loop (the loop header, indentation, and the discriminator step
# were lost from the original listing; train_discriminator_step is sketched above)
epochs = 10000
batch_size = 64
for epoch in range(epochs):
    # Train discriminator on a half-real, half-fake batch
    discriminator_loss = train_discriminator_step(batch_size)
    # Train generator: push D to label generated samples as real
    noise = np.random.randn(batch_size, 100)
    generator_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))
    if epoch % 1000 == 0:
        print(f"Epoch: {epoch}, D Loss: {discriminator_loss[0]}, G Loss: {generator_loss}")
Result:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s 0us/step
2/2 [==============================] - 0s 5ms/step
Epoch: 0, D Loss: 1.15574049949646, G Loss: 0.865715503692627
1/1 [==============================] - 0s 35ms/step
Epoch: 7000, D Loss: 0.356029212474823, G Loss: 3.2198615074157715
1/1 [==============================] - 0s 15ms/step