Lab Report 08: Convolutional Networks For Images With Keras: Sukkur Institute of Business Administration University
Submission Profile
Name: Submission date (dd/mm/yy):
Enrollment ID: Receiving authority name and signature:
Comments: __________________________________________________________________________
Instructor Signature
Input shape:
4+D tensor with shape: batch_shape + (channels, rows, cols) if data_format='channels_first' or 4+D
tensor with shape: batch_shape + (rows, cols, channels) if data_format='channels_last'.
Output shape:
4+D tensor with shape: batch_shape + (filters, new_rows, new_cols) if data_format='channels_first' or
4+D tensor with shape: batch_shape + (new_rows, new_cols, filters) if data_format='channels_last'.
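For illustration, a minimal sketch of the channels_last rule using TensorFlow's Keras: a 28 by 28 single-channel input passed through a Conv2D layer with 32 filters of size 3 by 3 produces an output of 26 by 26 by 32.

import tensorflow as tf

# A batch of one 28x28 grayscale image in channels_last layout:
# batch_shape + (rows, cols, channels)
x = tf.zeros((1, 28, 28, 1))
layer = tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3))
print(layer(x).shape)  # (1, 26, 26, 32): batch_shape + (new_rows, new_cols, filters)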
Loading the data, splitting it for validation, and converting the labels to categorical:
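A minimal sketch of this step, assuming MNIST-style 28 by 28 grayscale images loaded through keras.datasets (the exact dataset is not shown here); the validation split size and preprocessing details are assumptions.

import numpy as np
from tensorflow import keras
from tensorflow.keras.utils import to_categorical

# Load the data and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Hold out the last 10,000 training images for validation (assumed split)
x_train, x_val = x_train[:-10000], x_train[-10000:]
y_train, y_val = y_train[:-10000], y_train[-10000:]

# Convert integer labels to one-hot (categorical) vectors
y_train = to_categorical(y_train, 10)
y_val = to_categorical(y_val, 10)
y_test = to_categorical(y_test, 10)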
Building the dense layer model and evaluating the testing loss and testing accuracy:
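A sketch of the dense-only model, sized to match the 25,450-parameter summary discussed below (784 inputs, a 32-unit hidden layer, and a 10-unit softmax output) and reusing the arrays from the previous sketch; the optimizer and epoch count are assumptions.

from tensorflow.keras import layers, models

dense_model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),      # 784 inputs, no parameters
    layers.Dense(32, activation="relu"),       # 784*32 + 32 = 25,120 parameters
    layers.Dense(10, activation="softmax"),    # 32*10 + 10 = 330 parameters
])
dense_model.compile(optimizer="adam",
                    loss="categorical_crossentropy",
                    metrics=["accuracy"])
dense_model.summary()                          # Total params: 25,450
dense_model.fit(x_train, y_train, epochs=5,
                validation_data=(x_val, y_val))
test_loss, test_acc = dense_model.evaluate(x_test, y_test)
print("Testing loss:", test_loss, "Testing accuracy:", test_acc)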
The reason for reshaping the data into four dimensions is that convolutional layers expect 4-dimensional input: three dimensions describing each image (rows, columns, channels) plus a fourth dimension for the batch size.
It can be seen that adding a single convolutional layer to the network boosted the accuracy from 88% to almost 97%. That is why convolutional layers work so well for image data.
A convolutional layer scans patches of the image and treats every patch as a feature. In the case above it produced 32 feature maps of 26 by 26 each, which makes the number of learnable parameters very large, about 692,906 in total, whereas the summary of the model containing only dense layers shows just 25,450 parameters. In other words, the model with only dense layers is very light compared with the model containing a convolutional layer.
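A sketch of the convolutional model consistent with the 692,906-parameter count above (Conv2D with 32 filters of 3 by 3, then Flatten, a 32-unit dense layer, and a 10-unit softmax output); the data is reshaped to four dimensions first, and the training settings are assumptions.

from tensorflow.keras import layers, models

# Reshape to 4 dimensions: (batch, rows, cols, channels)
x_train_4d = x_train.reshape(-1, 28, 28, 1)
x_val_4d = x_val.reshape(-1, 28, 28, 1)
x_test_4d = x_test.reshape(-1, 28, 28, 1)

conv_model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(28, 28, 1)),    # 32 feature maps of 26x26
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
conv_model.compile(optimizer="adam",
                   loss="categorical_crossentropy",
                   metrics=["accuracy"])
conv_model.summary()                           # Total params: 692,906
conv_model.fit(x_train_4d, y_train, epochs=5,
               validation_data=(x_val_4d, y_val))
print(conv_model.evaluate(x_test_4d, y_test))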
MaxPooling layer
A pooling layer is another building block of a CNN. Its function is to progressively reduce the spatial size of the representation, which reduces the number of parameters and the amount of computation in the network. The pooling layer operates on each feature map independently, and it also helps reduce overfitting.
Max pooling is an operation that calculates the maximum, or largest, value in each patch of each feature map. The results are downsampled, or pooled, feature maps that highlight the most prominent feature in each patch. In other words, it downsamples the input representation by taking the maximum value over the window defined by pool_size for each dimension along the features axis.
Implementing a neural network containing a MaxPooling layer:
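A sketch of the same network with a 3 by 3 MaxPooling layer added after the convolution; this architecture matches the 66,218-parameter count reported below, and the training settings are assumptions.

from tensorflow.keras import layers, models

pooled_model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(pool_size=(3, 3)),     # 26x26 maps reduced to 8x8
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
pooled_model.compile(optimizer="adam",
                     loss="categorical_crossentropy",
                     metrics=["accuracy"])
pooled_model.summary()                         # Total params: 66,218
pooled_model.fit(x_train_4d, y_train, epochs=5,
                 validation_data=(x_val_4d, y_val))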
By just adding a MaxPooling layer with a pool size of 3 by 3, the number of parameters drops from 692,906 to only 66,218, which is less than 10% of the parameters of the model without the MaxPooling layer.
Dog and Cat classification using Deep Learning
Generating arrays from images for training data:
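A minimal sketch of this step, assuming the images sit in hypothetical folders such as train/cats and train/dogs and are resized to 64 by 64 pixels before being stacked into NumPy arrays; the folder names and image size are assumptions.

import os
import numpy as np
from PIL import Image

IMG_SIZE = (64, 64)  # assumed target size

def load_images(folder):
    # Read every image in the folder, resize it, and scale pixels to [0, 1]
    arrays = []
    for fname in os.listdir(folder):
        img = Image.open(os.path.join(folder, fname)).convert("RGB")
        img = img.resize(IMG_SIZE)
        arrays.append(np.asarray(img, dtype="float32") / 255.0)
    return np.array(arrays)

train_cats = load_images("train/cats")   # hypothetical paths
train_dogs = load_images("train/dogs")
test_cats = load_images("test/cats")
test_dogs = load_images("test/dogs")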
Concatenating the dog and cat arrays into one variable each for train and test (see the sketch after the labelling steps below):
Creating labels for cats and dogs (0 and 1 respectively), one label per image, for the training data:
Creating labels for cats and dogs (0 and 1 respectively), one label per image, for the testing data:
Concatenating the labels for dogs and cats for training and testing:
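A sketch of the concatenation and labelling steps above, reusing the arrays from the previous sketch; cats are labelled 0 and dogs 1, with one label per image.

import numpy as np

# One array each for training and testing images
x_train_cd = np.concatenate([train_cats, train_dogs], axis=0)
x_test_cd = np.concatenate([test_cats, test_dogs], axis=0)

# Labels equal in length to the corresponding image arrays: 0 = cat, 1 = dog
y_train_cats = np.zeros(len(train_cats))
y_train_dogs = np.ones(len(train_dogs))
y_test_cats = np.zeros(len(test_cats))
y_test_dogs = np.ones(len(test_dogs))

# Concatenate the labels for training and testing
y_train_cd = np.concatenate([y_train_cats, y_train_dogs], axis=0)
y_test_cd = np.concatenate([y_test_cats, y_test_dogs], axis=0)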
Predicting an image from the test data and displaying it for the cat class:
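A sketch of predicting and displaying one test image; the index and the trained binary classifier (here called cat_dog_model, assumed to have a single sigmoid output) are assumptions for illustration.

import matplotlib.pyplot as plt

idx = 0                                     # assumed index of a cat image
sample = x_test_cd[idx:idx + 1]             # keep the batch dimension
pred = cat_dog_model.predict(sample)[0, 0]  # hypothetical trained model, sigmoid output
label = "dog" if pred > 0.5 else "cat"

plt.imshow(x_test_cd[idx])
plt.title("Predicted: " + label)
plt.show()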
Regularization
In machine learning, regularization applies a penalty to (or drops) coefficients to overcome overfitting; in deep learning, it applies a penalty to (or drops) the learned weights to overcome overfitting and to improve the generalization of the model on new data. Some of the major techniques are implemented below.
L1 Regularization:
L2 Regularization:
Combined:
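A hedged sketch of all three penalties using Keras regularizers; the layer sizes mirror the earlier dense model, and the penalty factors (0.001) are assumptions.

from tensorflow.keras import layers, models, regularizers

def build_regularized(reg):
    # Apply the given weight penalty to the hidden layer's kernel
    model = models.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(32, activation="relu", kernel_regularizer=reg),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

l1_model = build_regularized(regularizers.l1(0.001))                    # L1 penalty
l2_model = build_regularized(regularizers.l2(0.001))                    # L2 penalty
l1l2_model = build_regularized(regularizers.l1_l2(l1=0.001, l2=0.001))  # combined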
Declaring the K-Fold module with n_splits = 4, then building and compiling the model:
Evaluating the testing loss and testing accuracy for each fold:
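A sketch of the manual K-Fold loop with n_splits = 4; build_model stands in for whichever architecture is being validated (here the dense model from earlier), and the epoch count is an assumption.

from sklearn.model_selection import KFold
from tensorflow.keras import layers, models

def build_model():
    model = models.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(32, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

kf = KFold(n_splits=4)
for fold, (train_idx, test_idx) in enumerate(kf.split(x_train), start=1):
    model = build_model()  # fresh model for every fold
    model.fit(x_train[train_idx], y_train[train_idx], epochs=3, verbose=0)
    loss, acc = model.evaluate(x_train[test_idx], y_train[test_idx], verbose=0)
    print("Fold", fold, "- testing loss:", loss, "testing accuracy:", acc)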
This implementation uses a for loop, but there is a more direct method for doing the same thing, illustrated below. Also, before applying K-Fold it is good practice to shuffle the data.
Direct use of K-Fold using the scikit-learn API with a Keras model.
Shuffling the data:
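A one-step sketch of shuffling the features and labels together, assuming scikit-learn's shuffle utility.

from sklearn.utils import shuffle

# Shuffle images and labels with the same permutation
x_shuffled, y_shuffled = shuffle(x_train, y_train, random_state=42)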
Loading the wrapper for using the scikit-learn API with Keras models, applying K-Fold with 5 splits, and averaging the accuracies across the folds:
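A sketch using the scikit-learn wrapper that shipped with older Keras / tf.keras releases (KerasClassifier; newer versions move this to the separate scikeras package), reusing build_model and the shuffled arrays from the sketches above.

from sklearn.model_selection import KFold, cross_val_score
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

estimator = KerasClassifier(build_fn=build_model, epochs=3, verbose=0)
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

scores = cross_val_score(estimator, x_shuffled, y_shuffled, cv=kfold)
print("Accuracy per fold:", scores)
print("Mean accuracy:", scores.mean())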
For K-Fold with 10 splits:
Comparison of different optimizers with K-Fold using 5 splits:
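A sketch of such a comparison, looping over a few common optimizers (the exact list used in the lab is an assumption) with 5-fold cross-validation and the same wrapper as above.

from sklearn.model_selection import KFold, cross_val_score
from tensorflow.keras import layers, models
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

def build_with_optimizer(optimizer="adam"):
    model = models.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(32, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

for opt in ["sgd", "rmsprop", "adagrad", "adam"]:  # assumed optimizer list
    est = KerasClassifier(build_fn=build_with_optimizer, optimizer=opt,
                          epochs=3, verbose=0)
    scores = cross_val_score(est, x_shuffled, y_shuffled,
                             cv=KFold(n_splits=5, shuffle=True, random_state=42))
    print(opt, "- mean accuracy:", scores.mean())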