DL Practical
Material Required:
● Python
● TensorFlow Library
● NumPy Library
● scikit-learn Library
● Seaborn Library
● Matplotlib Library
● MNIST Dataset (automatically downloaded via TensorFlow)
Methodology:
● Fit the model to the training data for a specified number of epochs (e.g., 10).
● Validate the model using the test data.
● Compute the accuracy, error rate, precision, and recall of the model using scikit-learn
metrics.
Code:
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import accuracy_score, precision_score, recall_score
import numpy as np
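Only the imports of the original listing are shown above. Using those imports, the following is a minimal sketch of the remaining steps from the methodology: building a small classifier, fitting it for 10 epochs with the test split for validation, and computing accuracy, error rate, precision, and recall with scikit-learn. The architecture and hyperparameters here are assumptions, not the original code.

# Load MNIST and scale pixel values to [0, 1]
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

# A small dense classifier (architecture is an assumption)
model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Fit for 10 epochs, validating on the test data
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))

# Evaluate with scikit-learn metrics
y_pred = np.argmax(model.predict(X_test), axis=1)
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy  :", accuracy)
print("Error rate:", 1 - accuracy)
print("Precision :", precision_score(y_test, y_pred, average='macro'))
print("Recall    :", recall_score(y_test, y_pred, average='macro'))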
Output:
PRACTICAL-2
Aim: To construct a confusion matrix for evaluating a trained neural network model and to simulate
overfitting by using a complex model architecture on the MNIST dataset.
Material Required:
● Python 3.x
● TensorFlow
● NumPy
● Matplotlib
● Seaborn
● Jupyter Notebook or any Python IDE
Theory:
Confusion Matrix
● Definition: A confusion matrix is a table used to describe the performance of a classification
model on data for which the true labels are known. It lets the per-class performance of the
algorithm be visualized at a glance.
● Components:
○ True Positives (TP): Correctly predicted positive cases.
○ True Negatives (TN): Correctly predicted negative cases.
○ False Positives (FP): Incorrectly predicted positive cases.
○ False Negatives (FN): Incorrectly predicted negative cases.
Overfitting
● Definition: Overfitting occurs when a model learns the training data too well, including its
noise and outliers, resulting in poor generalization to new data.
● Indication: A significant gap between training accuracy and validation/test accuracy is a sign
of overfitting.
Experimental Procedure:
Step 1: Load and Preprocess the MNIST Dataset
import tensorflow as tf
import numpy as np

# Generate predictions (the data loading, preprocessing and model training
# steps are omitted from this listing)
y_pred = np.argmax(model.predict(X_test), axis=1)
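The rest of the listing is omitted, so the following self-contained sketch covers the full procedure described in the theory: training a deliberately large model on a small subset of MNIST to provoke overfitting, comparing train and test accuracy, and plotting the confusion matrix as a Seaborn heatmap. The architecture, subset size, and epoch count are assumptions rather than the original code.

import tensorflow as tf
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Load MNIST and scale pixel values to [0, 1]
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

# A deliberately large dense network trained on a small subset,
# which encourages overfitting (sizes here are assumptions)
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(X_train[:2000], y_train[:2000], epochs=30,
                    validation_data=(X_test, y_test), verbose=0)

# Overfitting shows up as a gap between training and validation accuracy
print("Final train accuracy:", history.history['accuracy'][-1])
print("Final test accuracy :", history.history['val_accuracy'][-1])

# Confusion matrix of true vs. predicted digits
y_pred = np.argmax(model.predict(X_test), axis=1)
cm = confusion_matrix(y_test, y_pred)
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.title('MNIST confusion matrix')
plt.show()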
Output:
PRACTICAL NO. 3
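The listing for this practical begins partway through the drawing routine. So that the surviving fragment below reads as one function, here is a minimal reconstruction of the assumed opening: the imports, a layered graph built with networkx, the node positions, and the node-drawing loop. The names G and positions match the fragment, but the exact construction is an assumption rather than the original code.

import matplotlib.pyplot as plt
import networkx as nx

def draw_neural_net(ax, left, right, bottom, top, layer_sizes):
    # Build a layered graph: one node per neuron, edges between
    # consecutive layers (assumed reconstruction of the omitted code)
    G = nx.DiGraph()
    positions = {}
    v_spacing = (top - bottom) / float(max(layer_sizes))
    h_spacing = (right - left) / float(len(layer_sizes) - 1)
    node_id = 0
    layer_nodes = []
    for n, layer_size in enumerate(layer_sizes):
        layer_top = v_spacing * (layer_size - 1) / 2.0 + (top + bottom) / 2.0
        nodes = []
        for m in range(layer_size):
            G.add_node(node_id)
            positions[node_id] = (left + n * h_spacing, layer_top - m * v_spacing)
            nodes.append(node_id)
            node_id += 1
        layer_nodes.append(nodes)
    for prev, nxt in zip(layer_nodes[:-1], layer_nodes[1:]):
        for u in prev:
            for v in nxt:
                G.add_edge(u, v)

    # Draw nodes as circles on top of the edges
    for node, (x, y) in positions.items():
        ax.add_patch(plt.Circle((x, y), v_spacing / 4.0, fc='w', ec='k', zorder=4))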
    # Draw edges
    for edge in G.edges():
        x_start, y_start = positions[edge[0]]
        x_end, y_end = positions[edge[1]]
        ax.plot([x_start, x_end], [y_start, y_end], 'k-', alpha=0.5)

    ax.set_xlim(left, right)
    ax.set_ylim(bottom, top)
    ax.axis('off')

# Example usage
fig, ax = plt.subplots(figsize=(12, 8))
draw_neural_net(ax, left=0, right=5, bottom=0, top=10, layer_sizes=[3, 5, 3])
plt.show()
Output:
PRACTICAL NO. 4
Material Required:
● Python
● TensorFlow Library
● NumPy Library
● scikit-learn Library
● Seaborn Library
● Matplotlib Library
● MNIST Dataset (automatically downloaded via TensorFlow)
Introduction
Object detection is a very important problem in computer vision. The model is tasked with localizing
the objects present in an image and, at the same time, classifying them into different categories. Object
detection models can be broadly classified into "single-stage" and "two-stage" detectors. Two-stage
detectors are often more accurate, but at the cost of being slower. In this example we consider
RetinaNet, a popular single-stage detector that is both accurate and fast. RetinaNet uses a feature
pyramid network to efficiently detect objects at multiple scales and introduces a new loss, the focal
loss function, to alleviate the problem of extreme foreground-background class imbalance.
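As a brief illustration of the focal loss idea (not part of the original listing), the binary form is FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), which down-weights well-classified examples so training concentrates on the hard foreground cases. A minimal TensorFlow sketch, using the commonly cited defaults alpha = 0.25 and gamma = 2, might look like this:

import tensorflow as tf

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0):
    # FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t)
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
    p_t = tf.where(tf.equal(y_true, 1.0), y_pred, 1.0 - y_pred)
    alpha_t = tf.where(tf.equal(y_true, 1.0), alpha, 1.0 - alpha)
    return -alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)

# Easy examples (p_t = 0.9) contribute far less loss than a hard one (p_t = 0.3)
y_true = tf.constant([1.0, 0.0, 1.0])
y_pred = tf.constant([0.9, 0.1, 0.3])
print(focal_loss(y_true, y_pred).numpy())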
Code:-
# import packages
from imutils.video import VideoStream
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import time
import cv2
fps = FPS().start()
while True:
    # Grab the frame from the threaded video stream (vs) and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    print(frame.shape)  # e.g. (225, 400, 3)

    # Grab the frame dimensions: here h = 225 and w = 400
    (h, w) = frame.shape[:2]

    # Resize each frame and convert it to a blob for the detector.
    # The scale factor and mean value below are the usual MobileNet-SSD
    # preprocessing constants (assumed; not shown in the original listing)
    resized_image = cv2.resize(frame, (300, 300))
    blob = cv2.dnn.blobFromImage(resized_image, 0.007843, (300, 300), 127.5)

    # Pass the blob through the network (net is the pre-trained detector
    # loaded earlier with cv2.dnn, omitted in this listing)
    net.setInput(blob)
    predictions = net.forward()

    # Confidence of the i-th detection
    for i in np.arange(0, predictions.shape[2]):
        confidence = predictions[0, 0, i, 2]

    fps.update()
Output:-
PRACTICAL NO. 5
Material Required:
● Python
● TensorFlow Library
● NumPy Library
● scikit-learn Library
● Seaborn Library
● Matplotlib Library
Introduction:
Recommendation systems built in Python use a data-driven methodology to offer users tailored
suggestions. They apply algorithms to user data to forecast and recommend goods, services, or content
that a user is likely to find interesting. These systems are essential in applications where users can be
overwhelmed by large volumes of information, such as social media, streaming services, and
e-commerce. Python is a common choice for building recommendation systems because of its libraries
and machine learning frameworks. The two main kinds are content-based filtering (which considers
the characteristics of items and user profiles) and collaborative filtering (which generates
recommendations from user behaviour and preferences). Hybrid strategies that integrate the two
approaches are also popular.
Code:
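The original code listing for this practical is not reproduced here. As a minimal illustration of the content-based approach described above, the sketch below uses scikit-learn's TF-IDF vectorizer and cosine similarity; the catalogue of titles and descriptions is invented for the example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny hypothetical catalogue (titles and descriptions are invented)
titles = ["Space Saga", "Romance in Paris", "Galactic Wars", "Cooking 101"]
descriptions = [
    "sci-fi space adventure with aliens",
    "romantic drama set in Paris",
    "epic sci-fi battles between star fleets",
    "learn to cook simple healthy meals",
]

# Content-based filtering: represent each item by the TF-IDF of its
# description, then recommend the items most similar to one the user liked
tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(descriptions)
similarity = cosine_similarity(matrix)

def recommend(title, top_n=2):
    idx = titles.index(title)
    ranked = similarity[idx].argsort()[::-1]          # most similar first
    ranked = [i for i in ranked if i != idx][:top_n]  # skip the item itself
    return [titles[i] for i in ranked]

print(recommend("Space Saga"))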
PRACTICAL NO. 6
Introduction:
Backpropagation is a key algorithm used to train artificial neural networks by minimizing the error
between the predicted and actual outputs. It works through two main steps: forward propagation and
backward propagation. In forward propagation, input data moves through the network layers, where
weights, biases, and activation functions transform the data to produce an output. The error is
calculated by comparing this output to the actual target value. During backward propagation, the error
is propagated back through the network, adjusting the weights and biases using the gradient descent
method to reduce the error. This iterative process continues until the network's predictions become
more accurate.
Key Components
Neural Network Architecture: A feedforward neural network with multiple layers.
Activation Function: A sigmoid function will be used to introduce non-linearity.
Loss Function: Mean squared error (MSE) will be used to measure the difference between
predicted and actual outputs.
Optimization: Gradient descent will be used to update the weights and biases.
Implementation Steps
1. Define the Neural Network:
o Create classes for neurons and layers.
o Initialize weights and biases randomly.
2. Forward Pass:
o Calculate the weighted sum of inputs for each neuron.
o Apply the activation function to obtain the output.
o Propagate the output to the next layer.
3. Backward Pass:
o Calculate the error gradient for the output layer.
o Propagate the error gradient backward through the network, calculating gradients for
each layer's weights and biases.
4. Update Weights and Biases:
o Use gradient descent to update the weights and biases based on the calculated gradients.
5. Repeat:
o Iterate through the training data multiple times (epochs), updating weights and biases
in each iteration.
Applications of Backpropagation:
1. Image Recognition: Backpropagation is widely used in convolutional neural networks (CNNs)
for tasks like object detection, face recognition, and image classification.
2. Natural Language Processing (NLP): It powers language models for applications like
sentiment analysis, machine translation, and speech recognition.
3. Medical Diagnosis: Neural networks trained using backpropagation help in detecting diseases
from medical images, such as identifying tumors in MRI scans.
4. Financial Forecasting: Used to predict stock prices, market trends, and credit risk assessments
by learning from historical data.
5. Autonomous Systems: In robotics and self-driving cars, backpropagation enables learning
from environmental data, improving decision-making in real-time.
Code:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is assumed to already be a sigmoid output
    return x * (1 - x)

# Training parameters
learning_rate = 0.1
epochs = 10000

# Training loop (X, y and the parameters W1, b1, W2, b2 are initialised in
# the part of the listing omitted here)
for epoch in range(epochs):
    # Forward pass
    hidden_input = np.dot(X, W1) + b1                 # Input to hidden layer
    hidden_output = sigmoid(hidden_input)             # Output from hidden layer
    y_pred = sigmoid(np.dot(hidden_output, W2) + b2)  # Output layer

    # Backpropagation: compute the error in the output and push it back
    error_output = y_pred - y
    delta_output = error_output * sigmoid_derivative(y_pred)
    delta_hidden = delta_output.dot(W2.T) * sigmoid_derivative(hidden_output)

    # Gradient-descent updates
    W2 -= learning_rate * hidden_output.T.dot(delta_output)
    b2 -= learning_rate * np.sum(delta_output, axis=0, keepdims=True)
    W1 -= learning_rate * X.T.dot(delta_hidden)
    b1 -= learning_rate * np.sum(delta_hidden, axis=0, keepdims=True)

    if epoch % 1000 == 0:
        print(f"Epoch {epoch}, Loss: {np.mean(error_output ** 2):.4f}")
Output:
Epoch 0, Loss: 0.2558
Epoch 1000, Loss: 0.2494
Epoch 2000, Loss: 0.2454
Epoch 3000, Loss: 0.2047
Epoch 4000, Loss: 0.1532
Epoch 5000, Loss: 0.1387
Epoch 6000, Loss: 0.1336
Epoch 7000, Loss: 0.1312
Epoch 8000, Loss: 0.1297
Epoch 9000, Loss: 0.1288
PRACTICAL NO. 7
Introduction:
Neural networks are powerful tools for solving complex problems by learning patterns from data. The
backpropagation algorithm is a key component of neural networks, allowing them to learn by adjusting
their internal weights based on the error between predicted and actual outputs. By iteratively updating
the weights in response to errors, the network becomes increasingly accurate in its predictions.
Backpropagation, or "backward propagation of errors," is an efficient way to compute gradients of the
loss function with respect to the weights in the network, using the chain rule of calculus. These
gradients are then used by optimization algorithms, such as gradient descent, to update the network
weights and minimize the loss.
Key Components:
Neural Network Architecture: A feedforward neural network with multiple layers.
Activation Function: A sigmoid function will be used to introduce non-linearity.
Loss Function: Mean squared error (MSE) will be used to measure the difference between
predicted and actual outputs.
Optimization: Gradient descent will be used to update the weights and biases.
Implementation Steps:
1. Define the Neural Network:
o Create classes for neurons and layers.
o Initialize weights and biases randomly.
2. Forward Pass:
o Calculate the weighted sum of inputs for each neuron.
o Apply the activation function to obtain the output.
o Propagate the output to the next layer.
3. Backward Pass:
o Calculate the error gradient for the output layer.
o Propagate the error gradient backward through the network, calculating gradients for
each layer's weights and biases.
4. Update Weights and Biases:
o Use gradient descent to update the weights and biases based on the calculated gradients.
5. Repeat:
o Iterate through the training data multiple times (epochs), updating weights and biases
in each iteration.
Applications:
The backpropagation algorithm is widely used in various machine learning tasks, including:
Image Classification: Training deep learning models like convolutional neural networks
(CNNs) to classify images.
Natural Language Processing (NLP): For tasks such as sentiment analysis, machine
translation, and text classification.
Recommendation Systems: To learn user-item interactions and make personalized
recommendations.
Financial Forecasting: Predicting stock prices or risk factors in financial markets.
Medical Diagnostics: Helping to predict the likelihood of diseases based on patient data.
Code:
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Embedding, Dot, Lambda, Reshape, Flatten
from tensorflow.keras import backend as K # Corrected import of K
    # ... (body of the model-building function is omitted in this listing) ...
    return model

# Sample data (num_users and num_items are assumed to be defined in the omitted code)
user_ids = np.random.randint(0, num_users, size=(100,))
item_ids = np.random.randint(0, num_items, size=(100,))
positive_item_ids = np.random.randint(0, num_items, size=(100,))
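Because the model-building function above is truncated, the sketch below shows one plausible shape for it, assuming a BPR-style triplet setup (user, positive item, negative item) consistent with the imports and sample data shown. The function name, layer sizes, and loss are assumptions, not the original author's code.

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, Dot, Flatten, Subtract
from tensorflow.keras import backend as K

num_users, num_items, embedding_dim = 100, 50, 16   # assumed sizes

def build_model():
    user_in = Input(shape=(1,), name="user")
    pos_in = Input(shape=(1,), name="positive_item")
    neg_in = Input(shape=(1,), name="negative_item")

    user_emb = Embedding(num_users, embedding_dim)(user_in)
    item_emb = Embedding(num_items, embedding_dim)   # shared item embedding

    # Score each (user, item) pair by the dot product of their embeddings
    pos_score = Flatten()(Dot(axes=-1)([user_emb, item_emb(pos_in)]))
    neg_score = Flatten()(Dot(axes=-1)([user_emb, item_emb(neg_in)]))
    diff = Subtract()([pos_score, neg_score])

    model = Model([user_in, pos_in, neg_in], diff)
    # BPR-style loss: push the positive item's score above the negative's
    model.compile(optimizer="adam",
                  loss=lambda y_true, y_pred: -K.mean(K.log(K.sigmoid(y_pred))))
    return model

model = build_model()
users = np.random.randint(0, num_users, size=(100, 1))
pos_items = np.random.randint(0, num_items, size=(100, 1))
neg_items = np.random.randint(0, num_items, size=(100, 1))
model.fit([users, pos_items, neg_items], np.ones((100, 1)), epochs=10, batch_size=32)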
Output:
4/4 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 1.0344
Epoch 2/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.9939
Epoch 3/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.9668
Epoch 4/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.9392
Epoch 5/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.9301
Epoch 6/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.8931
Epoch 7/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.8598
Epoch 8/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 4ms/step - loss: 0.8409
Epoch 9/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.8212
Epoch 10/10
4/4 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step - loss: 0.7808
PRACTICAL NO. 8
Introduction:
Fully Convolutional Neural Networks (FCNNs) are a class of deep learning architectures that have
gained significant attention due to their ability to process input images of arbitrary size and produce
output maps of corresponding size. Unlike traditional convolutional neural networks (CNNs) that
typically end with fully connected layers, FCNNs consist entirely of convolutional layers, making them
more flexible and efficient for tasks like semantic segmentation and object detection.
Key characteristics of FCNNs:
No fully connected layers: FCNNs replace fully connected layers with convolutional layers,
allowing them to process input images of any size without requiring fixed-size input.
Upsampling layers: FCNNs often use upsampling layers (e.g., transposed convolutions) to
increase the spatial resolution of the feature maps, enabling them to generate output maps that
match the size of the input image.
Dense prediction: FCNNs produce dense predictions, meaning they assign a class label or
probability to each pixel in the input image.
Applications of FCNNs:
Semantic segmentation: Assigning a semantic label (e.g., car, person, road) to each pixel in
an image.
Object detection: Identifying and localizing objects within an image.
Image generation: Generating new images or modifying existing ones.
Medical image analysis: Analyzing medical images for tasks like tumor detection and
segmentation.
Advantages of FCNNs:
Flexibility: FCNNs can handle input images of arbitrary size, making them suitable for a wide
range of applications.
Efficiency: FCNNs can be more efficient than traditional CNNs, especially for large input
images.
End-to-end learning: FCNNs can learn feature extraction, classification, and localization in a
single end-to-end process.
Code:
# Install dependencies first (run in a shell or notebook cell, not inside the script):
# pip install torch torchvision

import torch
import torch.nn as nn
import torchvision.models as models

        # ... (body of the FCN class's forward() method is omitted in this listing) ...
        return x

# Initialize the FCN model
num_classes = 21  # For example, 21 classes in the Pascal VOC dataset
model = FCN(num_classes)
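Since the FCN class definition is omitted from the listing above, the following is a minimal sketch of what a small fully convolutional network could look like in PyTorch: a convolutional encoder, a 1x1 classification layer, and transposed-convolution upsampling back to the input resolution. The class name SimpleFCN and the layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class SimpleFCN(nn.Module):
    # A toy fully convolutional network: conv encoder, 1x1 classifier,
    # transposed-conv upsampling back to the input resolution
    def __init__(self, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/2 resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/4 resolution
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=4)

    def forward(self, x):
        x = self.encoder(x)        # (N, 64, H/4, W/4)
        x = self.classifier(x)     # (N, num_classes, H/4, W/4)
        x = self.upsample(x)       # (N, num_classes, H, W) dense per-pixel scores
        return x

model = SimpleFCN(num_classes=21)            # e.g. 21 Pascal VOC classes
scores = model(torch.randn(1, 3, 128, 128))
print(scores.shape)                          # torch.Size([1, 21, 128, 128])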