Deep Learning Lab
Experiment 1: Solve the XOR Problem using a Multilayer Perceptron
Aim:
To build and train a multilayer perceptron that learns the XOR function, demonstrating how a hidden layer lets a neural network solve a problem that is not linearly separable.
Algorithm:
STEP 1: Prepare a dataset with input features (0 or 1) and corresponding XOR outputs.
STEP 2: Design a multilayer perceptron with input, hidden, and output layers.
STEP 3: Initialize the network's weights and biases (randomly, in the usual setup).
STEP 4: Choose a loss function (e.g., mean squared error) and an optimization algorithm (e.g., stochastic gradient descent).
STEP 5: Train the network using the dataset, adjusting weights and biases iteratively (see the sketch after this list).
STEP 6: Evaluate the trained network's performance on XOR inputs and compare with expected outputs.
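Steps 4 and 5 can also be worked out by hand. The following is a minimal NumPy sketch of one forward pass, mean-squared-error computation, and gradient-descent update for an assumed 2-2-1 sigmoid network; the layer width and learning rate are illustrative choices, not part of the exercise.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), np.zeros((1, 2))  # hidden layer (assumed width 2)
W2, b2 = rng.normal(size=(2, 1)), np.zeros((1, 1))  # output layer
lr = 0.5                                            # assumed learning rate

# Forward pass and mean squared error (STEP 4)
h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
loss = np.mean((out - y) ** 2)

# Backward pass through the sigmoids, then one SGD update (STEP 5)
d_out = 2 * (out - y) / len(X) * out * (1 - out)
d_h = (d_out @ W2.T) * h * (1 - h)
W2 -= lr * (h.T @ d_out)
b2 -= lr * d_out.sum(axis=0, keepdims=True)
W1 -= lr * (X.T @ d_h)
b1 -= lr * d_h.sum(axis=0, keepdims=True)
print("MSE before the update:", loss)

Repeating this update over many epochs is exactly what the Keras fit() call in the program below does internally.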
Program:
import numpy as np
import tensorflow as tf
# XOR inputs and their target outputs
x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_train = np.array([[0], [1], [1], [0]], dtype=np.float32)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='tanh', input_shape=(2,)),  # hidden layer
    tf.keras.layers.Dense(1, activation='sigmoid')                  # output layer
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(x_train, y_train, epochs=2000, verbose=0)
predictions = model.predict(x_train)
print("Predictions:")
for i in range(len(x_train)):
    print(x_train[i], "->", round(float(predictions[i][0]), 4))
Sample Output:
Predictions:
(four values close to the XOR targets 0, 1, 1, 0 for the inputs [0 0], [0 1], [1 0], [1 1])
Experiment 2: Implement Character and Digit Recognition using ANN
Aim:
To design and train an artificial neural network that recognizes character and digit images, demonstrating supervised learning for image classification.
Algorithm:
STEP 1: Prepare a dataset of character and digit images along with their labels.
STEP 2: Design an artificial neural network (ANN) with input, hidden, and output layers.
STEP 3: Preprocess the images, e.g., by normalizing pixel values and flattening each image into a vector.
STEP 4: Choose a loss function (e.g., categorical cross-entropy) and an optimizer (e.g., Adam).
STEP 5: Use backpropagation and gradient descent to optimize the network's parameters.
STEP 6: Evaluate the trained ANN's accuracy on a test dataset of character and digit images (see the evaluation sketch below).
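STEP 6 reduces to a single Keras call once the program below has been run; a minimal sketch, reusing the model, test_images, and test_labels names from that program:

# Evaluate classification accuracy on the held-out test set
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=0)
print("Test accuracy:", test_acc)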
Program:
import tensorflow as tf
from tensorflow import keras
import numpy as np
# Load the dataset (you may need to replace this with your own dataset)
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0  # scale pixels to [0, 1]
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
# Predict the label of one test image
sample_image = test_images[0]
sample_label = test_labels[0]
predictions = model.predict(sample_image[np.newaxis, ...])
predicted_label = np.argmax(predictions)
print("Predicted Label:", predicted_label)
Sample Output:
Epoch 1/5
Epoch 2/5
Epoch 3/5
Epoch 4/5
Epoch 5/5
Predicted Label: 7
Experiment 3: Implement Analysis of X-ray Image using Autoencoders
Aim:
To apply autoencoders to X-ray images for feature extraction and anomaly detection,
illustrating the potential of unsupervised learning in medical image analysis.
Algorithm:
STEP 1: Collect a dataset of X-ray images and preprocess them (resizing, normalization).
STEP 2: Design an autoencoder architecture with an encoder that compresses each image and a decoder that reconstructs it.
STEP 3: Define a suitable loss function, often using mean squared error.
STEP 4: Train the autoencoder using X-ray images to learn compact representations.
STEP 5: Measure reconstruction error on held-out images to validate the learned representations.
STEP 6: Use encoded representations for tasks like anomaly detection or image denoising (see the sketch after this list).
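For the anomaly-detection side of STEP 6, a common recipe is to flag images whose reconstruction error exceeds a threshold. A minimal sketch, assuming a trained autoencoder and a test_images array shaped (N, height, width); both names and the 3-sigma threshold are illustrative:

import numpy as np
# Per-image mean squared reconstruction error
reconstructions = autoencoder.predict(test_images)
errors = np.mean((test_images - reconstructions) ** 2, axis=(1, 2))
# Flag images that reconstruct much worse than average (assumed 3-sigma rule)
threshold = errors.mean() + 3 * errors.std()
anomalies = np.where(errors > threshold)[0]
print("Potentially anomalous image indices:", anomalies)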
Program:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
# Load a sample X-ray image (you may need to replace this with your own dataset)
# Here a random 64x64 array stands in for a real grayscale X-ray.
x_ray_image = np.random.rand(64, 64).astype(np.float32)
noisy_x_ray = np.clip(x_ray_image + 0.2 * np.random.randn(64, 64), 0.0, 1.0)
# Simple dense autoencoder: 64*64 pixels -> 128-unit code -> 64*64 pixels
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64)),
    tf.keras.layers.Dense(128, activation='relu'),          # encoder
    tf.keras.layers.Dense(64 * 64, activation='sigmoid'),   # decoder
    tf.keras.layers.Reshape((64, 64))
])
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
# Train the autoencoder to map the noisy image back to the clean one
autoencoder.fit(noisy_x_ray[np.newaxis, ...], x_ray_image[np.newaxis, ...],
                epochs=100, verbose=0)
denoised_x_ray = autoencoder.predict(noisy_x_ray[np.newaxis, ...])
plt.figure(figsize=(10, 5))
plt.subplot(1, 3, 1)
plt.imshow(x_ray_image, cmap='gray')
plt.title("Original X-ray")
plt.subplot(1, 3, 2)
plt.imshow(noisy_x_ray, cmap='gray')
plt.title("Noisy X-ray")
plt.subplot(1, 3, 3)
plt.imshow(denoised_x_ray[0], cmap='gray')
plt.title("Denoised X-ray")
plt.tight_layout()
plt.show()
Sample Output:
A figure with three panels showing the original X-ray, the noisy X-ray, and the denoised reconstruction side by side.
Experiment 4: Implement Speech Recognition using RNN
Aim:
To build a deep learning model that maps spoken audio to text, demonstrating sequence modeling for speech recognition.
Algorithm:
STEP 1: Collect a dataset of spoken audio samples and their corresponding transcripts.
STEP 2: Preprocess the audio data by converting it into spectrograms or other suitable representations (see the spectrogram sketch after this list).
STEP 3: Design a deep learning model, such as a recurrent neural network (RNN) or a transformer, for speech recognition.
STEP 4: Implement a suitable loss function, like connectionist temporal classification (CTC) loss.
STEP 5: Train the model on the preprocessed audio data and their transcripts.
STEP 6: Evaluate the model's performance by measuring word error rate or other relevant metrics.
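For STEP 2, a magnitude spectrogram can be computed with TensorFlow's short-time Fourier transform; the waveform below is synthetic noise standing in for a real recording, and the frame parameters are illustrative:

import numpy as np
import tensorflow as tf
# One second of synthetic audio at 16 kHz (stands in for a real recording)
waveform = tf.constant(np.random.randn(16000), dtype=tf.float32)
stft = tf.signal.stft(waveform, frame_length=255, frame_step=128)
spectrogram = tf.abs(stft)  # magnitude spectrogram: (time frames, frequency bins)
print(spectrogram.shape)    # (124, 129) for these frame parameters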
Program:
import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Input, Dense, LSTM
# Generate sample audio features (you would use actual audio features in practice)
num_words = 10  # size of the toy vocabulary
x_train = np.random.rand(50, 100, 20)  # 50 clips, 100 time steps, 20 features
y_train = tf.keras.utils.to_categorical(np.random.randint(num_words, size=50), num_words)
# Continuous audio features feed the LSTM directly (no embedding layer needed)
inputs = Input(shape=(100, 20))
lstm = LSTM(128)(inputs)
outputs = Dense(num_words, activation='softmax')(lstm)
speech_recognition_model = tf.keras.Model(inputs, outputs)
speech_recognition_model.compile(optimizer='adam', loss='categorical_crossentropy',
                                 metrics=['accuracy'])
speech_recognition_model.fit(x_train, y_train, epochs=5, verbose=0)
# Sample audio input for prediction (you would use actual audio input in practice)
sample_input_audio = np.random.rand(1, 100, 20)  # single input, 100 time steps, 20 features each
predicted_probs = speech_recognition_model.predict(sample_input_audio)
predicted_word_index = np.argmax(predicted_probs)
print("Predicted Word Index:", predicted_word_index)
Sample Output:
The output will display the predicted word index based on the sample audio input.
Experiment 5: Develop Object Detection and Classification for Traffic Analysis using CNN
Aim:
To develop a CNN-based system that detects and classifies objects in traffic images, demonstrating the use of deep learning in traffic analysis.
Algorithm:
STEP 1: Assemble a dataset of traffic images with object annotations and labels.
STEP 2: Design a convolutional neural network (CNN) architecture for object detection and classification.
STEP 3: Preprocess the images, e.g., by resizing them to a fixed shape and normalizing pixel values.
STEP 4: Choose appropriate loss functions for object detection and classification, such as region proposal network (RPN) loss and categorical cross-entropy.
STEP 5: Train the CNN on the dataset and fine-tune the model.
STEP 6: Evaluate the model's performance in terms of object detection accuracy and classification accuracy (an IoU sketch follows this list).
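Detection accuracy in STEP 6 is typically scored by intersection-over-union (IoU) between predicted and ground-truth boxes; a small self-contained sketch:

def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); returns intersection-over-union in [0, 1]
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # prints about 0.143

A prediction is usually counted as correct when its IoU with a ground-truth box exceeds a fixed threshold, commonly 0.5.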
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
# Generate sample traffic images and labels (you would use actual data in practice)
x_train = np.random.rand(100, 64, 64, 3)  # 100 RGB images of size 64x64
y_train = np.random.randint(2, size=100)  # binary labels: object present or not
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, verbose=0)
# Sample traffic image for prediction (you would use actual images in practice)
sample_traffic_image = np.random.rand(1, 64, 64, 3)
prediction = model.predict(sample_traffic_image)
print("Predicted probability of positive class:", prediction[0][0])
Sample Output:
The output will display the predicted probability of the sample traffic image belonging to the
positive class (1) based on the trained model. This is a simplified example, and in practice,
you would use real traffic images and labels for training and testing.
Experiment 6: Implement Online Fraud Detection of Share Market Data using Data Analytics Tools
Aim:
To implement a data analytics solution for real-time fraud detection in share market
transactions, showcasing the application of data analytics in financial security.
Algorithm:
STEP 1: Collect a dataset of share market transactions.
STEP 2: Preprocess the data by cleaning, transforming, and aggregating relevant features.
STEP 3: Select the features most indicative of abnormal trading behavior.
STEP 4: Split the data into training and testing sets.
STEP 5: Apply machine learning algorithms, like isolation forests or clustering, to detect anomalies.
STEP 6: Evaluate the effectiveness of the fraud detection method using metrics like precision, recall, and F1-score (see the metrics sketch below).
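The STEP 6 metrics can be computed with scikit-learn once ground-truth fraud labels are available; the labels and predictions below are illustrative placeholders:

from sklearn.metrics import precision_score, recall_score, f1_score
y_true = [0, 0, 1, 0, 1, 0, 0, 1]  # 1 = known fraudulent transaction
y_pred = [0, 0, 1, 1, 1, 0, 0, 0]  # 1 = flagged as anomalous by the model
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))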
Program:
import pandas as pd
from sklearn.ensemble import IsolationForest
# Load sample share market data (you would use actual data in practice)
data = pd.read_csv('sample_market_data.csv')
train_data = data.select_dtypes(include='number')  # numeric features only
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(train_data)
print("Number of anomalies detected:", (model.predict(train_data) == -1).sum())
Sample Output:
The output will display the number of anomalies detected by the Isolation Forest model.
Experiment 7: Implement Image Augmentation using Deep RBM
Aim:
To use stacked restricted Boltzmann machines (RBMs) to generate augmented versions of input images, demonstrating unsupervised feature learning for data augmentation.
Algorithm:
STEP 1: Prepare a dataset of input images and normalize pixel values.
STEP 2: Design a deep RBM by stacking two or more restricted Boltzmann machines.
STEP 3: Train the RBM using the input images to learn patterns and features.
STEP 4: Implement image augmentation techniques using the learned RBM representations, such as noise injection or distortion.
STEP 5: Apply the augmented images for training a separate deep learning model (e.g., CNN).
STEP 6: Compare the performance of the model trained with and without image augmentation (see the comparison sketch after this list).
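STEP 6 can be sketched by training the same classifier twice, once on the original images and once on the original plus augmented images; the random data and logistic-regression classifier below are stand-ins for the real images and CNN:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_orig = rng.random((100, 784))                   # placeholder images
y = rng.integers(0, 2, 100)                       # placeholder labels
X_aug = np.clip(X_orig + 0.1 * rng.standard_normal(X_orig.shape), 0, 1)  # stands in for RBM output
X_test = rng.random((50, 784))
y_test = rng.integers(0, 2, 50)

clf_plain = LogisticRegression(max_iter=500).fit(X_orig, y)
clf_aug = LogisticRegression(max_iter=500).fit(np.vstack([X_orig, X_aug]), np.hstack([y, y]))
print("Without augmentation:", clf_plain.score(X_test, y_test))
print("With augmentation:   ", clf_aug.score(X_test, y_test))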
Program:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import BernoulliRBM
# Generate a sample dataset of images (you would use actual images in practice)
num_samples = 100
image_size = 28 * 28
original_images = np.random.rand(num_samples, image_size)

def rbm_reconstruct(rbm, hidden):
    # Map hidden activations back to visible units
    # (scikit-learn's BernoulliRBM has no inverse_transform method)
    return 1.0 / (1.0 + np.exp(-(hidden @ rbm.components_ + rbm.intercept_visible_)))

# Stack two RBMs: the first learns pixel-level features, the second higher-level ones
rbm = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10, random_state=42)
rbm.fit(original_images)
hidden_features = rbm.transform(original_images)
rbm2 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=42)
rbm2.fit(hidden_features)
# Augment by perturbing the deepest representation with noise, then reconstruct
augmented_features = np.clip(rbm2.transform(hidden_features) +
                             0.1 * np.random.randn(num_samples, 32), 0.0, 1.0)
reconstructed_hidden_features = rbm_reconstruct(rbm2, augmented_features)
reconstructed_images = rbm_reconstruct(rbm, reconstructed_hidden_features)
plt.figure(figsize=(10, 4))
for i in range(5):
    plt.subplot(2, 5, i + 1)
    plt.imshow(original_images[i].reshape(28, 28), cmap='gray')
    plt.title("Original")
    plt.subplot(2, 5, i + 6)
    plt.imshow(reconstructed_images[i].reshape(28, 28), cmap='gray')
    plt.title("Augmented")
plt.tight_layout()
plt.show()
Sample Output:
A figure showing five original images in the top row and their RBM-augmented reconstructions in the bottom row.
Experiment 8: Implement Sentiment Analysis using LSTM
Aim:
To build an LSTM network that classifies text as positive or negative, demonstrating recurrent models for sentiment analysis.
Algorithm:
STEP 1: Collect a dataset of text samples labeled with their sentiment.
STEP 2: Preprocess the text data by tokenizing, padding, and converting to word embeddings.
STEP 3: Design a long short-term memory (LSTM) neural network architecture for sentiment analysis.
STEP 4: Choose a suitable loss function, like binary cross-entropy, for sentiment prediction (a worked example follows this list).
STEP 5: Train the LSTM on the padded sequences and their labels.
STEP 6: Evaluate the model on held-out text and predict the sentiment of new samples.
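The binary cross-entropy of STEP 4 is loss = -mean[y*log(p) + (1-y)*log(1-p)], where y is the true label and p the predicted probability; a worked NumPy check on three illustrative predictions:

import numpy as np
y = np.array([1, 0, 1])        # true sentiment labels
p = np.array([0.9, 0.2, 0.6])  # predicted probabilities of "positive"
bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(bce)  # about 0.28; confident, correct predictions give a low loss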
Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Sample text data for sentiment analysis (you would use actual text data in practice)
texts = [
    "I love this product.",
    "This is terrible.",
    "Absolutely wonderful experience.",
    "Worst purchase I have ever made."
]
labels = np.array([1, 0, 1, 0])  # 1 = positive, 0 = negative
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(texts)
padded_sequences = pad_sequences(sequences, maxlen=10)
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(word_index) + 1, output_dim=16, input_length=10),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(padded_sequences, labels, epochs=20, verbose=0)
# Sample test text for prediction (you would use actual text in practice)
sample_test_text = ["I really love this."]
sample_test_sequence = tokenizer.texts_to_sequences(sample_test_text)
sample_padded_sequence = pad_sequences(sample_test_sequence, maxlen=10)
prediction = model.predict(sample_padded_sequence)
print("Predicted Sentiment:", "Positive" if prediction[0][0] > 0.5 else "Negative")
Sample Output:
The output will display the predicted sentiment (positive or negative) of the sample test text
based on the trained LSTM model.
Experiment 9: Number Plate Recognition of Traffic Video Analysis (Mini Project)
Aim:
To create a system that can automatically recognize and extract number plate
information from traffic videos, showcasing the practical application of computer vision
techniques in traffic management and law enforcement.
Algorithm:
STEP 1: Collect a dataset of traffic videos containing scenes with number plates.
STEP 2: Extract frames from the videos and preprocess them (e.g., grayscale conversion and noise reduction).
STEP 3: Design a pipeline that combines object detection for locating number plates and optical character recognition (OCR) for reading the characters (a localization sketch follows this list).
STEP 4: Implement a suitable OCR algorithm, such as Tesseract, to extract characters from number plates.
STEP 5: Train and fine-tune the OCR model using labeled character data.
STEP 6: Apply the pipeline to the video frames, recognize number plates, and output the results with accuracy scores.
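The plate-localization half of STEP 3 can be sketched with classical OpenCV contour analysis before handing the crop to Tesseract; the aspect-ratio and size bounds below are illustrative assumptions, not tuned values:

import cv2

def find_plate_candidates(gray):
    # Edge-detect, then keep contours whose bounding box looks plate-shaped
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w > 60:  # assumed plate aspect ratio and size
            candidates.append((x, y, w, h))
    return candidates

# Usage on a grayscale frame from the program below:
# for (x, y, w, h) in find_plate_candidates(gray):
#     plate_text = pytesseract.image_to_string(gray[y:y+h, x:x+w], config='--psm 7')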
Program:
import cv2
import numpy as np
import pytesseract
# Load a sample traffic video (you would use an actual video in practice)
video_path = 'sample_traffic_video.mp4'
cap = cv2.VideoCapture(video_path)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Convert the frame to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Apply image processing techniques (you would use appropriate techniques in practice)
    # Here a simple threshold stands in for plate localization.
    _, thresh = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    # Run OCR on the processed frame to read any number plate text
    plate_text = pytesseract.image_to_string(thresh, config='--psm 7')
    print("Recognized Number Plate:", plate_text.strip())
    cv2.imshow('Traffic Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Sample Output:
As the code runs, a window will open displaying the traffic video frames. In the terminal, the
recognized number plate text will be printed for each frame.