
DEPARTMENT OF COMPUTER SCIENCE AND DESIGN

SCHOOL OF COMPUTING
10214CD601 MINOR PROJECT - 1
SUMMER SEMESTER (2023-2024)
REVIEW - II

“SKIN CANCER DETECTION”

SUPERVISED BY:
DR. R. SRINIVASAN, M.E., Ph.D.

PRESENTED BY:
1. ABHISHEK KUMAR (VTU20712) (21UEDL0003)
2. RAMAKRISHNA REDDY (VTU20745) (21UEDL0010)
3. PAVAN SESHU KUMAR (VTU20704) (21UEDL0005)

OVERVIEW
ABSTRACT
OBJECTIVES
TIMELINE OF THE PROJECT
INTRODUCTION
LITERATURE REVIEW
DESIGN AND METHODOLOGIES
IMPLEMENTATION
CONCLUSION
REFERENCES

ABSTRACT

➢ INTRODUCTION:
• Skin cancer ranks as the most common type of cancer globally.
• The increasing incidence underscores the need for effective detection methods.
➢ PURPOSE:
• Early detection enables prompt medical intervention, increasing the likelihood of successful treatment and better patient outcomes.
• Identifying skin cancer lesions in their early stages helps prevent the progression of the disease to more advanced and potentially life-threatening stages.
➢ METHOD:
• Dermatologists and healthcare professionals conduct visual inspections of the skin to identify suspicious lesions.
• This method relies on the experience and expertise of the examiner in recognizing irregularities in size, shape, color, and texture.
➢ RESULTS:
• In many cases, skin lesions may be identified as benign, indicating that they are non-cancerous.
• This result brings relief to the individual and often requires no further treatment, though monitoring may be recommended.
➢ CONCLUSION:
• Skin cancer detection holds immense importance in addressing a widespread public health concern, given the increasing incidence of skin cancer globally.

OBJECTIVE

➢ Aim of the project:
• Develop methods and algorithms that enable the early detection of skin cancer, emphasizing the identification of malignant lesions in their initial stages.
• Create a system that provides accurate and reliable results, minimizing the chances of false positives or false negatives in the diagnosis of skin cancer.

➢ Scope of the project:
• Design and implement advanced algorithms, leveraging artificial intelligence and machine learning, to analyze skin images and identify potential cancerous lesions.
• Integrate diverse datasets, including dermatological images, patient records, and clinical data, to enhance the accuracy and reliability of the detection system.
• Ensure that the skin cancer detection system can be seamlessly integrated into existing healthcare infrastructure, including electronic health records (EHR) systems.

TIMELINE OF THE PROJECT

GANTT CHART

INTRODUCTION

➢ Overview:

➢ Skin cancer stands as one of the most prevalent forms of cancer worldwide, with an increasing incidence over recent years.
➢ Early detection is paramount in effectively managing skin cancer, as it allows for timely intervention and improves the chances of successful treatment outcomes.
➢ Recent advancements in technology, particularly in artificial intelligence and machine learning, have ushered in innovative approaches to skin cancer detection.
➢ Various methods, ranging from traditional visual inspections to advanced computer-aided diagnosis (CAD) systems, contribute to a comprehensive toolkit for identifying skin cancer.
➢ Machine learning algorithms analyze vast datasets, aiding in the identification of subtle patterns and anomalies in skin lesions that may be indicative of cancer.

LITERATURE REVIEW
Author's Name | Paper name and publication details | Year of publication | Main content of the paper

Jianhua Zhao; Harvey Lui; David I. McLean; Haishan Zeng | Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Publisher: IEEE) | 2008 | Real-time Raman spectroscopy for non-invasive skin cancer detection – preliminary results

Haishan Zeng; Jianhua Zhao; Michael A. Short; David I. McWilliams; Wenbo Wang | Asia Communications and Photonics Conference (ACP) (Publisher: IEEE) | 2016 | Real-time in vivo tissue Raman spectroscopy for early cancer detection

Author's Name | Paper name and publication details | Year of publication | Main content of the paper

Agung W. Setiawan | International Conference on Informatics, IoT, and Enabling Technologies (ICIoT) | 2020 | Effect of color enhancement on early detection of skin cancer using a convolutional neural network
DESIGN AND METHODOLOGIES

MODULE 1: Machine Learning

MODULE 2: CNN Algorithm

MODULE 1: Machine Learning (Google Colaboratory)

Step 1: Collection of data using machine learning and Google Colaboratory

1. Machine learning algorithm setup:
• Description: Implement the chosen model architecture using the selected framework. Define the layers, activation functions, and other components of the neural network.
• Steps:
  1. Open your web browser and go to Google Colab.
  2. Choose "New Notebook" to create a new Colab notebook.

2. CNN algorithm:
• Description:
  - Each filter detects specific patterns, such as edges, textures, or more complex structures.
  - Learns features using filters through convolution operations.
  - Produces predictions using a softmax activation for classification.
  - Adjusts model parameters using labeled data via backpropagation.

3. Data collection workflow:
• Description: Resize images, normalize pixel values, and handle missing or noisy data to prepare them for training (a preprocessing sketch follows below).
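A minimal sketch of this preprocessing step, assuming the dermatology images sit in class-named folders under a hypothetical data/train directory (the path, image size, batch size, and validation split are illustrative assumptions, not the project's fixed settings):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)  # every image is resized to this fixed shape

# Rescale pixel values from [0, 255] to [0, 1] and hold out 20% of the data for validation.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    "data/train",          # hypothetical dataset folder: data/train/<class_name>/*.jpg
    target_size=IMG_SIZE,  # resizing happens here
    batch_size=32,
    class_mode="binary",   # benign vs. malignant
    subset="training",
)
val_gen = datagen.flow_from_directory(
    "data/train",
    target_size=IMG_SIZE,
    batch_size=32,
    class_mode="binary",
    subset="validation",
)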

Step 2: Processing the data

Module 2: CNN Algorithm

A Convolutional Neural Network (CNN) is a type of artificial neural network designed specifically for processing and analyzing visual data. It has proven highly effective in tasks such as image classification, object detection, and image generation. Here is a concise description of the CNN algorithm:

1. Input Layer: Accepts raw image data, typically represented as a three-dimensional tensor (height, width, channels), where channels represent color information (e.g., Red, Green, Blue).
2. Convolutional Layers: Utilize learnable filters (kernels) to perform convolution operations, extracting features like edges, textures, and patterns from the input.
3. Activation Function: Introduces non-linearity, commonly ReLU (Rectified Linear Unit), after each convolutional operation, allowing the model to learn complex relationships.
4. Pooling (Subsampling) Layers: Reduce spatial dimensions through operations like max pooling or average pooling, preserving important features and reducing computational complexity.
5. Flattening: Transforms the processed data into a one-dimensional vector, preparing it for connectivity to fully connected layers.
6. Fully Connected Layers: Comprise densely connected neurons that learn high-level representations from the flattened features, facilitating complex decision-making.
7. Output Layer: Produces final predictions based on the learned features; for classification tasks, it often employs a softmax activation to generate class probabilities (see the sketch below).
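The layer sequence above maps directly onto a Keras model. The sketch below is illustrative (the filter counts and input size are assumptions, not the project's final configuration); model.summary() prints how each convolution and pooling stage changes the tensor shape:

from tensorflow.keras import layers, models

cnn_sketch = models.Sequential([
    layers.Input(shape=(224, 224, 3)),              # 1. input layer: RGB image tensor
    layers.Conv2D(32, (3, 3), activation='relu'),   # 2-3. convolution + ReLU activation
    layers.MaxPooling2D((2, 2)),                    # 4. pooling reduces spatial dimensions
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                               # 5. flatten to a one-dimensional vector
    layers.Dense(128, activation='relu'),           # 6. fully connected layer
    layers.Dense(2, activation='softmax'),          # 7. output layer: class probabilities
])
cnn_sketch.summary()   # shows the output shape after every layer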

Step 3: Using CNN algorithm

Step 4: The output


IMPLEMENTATION

Architecture Diagram
Data-Flow Diagram
Use Case Diagram
Class Diagram
Activity Diagram
Sequence Diagram
Collaboration Diagram (if applicable)
E-R Diagram

Architecture Diagram

1. Input Data: Dermatology images serve as the input to the system.
2. Preprocessing Module: Prepares the input data through operations like resizing, normalization, and data augmentation.
3. Convolutional Neural Network (CNN): The core of the system, consisting of convolutional layers, activation functions, pooling layers, and fully connected layers.
4. Training Module: Implements training processes such as backpropagation and optimization to update the network parameters.
5. Validation Module: Assesses the CNN's performance during training for tuning hyperparameters and preventing overfitting (a training and validation sketch follows below).
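A hedged sketch of how the Training and Validation modules could be wired together, reusing the compiled model built later on the Integration Testing slide and the train_gen/val_gen generators from the preprocessing sketch (the epoch count and EarlyStopping settings are assumptions):

from tensorflow.keras.callbacks import EarlyStopping

# Training module: backpropagation with the Adam optimizer over the training generator.
# Validation module: val_gen is evaluated after every epoch to monitor generalization.
history = model.fit(
    train_gen,
    validation_data=val_gen,
    epochs=20,                    # assumed training budget
    callbacks=[EarlyStopping(     # one common guard against overfitting:
        monitor="val_loss",       # stop when validation loss stops improving
        patience=3,
        restore_best_weights=True,
    )],
)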

Data-Flow Diagram

• Describes the flow of data between different modules or components of the skin cancer detection system.
• Shows how data moves from the input (dermatology images) through various processing modules to the final output (a small pipeline sketch follows below).
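Read as code, the data flow is simply a composition of the modules. The helpers below are a hypothetical illustration (the function names, the 224x224 input size, and the use of a trained model object like the one on the Integration Testing slide are all assumptions):

import numpy as np
import tensorflow as tf

def preprocess(image: np.ndarray) -> np.ndarray:
    """Preprocessing module: resize and normalize one dermatology image."""
    resized = tf.image.resize(image, (224, 224)) / 255.0
    return np.expand_dims(resized.numpy(), axis=0)    # add a batch dimension

def detect(image: np.ndarray) -> float:
    """Input image -> Preprocessing -> CNN -> final output, mirroring the diagram."""
    batch = preprocess(image)
    probability = float(model.predict(batch)[0][-1])  # CNN module produces the score
    return probability                                # e.g. probability of malignancy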

Use Case Diagram

Class Diagram

Activity Diagram


E-R Diagram

TESTING

UNIT TESTING
INTEGRATION TESTING
FUNCTIONAL TESTING
WHITE BOX TESTING
BLACK BOX TESTING

UNIT TESTING
1. Preprocessing Modules: Test the data preprocessing steps to ensure that the input data is correctly transformed and prepared for the model (a pytest-style example follows after this list).
2. Model Components: Unit test the components responsible for loading and using the skin cancer detector model. Ensure that the model is loaded correctly and that inference produces the expected results.
3. Data Collection and Labeling: If there are components responsible for data collection or labeling, perform unit tests to verify their correctness.
4. Image Handling: Test functions or modules handling image input to ensure proper handling of image data.
5. API Endpoints (if applicable): If your project involves an API or user interface, unit test the endpoints or functions responsible for handling user input and providing output.
6. Data Storage and Retrieval: If your project involves storing or retrieving data from a database or file system, unit test these components to verify their correctness.
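As a concrete example, a pytest-style unit test for a preprocessing helper like the hypothetical preprocess() sketched on the Data-Flow Diagram slide (the function name and the expected 224x224 output shape are assumptions):

import numpy as np

def test_preprocess_output_shape_and_range():
    # Arrange: a fake 600x400 RGB image with random pixel values.
    fake_image = np.random.randint(0, 256, size=(600, 400, 3), dtype=np.uint8)

    # Act: run the preprocessing step under test.
    batch = preprocess(fake_image)

    # Assert: one image in the batch, resized correctly, pixels normalized to [0, 1].
    assert batch.shape == (1, 224, 224, 3)
    assert batch.min() >= 0.0 and batch.max() <= 1.0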
Unit testing provides several benefits:
• Detecting Bugs Early: Identifying and fixing issues at the unit level helps catch bugs early in the development process.
• Maintainability: Unit tests serve as documentation for the expected behavior of individual components. They also make it easier to refactor or modify code without introducing regressions.
• Isolation of Issues: Unit tests help isolate issues to specific components, making it easier to identify the root cause of problems.
• Continuous Integration: Unit tests are often integrated into continuous integration workflows, ensuring that changes to the codebase do not break existing functionality.

INTEGRATION TESTING
# Import necessary libraries
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Define the CNN model
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))   # single sigmoid score: benign vs. malignant

# Compile the model
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
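The snippet above only builds and compiles the model. A minimal end-to-end integration check, assuming the train_gen generator from the preprocessing sketch is available, pushes one real batch through the whole pipeline and verifies the prediction shape and range:

import numpy as np

# Integration check: data pipeline -> model -> prediction, end to end.
images, labels = next(train_gen)                 # one preprocessed batch of images
preds = model.predict(images)

assert preds.shape == (len(images), 1)           # one sigmoid score per image
assert np.all((preds >= 0.0) & (preds <= 1.0))   # scores are valid probabilities
print("Integration check passed on", len(images), "images")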

SYSTEM TESTING
• System testing for the skin cancer detection project involves a comprehensive evaluation of the entire software system to ensure that it meets the specified requirements and functions as expected in a real-world environment. This testing phase encompasses various aspects, starting with the collection and handling of input data. The system is subjected to tests where different types of images, including various resolutions and formats, are uploaded to assess the system's response and its ability to handle different scenarios, such as invalid or corrupt image files.
• The testing process extends to the preprocessing module, where the transformation of input data is scrutinized. This includes verifying that images are resized, normalized, and prepared adequately for input into the CNN model.
• Loading and inference with the model are thoroughly examined, ensuring that the model is loaded successfully with different configurations and that predictions align with expected outcomes for known images. Result visualization and reporting functionalities are scrutinized to confirm that the system accurately represents the detected lesion class, with tests involving images of both benign and malignant lesions.
• If applicable, the user interface or API functionality is assessed, including user interactions, feedback mechanisms, and the correct handling of requests. Performance testing is conducted to evaluate the system's response times for different image sizes and processing loads, ensuring that the model performs adequately under varying workloads.
• Robust error handling mechanisms are verified, intentionally introducing errors to assess the system's ability to provide meaningful error messages and validate input data before processing. Security features, if present, are thoroughly tested to identify and mitigate vulnerabilities, including checks for unauthorized access and data breaches. Compatibility testing ensures that the system works with different browsers, devices, and platforms, guaranteeing a consistent and functional user experience across various environments.
• Finally, usability testing is conducted to assess the overall user-friendliness of the system, with a focus on navigation and ease of use. System testing, encompassing these diverse aspects, aims to provide confidence in the reliability, performance, and security of the skin cancer detection application across its entire functionality spectrum.

FUNCTIONAL TESTING
1. Image Upload and Input Handling:
   Objective: Verify that the system correctly handles image uploads and processes them for analysis.
   Test Cases:
   - Upload a valid image and ensure it is processed correctly.
   - Attempt to upload unsupported file formats and verify appropriate error messages (a sketch automating these two cases appears after this list).

2. Preprocessing Functionality:
   Objective: Ensure that the preprocessing module transforms input data appropriately.
   Test Cases:
   - Test resizing and normalization of images.
   - Check that preprocessing handles different types of input data.

3. Model Loading and Inference:
   Objective: Confirm that the CNN model is loaded successfully and provides accurate predictions.
   Test Cases:
   - Load the model with various configurations.
   - Verify that inference results align with the expected outcomes for known images.

4. Result Visualization:
   Objective: Validate that the system accurately visualizes and reports the results of skin cancer detection.
   Test Cases:
   - Visualize results for both benign and malignant lesions.
   - Check that the visualization provides clear and understandable information.

5. User Interface Functionality (if applicable):
   Objective: Assess the functionality of the user interface.
   Test Cases:
   - Test user interactions, such as initiating the detection process.
   - Verify that users receive appropriate feedback during and after the detection.
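A hedged sketch automating the two test cases from item 1, assuming a hypothetical load_image() upload handler that decodes supported formats and raises ValueError otherwise (the helper and its behaviour are illustrative assumptions, not part of the project code):

import io
import numpy as np
import pytest
from PIL import Image

def test_valid_jpeg_is_processed():
    # A small synthetic JPEG stands in for a real upload.
    buffer = io.BytesIO()
    Image.fromarray(np.zeros((64, 64, 3), dtype=np.uint8)).save(buffer, format="JPEG")
    image = load_image(buffer.getvalue())     # hypothetical upload handler
    assert image.ndim == 3                    # decoded to an RGB array

def test_unsupported_format_shows_clear_error():
    with pytest.raises(ValueError):
        load_image(b"this is not an image")   # should be rejected with a clear error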

INPUT AND OUTPUT

INPUT:

INPUT AND OUTPUT

OUTPUT:

SOURCE CODE

# Mount Google Drive so the dataset stored there can be read.
from google.colab import drive
drive.mount('/content/drive')

# Upload additional files from the local machine if needed.
from google.colab import files
uploaded = files.upload()

import os
import time

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, Dropout
from tensorflow.keras import Model, layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam

# Labels is the list of class names built while loading the dataset (earlier notebook cell).
print("class : ")
for i in range(len(Labels)):
    print(i, end=" ")
    print(Labels[i])

t = time.time()   # record the start time

import random

import numpy as np
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# reloaded (the saved model), Labels, validation_generator and upload() are
# defined in earlier notebook cells that are omitted from these slides.
def predict_reload(img):
    """Return {predicted label: confidence} for a single image."""
    probabilities = reloaded.predict(np.asarray([img]))[0]
    class_idx = np.argmax(probabilities)
    return {Labels[class_idx]: probabilities[class_idx]}

# Pick two random validation images, predict them, and display the results.
for idx, filename in enumerate(random.sample(validation_generator.filenames, 2)):
    print("SOURCE: class: %s, file: %s" % (os.path.split(filename)[0], filename))
    img = upload(filename)          # load the image for this filename
    prediction = predict_reload(img)
    print("PREDICTED: class: %s, confidence: %f" % (list(prediction.keys())[0],
                                                    list(prediction.values())[0]))
    plt.figure(idx)
    plt.imshow(img)
    plt.show()

# Confusion matrix of true vs. predicted classes on the validation set.
print('Confusion Matrix')
cm = confusion_matrix(validation_generator.classes, y)
df = pd.DataFrame(cm, columns=validation_generator.class_indices)
plt.figure(figsize=(10, 7))
sn.heatmap(df, annot=True)
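The predicted labels y are not computed in the snippet above; one way to obtain them, assuming the reloaded model and validation_generator from the earlier cells (with shuffling disabled so predictions line up with validation_generator.classes), is:

# Predict over the whole validation set, then take the most likely class per image.
probabilities = reloaded.predict(validation_generator)
y = np.argmax(probabilities, axis=1)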

OUTPUT

CONCLUSION

The integration of machine learning in skin cancer detection represents a significant leap forward in diagnostic capabilities. The use of advanced algorithms has demonstrated remarkable accuracy, enabling early identification of skin lesions and contributing to improved prognosis for patients. The efficiency and objectivity of machine learning models have the potential to reduce human error, offering a valuable tool for healthcare professionals in their diagnostic process. Despite these promising advancements, challenges such as dataset biases and model interpretability need careful consideration. Additionally, successful implementation requires collaboration between medical practitioners, computer scientists, and regulatory bodies. As research and development in this field continue, the ongoing refinement of models and their seamless integration into clinical workflows will be pivotal for realizing the full benefits of machine learning in skin cancer detection.

Plagiarism Report of PPT

Web references/video links

1. https://www.mdpi.com/2318494

2. https://www.sciencedirect.com/science/article/pii/S2352914819302047

3. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8705277/

4. https://youtu.be/Oh4i91mDmEQ?si=SZAL1kM_FkAYg-FG

REFERENCES

1. K. Ramlakhan and Y. Shang, "A Mobile Automated Skin Lesion Classification System", Proceedings of the 23rd IEEE International Conference on Tools with Artificial Intelligence.

2. P. K. Roy, A. Mazumdar and P. Adhikary, "Study on Fuzzy Logic based optimum penstock design", ARPN Journal.

3. B. V. S. Krishna, T. Abhinaya, Godwin J. Jijin, Shree S. Tharanee and B. Sreenidhi, "Detection of Leukemia and Its Types Using Combination of SVM and KNN Algorithm", Lecture Notes in Networks and Systems, 2021.

4. S. R. Guha et al., "CNN Based Skin Lesion Analysis for Classifying Melanoma", The International Conference on Sustainable Technologies for Industry 4.0, 2019.

5. T. Vijayakumar, "Selective Image Enhancement and Restoration for Skin Cancer Identification", Journal of Innovative Image Processing, vol. 1, no. 1, pp. 1-10, 2019.

THANK YOU

