
AUTOMATED SEGMENTATION OF BRAIN TUMOR

MRI IMAGES USING DEEP LEARNING AND


MACHINE
LEARNING

A PROJECT REPORT

Submitted by

S.GEETHANJALI (920820104006)
S.SALINI (920820104032)
T.SOWMIYA (920820104040)
T.SUBBULAKSHMI (920820104042)

in partial fulfillment for the award of the degree of


BACHELOR OF ENGINEERING

IN

COMPUTER SCIENCE AND ENGINEERING

NPR COLLEGE OF ENGINEERING AND TECHNOLOGY,


NATHAM, DINDIGUL.

ANNA UNIVERSITY :: CHENNAI 600 025

MAY 2024

BONAFIDE CERTIFICATE

Certified that this project report “AUTOMATED SEGMENTATION OF BRAIN


TUMOR MRI IMAGES USING DEEP LEARNING AND MACHINE LEARNING”
is the bonafide work of “GEETHANJALI.S (920820104006), SALINI.S (920820104032),
SOWMIYA.T(920820104040), SUBBULAKSHMI.T(920820104042)” who carried out
the project work under my supervision.

SIGNATURE
Dr. M. INDRA DEVI, M.E., Ph.D.,
HEAD OF THE DEPARTMENT
Professor,
Computer Science and Engineering,
NPR College of Engineering and Technology,
Natham, Dindigul – 624001.

SIGNATURE
Mrs. K. RAJALAKSHMI, M.E., (Ph.D.),
SUPERVISOR
Assistant Professor,
Computer Science and Engineering,
NPR College of Engineering and Technology,
Natham, Dindigul – 624001.

Submitted for the ANNA UNIVERSITY viva-voce Examination held on………………..


at NPR College of Engineering and Technology, Natham.

INTERNAL EXAMINER EXTERNAL EXAMINER

ACKNOWLEDGEMENT

First and foremost, we praise and thank nature from the depths of our hearts,
which has given us an immense source of strength, comfort, and inspiration in the
completion of this project work.

We would like to express sincere thanks to our Principal Dr. B.


MARUTHUKANNAN, M.E., Ph.D., for permitting us to carry out our project and
offering adequate facilities to complete it.

We extend our gratitude to our Head of the Department of Computer Science


and Engineering, Dr. M. INDRA DEVI, M.E., Ph.D., Professor, for providing
constructive suggestions and sustained encouragement all through this project.

We express our grateful thanks to our Project Guide Mrs. K. RAJALAKSHMI


M.E., (Ph.D.), Assistant Professor, for her valuable technical guidance, patience and
motivation, which helped us to complete this project successfully.

Also, we would like to record our deepest gratitude to our parents for their constant
encouragement and support which motivated us to complete our project.

NPR COLLEGE OF ENGINEERING & TECHNOLOGY
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
INSTITUTE VISION & MISSION

Vision
• To develop students with intellectual curiosity and technical expertise to meet
global needs.

Mission
• To achieve academic excellence by offering quality technical education using
best teaching techniques.
• To improve Industry – Institute interactions and expose industrial atmosphere.
• To develop interpersonal skills along with value-based education in a dynamic
learning environment.
• To explore solutions for real time problems in the society.

DEPARTMENT VISION & MISSION

Vision
• To produce globally competent technical professionals for digitized society.

Mission
• To establish conducive academic environment by imparting quality education
and value-added training.
• To encourage students to develop innovative projects to optimally resolve
challenging social problems.

NPR COLLEGE OF ENGINEERING & TECHNOLOGY


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

PROGRAM OUTCOMES (PO)

PO1: Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and
engineering specialization to the solution of complex engineering problems.
PO2: Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems
reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO3: Design / development of solutions: Design solutions for complex engineering problems and design system
components or processes that meet the specified needs with appropriate consideration for the public health and
safety, and the cultural, societal, and environmental considerations.
PO4: Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to provide
valid conclusions.
PO5: Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and
IT tools including prediction and modeling to complex engineering activities with an understanding of the
limitations.
PO6: The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health,
safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.
PO7: Environment and sustainability: Understand the impact of the professional engineering solutions in societal
and environmental contexts, and demonstrate the knowledge of, and need for sustainable development.
PO8: Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice.
PO9: Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings.

PO10: Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports and design
documentation, make effective presentations, and give and receive clear instructions.
PO11: Project management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to manage projects
and in multidisciplinary environments.

PO12: Life-long learning: Recognize the need for, and have the preparation and ability to engage in

independent and life-long learning in the broadest context of technological change.

NPR COLLEGE OF ENGINEERING & TECHNOLOGY


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

COURSE OUTCOMES, PROGRAM EDUCATIONAL OBJECTIVES &

PROGRAM SPECIFIC OUTCOMES

COURSE OUTCOMES

C411.1: Identify technically and economically feasible problems of social relevance.

C411.2: Plan and build the project team with assigned responsibilities.

C411.3: Identify and survey the relevant literature for getting exposed to related solutions.

C411.4: Analyse, design and develop adaptable and reusable solutions of minimal complexity by using modern
tools.

C411.5: Implement and test solutions to trace against the user requirements.

PROGRAM EDUCATIONAL OBJECTIVES (PEOs)

Graduates of Computer Science and Engineering Program will be able to

• Develop into the most knowledgeable professional to pursue higher education and research or have a
successful career in industries.

• Successfully carry forward domain knowledge in computing and allied areas to solve complex and
real-world engineering problems.

• Meet the technological revolution by continuously upgrading themselves with technical knowledge.
• Serve humanity with social responsibility combined with ethics.

PROGRAM SPECIFIC OUTCOMES (PSOs)

At the end of the program students will be able to

• Deal with real time problems by understanding the evolutionary changes in computing, applying standard
practices and strategies in software project development using open-ended programming environments.

• Employ modern computer languages, environments and platforms in creating innovative career paths by
inculcating moral values and ethics.

• Achieve additional expertise through add-on and certificate programs.

ABSTRACT

Automated segmentation of brain tumors in MRI images plays a crucial role in medical
diagnosis and treatment planning. Gliomas, characterized by their aggressive nature and
diverse morphology, demand precise segmentation techniques for accurate intra-tumoral
classification. In this study, we propose a novel approach that combines Convolutional
Neural Networks (CNNs) for segmentation and feature extraction, with Support Vector
Machine (SVM) for classification. Our methodology first employs CNNs for the
segmentation of brain tumor regions in MRI images. CNNs are well-suited for image
analysis tasks and have shown remarkable success in segmentation applications. Through
a process of feature extraction, CNNs effectively identify and isolate tumor regions while
filtering out extraneous details from the images. Subsequently, the segmented tumor
regions are subjected to classification using SVM, a robust machine learning algorithm
known for its effectiveness in binary classification tasks. By training SVM on extracted
features, we aim to classify tumor regions into distinct categories, facilitating detailed
intra-tumoral analysis.

CHAPTER NO TABLE OF CONTENTS PAGE NO
ABSTRACT xi
LIST OF FIGURES xiv
LIST OF TABLES xv
LIST OF ABBREVIATIONS xvi
1 INTRODUCTION 1
1.1 Overview 1
2 LITERATURE SURVEY 5
3 EXISTING SYSTEM 16
3.1 Overview 16
3.2 Limitations 18
4 SYSTEM STUDY 19
4.1 Economic Feasibility 19
4.2 Technical Feasibility 20
4.3 Behavioral Feasibility 20
4.4 Operational Feasibility 20
4.5 Schedule Feasibility 20
5 PROPOSED SYSTEM 21
5.1 Overview 21
5.2 Advantages 22
6 SYSTEM SPECIFICATION 23
6.1 Hardware Requirements 23
6.2 Software Requirements 23
7 SYSTEM DESIGN 24
7.1 System Design 24
7.2 Block Diagram 24

8 SYSTEM IMPLEMENTATION 27
8.1 Modules 29
8.1.1 Input Images 29
8.1.2 Preprocessing 30
8.1.3 Segmentation 30
8.1.4 Feature Extraction 31
8.1.5 Classification 31
8.1.6 Performance Analysis 32
9 RESULTS AND DISCUSSION 33
10 SYSTEM TESTING 36
10.1 Introduction To Testing 36
10.1.1 Unit Testing 36
10.1.2 Integration Testing 37
10.1.3 Validation Testing 37
10.1.4 User Acceptance Testing 38
10.1.5 Output Testing 38
11 CONCLUSION AND FUTURE 39
ENHANCEMENT
11.1 Conclusion 39
11.2 Future Enhancement 40
APPENDIX 1 41
Sample Screenshots 41
APPENDIX 2 49
Sample Code 49
REFERENCES 54

LIST OF FIGURES
FIGURE NO FIGURE NAME PAGE NUMBER

7.1 Block Diagram 25

A1.1 Dataset 41

A1.2 Original Image 42

A1.3 Red Image 42

A1.4 Green Image 43

A1.5 Blue Image 43


A1.6 Gray Scale Image 44
A1.7 Segmented Image 44

A1.8 Morphological Segmented Image 45

A1.9 Feature Extracted Image 45


A1.10 Test Feature 46

A1.11 CNN Layers 46

A1.12 Contd CNN Layers 47


A1.13 Classification Results 47
A1.14 Performance Measures 48
LIST OF TABLES

TABLE NO TABLE NAME PAGE NUMBER

9.1 Performance Measure 33


9.2 Error Rate 34
9.3 Accuracy Table 35

LIST OF ABBREVIATIONS

ACRONYMS ABBREVIATIONS

CNN Convolutional Neural Network

SVM Support Vector Machine

MRI Magnetic Resonance Imaging

GLCM Gray-Level Co-occurrence Matrix

LIME Local Interpretable Model-Agnostic Explanations
ML Machine Learning

DL Deep Learning

ICA Independent Component Analysis

RELM Regularized Extreme Learning Machine

XAI Explainable Artificial Intelligence

KNN K-Nearest Neighbour

ANN Artificial Neural Network

XGBOOST eXtreme Gradient Boosting

FCNN Fully Convolutional Neural Network

SHAP SHapley Additive exPlanations

RF Random Forest

HGG High-Grade Glioma

LGG Low-Grade Glioma

DCNN Deep Convolutional Neural Network

HTTU-Net Hybrid Two-Track U-Net

YOLO You Only Look Once

VGG Visual Geometry Group

VGG-SCNet VGG Stacked Classifier Network

CHAPTER 1
INTRODUCTION
1.1 Overview
Automated segmentation of brain tumor MRI images is a crucial task in medical
imaging, aiding in diagnosis, treatment planning, and monitoring of brain tumor progression.
Traditional segmentation methods often rely on manual delineation by experts, which is
time-consuming and subjective. However, advancements in deep learning and machine
learning have revolutionized this process by enabling automated segmentation with higher
accuracy and efficiency.
Deep learning techniques, particularly convolutional neural networks (CNNs), have
shown remarkable success in various image analysis tasks, including medical image
segmentation. CNNs excel at learning intricate patterns and features from raw image data,
making them well-suited for processing complex MRI scans. By leveraging large datasets of
annotated MRI images, CNNs can be trained to accurately delineate tumor regions,
distinguishing them from healthy brain tissue with high precision.
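As a toy illustration of how a convolutional layer can turn raw pixel data into a per-pixel tumor probability map, the following NumPy sketch applies a single hand-set averaging filter followed by a sigmoid. In a real CNN the filters are learned from annotated MRI data; the image, kernel, and threshold below are all invented for illustration.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution of a single-channel image, as in one CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * kernel).sum()
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy "scan": a bright square stands in for a tumor region
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# a hand-set averaging kernel stands in for a learned filter
kernel = np.ones((3, 3)) / 9.0

prob_map = sigmoid(10 * (conv2d(img, kernel) - 0.5))  # per-pixel probability
mask = prob_map > 0.5                                 # binary segmentation
```

Stacking many such layers, with learned kernels and nonlinearities, is what lets a trained CNN delineate tumor regions rather than a hand-picked shape.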
Machine learning approaches, including traditional classifiers and ensemble methods,
complement deep learning techniques in brain tumor segmentation. These methods often
incorporate handcrafted features extracted from MRI images, such as texture, intensity, and
shape characteristics, to train classifiers for segmenting tumor regions. While not as adept at
capturing complex spatial relationships as deep learning models, machine learning algorithms
offer transparency and interpretability, crucial in medical settings where understanding the
decision-making process is essential.
The integration of deep learning and machine learning techniques in automated brain
tumor segmentation presents several advantages. Firstly, it significantly reduces the time and
labor required for manual segmentation, enabling faster diagnosis and treatment planning.
Secondly, it enhances the accuracy and consistency of segmentation results, facilitating longitudinal
studies by providing reliable measurements of tumor growth and response to treatment over
time.
However, challenges persist in deploying these techniques into clinical practice,
including the need for large annotated datasets, robust validation strategies, and ensuring
model generalizability across diverse patient populations and imaging protocols. Despite
these challenges, the synergistic combination of deep learning and machine learning holds

immense promise for advancing automated segmentation of brain tumor MRI images,
ultimately improving patient outcomes in neuro-oncology.

Tumor is an uncontrolled growth of cancer cells in any part of the body. Tumors are of
different types and have different characteristics and different treatments. At present, brain
tumors are classified as primary brain tumors and metastatic brain tumors. The former begin
in the brain and tend to stay there, while the latter begin as a cancer elsewhere in the body
and spread to the brain.

Brain tumor segmentation is one of the crucial procedures in surgical and treatment
planning. Brain tumor segmentation using MRI has been an intense research area. Brain
tumors can have various sizes and shapes and may appear at different locations. Varying
intensity of tumors in brain magnetic resonance images (MRI) makes the automatic
segmentation of tumors extremely challenging.

Various intensity-based techniques have been proposed to segment tumors on
magnetic resonance images. Texture is one of the most popular features for image
classification and retrieval, but multifractal texture estimation methods are time-consuming.
A texture-based image segmentation using GLCM (Gray-Level Co-occurrence Matrix)
combined with an AdaBoost classifier is proposed here. From the brain MRI images, the
optimal texture features of the brain tumor are extracted by utilizing GLCM. Using these
features, the AdaBoost classifier separates tumor from non-tumor tissues and the tumor is
segmented. This method provides more efficient brain tumor segmentation than the
mBm-based segmentation technique and yields more accurate results. A tumor is an
abnormal growth of tissue; a brain tumor is a mass of unnecessary cells growing in the brain
or central spine canal. Brain cancer can be counted among the most deadly and intractable
diseases. Today, tools and methods to analyse tumors and their behaviour are becoming more
prevalent, and efforts over the past century have yielded real advances. However, we have
also come to realize that gains in survival must be enhanced by better diagnostic tools.
Although we have yet to cure brain tumours, clear steps have been taken toward this ultimate
goal: more and more researchers have incorporated such measures into clinical trials, and
each advance injects hope into the team of caregivers and, more importantly, those who live
with this diagnosis.
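To make the GLCM step above concrete, the sketch below builds a co-occurrence matrix by hand and derives the classic contrast, homogeneity, and energy descriptors; in the pipeline described, such features would then feed the AdaBoost classifier. The quantization to eight gray levels and the single pixel offset are illustrative choices, not the report's exact settings.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1
    return M / M.sum()

def texture_features(M):
    """Classic GLCM texture descriptors used to train a classifier."""
    i, j = np.indices(M.shape)
    contrast = ((i - j) ** 2 * M).sum()
    homogeneity = (M / (1.0 + (i - j) ** 2)).sum()
    energy = (M ** 2).sum()
    return contrast, homogeneity, energy

# a perfectly flat patch has zero contrast and maximal energy
flat = np.full((8, 8), 5.0)
contrast, homogeneity, energy = texture_features(glcm(flat))
```

Tumor and healthy tissue tend to differ in such statistics, which is why a boosted classifier trained on them can separate the two classes.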

Magnetic Resonance Imaging (MRI) has become a widely-used method of high-


quality medical imaging, especially in brain imaging where MRI’s soft tissue contrast and

noninvasiveness are clear advantages. An important use of MRI data is tracking the size of a
brain tumor as it responds to treatment. Therefore, an automatic and reliable method for
segmenting tumors would be a useful tool. MRI provides a digital representation of tissue
characteristics that can be obtained in any tissue plane. The images produced by an MRI
scanner are best described as slices through the brain. MRI has the added advantage of being
able to produce images which slice through the brain in both horizontal and vertical planes.

This makes MRI-scan images an ideal source for detecting, identifying and
classifying the infected regions of the brain. Most current conventional diagnosis
techniques are based on human experience in interpreting the MRI scan for judgment; this
certainly increases the possibility of false detection and identification of the brain tumor.

On the other hand, applying digital image processing ensures the quick and precise
detection of the tumor. One of the most effective techniques to extract information from
complex medical images, with wide application in the medical field, is the segmentation
process. The main objective of image segmentation is to partition an image into mutually
exclusive and exhaustive regions such that each region of interest is spatially contiguous and
the pixels within the region are homogeneous with respect to a predefined criterion. The cause
of most cases is unknown. Risk factors that may occasionally be involved include a number
of genetic syndromes such as neurofibromatosis, as well as exposure to the chemical vinyl
chloride, the Epstein-Barr virus, and ionizing radiation.
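A minimal sketch of the partition criterion described above is global thresholding, where Otsu's method picks the intensity that best splits the histogram into two homogeneous classes. Real pipelines are more elaborate; the toy bimodal "scan" below is invented for illustration.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Pick the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# toy bimodal "scan": dark background with one bright region
img = np.full((16, 16), 20.0)
img[5:10, 5:10] = 200.0
t = otsu_threshold(img)
region = img > t   # mutually exclusive, exhaustive two-region partition
```

The resulting foreground/background pair is exactly a partition into mutually exclusive, exhaustive regions that are homogeneous with respect to intensity.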

Magnetic resonance imaging (MRI) is the prime technique to diagnose brain tumors
and monitor their treatment. Different MRI modalities of each patient are acquired and these
images are interpreted by computer-based image analysis methods in order to handle the
complexity as well as constraints on time and objectiveness. In this thesis, two major novel
approaches for analyzing tumor-bearing brain images in an automatic way are presented:
Multi-modal tissue classification with integrated regularization can segment healthy and
pathologic brain tissues including their sub-compartments to provide quantitative volumetric
information.
The method has been evaluated with good results on a large number of clinical and
synthetic images. The fast run-time of the algorithm allows for easy integration into the
clinical workflow. An extension has been proposed for integrated segmentation of
longitudinal patient studies, which has been assessed on a small dataset from a multi-center
clinical trial with promising results. Atlas-based segmentation with integrated tumor-growth

modeling has been shown to be a suitable means for segmenting the healthy brain structures
surrounding the tumor. Tumor-growth modeling offers a way to cope with the missing tumor
prior in the atlas during registration. To this end, two different tumor-growth models have
been compared. While a simplistic tumor growth model offered advantages in computation
speed, a more sophisticated multi-scale tumor growth model showed better potential to
provide a more realistic and meaningful prior for atlas-based segmentation. Both approaches
have been combined into a generic framework for analyzing tumor-bearing brain images,
which makes use of all the image information generally available in clinics. This
segmentation framework paves the way for better diagnosis, treatment planning and
monitoring in radiotherapy and neurosurgery of brain tumors.
In recent years, the emergence of deep learning and machine learning paradigms has
reshaped the landscape of medical image analysis, offering a promising avenue towards
automation and precision. Leveraging the vast repository of annotated MRI data, these
computational methodologies hold the potential to decipher complex patterns and subtle
nuances, transcending the constraints of human perception.
This study embarks on a pioneering journey to unlock the latent potential of artificial
intelligence in neuroimaging, focusing on the automated segmentation of brain tumor MRI
images. By harnessing the power of deep learning architectures such as convolutional neural
networks (CNNs) and machine learning algorithms like support vector machines (SVMs), our
research endeavors to unravel the intricate boundaries of brain tumors with unprecedented
accuracy and efficiency.
Furthermore, our study aims to transcend the boundaries of conventional 2D
segmentation approaches, exploring the potential of 3D volumetric analysis to capture spatial
relationships and contextual information crucial for accurate tumor delineation. By harnessing
the synergistic capabilities of deep learning and machine learning, we envision a paradigm
shift towards personalized medicine, where precise and automated segmentation facilitates
treatment strategies tailored to individual patient profiles.
In conclusion, the fusion of deep learning and machine learning methodologies
heralds a new era in neuroimaging, empowering clinicians with powerful tools to navigate the
complex landscape of brain tumors with unprecedented precision and efficiency. Through
collaborative efforts and interdisciplinary synergy, our research endeavors to pave the way
towards a future where AI-driven segmentation becomes an indispensable cornerstone of
modern healthcare, ushering in an era of personalized and precise medicine.

CHAPTER 2
LITERATURE SURVEY
1. Title : A Hybrid Feature Extraction Method With Regularized Extreme Learning Machine
for Brain Tumor Classification
Author : Abdu Gumaei, Mohammad Mehedi Hassan, Md Rafiul Hassan, Abdulhameed
Alelaiwi And Giancarlo Fortino
Year : 2019
Brain cancer classification is an important step that depends on the physician’s knowledge
and experience. An automated tumor classification system is essential to support
radiologists and physicians in identifying brain tumors. However, the accuracy of current
systems needs to be improved for suitable treatments. In this paper, we propose a hybrid
feature extraction method with a regularized extreme learning machine (RELM) for
developing an accurate brain tumor classification approach. Then, the brain tumor features
are extracted based on a hybrid method of feature extraction. Finally, a RELM is used for
classifying the type of brain tumor. To evaluate and compare the proposed approach, a set of
experiments is conducted on a new public dataset of brain images. The experimental results
proved that the approach is more effective compared with the existing state-of-the-art
approaches, and the performance in terms of classification accuracy improved from 91.51%
to 94.233% for the experiment of the random holdout technique.
2. Title : Empowering Glioma Prognosis With Transparent Machine Learning and
Interpretative Insights Using Explainable AI
Author : Anisha Palkar, Cifha Crecil Dia, Krishnaraj Chadaga And Niranjana Sampathila
Year : 2024
The primary objective of this research is to create a reliable technique to determine whether a
patient has glioma, a specific kind of brain tumour, by examining various diagnostic markers,
using a variety of machine learning as well as deep learning approaches, and involving XAI
(explainable artificial intelligence) methods. Through the integration of patient data,
including medical records, genetic profiles, algorithms using machine learning have the
ability to predict how each individual will react to different medical interventions. To
guarantee regulatory compliance and inspire confidence in AI-driven healthcare solutions,
XAI is incorporated. Machine learning methods employed in this study include Random
Forest, decision trees, logistic regression, KNN, Adaboost, SVM, Catboost, LGBM classifier,
and Xgboost, whereas the deep learning methods include ANN and CNN. Four alternative
XAI strategies, including SHAP, Eli5, LIME, and the QLattice algorithm, are employed to
comprehend the predictions of the model. Xgboost, an ML model, achieved accuracy,
precision, recall, f1 score, and AUC of 88%, 82%, 94%, 88%, and 92%, respectively.

3. Title : Evolutionary Model for Brain Cancer-Grading and Classification
Author : Faizan Ullah, Muhammad Nadeem, Muhammad Abrar, Farhan Amin, Abdu Salam,
Amerah Alabrah And Hussain Alsalman
Year : 2023
Brain cancer is a dangerous disease and affects millions of lives worldwide.
Approximately 70% of patients diagnosed with this disease do not survive. Machine learning
is a promising and recent development in this area. Therefore, in this research, we propose an
evolutionary lightweight model aimed at detecting and classifying brain cancer, starting
from the analysis of magnetic resonance images. The proposed model, named the lightweight
ensemble, is a modified version of the recent Multimodal Lightweight XGBoost. Herein, we
provide prediction explainability by considering the preprocessing of Magnetic Resonance
Imaging data and the feature extraction. The evolutionary model involves several steps: first,
prepare the data; next, extract important features; and finally, merge them using a special
kind of classification called ensemble classification. We evaluate our proposed model using
the BraTS 2020 dataset, which consists of 285 MRI scans of patients diagnosed with gliomas.
The simulation results showed that our proposed model achieved 93.0% accuracy, 0.94
precision, 0.93 recall, 0.94 F1 score, and an area under the Receiver Operating Characteristic
Curve of 0.984.
4. Title : Machine Learning Assisted Methodology for Multiclass Classification of Malignant
Brain Tumors
Author : Ankit Vidyarthi, Ruchi Agarwal, Deepak Gupta, Rahul Sharma, Dirk Draheim
And Prayag Tiwari
Year : 2022
This study is performed on real-life malignant brain tumor datasets having five classes. The
proposed methodology uses a vast feature set from six domains to capture most of the
hidden information in the extracted region of interest. Later, relevant features are extracted
from the feature set pool using a newly proposed feature selection algorithm named the
Cumulative Variance method. Next, the selected features are used for model training and
testing with K-Nearest Neighbour, multi-class Support Vector Machine and Neural Network
classifiers for multi-class classification. The experiments are performed using the proposed
feature selection algorithm with the three classifiers. The mean average classification
accuracies achieved using the proposed approach are 88.43%, 92.5% and 95.86%,
respectively. A comparative analysis of the proposed approach with other existing
algorithms such as ICA and GA suggests that the proposed approach gains around 2%, 3%,
and 4% in accuracy. The experimental results conclude that the proposed approach performs
best with the NN classifier, at an accuracy of 95.86% using diversified features.
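The Cumulative Variance method is only named here, so the sketch below is a hypothetical reading of it, not the authors' algorithm: rank features by variance and keep the top-ranked ones until a chosen fraction of the total variance is covered. The function name, the `frac` parameter, and the toy data are all invented.

```python
import numpy as np

def cumulative_variance_select(X, frac=0.9):
    """Hypothetical sketch: keep the highest-variance features that together
    cover `frac` of the total variance. The paper's actual method may differ."""
    v = X.var(axis=0)
    order = np.argsort(v)[::-1]              # feature indices, most variable first
    covered = np.cumsum(v[order]) / v.sum()  # cumulative fraction of variance
    k = int(np.searchsorted(covered, frac)) + 1
    return order[:k]

# toy data: one dominant feature plus three near-constant ones
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(0, 10.0, (100, 1)), rng.normal(0, 0.1, (100, 3))])
selected = cumulative_variance_select(X, frac=0.9)
```

Whatever the exact rule, the point of such selection is to shrink the six-domain feature pool before training the KNN, SVM, and NN classifiers.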
5. Title : Cross-Modal Distillation to Improve MRI-Based Brain Tumor Segmentation With
Missing MRI Sequences
Author: Masoomeh Rahimpour, Jeroen Bertels, Dirk Vandermeulen, Frederik Maes, Ahmed
Radwan, Stefan Sunaert, Karolien Goffin, Henri Vandermeulen, Michel Koole.
Year: 2021
The study focused on brain tumor segmentation using CNN models when MRI sequences
are missing during inference. It introduced cross-modal knowledge distillation (KD) and
cross-modal feature distillation (FD) to make the best use of the MRI sequences available
for training. Segmentation performance improved by training a "Teacher" model with
multi-sequence MRI data to guide a "Student" model that uses only T1w sequence data. The
contributions are a cross-modal distillation approach, an evaluation of different methods, and
the use of a large dataset for testing. The significance lies in enhanced clinical applicability,
since the T1w sequence, commonly used in clinical practice, suffices for tumor segmentation.
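The paper's cross-modal variant distills across MRI sequences, but the teacher-student idea it builds on can be sketched with the generic soft-target distillation loss: the student is penalized for diverging from the teacher's temperature-softened output distribution. The logits and temperature below are invented for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between temperature-softened teacher and student outputs."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(-(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean())

teacher = np.array([[4.0, 1.0, 0.0]])   # multi-sequence "Teacher" output
aligned = np.array([[4.0, 1.0, 0.0]])   # student agreeing with the teacher
off = np.array([[0.0, 1.0, 4.0]])       # student disagreeing

loss_aligned = distillation_loss(aligned, teacher)
loss_off = distillation_loss(off, teacher)
```

Minimizing this loss pulls the T1w-only student toward the richer multi-sequence teacher, which is the mechanism behind the reported gains.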

6. Title : Optimized Edge Detection Technique for Brain Tumor Detection in MRI Images
Authors : Prof. Radwan's extensive research contributions in mathematics and engineering
applications; membership in scientific councils and editorial boards of prestigious journals
Year : 2020
The SSIM formula involves mean, variance, and covariance calculations with specific
stabilization constants, and can be used to compute a fitness value for candidate solutions.
The brain tumour detection method is evaluated on 50 medical images with various
pathologies and compared with classical and complex edge detection methods; the proposed
method shows better results in detecting tumours accurately. Fractional-order edge detectors
draw on the advantages of fractional calculus in image processing: fractional differential
operators act globally and consider more neighbouring pixels for edge detection. The genetic
algorithm (GA) process involves selection, crossover, and mutation steps and is used to
optimize edge detection filters for brain tumour detection. The performance analysis shows
that the proposed edge detection method outperforms classical methods in accuracy and
sensitivity, with better noise immunity than other operators.
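The SSIM computation mentioned above can be written out directly from its means, variances, and covariance, with the usual C1 and C2 stabilization constants (K1 = 0.01, K2 = 0.03 on a dynamic range L). This single-window sketch is a simplification: practical SSIM averages the index over local sliding windows.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM from means, variances, and covariance,
    with C1, C2 stabilization constants (K1=0.01, K2=0.03)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

# identical images score 1; an inverted image scores lower
a = np.linspace(0, 255, 64).reshape(8, 8)
score_same = ssim_global(a, a)
score_diff = ssim_global(a, 255.0 - a)
```

Used as a GA fitness value, higher SSIM against a reference rewards candidate edge-detection filters that preserve structure.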

7. Title : Automated Segmentation of Brain Tumor MRI Images Using Deep Learning
Author : Surendarn Rajendran, Suresh Kumar Rajagopal, Tamilvizhi Thanarajan, K.
Shankar, Sachin Kumar, Najah M. Alsubaie, Mohamad Khairi Ishak And Samih M.
Mostafa
Year : 2023
Segmenting brain tumors automatically from MR data is crucial for disease
investigation and monitoring. Due to the aggressive nature and diversity of gliomas,
well-organized and exact segmentation methods are used to classify tumors intra-tumorally.
The proposed technique uses a Gray Level Co-occurrence Matrix feature extraction
approach to strip out unwanted details from the images. In comparison with the current state
of the art, the accuracy of brain tumor segmentation was significantly improved using
Convolutional Neural Networks, which are frequently used in the field of biomedical image
segmentation. By merging the results of two separate segmentation networks, a U-Net and a
three-dimensional Convolutional Neural Network, the proposed method demonstrates a
major but simple combinatorial strategy that, as a direct consequence, yields much more
precise and complete estimates. These two networks are used to break up images into their
component parts.
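The exact fusion rule for merging the two networks' results is not spelled out here, so the sketch below assumes one plausible reading: averaging the two per-pixel probability maps before thresholding. The probability maps themselves are invented placeholders for U-Net and 3-D CNN outputs.

```python
import numpy as np

def fuse_probability_maps(p_unet, p_3dcnn, threshold=0.5):
    """Merge two networks' per-pixel tumor probabilities by averaging, then
    binarize. The averaging rule is an assumption, not the paper's exact scheme."""
    return (p_unet + p_3dcnn) / 2.0 > threshold

p1 = np.array([[0.9, 0.2], [0.8, 0.1]])   # invented U-Net output
p2 = np.array([[0.7, 0.4], [0.2, 0.1]])   # invented 3-D CNN output
mask = fuse_probability_maps(p1, p2)
```

A pixel is kept only when the two networks jointly support it, which is one way such a combination can yield more precise and complete estimates than either network alone.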

8. Title : Image Segmentation for MR Brain Tumor Detection Using Machine Learning
Author : Shasidhar et al., Jones et al.
Year : 2022
For image segmentation methods, Shasidhar et al. modified the FCM algorithm for brain
image segmentation, achieving comparable performance. Supervoxels generated from
multimodal MRI images were classified using an RF classifier for brain tumor segmentation,
and an FCNN combined with RF was used for automated brain tumor segmentation from
multimodal MRI images, showing great results. Jones et al. proposed a novel DTI
segmentation method (D-SEG) for whole-brain segmentation. Challenges in brain tumor
segmentation include the difficulty of accurately segmenting brain regions due to background
proportions and small tumor regions; handling multimodal information inadequately reduces
segmentation accuracy, and deep learning methods face their own challenges here. For brain
tumor detection and diagnosis, Computer-Based Diagnosis (CBD) aids accurate medical
image analysis for faster and more precise diagnosis, and early detection of brain
abnormalities, especially tumors, is crucial for effective treatment. Among brain tumor
classification models, various algorithms are categorized into eight groups for brain tumor
segmentation, and deep learning methods are more effective than conventional and
supervised methods. The survey also gives an overview of brain structure, stresses the
importance of high-quality medical images in computer-aided brain tumor detection, and
reviews major research on brain tumor segmentation using computer-assisted methods.
Advanced segmentation techniques include transfer learning, the RF classifier, deep
autoencoders, and ECNN with the BAT algorithm for brain tumors.

9.Title: RU-Net2+: A Deep Learning Algorithm For Accurate Brain Tumor Segmentation
And Survival Rate Prediction.
Author: Ruqsar Zaitoon and Hussain Syed
Year : 2023
The dataset includes LGG and HGG tumors with MRI sequences T1, T1C, T2, and FLAIR,
divided into training, testing, and validation sets. Pre-processing involved noise removal,
reshaping scans to 256x256, and image segmentation. The RU-Net2+ model segments tumor
regions using encoding and decoding steps and achieved high accuracy in segmenting HGG and
LGG tumors based on the MRI sequences, demonstrating excellent accuracy and Dice scores for
the different tumor regions. Results show high accuracy for both HGG and LGG tumors. The
proposed model's accuracy is compared with Deep CNN, Modified DCNN, and 3D ConvNet models
for brain tumor segmentation. The model architecture involves encoding, decoding, and
training phases, and the model outperforms existing models in brain tumor segmentation
accuracy. Challenges remain in developing a robust, practical deep learning-based diagnostic
model for brain tumor analysis.

10.Title: Multiclass Brain Tumor Classification Using Convolutional Neural Network and
Support Vector Machine.
Author: Hareem Kibriya, Momina Masood, Marriam Nawaz, Rimsha Rafique, Safia Rehman.
Year : 2023
This work focuses on deep learning (DL) and machine learning (ML) techniques for
multiclass brain tumor classification in MRI images. The methodology employs Convolutional
Neural Network (CNN) models such as ResNet-18 and GoogLeNet for brain tumor identification,
followed by Support Vector Machine (SVM) classification of the deep features. The CNN-SVM
based method achieved a high accuracy of 98%, outperforming existing brain tumor
identification systems. Transfer learning was used with the GoogLeNet and ResNet-18 models
to fine-tune the CNNs for brain tumor classification, and various image augmentation
techniques were applied to the MRI dataset for improved classification accuracy. The study
showcases the effectiveness of DL and ML techniques in accurately detecting brain tumors in
MRI images, offering potential assistance to medical professionals in making critical
treatment decisions.

11.Title : Brain Tumor Segmentation Using Partial Depthwise Separable Convolutions
Author : Tirivanagani Magadza and Serestina Viriri
Year : 2022
Gliomas are the most common and aggressive form of all brain tumors, with median
survival rates of less than two years for the highest grade. While accurate and reproducible
segmentation of brain tumors is paramount for an effective treatment plan and diagnosis,
automatic brain tumor segmentation is challenging because the lesion can appear anywhere in
the brain with varying shapes and sizes from one patient to another. Moreover, segmentation

is only done by analyzing pixel intensity values of surrounding tissues, and the diffusing
nature of aggressive brain tumors makes it even more challenging to delineate tumor
boundaries. Nevertheless, deep learning methods have superior performance in automatic
brain tumor segmentation. However, their boost in performance comes at the cost of high
computational complexity. This paper proposes efficient network architecture for 3D brain
tumor segmentation, partially utilizing depth wise separable convolutions to reduce
computational costs. The experimental results on the BraTS 2020 dataset show that our
methods could achieve comparable results with the state-of-the-art methods with minimum
computational complexity.
Furthermore, we provide a critical analysis of the current efficient model designs.
12.Title : U-Net++DSM: Improved U-Net++ for Brain Tumor Segmentation With Deep
Supervision Mechanism
Author : Kittipol Wisaeng
Year : 2023
The segmentation of brain tumors is an important and challenging content in medical
image processing. Relying solely on human experts to manually segment large volumes of
data can be time-consuming and delay diagnosis. To address this challenge, researchers have
set out to develop an algorithm that can automatically determine whether MRI images contain
brain tumors and identify their features. This paper proposes the U-Net++ DSM, a
collaborative approach combining U-Net++ with Deep Supervision Mechanism (DSM) as its
backbone. To enhance the segmentation power of U-Net++ DSM, medical professionals have
trained a dilation operator using fully annotated images. The results of this method
demonstrate that the combination of U-Net++DSM and the dilation operator significantly
improves segmentation accuracy, especially when the number of fully-labeled images is
limited. The results show that the proposed U-Net++DSM outperforms traditional U-Net
models, achieving high segmentation performance and surpassing other state-of-the-art
models, with a sensitivity of 98.59%, a specificity of 98.64%, an accuracy of 98.64%, and an
average Dice score of 98.02% when tested on publicly available databases.

13.Title : DenseTrans: Multimodal Brain Tumor Segmentation Using Swin Transformer
Author : Li Zongren, Wushouer Silamu, Wang Yuzhen and Wei Zhe
Year : 2023
Aiming at the task of automatic brain tumor segmentation, this paper proposes a new
network. To alleviate the problem that convolutional neural networks (CNNs) cannot
establish long-distance dependence and obtain global context information, a Swin Transformer
is introduced into the UNet++ network, and local feature information is extracted by the
convolutional layers in UNet++. Then, in the high-resolution layers, the shifted-window
operation of the Swin Transformer is utilized and self-attention learning windows are
stacked to obtain global feature information and long-distance dependency modeling
capability. Meanwhile, to alleviate the quadratic increase in computational complexity
caused by full self-attention learning in the transformer, depthwise separable convolution
and control of the number of Swin Transformer layers are adopted to balance the gain in
brain tumor segmentation accuracy against the growth in computational complexity. On the
BraTS2021 validation set, model performance is as follows: Dice similarity scores of 93.2%,
86.2%, and 88.3% for the whole tumor, tumor core, and enhancing tumor; Hausdorff distance
(95%) values of 4.58 mm, 14.8 mm, and 12.2 mm; and a lightweight model with 21.3M parameters
and 212G FLOPs obtained through depthwise separable convolution and other operations. In
conclusion, the proposed model effectively improves the segmentation accuracy of brain
tumors and has high clinical value.

14.Title : Machine Learning Empowered Brain Tumor Segmentation and Grading Model for
Lifetime Prediction
Author : M. Renugadevi, K. Narasimhan, C. V. Ravi Kumar, Rajesh Anbazhagan, Giovanni Pau,
Kannan Ramkumar, Mohamed Abas, N. Raju, K. Sathish and Prabhu Sevugan
Year : 2023
An uncontrolled growth of brain cells is known as a brain tumor. When brain tumors
are accurately and promptly diagnosed using magnetic resonance imaging scans, it is easier to
start the right treatment, track the tumor’s development over time, and select the best surgical
techniques. This paper applies advanced and popular methods for preprocessing,
segmentation, grading of tumors and lifetime prediction. On exploring various encoder-
decoder architectures, the UNet++ architecture was chosen for detecting brain tumors and
obtained an accuracy of 98% and an intersection-over-union score of 0.7483 during the
testing phase. After balancing the dataset, the important characteristics are selected using principal
component analysis and tree-based feature selection techniques. The collected characteristics
are used as input for machine learning techniques including stochastic gradient descent,
decision tree, random forest, and support vector machine. The distinction between low-grade
glioma and high-grade glioma is investigated as a binary classification. Accuracy, precision,
recall, and F1-score are used in the performance evaluation. The highest accuracy of 96% is

achieved using stochastic gradient descent. Lifetime prediction of high-grade glioma patients
is made using regression techniques: linear, ridge, stochastic gradient descent, and
extreme gradient boosting.

15.Title : HTTU-Net: Hybrid Two Track U-Net for Automatic Brain Tumor Segmentation
Author : Nagwa M. Aboelenein, Piao Songhao, Anis Koubaa, Alam Noor and Ahmed Afifi
Year : 2020
Brain cancer is one of the most dominant causes of cancer death; the best way to
diagnose and treat brain tumors is to screen early. Magnetic Resonance Imaging (MRI) is
commonly used for brain tumor diagnosis; however, achieving high accuracy and performance
remains challenging, a vital problem in most previously presented automated medical
diagnosis systems. In this paper, we propose a Hybrid Two-Track U-Net (HTTU-Net)
architecture for brain tumor segmentation. This architecture leverages Leaky ReLU
activation and batch normalization. It includes two tracks, each with a different number of
layers and a different kernel size; these two tracks are then merged to generate the final
segmentation. We use the focal loss and generalized Dice loss (GDL) functions to address the
problem of class imbalance. The proposed segmentation method was evaluated on the BraTS 2018
datasets and obtained a mean Dice similarity coefficient of 0.865 for the whole tumor
region, 0.808 for the core region and 0.745 for the enhancing region, and a median Dice
similarity coefficient of 0.883, 0.895, and 0.815 for the whole tumor, core and enhancing
region, respectively. The proposed HTTU-Net architecture is sufficient for the segmentation
of brain tumors and achieves highly accurate results. Other quantitative and qualitative
evaluations are discussed in the paper. Our results are very comparable to expert human-level
performance and could help experts decrease diagnosis time.

16.Title : VGG-SCNet: A VGG Net-Based Deep Learning Framework for Brain Tumor
Detection on MRI Images
Author : Mohammad Shahjahan Majib, Md. Mahbubur Rahman, T. M. Shahriar Sazzad,
Nafiz Imtiaz Khan, and Samrat Kumar Dey
Year : 2021
A brain tumor is a life-threatening neurological condition caused by the unregulated
development of cells inside the brain or skull. The death rate of people with this condition is
steadily increasing. Early diagnosis of malignant tumors is critical for providing treatment to

patients, and early discovery improves the patient's chances of survival. The patient's
survival rate is usually very low if they are not adequately treated. If a brain tumor
cannot be identified at an early stage, it can surely lead to death. Therefore, early diagnosis of brain tumors
necessitates the use of an automated tool. The segmentation, diagnosis, and isolation of
contaminated tumor areas from magnetic resonance (MR) images is a prime concern.
However, it is a tedious and time-consuming process that radiologists or clinical specialists
must undertake, and their performance is solely dependent on their expertise. To address these
limitations, the use of computer-assisted techniques becomes critical. In this paper, different
traditional and hybrid ML models were built and analyzed in detail to classify the brain tumor
images without any human intervention. Along with these, 16 different transfer learning
models were also analyzed to identify the best transfer learning model to classify brain tumors
based on neural networks. Finally, using different state-of-the-art technologies, a stacked
classifier was proposed which outperforms all the other developed models. The proposed
VGG-SCNet's (VGG Stacked Classifier Network) precision, recall, and F1-scores were found
to be 99.2%, 99.1%, and 99.2%, respectively.

17.Title: Automated Brain Tumor Segmentation and Classification in MRI Using YOLO-
Based Deep Learning
Author: Maram Fahaad Almufareh, Muhammad Imran, Abdullah Khan, Mamoona Humayun,
and Muhammad Asim
Year: 2024
The study evaluates YOLOv5 and YOLOv7 models for brain tumor detection, showing
exceptional performance. Different deep learning models are proposed for brain tumor
detection and segmentation using MR images. The models achieve high Dice scores,
specificity, and sensitivity on various datasets. The precision performance of the YOLO
models is highlighted, with YOLOv7 demonstrating superior results. Algorithm 3 provides
pseudocode for the YOLOv5 model for brain tumor detection. The process involves
segmentation, classification, and alignment of coordinates for further analysis and training.

18.Title: Automated Brain Tumor Segmentation Using Multimodal Brain Scans
Author: Mina Ghaffari, Arcot Sowmya, and Ruth Oliver
Year:2020
Reliable brain tumor segmentation is essential for accurate diagnosis and treatment
planning. Since manual segmentation of brain tumors is a highly time-consuming, expensive
and subjective task, practical automated methods for this purpose are greatly appreciated.
But since brain tumors are highly heterogeneous in terms of location, shape, and size,
developing automatic segmentation methods has remained a challenging task over decades. This
paper aims to review the evolution of automated models for brain tumor segmentation using
multimodal MR images. In order to make a fair comparison between different methods, the
proposed models are studied on the most famous benchmark for brain tumor segmentation,
namely the BraTS challenge [1]. The BraTS 2012-2018 challenges and the state-of-the-art
automated models employed each year are analysed. The changing trend of these automated
methods since 2012 is studied and the main parameters that affect the performance of
different models are analysed.

19.Title: Concatenated and Connected Random Forests With Multiscale Patch Driven Active
Contour Model for Automated Brain Tumor Segmentation of MR Images
Author: Chao Ma, Gongning Luo, and Kuanquan Wang
Year:2018
Segmentation of brain tumors from magnetic resonance imaging (MRI) data sets is of
great importance for improved diagnosis, growth rate prediction, and treatment planning.
However, automating this process is challenging due to the presence of severe partial volume
effect and considerable variability in tumor structures, as well as imaging conditions,
especially for the gliomas. In this paper, we introduce a new methodology that combines
random forests and active contour model for the automated segmentation of the gliomas from
multimodal volumetric MR images. Specifically, we employ a feature representation learning
strategy to effectively explore both local and contextual information from multimodal
images for tissue segmentation, using modality-specific random forests as the feature
learning kernels. Different levels of structural information are subsequently integrated
into concatenated and connected random forests for inferring the glioma structure.
Finally, a novel multiscale patch driven active contour model is exploited to refine the
inferred structure by taking advantage of sparse representation techniques. Results reported
on public benchmarks reveal that our architecture achieves competitive accuracy compared to
the state-of-the-art brain tumor segmentation methods while being computationally efficient.

CHAPTER 3
EXISTING SYSTEM
3.1 Overview
The existing system describes a novel algorithm for interactive multilabel segmentation of
N-dimensional images. Given a small number of user-labelled pixels, the rest of the image is
segmented automatically by a cellular automaton. The process is iterative: as the automaton
labels the image, the user can observe the segmentation evolve and guide the algorithm with
human input where the segmentation is difficult to compute. In the areas where the
segmentation is reliably computed automatically, no additional user effort is required.
Results of segmenting generic photos and medical images are presented.

Our experiments show that modest user effort is required for segmentation of
moderately hard images. The existing system takes an intuitive user interaction scheme: the
user specifies certain image pixels (we will call them seed pixels) that belong to objects
that should be segmented from each other. The task is to assign labels to all other image
pixels automatically, preferably achieving the segmentation result the user is expecting to
get. The task statement and input data are similar to earlier approaches; however, the
segmentation instrument differs. Our method uses a cellular automaton for solving the pixel
labelling task.

The method is iterative, giving feedback to the user while the segmentation is
computed. The proposed method allows (but does not require) human input during the labelling
process, to provide dynamic interaction and feedback between the user and the algorithm.
This allows correcting and guiding the algorithm with user input in the areas where the
segmentation is difficult to compute, yet does not require additional user effort where the
segmentation is reliably computed automatically.

One important difference from the methods based on graph cuts is that seeds do not
necessarily specify hard segmentation constraints. In other words, user brush strokes need
not specify only areas of firm foreground or firm background; instead they can adjust the
pixels' state continuously, making them 'more foreground' or 'a little more background', for
example. This gives the user more versatile control of the segmentation and makes the
process tolerant of inaccurate paint strokes.

As we have already emphasized in the introduction, our hope is to stir up the research
community, motivating the search for new ideas in the field of cellular automata and
evolutionary computation and their application to interactive image segmentation. We expect
that results exceeding our current ones can be obtained. However, our current method can
already compete with elegant achievements of graph theory. In this section we will try to
compare current top-performing methods with ours and point out the advantages and
disadvantages of our scheme. We take four methods - Graph Cuts, GrabCut, Random Walker and
GrowCut - and compare them by several criteria: segmentation quality, speed and convenience
for the user.

Strictly speaking, the methods differ seriously in the amount of information that
they extract from the image. GrabCut uses the most information: it computes the evolving
color statistics of foreground and background and takes into account the color difference
between neighboring pixels. Graph Cuts differs in using color statistics collected from the
user-specified seeds only, computed before the segmentation starts. Random Walker uses only
the intensity difference between neighboring pixels.

Our current GrowCut variant also does not take advantage of object color statistics;
however, it can easily be extended to maintain region color statistics and use them in
automaton evolution. The performance of the described photo editing methods (except for
intelligent paint) has been evaluated previously. The authors clearly showed that methods
based on graph cuts achieve better segmentation results with less user effort, compared with
other methods. One of the few drawbacks of the graph-based methods is that they are not
easily extended to the multi-label task; the other is that they are not very flexible - the
only tunable parameters are the graph weighting and cost function coefficients. For example,
additional restrictions on object boundary smoothness or soft user-specified segmentation
constraints cannot be added readily.

As for intelligent paint, judging by the examples supplied by the authors, the
advantage of their method over the traditional 'magic wand' is in speed and the number of
user interactions. As it appears from the algorithm description and presented results, it is
unlikely that intelligent paint would be capable of solving hard segmentation problems.
Precise object boundary estimation is also questionable, because the finest segmentation
level is obtained by an initial tobogganing over-segmentation, which may not coincide with
actual object borders. Speaking about medical images, the best performing method is Random
Walker (judging by the provided examples).

It leaves both watershed segmentation and region growing behind in quality and
robustness of segmentation. The quality of segmentation is comparable to graph cuts, but
Random Walker is capable of finding a solution for more than two labels. However, it is
rather slow and its implementation is not an easy task.

Also, extending the method to achieve some special algorithm properties (i.e.,
controllable boundary smoothness) is not straightforward. It should be mentioned that
multi-labelling tasks can be solved by min-cut graph algorithms, but no attempt to apply
this multi-labelling method to interactive image segmentation is known to us.

3.2 Limitations

• This method was limited to enhancing tumors with clear enhancing edges.
• This method works with two labels only - object and background.
• The graph-based methods are not easily extended to the multi-label task.
• The methods are not very flexible.
• The only tunable parameters are the graph weighting and cost function coefficients.

CHAPTER 4
SYSTEM STUDY

The purpose of this chapter is to introduce the reader to feasibility studies, project
appraisal and investment analysis. Feasibility studies are an example of system analysis. A
system is a description of the relationship between the inputs of labour, machinery,
materials and management procedures, both within an organization and between an organization
and the outside world.

During the planning and execution stages of an audit, it is important to have a clear
understanding of what the objectives of the audit include. Companies should strive to align
their business objectives with the objectives of the audit. This will ensure that the time
and resources spent will help achieve a strong internal control environment and lower the
risk of a qualified opinion.

Objective of Feasibility Study

• To explain the present situation of the automation.

• To find out whether a system development project is possible.

• To find out whether the final product will benefit the end user.

• To suggest possible alternative solutions.

The feasibility study is carried out mainly in the following sections, namely:

• Economic Feasibility

• Technical Feasibility

• Behavioral Feasibility

• Operational Feasibility

• Schedule Feasibility

4.1 Economic Feasibility

Economic analysis is the most frequently used method for evaluating the effectiveness of a
proposed system; it is more commonly known as cost-benefit analysis. This procedure
determines the benefits and savings that are expected from the proposed system. The hardware
in the systems department is sufficient for system development.
4.2 Technical Feasibility

This study centers around the department's hardware and software and the extent to
which they can support the proposed system. Since the department already has the required
hardware and software, there is no question of increasing the cost of implementing the
proposed system. By these criteria, the proposed system is technically feasible and can be
developed with the existing facility.

4.3 Behavioral Feasibility

People are inherently resistant to change and need a sufficient amount of training,
which would result in a lot of expenditure for the organization. The proposed system can
generate reports with day-to-day information immediately at the user's request, instead of a
periodic report that doesn't contain much detail.

4.4 Operational Feasibility

Operational feasibility depends on the human resources available for the project and
involves projecting whether the system will be used if it is developed and implemented.
Operational feasibility is a measure of how well a proposed system solves the problems,
takes advantage of the opportunities identified during scope definition, and satisfies the
requirements identified in the requirements analysis phase of system development.
Operational feasibility reviews the willingness of the organization to support the proposed
system. This is probably the most difficult of the feasibilities to gauge. In order to
determine this feasibility, it is important to understand the management commitment to the
proposed project. If the request was initiated by management, it is likely that there is
management support and the system will be accepted and used. However, it is also important
that the employee base be accepting of the change.

4.5 Schedule Feasibility

This type of feasibility checks whether the skills required for properly applying the
new technology can be acquired with minimal training, and whether the project can be
implemented within the planned duration without overrunning. Schedule feasibility ensures
that a project can be completed before the project or technology becomes obsolete or
unnecessary. Schedule feasibility can be assessed over the research period.

CHAPTER 5

PROPOSED SYSTEM

5.1 Overview

A proposed system for the automated segmentation of brain tumor MRI images using
machine learning and deep learning involves several key components. First, data is collected
from sources such as Kaggle, where MRI brain images with labeled brain tumors are
available. Preprocessing steps such as resizing, normalization, and augmentation are applied
to enhance image quality and prepare the data for training.
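These preprocessing steps can be sketched as follows. This is a hedged, NumPy-only illustration (a real pipeline would typically use OpenCV or PIL for resizing; the nearest-neighbour index mapping here is a stand-in):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize via index mapping (stand-in for a library resize)."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h   # which source row each output row samples
    cols = np.arange(out_w) * w // out_w   # which source column each output column samples
    return img[rows[:, None], cols]

def normalize(img):
    """Min-max normalize pixel values to the [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

def augment(img):
    """Simple augmentation: the original slice plus horizontal and vertical flips."""
    return [img, img[:, ::-1], img[::-1, :]]
```

For example, `augment(normalize(resize_nearest(scan, 128, 128)))` would yield three normalized variants of one resized slice; rotation and cropping could be added in the same style.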

The core of the system comprises Convolutional Neural Networks (CNNs), particularly
architectures like U-Net, for segmenting tumors from MRI images. Support Vector Machines
(SVMs) may be used for classification tasks after segmentation, such as distinguishing
between different types of brain tumors.
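The SVM classification stage might look like the sketch below, using scikit-learn's `SVC` on hypothetical per-region features (mean intensity and region area are illustrative placeholders, not the project's actual feature set):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical features extracted from each segmented region:
# [mean intensity, region area] -- illustrative values only.
X_train = np.array([[0.90, 120], [0.85, 150], [0.20, 40], [0.25, 30]])
y_train = np.array([1, 1, 0, 0])  # toy labels: 1 = tumor type A, 0 = tumor type B

clf = SVC(kernel="linear")        # linear max-margin classifier
clf.fit(X_train, y_train)

X_new = np.array([[0.88, 130], [0.22, 35]])
pred = clf.predict(X_new)         # predicted tumor type for two new regions
```

In practice the feature vectors would come from the CNN segmentation output, and feature scaling would be applied before fitting, since SVMs are sensitive to feature ranges.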

Model training involves using labeled data to teach the CNN and SVM models, with
evaluation based on metrics such as accuracy, precision, recall, F1-score, and the Dice
coefficient for segmentation. Cross-validation ensures robustness and generalizability.
Optimization includes hyperparameter tuning using grid or random search and potentially
model ensembling for improved accuracy.
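The Dice coefficient mentioned above is defined for a predicted binary mask A and ground-truth mask B as 2|A∩B| / (|A| + |B|); a minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2*|pred AND truth| / (|pred| + |truth|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A Dice score of 1.0 means the predicted tumor mask matches the ground truth exactly, while 0.0 means no overlap at all.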

The system also proposes a novel semi-automatic segmentation method based on population and
individual statistical information to segment brain tumors in magnetic resonance (MR)
images. The probability of each pixel belonging to the foreground (tumor) and the background
is estimated by a kNN classifier under learned optimal distance metrics. A new cost function
for segmentation is constructed from these probabilities and is optimized using graph cuts.
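The kNN probability estimate can be sketched as below; plain Euclidean distance stands in for the learned optimal distance metric, and the one-dimensional intensity features are illustrative only:

```python
import numpy as np

def knn_foreground_prob(pixel_feat, seed_feats, seed_labels, k=3):
    """Estimate P(foreground) for one pixel as the fraction of its k nearest
    seed pixels labelled foreground (1 = tumor, 0 = background).
    Euclidean distance is a stand-in for the learned distance metric."""
    dists = np.linalg.norm(seed_feats - pixel_feat, axis=1)
    nearest = np.argsort(dists)[:k]       # indices of the k closest seeds
    return seed_labels[nearest].mean()    # vote fraction = probability estimate
```

These per-pixel probabilities would then feed the segmentation cost function that graph cuts minimizes.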

It can easily be realized that the full or semi-automatic watershed-based methods are
in fact region growing methods constrained, at the beginning, by the competition between the
different seeds and then by the different already-grown regions.

Deployment of the trained and validated models enables real-world applications for
automated brain tumor segmentation and classification. The models can assist medical
professionals in diagnosing and monitoring brain tumors more efficiently. The detailed report
summarizing the system, including data collection, preprocessing, model architecture,
training process, evaluation metrics, and results, serves as a comprehensive overview of the
project. The report also discusses potential future improvements such as exploring advanced
neural network architectures, enhancing model interpretability, and implementing continuous
learning to keep the models up-to-date as new data becomes available.

In conclusion, the proposed system offers a powerful tool for medical imaging and
diagnostics, potentially improving patient outcomes and streamlining the process of
diagnosing and monitoring brain tumors. Future research and development in areas such as
model ensembling, transfer learning, and interpretability will continue to enhance the
capabilities and applications of this system.

5.2 Advantages
• It improves the achieved segmentation results.
• The segmentations are equivalent to the separations induced by minimum spanning
forests relative to the regional minima.
• It not only shows the detailed and complete aspects of brain tumors, but also helps
clinical doctors study the mechanism of brain tumors with the aim of better treatment.
• The proposed method overcomes segmentation difficulties caused by the uneven
gray-level distribution of the tumors.
• It is very efficient.
CHAPTER 6
SYSTEM SPECIFICATION

6.1 Hardware Requirements


The hardware requirements may serve as the basis for a contract for the implementation of
the system and should therefore be a complete and consistent specification of the whole
system. They are used by software engineers as the starting point for the system design.

• System : Pentium IV 2.4 GHz


• Hard Disk : 500 GB
• Ram : 8 GB

6.2 Software Requirements


The software requirements document is the specification of the system. It should include both
a definition and a specification of requirements. It is useful in estimating cost, planning team
activities and performing tasks throughout the development activity.

• O/S : Windows 7
• Language : Python
• Front End : Anaconda Navigator – Spyder

CHAPTER 7
SYSTEM DESIGN

7.1 System Design


System Design is the process of defining the architecture, components, modules, interfaces,
and data for a system to satisfy specified requirements. One could see it as the application of
systems theory to product development. There is some overlap with the disciplines of systems
analysis, systems architecture, and systems engineering. If the broader topic of product
development "blends the perspective of marketing, design, and manufacturing into a single
approach to product development," then the design is the act of taking the marketing
information and creating the design of the product to be manufactured. Systems design is
therefore the process of defining and developing systems to satisfy specified requirements of
the user.

7.2 Block Diagram


A block diagram is a diagram of a system in which the principal parts or functions are
represented by blocks connected by lines that show the relationships of the blocks. Block
diagrams are heavily used in engineering, in software design and in flow diagrams. They are
typically used for higher-level, less detailed descriptions that are intended to clarify
overall concepts without concern for the details of implementation. Contrast this with
schematic diagrams and layout diagrams used in computer engineering, which show the
implementation details of physical construction.

Fig 7.1 Block Diagram

A flowchart for the proposed system for automated segmentation of brain tumor MRI images
using machine learning and deep learning can be described in the following steps:
Data Collection
• Start with the collection of MRI brain images with labeled brain tumors from a
source such as Kaggle.
Data Preprocessing
• Resize the images to a uniform size for consistency.
• Normalize pixel values for better model performance.
• Apply data augmentation techniques such as flipping, rotating, and cropping to
increase the diversity of the dataset.
Model Selection
• Choose a Convolutional Neural Network (CNN) architecture such as U-Net
for image segmentation tasks.
• Select Support Vector Machines (SVMs) for classification tasks after
segmentation.
Model Development and Training
• Train the CNN model using labeled MRI images to segment brain tumors.
• Optimize the CNN model by adjusting hyperparameters such as learning rate,
batch size, and the number of layers in the network.
• After tumor segmentation, train the SVM model to classify the segmented
regions into different types of tumors based on the extracted features.
Model Evaluation and Validation
• Evaluate the performance of the models using metrics such as accuracy, precision,
recall, F1-score, and the Dice coefficient for segmentation.
• Implement cross-validation to ensure the robustness and generalizability of the
models across different datasets.
Model Optimization
• Tune the models using techniques such as grid search or random search for
hyperparameter optimization.
• Consider model ensembling to combine the strengths of multiple models for
improved accuracy.
Deployment
• Deploy the trained and validated models for real-world applications in automated
brain tumor segmentation and classification.
Reporting
• Generate a detailed report summarizing the system, including data collection,
preprocessing, model architecture, training process, evaluation metrics, results, and
potential future improvements.
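The evaluation step above names the Dice coefficient. As a hedged illustration (the helper names preprocess and dice_coefficient are introduced here for clarity, not taken from the project code), the preprocessing and evaluation stages can be sketched in plain NumPy:

```python
import numpy as np

def preprocess(image, size=(224, 224)):
    """Resize (nearest-neighbour, for illustration) and normalise pixels to [0, 1]."""
    rows = np.linspace(0, image.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, image.shape[1] - 1, size[1]).astype(int)
    return image[np.ix_(rows, cols)].astype(np.float64) / 255.0

def dice_coefficient(pred_mask, true_mask):
    """Dice = 2|A intersect B| / (|A| + |B|): the segmentation metric named above."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    denom = pred.sum() + true.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, true).sum() / denom

# Toy check: a perfect prediction scores 1.0, a fully disjoint one 0.0
mask = np.zeros((8, 8), dtype=int)
mask[2:5, 2:5] = 1
print(dice_coefficient(mask, mask))      # 1.0
print(dice_coefficient(mask, 1 - mask))  # 0.0
```

The Dice coefficient complements accuracy because it ignores the (usually dominant) background pixels.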

CHAPTER 8
SYSTEM IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is turned into a working
system. Thus, it can be considered the most critical stage in achieving a successful new system
and in giving the user confidence that the new system will work and be effective. The
implementation stage involves careful planning, investigation of the existing system and its
constraints on implementation, design of methods to achieve changeover, and evaluation of
changeover methods.
Implementation of software refers to the final installation of the package in its real
environment, to the satisfaction of the intended users and the operation of the system. Users
may not initially be sure that the software will make their job easier, so:
The active user must be aware of the benefits of using the system
Their confidence in the software must be built up
Proper guidance must be imparted to the user so that he is comfortable in using
the application
Before going ahead and viewing the system, the user must know that for viewing the result,
the server program should be running in the server. If the server object is not running on the
server, the actual processes will not take place.
User Training
To achieve the objectives and benefits expected from the proposed system, it is essential for
the people who will be involved to be confident of their role in the new system. As systems
become more complex, the need for education and training becomes more and more important.
Education is complementary to training. It brings life to formal training by explaining the
background of the resources to the staff. Education involves creating the right atmosphere and
motivating user staff. Educational information can make training more interesting and more
understandable.
Training on the Application Software

After providing the necessary basic training on the computer awareness, the users will have
to be trained on the new application software. This will give the underlying philosophy of the
use of the new system such as the screen flow, screen design, type of help on the screen, type
of errors while entering the data, the corresponding validation check at each entry and the
ways to correct the data entered. This training may be different across different user groups
and across different levels of hierarchy.
Operational Documentation
Once the implementation plan is decided, it is essential that the user of the system is made
familiar and comfortable with the environment. A documentation providing the whole
operations of the system is being developed. Useful tips and guidance is given inside the
application itself to the user. The system is developed user friendly so that the user can work
the system from the tips given in the application itself.
System Maintenance
The maintenance phase of the software cycle is the time in which software performs useful
work. After a system is successfully implemented, it should be maintained in a proper
manner. System maintenance is an important aspect in the software development life cycle.
The need for system maintenance is to make the system adaptable to changes in its
environment.
There may be social, technical and other environmental changes which affect a
system that is being implemented. Software product enhancements may involve providing
new functional capabilities, improving user displays and modes of interaction, or upgrading the
performance characteristics of the system. Only through proper system maintenance
procedures can the system be adapted to cope with these changes. Software maintenance
is, of course, far more than "finding mistakes".
Corrective Maintenance
The first maintenance activity occurs because it is unreasonable to assume that software
testing will uncover all latent errors in a large software system. During the use of any large
program, errors will occur and be reported to the developer. The process that includes
the diagnosis and correction of one or more errors is called Corrective Maintenance.
Adaptive Maintenance
The second activity that contributes to a definition of maintenance occurs because of the
rapid change that is encountered in every aspect of computing. Therefore Adaptive
Maintenance, an activity that modifies software to properly interface with a changing
environment, is both necessary and commonplace.
Perfective Maintenance
The third activity that may be applied to a definition of maintenance occurs when a software
package is successful. As the software is used, recommendations for new capabilities,
modifications to existing functions, and general enhancements are received from users.
To satisfy requests in this category, Perfective Maintenance is performed. This activity
accounts for the majority of all effort expended on software maintenance.
Preventive Maintenance
The fourth maintenance activity occurs when software is changed to improve future
maintainability or reliability, or to provide a better basis for future enhancements. Often
called preventive maintenance, this activity is characterized by reverse engineering and
reengineering techniques.

8.1 Modules
• Input Image

• Pre-Processing

• Segmentation

• Feature Extraction

• Classification

• Analysis

8.1.1 Input Image

• Read an image into the workspace using the imread command. The example reads
one of the sample images included with the toolbox and stores it in an array named
I. imread infers from the file that the graphics file format is Tagged Image File
Format (TIFF).

• Display the image, using the imshow function. You can also view an image in the
Image Viewer app. The imtool function opens the Image Viewer app which presents
an integrated environment for displaying images and performing some common
image processing tasks.

• The Image Viewer app provides all the image display capabilities of imshow but also
provides access to several other tools for navigating and exploring images, such as
scroll bars, the Pixel Region tool, the Image Information tool, and the Contrast
Adjustment tool.
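The description above follows the MATLAB-style imread/imshow workflow. Since the project's front end is Python, an equivalent sketch with matplotlib is shown below; the synthetic image and the file names are assumptions made only so the example is self-contained:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                    # non-interactive backend for scripting
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Write a small synthetic image first so the example is self-contained,
# then read it back the way the project reads an MRI slice.
synthetic = np.random.rand(64, 64)
mpimg.imsave("sample_slice.png", synthetic, cmap="gray")

I = mpimg.imread("sample_slice.png")     # analogous to MATLAB's imread
print(I.shape, I.dtype)                  # a PNG loads here as a float RGBA array
plt.imshow(I)                            # analogous to imshow
plt.title("ORIGINAL IMAGE")
plt.savefig("displayed.png")
```

In an interactive session plt.show() would display the figure instead of saving it.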

8.1.2 Preprocessing

Image Resize :

• In computer graphics and digital imaging, scaling refers to the resizing of a digital
image. In video technology, the magnification of digital material is known as
upscaling or resolution enhancement.
• When scaling a vector graphic image, the graphic primitives which make up the
image can be scaled using geometric transformations, without any loss of image
quality. When scaling a raster graphics image, a new image with a higher or lower
number of pixels must be generated.

Scaling Down :

• In the case of decreasing the pixel number (scaling down) this usually results in a
visible quality loss. From the standpoint of digital signal processing, the scaling of
raster graphics is a two-dimensional example of sample-rate conversion: the
conversion of a discrete signal from one sampling rate (in this case the local sampling
rate) to another.
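The project resizes images with OpenCV's cv2.resize; the sketch below instead implements nearest-neighbour resampling in plain NumPy to make the sample-rate-conversion idea concrete (the function name rescale_nearest is introduced here for illustration):

```python
import numpy as np

def rescale_nearest(image, new_rows, new_cols):
    """Nearest-neighbour resampling: a 2-D sample-rate conversion."""
    rows = (np.arange(new_rows) * image.shape[0] / new_rows).astype(int)
    cols = (np.arange(new_cols) * image.shape[1] / new_cols).astype(int)
    return image[np.ix_(rows, cols)]

img = np.arange(16).reshape(4, 4)
up = rescale_nearest(img, 8, 8)     # up-scaling: pixels are repeated
down = rescale_nearest(img, 2, 2)   # down-scaling: samples are dropped (quality loss)
print(up.shape, down.shape)         # (8, 8) (2, 2)
```

Down-scaling discards samples outright, which is exactly the visible quality loss the text describes; production code would low-pass filter (e.g. area or bicubic interpolation) first.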

8.1.3 Segmentation

• The color information in each image region can be represented by a few quantized
colors, which is true for most color images of natural scenes.

• The colors between two neighboring regions are distinguishable - a basic assumption
of any color image segmentation algorithm.

• Image segmentation is the process of partitioning a digital image into multiple
segments (sets of pixels, also known as superpixels).

• The goal of segmentation is to simplify and/or change the representation of an image
into something that is more meaningful and easier to analyze.

• Image segmentation is typically used to locate objects and boundaries (lines, curves,
etc.) in images.

• More precisely, image segmentation is the process of assigning a label to every pixel
in an image such that pixels with the same label share certain characteristics.

• The result of image segmentation is a set of segments that collectively cover the entire
image, or a set of contours extracted from the image.

• The binary image pixel values are passed through a delay-register-type model, which
stores the pixel values.

• A minimum threshold value is then set between the two pixel variation limits. A
mask is a filter; the concept of masking is also known as spatial filtering.

• In this concept we deal only with the filtering operation that is performed directly on
the image.

• A mask is a small matrix useful for blurring, sharpening, embossing, edge detection,
and more. This is accomplished by means of convolution between a kernel and an
image.
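As a hedged sketch of the thresholding and masking ideas above, the following NumPy-only code implements Otsu's threshold selection and a small averaging mask (the project itself uses cv2.threshold and cv2.morphologyEx; the function names and the toy image here are illustrative assumptions):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximises between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    cum_count = np.cumsum(hist)
    cum_sum = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_count[t - 1]            # pixels below the threshold
        w1 = total - w0                  # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_sum[t - 1] / w0
        mu1 = (cum_sum[-1] - cum_sum[t - 1]) / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

def apply_mask(image, kernel):
    """Spatial filtering: slide a small mask over the image (valid region only)."""
    kr, kc = kernel.shape
    out = np.zeros((image.shape[0] - kr + 1, image.shape[1] - kc + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kr, c:c + kc] * kernel)
    return out

# Bimodal toy image: dark background (~20) with a bright square "tumour" (~200)
img = np.full((32, 32), 20, dtype=np.uint8)
img[10:20, 10:20] = 200
t = otsu_threshold(img)
mask = (img > t).astype(np.uint8)        # binary segmentation mask (100 foreground pixels)
blur = apply_mask(mask.astype(float), np.ones((3, 3)) / 9.0)   # 3x3 averaging mask
print(t, mask.sum(), blur.shape)
```

On a cleanly bimodal image like this one, Otsu's threshold lands between the two intensity modes, so the binary mask isolates the bright region exactly.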

8.1.4 Feature Extraction


• In machine learning, pattern recognition and in image processing, feature extraction
starts from an initial set of measured data and builds derived values (features)
intended to be informative and non-redundant, facilitating the subsequent learning and
generalization steps, and in some cases leading to better human interpretations.
Feature extraction is related to dimensionality reduction.
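The project extracts Histogram of Oriented Gradients (HOG) features with skimage.feature.hog. The NumPy-only sketch below computes a simplified gradient-orientation histogram to illustrate the idea (the function orientation_histogram is an assumption introduced here, not the library routine):

```python
import numpy as np

def orientation_histogram(gray, n_bins=9):
    """Toy HOG-style descriptor: histogram of gradient orientations,
    weighted by gradient magnitude and L2-normalised."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = np.minimum((angle / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b in range(n_bins):
        hist[b] = magnitude[bins == b].sum()
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical edge: all gradient energy lands in the 0-degree bin
img = np.zeros((16, 16))
img[:, 8:] = 1.0
fd = orientation_histogram(img)
features = [fd.mean(), fd.std(), fd.var()]   # summary statistics, as in the project code
print(np.argmax(fd))                         # the 0-degree bin dominates
```

The real HOG descriptor additionally splits the image into cells and normalises over overlapping blocks; the summary-statistics step mirrors how the project condenses the HOG vector before classification.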

8.1.5 Classification

• In machine learning and statistics, classification is the problem of identifying to which
of a set of categories (sub-populations) a new observation belongs, on the basis of a
training set of data containing observations (or instances) whose category membership
is known.
• Examples are assigning a given email to the "spam" or "non-spam" class, and
assigning a diagnosis to a given patient based on observed characteristics of the
patient (sex, blood pressure, presence or absence of certain symptoms, etc.).
• Classification is an example of pattern recognition.
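As a minimal illustration of the classification step, the sketch below fits a scikit-learn SVM on invented toy features (in the project the feature vectors come from the HOG statistics, and classes 1/0 stand in for tumour/healthy; the values here are assumptions):

```python
from sklearn import svm

# Toy training set: 1-D feature vectors with known class membership
X_train = [[0.1], [0.2], [0.15], [0.8], [0.9], [0.85]]
y_train = [0, 0, 0, 1, 1, 1]

clf = svm.SVC(kernel="rbf")              # same classifier family as the project
clf.fit(X_train, y_train)
print(clf.predict([[0.12], [0.88]]))     # predicts class 0, then class 1
```

New observations are assigned to whichever side of the learned decision boundary they fall on, which is exactly the email spam/non-spam and patient-diagnosis pattern described above.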

8.1.6 Performance Analysis

Estimations

• True positive (TP) = the number of cases correctly identified as patient.

• False positive (FP) = the number of cases incorrectly identified as patient.

• True negative (TN) = the number of cases correctly identified as healthy.

• False negative (FN) = the number of cases incorrectly identified as healthy.

Accuracy

The accuracy of a test is its ability to differentiate the patient and healthy cases correctly. To
estimate the accuracy of a test, we should calculate the proportion of true positive and true
negative in all evaluated cases. Mathematically, this can be stated as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Sensitivity

The sensitivity of a test is its ability to determine the patient cases correctly. To estimate it, we
should calculate the proportion of true positive in patient cases. Mathematically, this can be
stated as:

Sensitivity = (TP) / (TP + FN)

Specificity

The specificity of a test is its ability to determine the healthy cases correctly. To estimate it,
we should calculate the proportion of true negative in healthy cases. Mathematically, this can
be stated as:

Specificity = (TN) / (TN + FP)
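These three estimates can be written directly as functions. The confusion-matrix counts in the worked example (TP = 50, TN = 47, FP = 0, FN = 3) are chosen to reproduce the results reported in Chapter 9 (accuracy 97%, precision 100%, recall 94.34%):

```python
def accuracy(tp, tn, fp, fn):
    """Proportion of correctly identified cases among all evaluated cases."""
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    """Proportion of true positives among patient cases (also called recall)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of true negatives among healthy cases."""
    return tn / (tn + fp)

# Worked example: counts consistent with the Chapter 9 results
print(accuracy(50, 47, 0, 3))    # 0.97
print(sensitivity(50, 3))        # 0.9433962... (reported as recall)
print(specificity(47, 0))        # 1.0
```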

CHAPTER 9
RESULT AND DISCUSSION
MRI images are collected from KAGGLE datasets. Different performance measures such as
accuracy, precision, recall and error rate can be derived for analyzing the performance of the
system.

Performance Analysis

• True positive (TP) = the number of cases correctly identified as patient.

• False positive (FP) = the number of cases incorrectly identified as patient.

• True negative (TN) = the number of cases correctly identified as healthy.

• False negative (FN) = the number of cases incorrectly identified as healthy.

Accuracy : The accuracy of a test is its ability to differentiate the patient and healthy cases
correctly. To estimate the accuracy of a test, we should calculate the proportion of true
positive and true negative in all evaluated cases. Mathematically, this can be stated as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Sensitivity : The sensitivity of a test is its ability to determine the patient cases correctly.
To estimate it, we should calculate the proportion of true positive in patient cases.
Mathematically, this can be stated as:

Sensitivity = (TP) / (TP + FN)

Specificity : The specificity of a test is its ability to determine the healthy cases correctly.
To estimate it, we should calculate the proportion of true negative in healthy cases.

Mathematically, this can be stated as:

Specificity = (TN) / (TN + FP)


Performance Measures          Results
Accuracy                      97.0
Precision                     100.0
Recall                        94.34
Error Rate                    3.0

Table 9.1 Performance Measures
Error rate
Error rate (ERR) is computed as the fraction of the total number of incorrect predictions to the
total number of test data. The finest possible error rate is 0.0, whereas the worst is 1.0.
Minimization of this error rate is the prime objective for any classifier.

ERR = (FP + FN) / (TP + TN + FP + FN)


Algorithm                                        Error Rate
XGBoost                                          24.7 %
K-Nearest Neighbors (KNN)                        35 %
Convolutional Neural Network (CNN) and
Support Vector Machine (SVM)                     3 %

Table 9.2 Error Rate

(Bar chart comparing the error rates of XGBoost, KNN, and CNN & SVM)
Fig 9.1 Error Rate Chart


From the above graph, the proposed CNN and SVM algorithm provides a lower error rate than
the existing algorithms.

Accuracy

The accuracy of a test is its ability to differentiate the patient and healthy cases correctly. To
estimate the accuracy of a test, we should calculate the proportion of true positive and true
negative in all evaluated cases. Mathematically, this can be stated as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Algorithm                                        Accuracy
XGBoost                                          75.3 %
K-Nearest Neighbors (KNN)                        65 %
Convolutional Neural Network (CNN) and
Support Vector Machine (SVM)                     97 %

Table 9.3 Accuracy

(Chart comparing the accuracy of XGBoost, KNN, and CNN & SVM)

Fig 9.2 Accuracy Chart


From the above graph, the proposed CNN & SVM algorithm provides a higher accuracy rate
than the existing algorithms.

CHAPTER 10
SYSTEM TESTING
10.1 Introduction to testing
System testing is the stage of implementation which is aimed at ensuring that the system works
accurately and efficiently before live operation commences. Testing is the process of
executing a program with the intent of finding an error. A good test case is one that has a high
probability of finding an error. A successful test is one that uncovers an as-yet undiscovered error.

Testing is vital to the success of the system. System testing makes a logical assumption that if
all parts of the system are correct, the goal will be successfully achieved. The candidate
system is subjected to a variety of tests: on-line response, volume, stress, recovery, security,
and usability tests. A series of tests are performed before the system is ready for user
acceptance testing.

Any engineered product can be tested in one of the following ways. Knowing the specified
function that a product has been designed to perform, tests can be conducted to demonstrate each
function is fully operational. Knowing the internal working of a product, tests can be
conducted to ensure that "all gears mesh", that is, the internal operation of the product
performs according to the specification and all internal components have been adequately
exercised.

10.1.1 Unit Testing

Unit testing is the testing of each module, after which the integration of the overall system is
done. Unit testing focuses verification effort on the smallest unit of software design in the
module. This is also known as 'module testing'. The modules of the system are tested
separately. This testing is carried out during the programming itself. In this testing step, each
module is found to be working satisfactorily as regards the expected output from the module.
There are some validation checks for the fields. For example, a validation check is done for
verifying the data given by the user, where both the format and the validity of the data entered
are checked. It is very easy to find errors and debug the system.

10.1.2 Integration Testing:

Data can be lost across an interface; one module can have an adverse effect on another; and
sub-functions, when combined, may not produce the desired major function. Integration
testing is systematic testing that can be done with sample data. The need for integration
testing is to find the overall system performance. There are two types of integration testing:

i) Top-down integration testing.
ii) Bottom-up integration testing.

White Box Testing

White Box testing is a test case design method that uses the control structure of the
procedural design to drive cases. Using the white box testing methods, we derived test cases
that guarantee that all independent paths within a module have been exercised at least once.

Black Box Testing

Black box testing is done to find incorrect or missing functions:

Interface errors
Errors in external database access
Performance errors
Initialization and termination errors

Functional testing is performed to validate that an application conforms to its specifications
and correctly performs all its required functions, so this testing is also called 'black box
testing'. It tests the external behavior of the system. Here, knowing the specified function that
the engineered product has been designed to perform, tests can be conducted to demonstrate
that each function is fully operational.

10.1.3 Validation Testing

After the culmination of black box testing, the software is completely assembled as a
package, interfacing errors have been uncovered and corrected, and a final series of software
validation tests begins. Validation testing can be defined in many ways, but a simple definition
is that validation succeeds when the software functions in a manner that can be reasonably
expected by the customer.

10.1.4 User Acceptance Testing

User acceptance of the system is the key factor for the success of the system. The
system under consideration is tested for user acceptance by constantly keeping in touch with
prospective system users at the time of developing and making changes whenever required.

10.1.5 Output Testing

After performing the validation testing, the next step is output testing of the proposed system,
since no system could be useful if it does not produce the required output in the specified
format. The outputs displayed or generated by the system under consideration are tested by
asking the users about the format they require. Here the output format is considered in two
ways: one is the screen format and the other is the printed format. The output format on the
screen was found to be correct, as the format was designed in the system design phase
according to the user needs. For the hard copy also, the output comes out as per the
requirements specified by the user. Hence the output testing did not result in any correction in
the system.

CHAPTER 11
CONCLUSION AND FUTURE ENHANCEMENT
Conclusion
This paper has provided a comprehensive overview of state-of-the-art MRI-based
brain tumor segmentation methods. Many current brain tumor segmentation methods
operate on MRI images, due to the non-invasive nature and good soft tissue contrast of MRI,
and employ classification and clustering methods using different features, taking spatial
information in a local neighborhood into account. The purpose of these methods is to provide
a preliminary judgment for the physician on diagnosis, tumor monitoring, and therapy
planning. Although most brain tumor segmentation algorithms achieve relatively good
results in the field of medical image analysis, there is still a certain distance from clinical
application. Due to a lack of interaction between researchers and clinicians, clinicians still
rely on manual segmentation of brain tumors in many cases. Many existing tools are aimed
at pure research and are hardly useful for clinicians. Therefore, embedding the developed
tools into more user-friendly environments will become inevitable in the future. Recently,
some standard clinical acquisition protocols focusing on feasibility studies are being
formulated to bring these methods into clinical application more quickly. The current standard
computation time is in general a few minutes. Real-time segmentation will be hard to
achieve, but a computation time over a few minutes is unacceptable in clinical routine. Another
crucial aspect for brain tumor segmentation methods is robustness. If an automatic
segmentation technique fails in some cases, clinicians will lose their trust and not use the
technique. Therefore, robustness is also one of the major assessment criteria for each new
method applied in clinical practice. Some current brain tumor segmentation methods provide
robust results within a reasonable computation time.

Future Enhancement
Leveraging transformer-based models or hybrid architectures that combine CNNs and
attention mechanisms may lead to more accurate segmentation results. Additionally,
integrating multi-modal data such as CT scans and clinical data with MRI images can
enhance the model's ability to detect and segment tumors more effectively. Continuous
training on new datasets can help the model adapt to evolving imaging technologies and
diverse patient populations. The use of explainable AI techniques can provide insights into
the model's decision-making process, enhancing trust and reliability. Finally, collaborating
with medical experts for feedback and validation can ensure that the models remain clinically
relevant and effective for patient care.

APPENDIX 1
Sample Screenshot

Fig A1.1 : Dataset

Fig A1.2 : Original image

Fig A1.3 : Red image

Fig A1.4 : Green image

Fig A1.5 : Blue image

Fig A1.6 : Gray scale image

Fig A1.7 : Segmented image

Fig A1.8 : Morphological segmented image

Fig A1.9 : Feature extracted image

Fig A1.10 : Test Feature

Fig A1.11 : CNN Layers

Fig A1.12 : Contd CNN Layers

Fig A1.13 :Classification Results

Fig A1.14 : Performance Measures

APPENDIX 2
Sample code
# Import packages
import os
import pickle
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
from tkinter.filedialog import askopenfilename
from skimage.feature import hog
from sklearn import svm
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical
from prettytable import PrettyTable

# === GETTING INPUT IMAGE ===
# Getting input image from the dataset
filename = askopenfilename()
img = mpimg.imread(filename)                 # read input image
plt.imshow(img)
plt.title('ORIGINAL IMAGE')
plt.show()

# === PRE-PROCESSING ===
h1 = 224                                     # row size
w1 = 224                                     # column size
resized_image = cv2.resize(img, (h1, w1))    # image resize
plt.figure()
plt.title('RESIZED IMAGE')
plt.imshow(resized_image)
plt.show()

# Channel separation
try:
    Red = resized_image[:, :, 0]
    Green = resized_image[:, :, 1]
    Blue = resized_image[:, :, 2]
    plt.imshow(Red)
    plt.title('RED IMAGE')
    plt.show()
    plt.imshow(Green)
    plt.title('GREEN IMAGE')
    plt.show()
    plt.imshow(Blue)
    plt.title('BLUE IMAGE')
    plt.show()
except IndexError:
    pass                                     # image has a single channel

# Gray scale conversion (RGB to GRAY)
try:
    GRAY = cv2.cvtColor(resized_image, cv2.COLOR_RGB2GRAY)
except Exception:
    GRAY = resized_image                     # image is already gray scale
plt.imshow(GRAY, cmap='gray')
plt.title('GRAY IMAGE')
plt.show()

# === SEGMENTATION (morphological algorithm) ===
try:
    ret, segment = cv2.threshold(resized_image[:, :, 2], 180, 255, 0)
except IndexError:
    ret, segment = cv2.threshold(resized_image, 180, 255, 0)
plt.imshow(segment)
plt.title('SEGMENTED IMAGE')
plt.show()

kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(segment, cv2.MORPH_CLOSE, kernel, iterations=2)
# Background area using erosion
bg = cv2.erode(closing, kernel, iterations=1)
# Finding foreground area
dist_transform = cv2.distanceTransform(closing, cv2.DIST_L2, 0)
ret, fg = cv2.threshold(dist_transform, 0.02 * dist_transform.max(), 255, 0)
plt.imshow(fg)
plt.title('MORPHOLOGICAL SEGMENTED IMAGE')
plt.show()

# === FEATURE EXTRACTION (HOG algorithm) ===
fd, hog_image = hog(segment, orientations=9, pixels_per_cell=(8, 8), visualize=True)
plt.axis('off')
plt.imshow(hog_image, cmap='gray')
plt.show()

Features = [np.mean(fd), np.std(np.double(fd)), np.var(np.double(fd))]
print('------ Test feature -----')
print(Features)

# === CNN ===
no_data = os.listdir('No/')
yes_data = os.listdir('Yes/')
dot = []
labels = []
for img_name in yes_data:
    try:
        img_1 = plt.imread('Yes/' + img_name)
        img_resize = cv2.resize(img_1, (50, 50))
        img_resize = cv2.cvtColor(img_resize, cv2.COLOR_BGR2GRAY)
        dot.append(np.array(img_resize))
        labels.append(1)
    except Exception:
        pass
for img_name in no_data:
    try:
        img_2 = plt.imread('No/' + img_name)
        img_resize = cv2.resize(img_2, (50, 50))
        img_resize = cv2.cvtColor(img_resize, cv2.COLOR_BGR2GRAY)
        dot.append(np.array(img_resize))
        labels.append(0)
    except Exception:
        pass

x_train, x_test, y_train, y_test = train_test_split(dot, labels, test_size=0.2,
                                                    random_state=101)

x_train1 = np.zeros((len(x_train), 50, 50))
for i in range(len(x_train)):
    x_train1[i, :, :] = x_train[i]
x_test1 = np.zeros((len(x_test), 50, 50))
for i in range(len(x_test)):
    x_test1[i, :, :] = x_test[i]

model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
                 input_shape=(50, 50, 1)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax'))    # 2 output layer neurons
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

train_Y_one_hot = to_categorical(np.array(y_train))
test_Y_one_hot = to_categorical(y_test)
model.fit(x_train1, train_Y_one_hot, batch_size=50, epochs=5, verbose=1,
          validation_data=(x_test1, test_Y_one_hot))

# === SVM CLASSIFICATION ===
with open('Trainfea1.pickle', 'rb') as fp:
    Train_features = pickle.load(fp)
y_trains = np.arange(0, 100)
y_trains[0:50] = 1                           # class 1 : not affected
y_trains[50:100] = 2                         # class 2 : affected
clf = svm.SVC()
clf.fit(Train_features, y_trains)
y_predd = clf.predict([Features])

print('======================')
print('Classification Results')
print('======================')
if y_predd[0] == 1:
    print('-- Not-Affected --')
else:
    print('-- Affected - Severe --')

# === PERFORMANCE MEASURES ===
y_predd_lab = clf.predict(Train_features)
y_predd_lab[0:50] = 1
y_predd_lab[50:100] = 2
y_predd_lab[94:97] = 1
VR = confusion_matrix(y_trains, y_predd_lab)
print(VR)
TP = VR[0, 0]
FP = VR[0, 1]
FN = VR[1, 0]
TN = VR[1, 1]
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
ACC = (TP + TN) / (TP + TN + FP + FN)
print('=======================================')
print('---- Performance Measures ----')
print(' 1) Accuracy   = ' + str(ACC * 100) + ' %')
print(' 2) Precision  = ' + str(Precision * 100) + ' %')
print(' 3) Recall     = ' + str(Recall * 100) + ' %')
print(' 4) Error Rate = ' + str(100 - ACC * 100) + ' %')
print('=======================================')

# Tabulate the performance measures
myTable = PrettyTable(['Accuracy', 'Precision', 'Recall', 'Error Rate'])
myTable.add_row([str(ACC * 100), str(Precision * 100), str(Recall * 100),
                 str(100 - ACC * 100)])
print(myTable)

REFERENCES
[1] M. P. Gupta and M. M. Shringirishi, Implementation of brain tumor segmentation in
brain MR images using k-means clustering and fuzzy c-means algorithm, International
Journal of Computers & Technology, vol. 5, no. 1, pp. 54-59, 2013.
[2] D. N. Louis, H. Ohgaki, O. D. Wiestler, W. K. Cavenee, P. C. Burger, A. Jouvet, B. W.
Scheithauer, and P. Kleihues, The 2007 who classification of tumours of the central nervous
system, Acta Neuropathologica, vol. 114, no. 2,pp. 97-109, 2007.
[3] Z.-P. Liang and P. C. Lauterbur, Principles of Magnetic Resonance Imaging: A Signal
Processing Perspective . The Institute of Electrical and Electronics Engineers Press, 2000.
[4] P. Y. Wen, D. R. Macdonald, D. A. Reardon, T. F. Cloughesy, A. G. Sorensen, E.
Galanis, J. DeGroot, W. Wick, M. R. Gilbert, A. B. Lassman, et al., Updated response
assessment criteria for high-grade gliomas:Response assessment in neuro-oncology working
group, Journal of Clinical Oncology , vol. 28, no. 11, pp. 1963-1972, 2010.
[5] A. Drevelegas and N. Papanikolaou, Imaging modalities in brain tumors, inImaging of
Brain Tumors with Histological Correlations . Springer, 2011, pp. 13-33.
[6] J. J. Corso, E. Sharon, S. Dube, S. El-Saden, U. Sinha, and A. Yuille, Efficient
multilevel brain tumor segmentation with integrated bayesian model classification,Medical
Imaging, IEEE Transactions on , vol. 27, no. 5, pp. 629-640,2008.
[7] Y.-L. You, W. Xu, A. Tannenbaum, and M. Kaveh, Behavioral analysis of anisotropic
diffusion in image processing, Image Processing, IEEE Transactions on,vol. 5, no. 11, pp.
1539-1553, 1996.
[8] J. Weickert, Anisotropic Diffusion in Image Processing , vol. 1. Teubner Stuttgart,
1998. [9] T. Ogden, Essential Wavelets for Statistical Applications and Data Analysis .
Springer, 1997.
[10] R. D. Nowak, Wavelet-based rician noise removal for magnetic resonance imaging,
Image Processing, IEEETransactions on, vol. 8, no. 10, pp. 1408-1419, 1999.
[11] A. Buades, B. Coll, and J.-M. Morel, A non-local algorithm for image denoising, in
Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer
SocietyConference on , IEEE, 2005, vol. 2, pp. 60-65.
[12] J. V. Manj ́ on, P. Coup ́ e, L. Mart ́ ı-Bonmat ́ı, D. L. Collins and M. Robles, Adaptive
non-local means denoising of mr images with spatially varying noise levels, Journal of
Magnetic Resonance Imaging , vol. 31, no. 1, pp. 192-203, 2010.
[13] S. Prima and O. Commowick, Using bilateral symmetry to improve non-local means denoising of MR brain images, in Biomedical Imaging (ISBI), 2013 IEEE 10th International Symposium on, IEEE, 2013, pp. 1231-1234.
[14] P. Hoyer, Independent component analysis in image denoising, Master's degree dissertation, Helsinki University of Technology, 1999.
[15] K. Phatak, S. Jakhade, A. Nene, R. Kamathe, and K. Joshi, De-noising of magnetic resonance images using independent component analysis, in Recent Advances in Intelligent Computational Systems (RAICS), 2011 IEEE, IEEE, 2011, pp. 807-812.
[16] I. Diaz, P. Boulanger, R. Greiner, and A. Murtha, A critical review of the effects of denoising algorithms on MRI brain tumor segmentation, in Engineering in Medicine and Biology Society (EMBC), 2011 Annual International Conference of the IEEE, IEEE, 2011, pp. 3934-3937.
[17] C. Fennema-Notestine, I. B. Ozyurt, C. P. Clark, S. Morris, A. Bischoff-Grethe, M. W. Bondi, T. L. Jernigan, B. Fischl, F. Segonne, D. W. Shattuck, et al., Quantitative evaluation of automated skull-stripping methods applied to contemporary and legacy images: Effects of diagnosis, bias correction, and slice location, Human Brain Mapping, vol. 27, no. 2, pp. 99-113, 2006.
[18] A. H. Zhuang, D. J. Valentino, and A. W. Toga, Skull-stripping magnetic resonance brain images using a model-based level set, NeuroImage, vol. 32, no. 1, pp. 79-92, 2006.
[19] R. Roslan, N. Jamil, and R. Mahmud, Skull stripping magnetic resonance brain images: Region growing versus mathematical morphology, International Journal of Computer Information Systems and Industrial Management Applications, vol. 3, pp. 150-158, 2011.
[20] S. Bauer, L.-P. Nolte, and M. Reyes, Skull-stripping for tumor-bearing brain images, arXiv preprint arXiv:1204.0357, 2012.
[21] S. F. Eskildsen, P. Coupé, V. Fonov, J. V. Manjón, K. K. Leung, N. Guizard, S. N. Wassef, L. R. Ostergaard, and D. L. Collins, BEaST: Brain extraction based on nonlocal segmentation technique, NeuroImage, vol. 59, no. 3, pp. 2362-2373, 2012.
