
Proceedings of the Seventh International Conference on Electronics, Communication and Aerospace Technology (ICECA 2023)

IEEE Xplore Part Number : CFP23J88-ART ; ISBN : 979-8-3503-4060-0

Enhancing Biomedical Image Interpretation through a Hybrid Machine Learning Algorithm

R Rajkumar, Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, Tamilnadu, India (rajkumarramasamy22@gmail.com)
Lakshmi Namratha Vempaty, Lead Data Scientist, Avis Budget Group - Decision Technology and Business Intelligence, New York University, NY, USA (namvempaty1330@gmail.com)
R Thiyagarajan, Department of Biomedical Engineering, Shreenivasa Engineering College, Bommidi, Dharmapuri (thiyagu.softece88@gmail.com)
Celine Kavida A, Department of Physics, Vel Tech Multi Tech Dr Rangarajan Dr. Sakunthala Engineering College, Chennai, India (celinearuldoss@gmail.com)
Harshal Hemane, Bharati Vidyapeeth (Deemed to be University) College of Engineering, Pune, India (hshemane@bvucoep.edu.in)
Vijayakumar S, Department of Electronics and Communication Engineering, Paavai Engineering College, Namakkal, Tamil Nadu, India (svijiece@gmail.com)

979-8-3503-4060-0/23/$31.00 ©2023 IEEE | DOI: 10.1109/ICECA58529.2023.10395806

Abstract— In contemporary medicine, biomedical image interpretation is essential for disease diagnosis and the selection of appropriate treatments. However, manually scrutinizing these images is time-consuming and may lead to erroneous conclusions. The proposed work offers a novel approach to resolving these issues by utilizing a hybrid machine-learning technique to improve the interpretation of biomedical images. The proposed system evaluates medical images using machine learning techniques, including convolutional neural networks and decision trees. The algorithm aims to integrate the most beneficial aspects of multiple image analysis methods to enhance their overall performance. Using a carefully selected dataset, we demonstrate the algorithm's precision and robustness compared to other approaches. The findings imply that the algorithm could substantially alter the interpretation of biomedical images in clinical and academic settings, paving the way for improved diagnostic precision and further study of the human body. This hybrid method is a promising step toward automating image processing and opens new research and implementation opportunities in healthcare technology.

Keywords— Biomedical Image Interpretation, Hybrid Machine Learning, Machine Learning, Medical Image Classification, Convolutional Neural Network.

I. INTRODUCTION

Biomedical imaging has rapidly emerged as an indispensable instrument in contemporary medicine, used in various diagnostic contexts and for continuous patient monitoring. Imaging technologies have revolutionized clinical practice and enhanced patient care by accurately and rapidly depicting anatomical features and disease modifications [1]. However, accurate interpretation of these images remains a challenging endeavor that frequently requires specialized knowledge and considerable effort. As the quantity and complexity of biomedical images increase, so does the need for novel approaches to accelerate and enhance their interpretation [2]. Biomedical image interpretation is the process by which diagnostically pertinent information is extracted from various imaging modalities, such as X-ray, CT, MRI, and microscopy [3]. These images can be used to infer the progression of diseases, the efficacy of treatments, and the overall health of patients. The accuracy of manual interpretation may be affected by fatigue and cognitive biases, among other human factors. The expansion of medical imaging data has also posed challenges to the efficacy and precision of interpretation [5].

Recent advancements in artificial intelligence (AI) and machine learning have created promising new avenues for improving the interpretation of biomedical images. The ability of machine learning algorithms to extract intricate patterns and relationships from vast datasets enables them to make accurate predictions, and algorithms may eventually identify subtle distinctions in medical images that are diagnostic of specific diseases [6]. Applying machine learning to the processing of biomedical images has yielded some success, but it is hardly a panacea [7]. Convolutional neural networks (CNNs) perform well in image recognition, while decision trees help to model complex decision boundaries [8]. Improving the interpretation of biomedical images calls for a technique that combines the capabilities of multiple machine-learning algorithms. To address this deficiency, the present study proposes a novel hybrid machine-learning strategy that combines the advantages of neural network classifiers and decision trees for an enhanced understanding of biomedical images [9].

The proposed work's primary objective is to develop a hybrid machine learning system that combines the strengths of convolutional neural networks (CNNs) and decision trees to enhance the comprehension of biomedical images. The method seeks to improve the accuracy and efficiency of image analysis in order to support more effective clinical diagnosis and therapy selection. Exploiting the synergy between deep learning and decision-based approaches, this research aims to develop a flexible and robust system for interpreting a broad spectrum of biomedical images [10]. While this article recognizes the enormous potential of a hybrid machine-learning system for analyzing medical images, it also identifies several limitations.


The evaluation relies on limited test datasets and imaging modalities, which may restrict its general applicability. It is also important to note that a lack of computing capacity limits the study's ability to investigate all feasible hybrid algorithms. Despite these caveats, the proposed hybrid approach may still contribute to the field of biomedical image interpretation. This research aimed to establish a method by which computers could perform the laborious and error-prone task of analyzing medical images. The proposed work develops a hybrid approach employing convolutional neural networks (CNNs) to enhance the precision, efficiency, and clinical relevance of biomedical image interpretation. The following sections describe the research's methodology, findings, and conclusions, elucidating how a hybrid machine learning approach could considerably impact the field of medical image interpretation.

II. LITERATURE REVIEW

W. Huang et al. [11] observed that, due to cost or patient restrictions, missing image modalities are a significant obstacle for medical imaging research based on deep learning. Although recent advances in deep learning algorithms have made it possible to synthesize diverse modalities, no one had yet tackled the problem of synthesizing arterial spin labeling (ASL) images, which are essential for diagnosing dementia. Their work introduces a deep discriminant learning model with improved ResNet substructures for generating ASL images from structural MRI. Extensive experiments demonstrate that the model can generate ASL images consistent with real scans and is remarkably effective at minimizing partial volume effects through regional and voxel-level adjustments. On a multi-modal MRI dataset of 355 individuals with dementia, the synthesized ASL images increase the likelihood of detecting dementia. As the first study to synthesize ASL images successfully, it may enhance dementia diagnosis and contribute to broader adoption of deep learning in medical imaging.

E. Ferrante et al. [12] addressed deformable registration in biomedical image computing. Similarity criteria, deformation models, and smoothness constraints are typical components of conventional image alignment methods, and it has been demonstrated that incorporating semantic data (anatomical segmentation maps) into registration improves accuracy. Their work used anatomical segmentations to present a novel technique for augmenting standard metrics through domain-specific aggregations with limited supervision, employing machine learning with a latent-structured support-vector formulation. Incorporating the learned matching criteria into a metric-free optimization framework based on graphical models yielded a multi-metric approach with flexible similarity metrics grounded in anatomical features. Extensive testing on a variety of CT and MRI datasets revealed that the learned multi-metric registration outperformed single-metric techniques that relied solely on medically accepted similarity metrics.

S. Hussein et al. [13] noted that CAD technologies expedite and improve risk categorization by improving tumor characterization in radiological imaging, which is beneficial for non-invasively diagnosing malignancy and determining how to treat it. In their work, the authors combine supervised and unsupervised machine learning strategies to improve the classification of tumors. Deep learning, specifically a 3D convolutional neural network with transfer learning, produces significant enhancements on supervised learning tasks, and graph-regularized sparse multitask learning incorporates task-specific information from radiologists' interpretations. To address the dearth of labeled data in medical imaging, an unsupervised method based on learning from label proportions, adapted from computer vision, is proposed in the form of a proportion-support vector machine. When the techniques are applied to CT and MRI images of the lung and pancreas, excellent sensitivity and specificity are found.

Y. Wang et al. [14] noted that Automated Breast Ultrasound (ABUS), with its operator-independent 3D imaging, has considerable potential for breast cancer detection. Nonetheless, ABUS image analysis may be time-consuming and subject to human error. Using a novel 3D convolutional network, their work automates cancer diagnosis in ABUS to reduce review times without sacrificing accuracy. Cancer vs. non-cancer classification incorporates a threshold loss and employs dense deep supervision to increase sensitivity by leveraging multi-layer features. Using a dataset of 219 patient ABUS volumes, the authors demonstrated the efficacy of the network, achieving high sensitivity at 0.84 false positives per volume. Due to its high sensitivity and low number of false positives, the proposed network is efficient at detecting breast cancer during ABUS-based breast inspection.

X. Wang et al. [15] observed that deep learning is less effective for optical coherence tomography (OCT) image classification due to the high cost and difficulty of accumulating fine-grained expert annotations. Their work establishes a framework that uses volume-level annotation of OCT images to classify macula-related diseases. A CNN instance-level classifier is trained using uncertainty-driven deep multiple instance learning, which enhances instance identification and deep embedding. Using both the specifics of individual instances and the bag's overall structure, a recurrent neural network (RNN) generates bag-level predictions.

III. PROPOSED WORK

The proposed technique makes extensive use of CNNs to enhance the analysis of biomedical images. Convolutional neural networks, which excel at image recognition, are essential to this method. A CNN is initially trained on a curated set of biomedical images to detect a wide range of health issues autonomously. Utilizing convolutional, pooling, and fully connected layers, the network analyzes the preprocessed images hierarchically, layer by layer. Through this training, the CNN learns to recognize subtle visual indicators that are indicative of certain diseases. The trained CNN then serves as a potent feature extractor, capable of identifying key features in unprocessed image data. A decision-making stage employs decision trees to process these characteristics. The decision trees use the extracted features to accurately forecast whether or not an image contains a medical condition.
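The two-stage pipeline just described, a CNN feature extractor feeding a decision-tree classifier, can be sketched in miniature. This is an illustrative stand-in, not the authors' implementation: the patch-mean "extractor" and the one-node tree are assumptions replacing the trained CNN and full decision tree, and the synthetic bright-lesion data is invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, patch=8):
    # Stand-in for the trained CNN: summarise each 8x8 patch by its
    # mean intensity (the real system would use learned convolutional
    # feature maps instead).
    h, w = image.shape
    return np.array([image[i:i + patch, j:j + patch].mean()
                     for i in range(0, h, patch)
                     for j in range(0, w, patch)])

def fit_stump(X, y):
    # A one-node "decision tree": choose the feature whose mean-valued
    # threshold best separates the two classes on the training set.
    best = None
    for f in range(X.shape[1]):
        thr = X[:, f].mean()
        err = np.mean((X[:, f] > thr).astype(int) != y)
        if best is None or err < best[2]:
            best = (f, thr, err)
    return best[0], best[1]

def predict(stump, X):
    f, thr = stump
    return (X[:, f] > thr).astype(int)

# Synthetic stand-in data: "abnormal" images carry a bright region.
normal = [rng.normal(0.2, 0.05, (32, 32)) for _ in range(20)]
abnormal = [rng.normal(0.2, 0.05, (32, 32)) for _ in range(20)]
for img in abnormal:
    img[8:16, 8:16] += 0.6            # simulated lesion

X = np.array([extract_features(im) for im in normal + abnormal])
y = np.array([0] * 20 + [1] * 20)
stump = fit_stump(X, y)               # stage 2: decision rule
accuracy = np.mean(predict(stump, X) == y)
```

The shape of the pipeline, images compressed to feature vectors and a tree-style rule applied to those vectors, is what carries over to the full system.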

979-8-3503-4060-0/23/$31.00 ©2023 IEEE 1127


Authorized licensed use limited to: INDIAN INSTITUTE OF TECHNOLOGY MADRAS. Downloaded on February 12,2024 at 07:19:02 UTC from IEEE Xplore. Restrictions apply.
Proceedings of the Seventh International Conference on Electronics, Communication and Aerospace Technology (ICECA 2023)
IEEE Xplore Part Number : CFP23J88-ART ; ISBN : 979-8-3503-4060-0

Fig 1 depicts the workflow diagram.

Fig. 1. Workflow of the proposed model

i) Data Collection & Preprocessing

The proposed method is supported by an extensive database of appropriately annotated biomedical images. This imaging data repository, including X-ray and magnetic resonance imaging (MRI) scans, was compiled from reputable medical institutions and open data sources. The wide applicability of the collection is ensured by the inclusion of images depicting a broad range of medical conditions and body regions. The collection team collaborated with medical professionals to ensure the validity and utility of the dataset, and stringent privacy protections were observed out of concern for the health and safety of the patients. All conditions, ages, and sexes were considered in determining eligibility, and domain specialists annotated the dataset to highlight the most significant characteristics and diseases. To standardize and optimize the raw images used to train the algorithm, several preprocessing steps are required. Images are scaled and normalized to ensure consistency in size and intensity range throughout the dataset. Noise reduction techniques improve image quality, while augmentation techniques increase dataset diversity; the robustness and utility of a dataset can be enhanced by rotating, flipping, and scaling the data. Fig 2 depicts the X-ray and MRI images from the dataset.

Fig. 2. X-ray and MRI scan sample dataset image
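The preprocessing and augmentation steps described above can be sketched as follows. This is a minimal illustration under stated assumptions: nearest-neighbour resizing and min-max normalization stand in for whatever resampler and intensity correction the authors actually used, and the 32x32 target size is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def preprocess(image, size=32):
    # Resize by nearest-neighbour sampling, then normalise intensities
    # to [0, 1]; a production pipeline would use a proper resampler.
    h, w = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    img = image[np.ix_(rows, cols)].astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def augment(image):
    # Flips and 90-degree rotations, matching the rotate/flip
    # augmentations the text describes.
    return [image, np.flipud(image), np.fliplr(image),
            np.rot90(image, 1), np.rot90(image, 2), np.rot90(image, 3)]

raw = rng.normal(loc=100.0, scale=20.0, size=(48, 64))  # mock scan
clean = preprocess(raw)
variants = augment(clean)
```

Each raw scan thus yields several standardized variants, which is how augmentation increases dataset diversity without new acquisitions.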
The images then undergo feature extraction to improve the algorithm's performance. This step uses procedures specific to the selected machine learning technique, such as convolutional neural networks (CNNs). In the case of CNNs, the images are divided into smaller sections known as "patches," and convolutional filters are used to collect localized features. The entire feature vector for an image is constructed by concatenating all of these features. The final dataset is commonly divided into three sections: training, validation, and testing. The training set is where the algorithm is learned, the validation set is where hyperparameters are tuned, and the testing set is where the algorithm's generalization ability is evaluated. Cross-validation techniques can be used to further ensure the thoroughness of the analysis.
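The three-way split and the cross-validation mentioned above can be sketched directly. The 70/15/15 proportions and the 100-sample, 5-fold sizes are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# 70/15/15 split of 100 samples into training, validation and test
# index sets, after a random shuffle.
idx = rng.permutation(100)
train_idx, val_idx, test_idx = idx[:70], idx[70:85], idx[85:]

def k_fold(n, k):
    # Yield (train_indices, held_out_indices) for each of k folds, the
    # basic mechanism behind cross-validation.
    folds = np.array_split(np.arange(n), k)
    for i in range(k):
        held_out = folds[i]
        rest = np.concatenate([folds[j] for j in range(k) if j != i])
        yield rest, held_out

fold_sizes = [len(held) for _, held in k_fold(100, 5)]
```

Shuffling before splitting keeps each subset representative of the whole, and the fold generator lets every sample serve once as held-out data.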
ii) Feature Extraction

At this stage, the classification and analysis of unprocessed biomedical images are the primary concerns. The curated dataset from various imaging modalities is subjected to multiple procedures to extract pertinent features. Convolutional neural networks (CNNs) divide images into smaller sections known as "patches"; these regions capture targeted visual data without overloading the model with irrelevant information. The convolutional layers of the CNN are then applied to the patches: these layers convolve over the patches with trainable filters to extract characteristics such as boundaries, textures, and shapes. This process teaches the CNN to recognize subtle yet revealing patterns that can aid in the diagnosis of a wide variety of diseases. Lower-level information from earlier convolutional layers is integrated into a hierarchical representation to retrieve higher-level characteristics; through this hierarchy, the model can capture both specific local factors and global visual context. Pooling layers reduce the dimensionality of the gathered features even further, maximizing the use of computational resources.

Compressing the convolutional feature maps produced by the layers generates a concise and informative feature vector for each region. This vector represents the image region's most prominent characteristics, effectively converting the original image data into a format that downstream machine learning components can interpret. The quality of this feature extraction phase is directly proportional to the accuracy of the algorithm's interpretations. Once the unprocessed images have been compressed into meaningful feature vectors, the method can effectively reduce noise and redundant information by concentrating on essential characteristics. The derived features comprehensively depict the images, including both the broad outlines and the subtler details necessary for complete comprehension. Feature extraction is crucial because it reduces the dimensionality of the data while preserving the information content of the original biomedical images. The recovered features carry the visual signals required by subsequent machine learning components, and as the study progresses through the algorithm's training, validation, and evaluation phases, they continue to play a crucial role in determining the algorithm's ability to transform biomedical image interpretation.
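The convolve-activate-pool-concatenate chain described in this subsection can be sketched with hand-written filters. The two 2x2 edge filters are assumptions standing in for trained kernels, and the 8x8 test image is synthetic; a real CNN learns its filters and stacks many such layers.

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2-D correlation, as deep-learning "convolution"
    # layers actually compute it.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, k=2):
    # Non-overlapping k x k max-pooling: the dimensionality-reduction
    # step the text attributes to pooling layers.
    h, w = (fmap.shape[0] // k) * k, (fmap.shape[1] // k) * k
    return fmap[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

# Two hand-written filters stand in for trained kernels: one responds
# to dark-to-bright vertical boundaries, one to horizontal ones.
filters = [np.array([[-1.0, 1.0], [-1.0, 1.0]]),
           np.array([[-1.0, -1.0], [1.0, 1.0]])]

image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # a vertical edge at column 4

# Convolve, apply ReLU, pool, then concatenate every pooled response
# into the single feature vector handed to the decision-tree stage.
maps = [np.maximum(conv2d(image, f), 0.0) for f in filters]
pooled = [max_pool(m) for m in maps]
feature_vector = np.concatenate([p.ravel() for p in pooled])
```

Only the vertical-edge filter fires on this image, so the resulting feature vector localizes and identifies the boundary, which is exactly the "boundaries, textures, and shapes" behavior the text describes.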
iii) Machine Learning Technique

The proposed technique for improving the comprehension of biomedical images depends heavily on convolutional neural networks (CNNs). Drawing on their deep learning capabilities, CNNs provide a solid basis for identifying significant features and patterns in the preprocessed biomedical images. The CNN's design has been carefully constructed to accommodate the nuances of life science image interpretation. A non-linear activation function, such as ReLU (Rectified Linear Unit), follows the network's convolutional layers, which are responsible for encoding the visual content of images. The subsequent max-pooling layers, which reduce the spatial dimensions, improve the network's capacity to detect scale-invariant characteristics. Fig 3 depicts the fusion strategy diagram.

Fig. 3. Fusion Strategy diagram
Typically, the architecture consists of layers adapted to the unique challenges of biomedical imaging. For instance, attention mechanisms may help the network concentrate on what is most important in an image. Batch normalization layers stabilize the training process by normalizing activations and reducing the occurrence of vanishing gradients. Due to the high complexity and vast diversity of biomedical images, transfer learning is a valuable technique: pre-trained CNN models, built on large datasets such as ImageNet, can be fine-tuned for medical interpretation. By exploiting the network's inherent pattern recognition capabilities, this strategy can enhance performance in a biomedical setting.
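The batch-normalization transform mentioned above is simple enough to show directly. This is the standard zero-mean, unit-variance normalization with learnable scale and shift; the toy activation batch is invented for illustration, and running statistics for inference are omitted.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalise each activation channel to zero mean and unit variance
    # across the batch, then rescale by learnable gamma/beta; this is
    # the stabilising transform the text attributes to batch norm.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# A toy batch of three activation vectors with very different scales.
acts = np.array([[1.0, 10.0],
                 [3.0, 30.0],
                 [5.0, 50.0]])
normed = batch_norm(acts)
```

After normalization both channels live on the same scale, which is why downstream layers see better-conditioned gradients.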
Multitask learning has the potential to improve network efficacy in specific scenarios; cross-training the CNN to execute diverse tasks such as segmentation and anomaly detection could enhance its data-analytical capabilities. The network is implemented by instantiating the CNN in a deep learning framework such as TensorFlow or PyTorch. Three distinct datasets are then generated: training, validation, and test sets. Backpropagation is used to fine-tune the network's parameters during training by minimizing a loss function that measures how far the network's predictions deviate from actual outcomes. Dropout and other regularization techniques are frequently used to prevent overfitting.

Refinement and Modification: Setting hyperparameters, such as the learning rate, batch size, and number of layers, is required during the implementation process. These characteristics have a substantial effect on the convergence and generalization capabilities of the network. Validation data is utilized as a checkpoint during model training to prevent, or at least mitigate, overfitting and other issues that may arise. The CNN can provide human-comprehensible outputs such as class probabilities and segmentation maps; for instance, saliency maps or Grad-CAM could be utilized to highlight the features that contributed to the model's classification decision.
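The training loop with a validation checkpoint described above can be sketched with a deliberately small stand-in model: logistic regression trained by gradient descent instead of a CNN trained by backpropagation (same loss-minimization pattern, far fewer parameters). The data, learning rate, and epoch count are invented, and dropout is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic feature vectors and noisy binary labels.
X = rng.normal(size=(120, 5))
w_true = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ w_true + 0.3 * rng.normal(size=120) > 0).astype(float)
X_train, y_train = X[:80], y[:80]
X_val, y_val = X[80:], y[80:]

w = np.zeros(5)
lr = 0.5                          # learning rate: a key hyperparameter
best_w, best_val = w.copy(), np.inf
for epoch in range(200):
    p = sigmoid(X_train @ w)
    grad = X_train.T @ (p - y_train) / len(y_train)   # loss gradient
    w -= lr * grad                                    # descent step
    p_val = sigmoid(X_val @ w)
    val_loss = -np.mean(y_val * np.log(p_val + 1e-9)
                        + (1 - y_val) * np.log(1 - p_val + 1e-9))
    if val_loss < best_val:       # validation checkpoint: keep the
        best_val, best_w = val_loss, w.copy()  # best-generalizing w

train_acc = np.mean((sigmoid(X_train @ best_w) > 0.5) == y_train)
```

Keeping the parameters that score best on validation, rather than the final ones, is the overfitting mitigation the text refers to; a framework like PyTorch applies the same idea with model state dictionaries.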
iv) Fusion Strategy

The proposed method for enhancing the comprehension of biomedical images is founded on both convolutional neural networks (CNNs) and decision trees, with an algorithmic procedure combining the most beneficial aspects of the two approaches. The complementary relationship begins with the superior learning ability of CNNs: these networks are trained to recognize vital but subtle visual signals by exposing them to a vast collection of biomedical images, and the feature vectors they generate precisely represent the image content. Decision trees then provide a rational framework for interpreting the results. A decision tree develops its decision rules from the data it receives, and by following a hierarchical path through the tree-like structure of decision nodes, the model can reach conclusions.

The fusion method combines the outcomes of both CNNs and decision trees. The CNN-extracted feature vectors are fed into a decision tree, and the accumulated predictions are then used to generate conclusions. Weighted averaging achieves this by lending greater weight to contributions from the more reliable component. A variable-weighting scheme could further improve the fusion procedure: the algorithm incorporates the accuracy of the CNN and decision tree forecasts into its decision, giving greater weight to the component that excels under specific conditions. Implementing this method requires incorporating the trained CNNs and decision trees (DTs) into the algorithm; providing the decision trees with feature vectors makes it possible to reconcile the outputs of multiple CNNs. Validation data can be utilized to fine-tune the weights or parameters regulating the fusion procedure for optimal results. Algorithm fusion, which combines the insight of deep learning with the clarity of decision trees, is a successful strategy: it improves the precision, consistency, and readability of biomedical imaging. The proposed approach holds great promise for augmenting medical image processing and interpretation because it utilizes each method's unique advantages.
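The accuracy-weighted averaging just described reduces to a few lines. All numbers here are illustrative assumptions (the probabilities and validation accuracies are not the paper's), but the mechanism, weights proportional to each component's validation accuracy, matches the text.

```python
import numpy as np

def fuse(p_cnn, p_tree, acc_cnn, acc_tree):
    # Weighted average of the two components' class probabilities,
    # with weights proportional to each component's validation
    # accuracy, so the more reliable branch contributes more.
    w_cnn = acc_cnn / (acc_cnn + acc_tree)
    w_tree = acc_tree / (acc_cnn + acc_tree)
    return w_cnn * p_cnn + w_tree * p_tree

p_cnn = np.array([0.9, 0.2, 0.6])    # CNN-branch disease probability
p_tree = np.array([0.7, 0.4, 0.3])   # decision-tree-branch probability
fused = fuse(p_cnn, p_tree, acc_cnn=0.95, acc_tree=0.85)
labels = (fused > 0.5).astype(int)
```

Note the third case: the CNN alone would call it positive (0.6), but the tree's dissent pulls the fused probability below the threshold, which is the disagreement-resolving behavior fusion is meant to provide.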

979-8-3503-4060-0/23/$31.00 ©2023 IEEE 1129


Authorized licensed use limited to: INDIAN INSTITUTE OF TECHNOLOGY MADRAS. Downloaded on February 12,2024 at 07:19:02 UTC from IEEE Xplore. Restrictions apply.
Proceedings of the Seventh International Conference on Electronics, Communication and Aerospace Technology (ICECA 2023)
IEEE Xplore Part Number : CFP23J88-ART ; ISBN : 979-8-3503-4060-0

IV. RESULTS

The results show that the proposed algorithm is superior to the baseline technique. Compared to the conventional technique, the accuracy of the proposed method increased to 97.5%. Improvements in precision, recall, and F1 score demonstrate the algorithm's ability to generate plausible interpretations. The improved accuracy of the algorithm is attributable to the hybrid fusion approach, which combines the perceptual abilities of CNNs with the logical interpretability of decision trees: CNNs can recognize complex visual patterns, and decision trees provide explicit decision criteria. Combining these two approaches significantly enhances the algorithm's ability to interpret a broad class of biomedical images accurately. Table 1 and Fig 4 compare the metrics in tabular and graphical form, and Fig 5 depicts the ROC curve.

Table 1 Comparison metrics of proposed method with baseline method

Metric      Proposed Algorithm   Baseline Method
Accuracy    97.5%                85.2%
Precision   94                   87
Recall      91                   82
F1 Score    92                   84
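The F1 rows of Table 1 follow from the reported precision and recall via the harmonic mean, a quick consistency check:

```python
# F1 is the harmonic mean of precision and recall, so the rounded F1
# values in Table 1 follow from the reported precision/recall pairs.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

proposed_f1 = f1(94, 91)   # about 92.5, reported as 92
baseline_f1 = f1(87, 82)   # about 84.4, reported as 84
```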
Fig. 4. Performance metrics comparison of proposed model

Fig. 5. ROC curve for true positive rate against false positive rate

In addition to enhanced accuracy, the method also demonstrated greater robustness across a broad spectrum of image modalities and disorders. This robustness results from the exhaustive feature extraction procedure and the deliberate combination of machine learning components. The algorithm's efficiency was also notable, yielding significantly quicker inference times than the state-of-the-art method. With an appropriate combination of convolutional neural networks (CNNs) and decision trees (DTs), interpretation is accelerated without sacrificing precision. In conclusion, the results demonstrate that the hybrid machine-learning strategy improves the interpretation of biomedical images. The algorithm's superior accuracy, robustness, and speed make it a potentially game-changing resource for medical professionals in a variety of fields, with the potential to transform medical image interpretation.

V. CONCLUSION

The proposed work concludes by introducing a novel hybrid machine-learning strategy for improving the comprehension of biomedical images. Combining the perceptual power of convolutional neural networks (CNNs) with the logical transparency of decision trees creates a synergy that significantly enhances the accuracy, robustness, and efficiency of biomedical image processing. The results indicate that the approach outperforms current standards, suggesting that it has the potential to transform the interpretation of medical images across modalities and specialties. Because the algorithm can distinguish fine-grained properties in images and produce plausible evaluations, precise diagnostic and therapy recommendations become feasible. An algorithm's credibility and clinical acceptability depend on the interpretability of its results, which is made possible by the open decision rules of decision trees. With its efficient fusion method and promising results, this hybrid strategy has the potential to substantially alter the landscape of biomedical imaging, providing clinicians with a potent new instrument to improve diagnostic accuracy and overall patient care. Despite the promising results, more effort is required to fully realize the system's potential for enhancing the processing and interpretation of medical images.
REFERENCES

[1] W. Huang, "A novel disease severity prediction scheme via big pair-wise ranking and learning techniques using image-based personal clinical data," Signal Process., vol. 124, pp. 233-245, Jul. 2016.
[2] W. Huang, S. Zeng, M. Wan, and G. Chen, "Medical media analytics via ranking and big learning: A multi-modality image-based disease severity prediction study," Neurocomputing, vol. 204, pp. 125-134, Sep. 2016.
[3] L. Hernandez-Garcia, A. Lahiri and J. Schollenberger, "Recent progress in ASL," NeuroImage, vol. 187, pp. 3-16, Feb. 2019.
[4] Y. Huang, L. Shao, and A. F. Frangi, "Cross-modality image synthesis via weakly coupled and geometry co-regularized joint dictionary learning," IEEE Trans. Med. Imag., vol. 37, no. 3, pp. 815-827, Mar. 2018.
[5] A. Chartsias, T. Joyce, M. V. Giuffrida, and S. A. Tsaftaris, "Multi-modal MR synthesis via modality-invariant latent representation," IEEE Trans. Med. Imag., vol. 37, no. 3, pp. 803-814, Mar. 2018.
[6] N. Duchateau, M. Sermesant, H. Delingette, and N. Ayache, "Model-based generation of large databases of cardiac images: Synthesis of pathological cine MR sequences from real healthy cases," IEEE Trans. Med. Imag., vol. 37, no. 3, pp. 755-766, Mar. 2018.
[7] I. Polycarpou, G. Soultanidis and C. Tsoumpas, "Synthesis of realistic simultaneous positron emission tomography and magnetic resonance imaging data," IEEE Trans. Med. Imag., vol. 37, no. 3, pp. 703-711, Mar. 2018.
[8] R. Saouli, M. Akil, M. Bennaceur and R. Kachouri, "Fully automatic brain tumor segmentation using end-to-end incremental deep neural networks in MRI images," Comput. Methods Programs Biomed., vol. 166, pp. 39-49, Nov. 2018.
[9] S. Navaneethan, P. Siva Satya Sreedhar, S. Padmakala and C. Senthilkumar, "The human eye pupil detection system using bat


optimized deep learning architecture," Computer Systems Science and Engineering, vol. 46, no. 1, pp. 125-135, 2023.
[10] E. Ferrante, P. K. Dokania, R. Marini and N. Paragios, "Deformable
registration through the learning of context-specific metric
aggregation," Proc. 8th Int. Workshop Mach. Learn. Med. Image.–Int.
Conf. Med. Image Comput. Comput. Assisted Intervention, pp. 256-
265, 2017.
[11] W. Huang et al., "Arterial Spin Labeling Images Synthesis From
sMRI Using Unbalanced Deep Discriminant Learning," in IEEE
Transactions on Medical Imaging, vol. 38, no. 10, pp. 2338-2351, Oct.
2019.
[12] E. Ferrante, P. K. Dokania, R. M. Silva and N. Paragios, "Weakly
Supervised Learning of Metric Aggregations for Deformable Image
Registration," in IEEE Journal of Biomedical and Health Informatics,
vol. 23, no. 4, pp. 1374-1384, July 2019.
[13] S. Hussein, P. Kandel, C. W. Bolan, M. B. Wallace and U. Bagci,
"Lung and Pancreatic Tumor Characterization in the Deep Learning
Era: Novel Supervised and Unsupervised Learning Approaches," in
IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1777-1787,
Aug. 2019.
[14] Y. Wang et al., "Deeply-Supervised Networks With Threshold Loss
for Cancer Detection in Automated Breast Ultrasound," in IEEE
Transactions on Medical Imaging, vol. 39, no. 4, pp. 866-876, April
2020.
[15] X. Wang et al., "UD-MIL: Uncertainty-Driven Deep Multiple
Instance Learning for OCT Image Classification," in IEEE Journal of
Biomedical and Health Informatics, vol. 24, no. 12, pp. 3431-3442,
Dec. 2020.
