MR_NET: A Method for Breast Cancer Detection and Localization from Histological Images Through Explainable Convolutional Neural Networks
Abstract
1. Introduction
- We propose a method to classify H&E-stained breast histology images;
- We design and develop a dedicated CNN for the breast histology image classification task, i.e., MR_Net;
- We evaluate several state-of-the-art CNN architectures to demonstrate the effectiveness of the proposed CNN;
- We take explainability into account, showing which areas of the image under analysis are symptomatic of the cancerous area, with the goal of improving the trust of medical staff and patients in deep learning;
- We evaluate the proposed CNN from a quantitative point of view (by computing performance metrics) and a qualitative one (by analyzing the explainability behind the model predictions);
- We adopt a dataset that is freely available for research, enabling replication of our results.
2. Method
- The second phase (box 2 in Figure 1) involved the design of a new network architecture called MR_Net, execution of the experiments and generation of Grad-CAMs based on this model, and an analysis of the results.
- In the third phase (box 3 in Figure 1), the previously trained networks that obtained the best performance in the first phase were fine-tuned, and the corresponding Grad-CAMs were generated in order to compare the results obtained with and without fine-tuning.
2.1. Dataset and Preprocessing
- The atypical class included images related to Flat Epithelial Atypia (FEA) and Atypical Ductal Hyperplasia (ADH).
- The malignant class included images of Ductal Carcinoma in Situ (DCIS) and Invasive Carcinoma (IC).
- The benign class included images labeled as Normal (N), Pathological Benign (PB), and Usual Ductal Hyperplasia (UDH).
- Training set: 4500 images, with 1500 classified as benign, 1500 as atypical, and 1500 as malignant.
- Validation set: 564 images, with 188 classified as benign, 188 as atypical, and 188 as malignant.
- Test set: 564 images, with 188 classified as benign, 188 as atypical, and 188 as malignant.
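These splits can be consumed with a standard Keras image pipeline. The following is a minimal sketch, assuming a directory layout with one subfolder per class; the paths, rescaling, and generator settings are illustrative assumptions, not details taken from the paper:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values to [0, 1]; no augmentation is assumed here.
datagen = ImageDataGenerator(rescale=1.0 / 255)

# Hypothetical layout: data/{train,val,test}/{benign,atypical,malignant}/
train_gen = datagen.flow_from_directory(
    "data/train",
    target_size=(110, 110),      # MR_Net input size (see Table 1)
    batch_size=32,
    class_mode="categorical",    # one-hot labels for the three classes
)
val_gen = datagen.flow_from_directory(
    "data/val", target_size=(110, 110),
    batch_size=32, class_mode="categorical",
)
```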
2.2. The CNN Models
- Standard_CNN: This network consists of 13 layers. The convolutional block includes three Conv2D layers with 3 × 3 kernels and 32, 64, and 128 filters, respectively, each followed by ReLU activation and alternating with three MaxPooling2D layers. The classification block contains three Dense layers: two with 512 and 256 units, respectively, using ReLU activation, followed by a final layer with 3 neurons and SoftMax activation. Dropout layers with a rate of 0.5 are interspersed to regularize the network. Since it is a multiclass classification task, the network utilized the categorical cross-entropy loss function.
- EfficientNet [17]: EfficientNets are a family of CNNs built by Mingxing Tan and Quoc V. Le [17] upon a concept called “compound scaling”. They proposed a technique that uses a simple compound coefficient to uniformly scale each dimension of the network (depth, width, and resolution) with a fixed ratio, producing a family of models of different dimensions that achieved better accuracy and efficiency than previous ConvNets. In particular, to conduct our analysis, we resorted to the base model, EfficientNet-B0. This model consists of 28 layers in total, divided into 9 blocks: 1 stem layer for the input, 7 fundamental blocks (each containing a variable number of layers) in the middle, and a head layer for the output. The first layer is a Conv3 × 3 that accepts input images of size 224 × 224 × 3, while the last one is a Conv1 × 1. The building blocks of this architecture are the Mobile Inverted Bottleneck (MBConv) layers, which are based on the inverted residual blocks from MobileNetV2, with some modifications. An MBConv layer begins with a pointwise (1 × 1) convolution that expands the number of channels, applies a depthwise convolution, and ends with another 1 × 1 convolution that projects the channels back down to the original count.
- ResNet-50 [18]: ResNets, or Residual Networks, are a family of CNNs introduced in 2015 by Microsoft Research. ResNet-50 owes its name to its 50 weighted layers: an initial convolutional layer, 48 convolutional layers grouped into residual blocks, and a final fully connected layer; a MaxPooling layer precedes the main blocks and an AveragePooling layer follows them. The networks are inspired by VGG nets: the convolutional layers mostly use 3 × 3 filters, but the model has fewer filters and lower complexity. ResNets became popular because of the introduction of a new building block, the residual block, which in ResNet-50 contains 3 layers, all with ReLU activation. Each residual block adds a shortcut (skip) connection that bypasses its layers, so the block learns a residual function; this eases gradient flow during training and helps preserve the information learned by earlier layers. The first convolutional layer takes input images of size 224 × 224 × 3, which are then fed into the main layers. The convolutional layers are divided into 4 main blocks, each consisting of the 3-layer residual block described above repeated a different number of times. Finally, before reaching the classifier, the feature maps pass through an AveragePooling layer.
- VGG [19]: VGG is a neural network architecture developed by the Visual Geometry Group at the University of Oxford. The two most commonly used versions, VGG-16 and VGG-19, differ in the number of layers. Inspired by AlexNet, VGG uses smaller convolutional filters, with an architecture consisting of 5 blocks of 3 × 3 convolutional layers. The first two blocks contain 2 convolutional layers each, while the last three contain either 3 (in VGG-16) or 4 (in VGG-19). Max-pooling layers are placed between the blocks, followed by a final block of 3 fully connected layers. The input image size is 224 × 224 × 3. In this paper, we examined both variants. The primary difference is the number of layers: VGG-16 has 16 weighted layers (13 convolutional and 3 fully connected), whereas VGG-19 has 19 (16 convolutional and 3 fully connected), the extra layers coming from one additional convolutional layer in each of the last three blocks.
- MobileNet [20]: This network primarily employs depthwise separable convolutions instead of the standard convolutions used in earlier architectures, yielding a more lightweight model. Each depthwise separable convolution layer is composed of a depthwise convolution followed by a pointwise convolution. Counting depthwise and pointwise convolutions as separate layers, a MobileNet contains 28 layers. The input image size is 224 × 224 × 3 (a sketch of how such pre-trained backbones can be adapted to our task follows).
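As a minimal, hedged sketch of how these pre-trained backbones are typically adapted to the three-class task in Keras (the head layout and pooling choice here are illustrative assumptions, not the exact configuration used in our experiments):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet  # or VGG16, VGG19, ResNet50, EfficientNetB0

# Convolutional base pre-trained on ImageNet, without its original classifier.
base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# New classification head for the three classes (benign, atypical, malignant).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),
])
```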
- MR_Net: The network we designed and developed in this paper is composed of 13 layers forming a convolutional block and a classification block. The convolutional block is composed of three Conv2D layers alternating with MaxPooling2D layers, which reduce the spatial dimensions of the images while preserving their main characteristics. The classification block consists of three Dense layers alternating with Dropout layers, used to improve the generalization of the network. ReLU was chosen as the activation function for the intermediate layers and SoftMax for the final classification layer. Finally, since this is a multiclass classification problem, categorical cross-entropy was used as the loss function. Figure 2 shows the code snippet of the MR_Net Python implementation, which details the network structure and layers.
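To complement Figure 2, the following is a minimal Keras sketch consistent with the description above; the filter counts, unit counts, and dropout rate are assumptions borrowed from the Standard_CNN description, not necessarily the exact MR_Net configuration:

```python
from tensorflow.keras import layers, models

# Convolutional block: three Conv2D layers alternating with MaxPooling2D,
# followed by a classification block of Dense layers with Dropout.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(110, 110, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),   # three-class output
])
```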
2.3. Training
- The image size: this hyperparameter indicates the size of the input image, which must match the network's input layer and therefore varies from network to network. The typical image size for classification is 224 × 224 × 3, but in this study we also used 110 × 110 × 3, as shown in Table 1.
- The number of epochs is a hyperparameter that specifies how many times an algorithm processes the entire dataset. Typically, a higher number of epochs is used to enable the model to learn as much as possible. However, it is important to monitor the number of epochs closely, as an excessively high value can lead to overfitting.
- The batch size refers to the number of examples processed in each batch during training. For instance, with a batch size of 32 and a training set of 4500 examples, there would be about 141 batches, each containing 32 examples; consequently, an epoch consists of about 141 iterations. It is crucial to choose an appropriate batch size, as a value that is too small (fewer than 10) can hinder performance optimization, while a value that is too large may lead to memory issues or an increased risk of overfitting. Commonly used batch sizes are 16, 32, 64, and 128.
- The learning rate determines the size of the steps with which the neural network updates its parameters during training. If the learning rate is too high, the model may take steps that are too large, potentially overshooting the optimal solution. Conversely, if the learning rate is too low, updates may be too small, which can slow convergence and require more training iterations to reach optimal results. Commonly used learning rate values are 0.01, 0.001, and 0.0001. A minimal training sketch combining these hyperparameters is shown below.
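The sketch below wires the Table 1 hyperparameters for MR_Net into a Keras training run, reusing the generators sketched in Section 2.1; the choice of the Adam optimizer is our assumption, as it is not stated here:

```python
from tensorflow.keras.optimizers import Adam

# Table 1 (MR_Net row): 110 x 110 x 3 input, batch size 32,
# 20 epochs, learning rate 0.0001. Optimizer choice is assumed.
model.compile(
    optimizer=Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

history = model.fit(
    train_gen,               # batch size 32 is set in the generator
    validation_data=val_gen,
    epochs=20,
)
```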
2.4. Fine-Tuning
- Feature extraction: this consists of reusing the representations learned by a model in a previous training session to extract features from new data. When dealing with pre-trained CNNs, the classifier (the last part of the network, consisting of fully connected layers) is usually discarded and only the so-called convolutional base (the first part of the network, consisting of Conv2D and pooling layers) is kept. Before a new classification task can be submitted to the model, a new classifier built specifically for the dataset must be added and trained. During this new training, the layers of the convolutional base must not update their weights; otherwise, the representations previously learned could be destroyed by the error signal caused by the random initialization of the classifier's weights. This is achieved through a process called freezing, which makes all the parameters of a layer untrainable. The convolutional base is kept because the patterns it identifies are more generic than those of the classifier and are therefore more easily applicable to various domains. A key aspect to bear in mind is that networks learn hierarchies of patterns: the initial layers learn local, generic patterns, while successive layers recognize patterns that are progressively more global, abstract, and specific to the dataset. For this reason, if the datasets are extremely different, it is better not to freeze the entire convolutional base, but only the first few layers.
- Fine-tuning: after the classifier has been trained, some of the layers closest to it are unfrozen and re-trained. Training these layers together with the classifier allows their representations to adapt to the specific dataset. Rather low learning rates are usually used so that the pre-trained weights are not drastically modified. The number of layers to re-train must be chosen wisely, as the more parameters are trained, the greater the risk of overfitting on a small dataset like ours. In our experiments, we unfroze the following weights (a minimal Keras sketch of the procedure follows this list):
- For MobileNet, the weights of the layers in the last two convolutional blocks, starting with “conv_dw_12”.
- For VGG-16, the weights of all three layers in the last convolutional block, starting with “block5_conv1”.
- For VGG-19, the weights of all four layers in the last convolutional block, starting with “block5_conv1”.
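As a hedged illustration of the two-step procedure (feature extraction, then partial unfreezing), assuming the MobileNet backbone from the sketch in Section 2.2; the learning rates here are illustrative, as the fine-tuning rate is not specified in this section:

```python
from tensorflow.keras.optimizers import Adam

# Step 1 -- feature extraction: freeze the whole convolutional base
# and train only the newly added classifier.
base.trainable = False
model.compile(optimizer=Adam(learning_rate=1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) trains the classifier here.

# Step 2 -- fine-tuning: unfreeze the layers from "conv_dw_12" onwards
# (the MobileNet starting point listed above).
unfreeze = False
for layer in base.layers:
    if layer.name == "conv_dw_12":
        unfreeze = True
    layer.trainable = unfreeze

# Re-compile with a lower learning rate so that the pre-trained
# weights are only gently adjusted, then continue training.
model.compile(optimizer=Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```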
2.5. Grad-CAM
- Forward pass: first, the input image is fed forward through the CNN to obtain the final convolutional feature maps.
- Backpropagation: the gradients of the predicted class score with respect to the final convolutional feature maps are then computed via backpropagation.
- Gradient aggregation: The gradients flowing backward are utilized to assess the significance of each feature map in the final prediction. This is achieved by averaging the gradients of the target class across all spatial locations in the feature maps.
- Weighted combination: these gradients are then used to weight the feature maps, highlighting the regions in each feature map that are most relevant to the predicted class.
- Activation map generation: finally, the weighted combination of feature maps is passed through a ReLU activation to obtain the Grad-CAM activation map. A minimal implementation sketch of these steps is shown below.
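The following is a minimal TensorFlow sketch of these five steps; the name of the last convolutional layer is model-specific and is passed as a parameter here, not a value taken from the paper:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Minimal Grad-CAM sketch for a single image (H, W, C)."""
    # Model mapping the input to (last conv feature maps, predictions).
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[np.newaxis, ...])  # forward pass
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_maps)        # backpropagation
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # gradient aggregation
    # Weighted combination of feature maps, then ReLU.
    cam = tf.nn.relu(tf.reduce_sum(conv_maps[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalized map
```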
3. Experimental Analysis
3.1. Quantitative Analysis
- Accuracy is the proportion of correctly classified instances (both true positives and true negatives) out of all instances. It gives a general idea of how well the model is performing but can be misleading if the data are imbalanced (e.g., more negative cases than positive).
- Loss quantifies how far the model's predictions are from the actual outcomes; here it is the categorical cross-entropy, so lower values indicate predictions closer to the true labels.
- Precision is the proportion of correctly predicted positive instances (true positives) out of all instances predicted as positive (including false positives). It focuses on how accurate the positive predictions are.
- Recall is the proportion of correctly predicted positive instances out of all actual positive instances. It measures how well the model identifies true positives. The sketch below shows how these metrics follow from the confusion-matrix counts reported next.
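As a worked example, the following sketch computes these metrics from the first block of confusion-matrix counts reported below (TP = 322, TN = 132, FP = 56, FN = 54, which sum to the 564 test images). Note that these values treat the task as binary (positive = atypical or malignant), so they differ from the multiclass metrics in the results tables:

```python
# Confusion-matrix counts from the first block below.
tp, tn, fp, fn = 322, 132, 56, 54

accuracy  = (tp + tn) / (tp + tn + fp + fn)  # (322+132)/564 ≈ 0.805
precision = tp / (tp + fp)                   # 322/378 ≈ 0.852
recall    = tp / (tp + fn)                   # 322/376 ≈ 0.856

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
```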
- True positive (TP): 322 (patients truly positive), with 144 affected by atypical and 178 by malignant tumors.
- True negative (TN): 132 (patients truly negative).
- False positive (FP): 56 (patients negative but classified as positive).
- False negative (FN): 54 (patients positive but classified as negative).
- True positive (TP): 319 (patients truly positive), with 122 affected by atypical and 197 by malignant tumors.
- True negative (TN): 134 (patients truly negative).
- False positive (FP): 54 (patients negative but classified as positive).
- False negative (FN): 57 (patients positive but classified as negative).
- True positive (TP): 278 (patients truly positive), with 127 affected by atypical and 151 by malignant tumors.
- True negative (TN): 126 (patients truly negative).
- False positive (FP): 62 (patients negative but classified as positive).
- False negative (FN): 98 (patients positive but classified as negative).
- True positive (TP): 303 (patients truly positive), with 143 affected by atypical and 160 by malignant tumors.
- True negative (TN): 120 (patients truly negative).
- False positive (FP): 68 (patients negative but classified as positive).
- False negative (FN): 73 (patients positive but classified as negative).
- True positive (TP): 305 (patients truly positive), with 135 affected by atypical and 170 by malignant tumors.
- True negative (TN): 132 (patients truly negative).
- False positive (FP): 56 (patients negative but classified as positive).
- False negative (FN): 71 (patients positive but classified as negative).
- True positive (TP): 285 (patients truly positive), with 128 affected by atypical and 157 by malignant tumors.
- True negative (TN): 151 (patients truly negative).
- False positive (FP): 37 (patients negative but classified as positive).
- False negative (FN): 91 (patients positive but classified as negative).
3.2. Qualitative Analysis
- In the case of the atypical category (a precancerous condition of the breast), the classifier focused on the area where the breast duct walls were a darker purple. Those walls were abnormally thick, with an excessive number of cells: in epithelial atypia, the lining can grow to a thickness of five or six cuboidal epithelial cells, as opposed to the normal breast duct lining of about two cells. In fact, epithelial atypia is a proliferation of epithelial cells in the terminal duct–lobular units (TDLUs) of the breast. The cells are clustered in acini that have rigid contours, round nuclei, and even chromatin, and the cell borders are readily appreciated, creating the impression of a mosaic pattern. Secretions and calcifications are present in the acinar lumens.
- In the case of the benign category, the classifier detected the area of normal tissue, consisting of glandular tissue and adipose tissue. Ducts, lobules, and acini of the mammary gland are lined with epithelial cells and immersed in adipose tissue. The model focused on areas of the image containing the fibroadenoma, a benign pathological nodule that results from the proliferation of the glandular epithelium and fibrous stroma of the breast. It is characterized by a fibroblastic stroma with glandular structures with cystic spaces, surrounded by connective tissue forming an enveloping capsule.
- In the case of the malignant category, the classifier relied on large areas of the image, characterized by undifferentiated malignant tissue, in which the tumor cells had lost all their specific, normal histological features and were therefore difficult to classify. In fact, it was an invasive carcinoma.
3.3. Comparison of Heatmaps: MR_Net vs. Standard_CNN
3.3.1. Heatmap Analysis of MR_Net Model
3.3.2. Heatmap Analysis of Standard_CNN Model
3.3.3. Comparison and Implications for Explainability
3.3.4. Remarks on Explainability
4. Related Work
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Siegel, R.L.; Miller, K.D.; Fuchs, H.E.; Jemal, A. Cancer statistics, 2022. CA Cancer J. Clin. 2022, 72, 7–33.
- Abunasser, B.S.; Al-Hiealy, M.R.J.; Zaqout, I.S.; Abu-Naser, S.S. Convolution neural network for breast cancer detection and classification using deep learning. Asian Pac. J. Cancer Prev. APJCP 2023, 24, 531.
- Narod, S.A.; Iqbal, J.; Miller, A.B. Why have breast cancer mortality rates declined? J. Cancer Policy 2015, 5, 8–17.
- Sahu, A.; Das, P.K.; Meher, S. High accuracy hybrid CNN classifiers for breast cancer detection using mammogram and ultrasound datasets. Biomed. Signal Process. Control 2023, 80, 104292.
- Gnanasekaran, V.S.; Joypaul, S.; Meenakshi Sundaram, P.; Chairman, D.D. Deep learning algorithm for breast masses classification in mammograms. IET Image Process. 2020, 14, 2860–2868.
- Vaidyanathan, A.; Kaklamani, V. Understanding the clinical implications of low penetrant genes and breast cancer risk. Curr. Treat. Options Oncol. 2021, 22, 85.
- Nawaz, W.; Ahmed, S.; Tahir, A.; Khan, H.A. Classification of breast cancer histology images using AlexNet. In Proceedings of the Image Analysis and Recognition: 15th International Conference, ICIAR 2018, Póvoa de Varzim, Portugal, 27–29 June 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 869–876.
- Elston, C.W.; Ellis, I.O. Pathological prognostic factors in breast cancer. I. The value of histological grade in breast cancer: Experience from a large study with long-term follow-up. Histopathology 1991, 19, 403–410.
- Gurcan, M.N.; Boucheron, L.E.; Can, A.; Madabhushi, A.; Rajpoot, N.M.; Yener, B. Histopathological image analysis: A review. IEEE Rev. Biomed. Eng. 2009, 2, 147–171.
- He, L.; Long, L.R.; Antani, S.; Thoma, G.R. Histology image analysis for carcinoma detection and grading. Comput. Methods Programs Biomed. 2012, 107, 538–556.
- Cui, M.; Zhang, D.Y. Artificial intelligence and computational pathology. Lab. Investig. 2021, 101, 412–422.
- He, H.; Yang, H.; Mercaldo, F.; Santone, A.; Huang, P. Isolation Forest-Voting Fusion-Multioutput: A stroke risk classification method based on the multidimensional output of abnormal sample detection. Comput. Methods Programs Biomed. 2024, 253, 108255.
- Di Giammarco, M.; Mercaldo, F.; Zhou, X.; Huang, P.; Santone, A.; Cesarelli, M.; Martinelli, F. A Robust and Explainable Deep Learning Method for Cervical Cancer Screening. In Proceedings of the International Conference on Applied Intelligence and Informatics, Dubai, United Arab Emirates, 29–31 October 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 111–125.
- Huang, P.; Li, C.; He, P.; Xiao, H.; Ping, Y.; Feng, P.; Tian, S.; Chen, H.; Mercaldo, F.; Santone, A.; et al. MamlFormer: Priori-experience guiding transformer network via manifold adversarial multi-modal learning for laryngeal histopathological grading. Inf. Fusion 2024, 108, 102333.
- Huang, P.; Xiao, H.; He, P.; Li, C.; Guo, X.; Tian, S.; Feng, P.; Chen, H.; Sun, Y.; Mercaldo, F.; et al. LA-ViT: A Network with Transformers Constrained by Learned-Parameter-Free Attention for Interpretable Grading in a New Laryngeal Histopathology Image Dataset. IEEE J. Biomed. Health Inform. 2024, 28, 3557–3570.
- ICAR, Istituto di Calcolo e Reti ad Alte Prestazioni. BRACS: BReAst Carcinoma Subtyping. Available online: https://www.bracs.icar.cnr.it/ (accessed on 22 September 2024).
- Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
- Brancati, N.; Anniciello, A.M.; Pati, P.; Riccio, D.; Scognamiglio, G.; Jaume, G.; De Pietro, G.; Di Bonito, M.; Foncubierta, A.; Botti, G.; et al. BRACS: A dataset for breast carcinoma subtyping in H&E histology images. Database 2022, 2022, baac093.
- Ahmed, F.; Abdel-Salam, R.; Hamnett, L.; Adewunmi, M.; Ayano, T. Improved Breast Cancer Diagnosis through Transfer Learning on Hematoxylin and Eosin Stained Histology Images. arXiv 2023, arXiv:2309.08745.
- Kausar, T.; Lu, Y.; Kausar, A. Breast Cancer Diagnosis Using Lightweight Deep Convolution Neural Network Model. IEEE Access 2023, 11, 124869–124886.
- Nam, S.; Chong, Y.; Jung, C.K.; Kwak, T.Y.; Lee, J.Y.; Park, J.; Rho, M.J.; Go, H. Introduction to digital pathology and computer-aided pathology. J. Pathol. Transl. Med. 2020, 54, 125–134.
Model | Image Size | Batch | Epochs | Learning Rate | Exec. Time (h:mm:ss) |
---|---|---|---|---|---|
Standard CNN | 110 × 110 × 3 | 32 | 20 | 0.0001 | 0:08:57 |
EfficientNet | 224 × 224 × 3 | 32 | 20 | 0.00001 | 1:21:48 |
ResNet50 | 110 × 110 × 3 | 32 | 50 | 0.0001 | 2:24:58 |
VGG-16 | 224 × 224 × 3 | 32 | 50 | 0.00001 | 15:44:49 |
VGG-19 | 224 × 224 × 3 | 32 | 50 | 0.00001 | 18:40:30 |
MobileNet | 110 × 110 × 3 | 32 | 20 | 0.001 | 0:13:08 |
MR_Net | 110 × 110 × 3 | 32 | 20 | 0.0001 | 0:18:08 |
Model | Accuracy | Loss | Precision | Recall |
---|---|---|---|---|
EfficientNet | 0.6738 | 0.9003 | 0.6875 | 0.6631 |
ResNet50 | 0.7163 | 1.5485 | 0.7163 | 0.7163 |
VGG-16 | 0.7269 | 1.3160 | 0.7337 | 0.7181 |
VGG-19 | 0.7305 | 1.4156 | 0.7338 | 0.7234 |
MobileNet | 0.7305 | 1.5697 | 0.7351 | 0.7234 |
Standard CNN | 0.6737 | 0.9034 | 0.6824 | 0.6401 |
Model | Accuracy | Loss | Precision | Recall |
---|---|---|---|---|
MR_Net | 0.6897 | 0.8354 | 0.7028 | 0.6542 |
Model | Accuracy | Loss | Precision | Recall |
---|---|---|---|---|
MobileNet | 0.7270 | 0.7132 | 0.7589 | 0.7198 |
VGG-16 | 0.7234 | 0.8434 | 0.7431 | 0.7181 |
VGG-19 | 0.7145 | 1.1417 | 0.7171 | 0.7057 |
Study | Task | Model | Dataset | Performance |
---|---|---|---|---|
Brancati et al. (2022) [21] | Subtyping BC lesions | ResNet, EfficientNet | BRACS | 66% accuracy |
Ahmed et al. (2023) [22] | Transfer learning | Xception, ResNet50 | BRACS | 96.2% F1-score |
Kausar et al. (2023) [23] | Multiclass BC classification | Lightweight CNN | BRACS | 72.2% accuracy |
Nawaz et al. (2018) [7] | BC classification | AlexNet | ICIAR 2018 | 87% accuracy |
Sahu et al. (2023) [4] | Hybrid model for BC | Hybrid CNN | Mammogram + Ultrasound | 94.5% accuracy |
He et al. (2024) [12] | Fusion for BC detection | Multi-CNN Fusion | Custom | Improved specificity |
Tan and Le (2019) [17] | Model scaling for BC | EfficientNet-B0 | BRACS | High efficiency |
Simonyan and Zisserman (2014) [19] | Fine-tuned CNNs | VGG-16, VGG-19 | BRACS | 73% accuracy |
Howard et al. (2017) [20] | Mobile vision | MobileNet | BRACS | 73% accuracy |
He et al. (2016) [18] | Deep residual learning | ResNet-50 | BRACS | 71.6% accuracy |
Gurcan et al. (2009) [9] | Review of AI in histology | Traditional + AI methods | Various | Review study |
Di Giammarco et al. (2023) [13] | Explainable cancer detection | Custom CNN | Cervical images | High interpretability |
Vaidyanathan and Kaklamani (2021) [6] | Genetic data integration | Custom CNN | Genetic + Imaging | Improved precision |
Nam et al. (2020) [24] | AI in pathology | Various CNNs | Digital pathology | Integration challenges |