A Wireless Sensor System for Diabetic Retinopathy Grading Using MobileViT-Plus and ResNet-Based Hybrid Deep Learning Framework
Abstract
1. Introduction
- (1) Irregular region distribution. The regional distribution of DR lesions is irregular, and there is some individual variability in the vascular pathways of the fundus. Both the lesion regions and complete fundus information are required to grade DR [25]. A deep learning model's ability to capture global lesion locations is therefore equally essential. A CNN attends only to the presence or absence of the target to be detected, not to the locations of components or the relative spatial relationships between them. Chen et al. argued that local receptive field operations, such as convolution, limit the acquisition of long-range pixel relationships [26]. Yang et al. concluded that CNN algorithms ignore some valuable DR-related lesions [27]. The Transformer, a neural network architecture that relies on a self-attention mechanism to process sequential data, has a large effective receptive field and can learn global representations [28,29]. However, because it requires a large amount of training data, its training time is relatively long.
- (2) Blurred pixels and low contrast. The lesion area may contain blurred pixels and show low contrast with the surrounding tissue [30,31]. Lesion pixels whose intensities are too close to those of neighbouring areas appear as blurred pathological features in fundus images. In image classification tasks, “fuzzy pixels” refers to pixels with unclear or ambiguous boundaries, which make it difficult for a model to classify their content accurately; “low contrast” refers to images with a narrow range of brightness or colour values, which reduces the visual contrast between objects and makes them hard to distinguish. Fundus datasets commonly include images with both distortion types, and either can degrade the performance of a classification model. In Figure 1e, for example, the distinction between light and dark regions is not apparent: the image has low contrast and looks blurry.
- (3) Image diversity. The diversity of fundus images is mainly driven by patient demographics, imaging equipment, and image acquisition settings. Patient factors such as age, race, gender, and medical history affect the appearance of fundus images [32]; for example, elderly patients may show more signs of age-related macular degeneration, while diabetic patients may show the characteristic signs of diabetic retinopathy. The imaging device's type, brand, model, and parameter settings likewise affect the quality and resolution of fundus images [33]. Acquisition settings such as lighting, contrast, and magnification, together with preprocessing steps such as image enhancement, filtering, and normalization, also affect the appearance and characteristics of images. In a dataset with multiple sample categories, an unbalanced number of images per category further harms model training. In addition, fundus images are captured under different conditions, reflected in differences in lighting, shooting angles, and varying degrees of noise, and the radius of the effective area of the image (the circular retinal region) varies with image size. A minimal preprocessing sketch addressing some of these variations follows this list.
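The authors' actual pipeline is described in Section 3.2. Purely as an illustration of the preprocessing steps named above (unifying varied image sizes, enhancement, and normalization), the following is a minimal torchvision sketch; all parameter values, including the 224 × 224 target size, are our assumptions rather than the paper's settings.

```python
from torchvision import transforms

# Illustrative fundus-image preprocessing (all values are assumptions):
# resize to a fixed square so the circular retinal area is comparable
# across images, jitter brightness/contrast to counter low contrast and
# varied lighting, and normalize pixel intensities.
fundus_transform = transforms.Compose([
    transforms.Resize((224, 224)),                          # unify varied image sizes
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # simulate lighting variation
    transforms.RandomHorizontalFlip(),                      # cheap augmentation for diversity
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],        # ImageNet statistics, a common default
                         std=[0.229, 0.224, 0.225]),
])
```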
- (1) To support health caregivers in effectively managing and proactively preventing diabetic retinopathy, we propose a wireless sensor system architecture that performs DR grading in a ubiquitous environment and provides clinical services for DR diagnosis and treatment. The architecture employs portable retinal cameras, a blood glucose monitor, and a tablet computer as the data collection nodes of the wireless sensor network. A database server running the AI algorithms processes the data transmitted via the wireless network and delivers decisions to the various clinical roles (e.g., clinicians, patients, and hospital staff).
- (2) To address irregular region distribution, fuzzy pixels, and low contrast, we propose a parallel deep learning framework (HybridLG) that learns both the local and global information of 2D fundus images. The framework comprises a CNN backbone, a Transformer backbone, a neck network for feature fusion, and a head network. The CNN and Transformer backbones extract local and global information, respectively, and the neck network fuses the two types of information in preparation for the final grading decision. In addition, to counter the effect of image diversity on model performance, we propose a model training strategy inspired by ensemble learning that improves the generalization ability of the parallel framework: a fully connected layer simulates a weighted soft-voting process, assigning a learnable weight to each sub-model's output to identify the best result (a sketch of this voting head appears after the training steps in Section 3.4).
- (3) Considering the high computational complexity of the Transformer backbone, we propose a novel deep learning model named MobileViT-Plus and use it to implement the Transformer backbone of HybridLG. Specifically, MobileViT-Plus is constructed by introducing a light Transformer block (LTB) into the MobileViT model. In the original Transformer block, multi-head self-attention computes the pairwise similarity between all pairs of input elements, which results in quadratic computational complexity. To mitigate this overhead, the LTB applies a k × k depth-wise convolution with stride k to reduce the spatial size of part of the attention input before the attention operation (a minimal sketch of this idea follows this list). In addition, we use a pre-trained ResNet101 to implement the CNN backbone of the HybridLG framework.
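The LTB internals are detailed in Section 3.3.3; the following PyTorch sketch illustrates the idea only, under the assumption that the stride-k depth-wise convolution shrinks the tensor from which the keys and values are computed (as in spatial-reduction attention) while the queries keep full resolution. The class name `LightAttention` and all hyperparameters are ours, not the authors'.

```python
import torch
import torch.nn as nn

class LightAttention(nn.Module):
    """Sketch of an LTB-style attention block: a k x k depth-wise
    convolution with stride k shrinks the spatial grid that keys and
    values are drawn from, reducing the quadratic attention cost."""

    def __init__(self, dim: int, num_heads: int = 4, k: int = 2):
        super().__init__()
        # Depth-wise conv: one filter per channel (groups=dim), stride k.
        self.reduce = nn.Conv2d(dim, dim, kernel_size=k, stride=k, groups=dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = x.flatten(2).transpose(1, 2)                 # (B, H*W, C): queries at full resolution
        kv = self.reduce(x).flatten(2).transpose(1, 2)   # (B, H*W/k^2, C): reduced keys/values
        out, _ = self.attn(q, kv, kv)                    # cost: H*W x H*W/k^2 instead of (H*W)^2
        return out.transpose(1, 2).reshape(b, c, h, w)

# Usage: a 56 x 56 feature map with 64 channels.
y = LightAttention(dim=64, k=2)(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 64, 56, 56])
```

With k = 2, each of the H·W queries attends to only H·W/4 key–value positions, so the pairwise similarity computation shrinks by a factor of k² = 4.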
2. Related Work
2.1. Clinical DR Grading Application
2.2. WSNs-Aided DR Grading
2.3. Deep Learning-Based DR Grading
3. Methodology
3.1. Methodology Architecture
3.2. Data Preparation and Augmentation
3.3. HybridLG Framework Construction
3.3.1. Framework Structure
3.3.2. ResNet Backbone for Learning Local Information
3.3.3. MobileViT-Plus Backbone for Learning Global Information
3.3.4. Feature Fusion Network and Head Network
3.4. Model Training Strategy
- (1) We first trained ResNet101 on our dataset using the default hyperparameters and the cross-entropy loss.
- (2) After ResNet101 was trained, we trained MobileViT-Plus with a learning rate of 0.0001, a batch size of 4, and 50 epochs; the cross-entropy loss becomes stable after 50 epochs. Owing to the parallel design of the HybridLG framework, this step can proceed while ResNet101 is being trained. In training both ResNet101 and MobileViT-Plus, we used 10-fold cross-validation to evaluate the performance of our proposed model, averaging the performance metrics across the ten folds to obtain a more reliable estimate.
- (3)
- While the sub-models are being connected, the previous parameters are frozen and only the parameters of the new fully connected layer are trained. Therefore, this process takes place after all the sub-models are trained, but due to the few parameters that can be changed, five epochs are sufficient to find the balanced classification boundary at a learning rate of 0.0001. Then, the classification result will be output by the softmax layer of the head network.
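As a minimal sketch of steps (1) and (2), assume a PyTorch setup in which `make_model` constructs a sub-model (e.g., ResNet101 or MobileViT-Plus) and `dataset` yields (image, grade) pairs; the Adam optimizer is our assumption, while the learning rate, batch size, epoch count, and 10-fold protocol follow the text above.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold
from torch.utils.data import DataLoader, Subset

def train_backbone(make_model, dataset, epochs, lr, batch_size, device="cuda"):
    """Train one sub-model with cross-entropy loss under 10-fold CV."""
    criterion = nn.CrossEntropyLoss()
    folds = KFold(n_splits=10, shuffle=True).split(np.arange(len(dataset)))
    for fold, (train_idx, val_idx) in enumerate(folds):
        net = make_model().to(device)                        # fresh model per fold
        optimizer = torch.optim.Adam(net.parameters(), lr=lr)
        loader = DataLoader(Subset(dataset, train_idx),
                            batch_size=batch_size, shuffle=True)
        for _ in range(epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                loss = criterion(net(images.to(device)), labels.to(device))
                loss.backward()
                optimizer.step()
        # ...evaluate on Subset(dataset, val_idx); metrics are averaged
        # across the ten folds as described in step (2)...
```

Step (2) would then correspond to, e.g., `train_backbone(mobilevit_plus, dataset, epochs=50, lr=1e-4, batch_size=4)`, where `mobilevit_plus` is a hypothetical constructor.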
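And a sketch of step (3) together with the weighted soft voting described in the Introduction, assuming each frozen sub-model outputs class scores for the standard five DR grades; the class name `VotingHead` and the grade count are our assumptions.

```python
import torch
import torch.nn as nn

class VotingHead(nn.Module):
    """Freeze the trained sub-models and train only a fully connected
    layer that weights each sub-model's class scores, emulating a
    weighted soft-voting ensemble."""

    def __init__(self, backbones, num_classes=5):
        super().__init__()
        self.backbones = nn.ModuleList(backbones).eval()
        for p in self.backbones.parameters():
            p.requires_grad = False           # step (3): previous parameters are frozen
        self.fc = nn.Linear(num_classes * len(backbones), num_classes)

    def forward(self, x):
        scores = torch.cat([m(x) for m in self.backbones], dim=1)
        return torch.softmax(self.fc(scores), dim=1)   # softmax layer of the head network

# Only self.fc is trainable, so a few epochs at lr = 0.0001 suffice:
# head = VotingHead([resnet101, mobilevit_plus])
# optimizer = torch.optim.Adam(head.fc.parameters(), lr=1e-4)
```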
4. Performance Evaluation
4.1. Experimental Setup
4.2. Evaluation Metrics
4.3. Ablation Study
4.4. Comparison Study
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Akyildiz, I.F.; Su, W.; Sankarasubramaniam, Y.; Cayirci, E. Wireless sensor networks: A survey. Comput. Netw. 2002, 38, 393–422. [Google Scholar] [CrossRef]
- Juang, P.; Oki, H.; Wang, Y.; Martonosi, M.; Peh, L.S.; Rubenstein, D. Energy-efficient computing for wildlife tracking: Design tradeoffs and early experiences with ZebraNet. In Proceedings of the 10th International Conference on Architectural Support for Programming Languages and Operating Systems, San Jose, CA, USA, 5–9 October 2002; pp. 96–107. [Google Scholar]
- Aminian, M.; Naji, H.R. A hospital healthcare monitoring system using wireless sensor networks. J. Health Med. Inf. 2013, 4, 121. [Google Scholar] [CrossRef]
- DeBuc, D.C. The role of retinal imaging and portable screening devices in tele-ophthalmology applications for diabetic retinopathy management. Curr. Diabetes Rep. 2016, 16, 132. [Google Scholar] [CrossRef]
- Das, A.; Rad, P.; Choo, K.-K.R.; Nouhi, B.; Lish, J.; Martel, J. Distributed machine learning cloud teleophthalmology IoT for predicting AMD disease progression. Future Gener. Comput. Syst. 2019, 93, 486–498. [Google Scholar] [CrossRef]
- Lin, K.Y.; Hsih, W.H.; Lin, Y.B.; Wen, C.Y.; Chang, T.J. Update in the epidemiology, risk factors, screening, and treatment of diabetic retinopathy. J. Diabetes Investig. 2021, 12, 1322–1325. [Google Scholar] [CrossRef] [PubMed]
- Yau, J.W.; Rogers, S.L.; Kawasaki, R.; Lamoureux, E.L.; Kowalski, J.W.; Bek, T.; Chen, S.J.; Dekker, J.M.; Fletcher, A.; Grauslund, J.; et al. Global prevalence and major risk factors of diabetic retinopathy. Diabetes Care 2012, 35, 556–564. [Google Scholar] [CrossRef]
- Long, S.; Huang, X.; Chen, Z.; Pardhan, S.; Zheng, D. Automatic Detection of Hard Exudates in Color Retinal Images Using Dynamic Threshold and SVM Classification: Algorithm Development and Evaluation. BioMed Res. Int. 2019, 2019, 3926930. [Google Scholar] [CrossRef] [PubMed]
- Ruamviboonsuk, P.; Tiwari, R.; Sayres, R.; Nganthavee, V.; Hemarat, K.; Kongprayoon, A.; Raman, R.; Levinstein, B.; Liu, Y.; Schaekermann, M.; et al. Real-time diabetic retinopathy screening by deep learning in a multisite national screening programme: A prospective interventional cohort study. Lancet. Digit. Health 2022, 4, e235–e244. [Google Scholar] [CrossRef] [PubMed]
- Henriques, J.; Vaz-Pereira, S.; Nascimento, J.; Rosa, P.C. Diabetic eye disease. Acta Med. Port. 2015, 28, 107–113. [Google Scholar] [CrossRef]
- Chaudhary, S.; Zaveri, J.; Becker, N. Proliferative diabetic retinopathy (PDR). Disease-a-Month 2021, 67, 101140. [Google Scholar] [CrossRef]
- Wang, W.; Lo, A.C.Y. Diabetic Retinopathy: Pathophysiology and Treatments. Int. J. Mol. Sci. 2018, 19, 1816. [Google Scholar] [CrossRef]
- Liu, Y.; Wu, N. Progress of Nanotechnology in Diabetic Retinopathy Treatment. Int. J. Nanomed. 2021, 16, 1391–1403. [Google Scholar] [CrossRef]
- Wilkinson, C.P.; Ferris, F.L., III; Klein, R.E.; Lee, P.P.; Agardh, C.D.; Davis, M.; Dills, D.; Kampik, A.; Pararajasegaram, R.; Verdaguer, J.T. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 2003, 110, 1677–1682. [Google Scholar] [CrossRef] [PubMed]
- Ghanchi, F. The Royal College of Ophthalmologists’ clinical guidelines for diabetic retinopathy: A summary. Eye 2013, 27, 285–287. [Google Scholar] [CrossRef]
- American Diabetes Association. Microvascular Complications and Foot Care: Standards of Medical Care in Diabetes-2020. Diabetes Care 2020, 43, S135–S151. [Google Scholar] [CrossRef] [PubMed]
- Kuwayama, S.; Ayatsuka, Y.; Yanagisono, D.; Uta, T.; Usui, H.; Kato, A.; Takase, N.; Ogura, Y.; Yasukawa, T. Automated Detection of Macular Diseases by Optical Coherence Tomography and Artificial Intelligence Machine Learning of Optical Coherence Tomography Images. J. Ophthalmol. 2019, 2019, 6319581. [Google Scholar] [CrossRef] [PubMed]
- Monemian, M.; Rabbani, H. Red-lesion extraction in retinal fundus images by directional intensity changes’ analysis. Sci. Rep. 2021, 11, 18223. [Google Scholar] [CrossRef]
- Wu, Z.; Shi, G.; Chen, Y.; Shi, F.; Chen, X.; Coatrieux, G.; Yang, J.; Luo, L.; Li, S. Coarse-to-fine classification for diabetic retinopathy grading using convolutional neural network. Artif. Intell. Med. 2020, 108, 101936. [Google Scholar] [CrossRef]
- Li, X.; La, R.; Wang, Y.; Hu, B.; Zhang, X. A Deep Learning Approach for Mild Depression Recognition Based on Functional Connectivity Using Electroencephalography. Front. Neurosci. 2020, 14, 192. [Google Scholar] [CrossRef]
- Hazra, D.; Byun, Y.C. SynSigGAN: Generative Adversarial Networks for Synthetic Biomedical Signal Generation. Biology 2020, 9, 441. [Google Scholar] [CrossRef]
- Russo, V.; Lallo, E.; Munnia, A.; Spedicato, M.; Messerini, L.; D’Aurizio, R.; Ceroni, E.G.; Brunelli, G.; Galvano, A.; Russo, A.; et al. Artificial Intelligence Predictive Models of Response to Cytotoxic Chemotherapy Alone or Combined to Targeted Therapy for Metastatic Colorectal Cancer Patients: A Systematic Review and Meta-Analysis. Cancers 2022, 14, 4012. [Google Scholar] [CrossRef] [PubMed]
- Bhimavarapu, U.; Battineni, G. Deep Learning for the Detection and Classification of Diabetic Retinopathy with an Improved Activation Function. Healthcare 2022, 11, 97. [Google Scholar] [CrossRef] [PubMed]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Tseng, V.S.; Chen, C.L.; Liang, C.M.; Tai, M.C.; Liu, J.T.; Wu, P.Y.; Deng, M.S.; Lee, Y.W.; Huang, T.Y.; Chen, Y.H. Leveraging Multimodal Deep Learning Architecture with Retina Lesion Information to Detect Diabetic Retinopathy. Transl. Vis. Sci. Technol. 2020, 9, 41. [Google Scholar] [CrossRef]
- Chen, J.; Frey, E.C.; He, Y.; Segars, W.P.; Li, Y.; Du, Y. TransMorph: Transformer for unsupervised medical image registration. Med. Image Anal. 2022, 82, 102615. [Google Scholar] [CrossRef] [PubMed]
- Yang, Y.; Shang, F.; Wu, B.; Yang, D.; Wang, L.; Xu, Y.; Zhang, W.; Zhang, T. Robust Collaborative Learning of Patch-Level and Image-Level Annotations for Diabetic Retinopathy Grading From Fundus Image. IEEE Trans. Cybern. 2022, 52, 11407–11417. [Google Scholar] [CrossRef] [PubMed]
- Zhang, T.H.; Hasib, M.M.; Chiu, Y.C.; Han, Z.F.; Jin, Y.F.; Flores, M.; Chen, Y.; Huang, Y. Transformer for Gene Expression Modeling (T-GEM): An Interpretable Deep Learning Model for Gene Expression-Based Phenotype Predictions. Cancers 2022, 14, 4763. [Google Scholar] [CrossRef]
- Chefer, H.; Gur, S.; Wolf, L. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 782–791. [Google Scholar]
- Li, C.F.; Xu, Y.D.; Ding, X.H.; Zhao, J.J.; Du, R.Q.; Wu, L.Z.; Sun, W.P. MultiR-Net: A Novel Joint Learning Network for COVID-19 segmentation and classification. Comput. Biol. Med. 2022, 144, 105340. [Google Scholar] [CrossRef]
- Albahli, S.; Ahmad Hassan Yar, G.N. Automated detection of diabetic retinopathy using custom convolutional neural network. J. X-Ray Sci. Technol. 2022, 30, 275–291. [Google Scholar] [CrossRef]
- Mookiah, M.R.; Acharya, U.R.; Koh, J.E.; Chandran, V.; Chua, C.K.; Tan, J.H.; Lim, C.M.; Ng, E.Y.; Noronha, K.; Tong, L.; et al. Automated diagnosis of Age-related Macular Degeneration using greyscale features from digital fundus images. Comput. Biol. Med. 2014, 53, 55–64. [Google Scholar] [CrossRef]
- Shen, D.; Wu, G.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [PubMed]
- Virgili, G.; Menchini, F.; Casazza, G.; Hogg, R.; Das, R.R.; Wang, X.; Michelessi, M. Optical coherence tomography (OCT) for detection of macular oedema in patients with diabetic retinopathy. Cochrane Database Syst. Rev. 2015. [Google Scholar] [CrossRef] [PubMed]
- Rabiolo, A.; Parravano, M.; Querques, L.; Cicinelli, M.V.; Carnevali, A.; Sacconi, R.; Centoducati, T.; Vujosevic, S.; Bandello, F.; Querques, G. Ultra-wide-field fluorescein angiography in diabetic retinopathy: A narrative review. Clin. Ophthalmol. 2017, 11, 803–807. [Google Scholar] [CrossRef] [PubMed]
- Deschler, E.K.; Sun, J.K.; Silva, P.S. Side-effects and complications of laser treatment in diabetic retinal disease. Semin. Ophthalmol. 2014, 29, 290–300. [Google Scholar] [CrossRef]
- Mishra, S.; Kim, Y.-S.; Intarasirisawat, J.; Kwon, Y.-T.; Lee, Y.; Mahmood, M.; Lim, H.-R.; Herbert, R.; Yu, K.J.; Ang, C.S. Soft, wireless periocular wearable electronics for real-time detection of eye vergence in a virtual reality toward mobile eye therapies. Sci. Adv. 2020, 6, eaay1729. [Google Scholar] [CrossRef]
- Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
- Wang, X.-N.; Dai, L.; Li, S.-T.; Kong, H.-Y.; Sheng, B.; Wu, Q. Automatic grading system for diabetic retinopathy diagnosis using deep learning artificial intelligence software. Curr. Eye Res. 2020, 45, 1550–1555. [Google Scholar] [CrossRef]
- Wu, J.; Hu, R.; Xiao, Z.; Chen, J.; Liu, J. Vision Transformer-based recognition of diabetic retinopathy grade. Med. Phys. 2021, 48, 7850–7863. [Google Scholar] [CrossRef]
- Araújo, T.; Aresta, G.; Mendonça, L.; Penas, S.; Maia, C.; Carneiro, Â.; Mendonça, A.M.; Campilho, A. DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images. Med. Image Anal. 2020, 63, 101715. [Google Scholar] [CrossRef]
- Wang, X.; Tang, F.; Chen, H.; Cheung, C.Y.; Heng, P.-A. Deep semi-supervised multiple instance learning with self-correction for DME classification from OCT images. Med. Image Anal. 2023, 83, 102673. [Google Scholar] [CrossRef]
- Vocaturo, E.; Zumpano, E. Diabetic retinopathy images classification via multiple instance learning. In Proceedings of the 2021 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Washington, DC, USA, 16–18 December 2021; pp. 143–148. [Google Scholar]
- Zhu, W.; Qiu, P.; Lepore, N.; Dumitrascu, O.M.; Wang, Y. Self-supervised equivariant regularization reconciles multiple-instance learning: Joint referable diabetic retinopathy classification and lesion segmentation. In Proceedings of the 18th International Symposium on Medical Information Processing and Analysis, Valparaiso, Chile, 9–11 November 2022; pp. 100–107. [Google Scholar]
- Rath, S.R. Diabetic Retinopathy 224x224 (2019 Data). Available online: https://www.kaggle.com/datasets/sovitrath/diabetic-retinopathy-224x224-2019-data (accessed on 16 April 2023).
- Wang, Z.; Xin, J.; Wang, Z.; Yao, Y.; Zhao, Y.; Qian, W. Brain functional network modeling and analysis based on fMRI: A systematic review. Cogn. Neurodyn. 2021, 15, 389–403. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Mehta, S.; Rastegari, M. Mobilevit: Light-weight, general-purpose, and mobile-friendly vision transformer. arXiv 2021, arXiv:2110.02178. [Google Scholar]
- Timmerman, V.; Strickland, A.V.; Züchner, S. Genetics of Charcot-Marie-Tooth (CMT) disease within the frame of the human genome project success. Genes 2014, 5, 13–32. [Google Scholar] [CrossRef]
- Cao, J.; Kwong, S.; Wang, R.; Li, X.; Li, K.; Kong, X. Class-specific soft voting based multiple extreme learning machines ensemble. Neurocomputing 2015, 149, 275–284. [Google Scholar] [CrossRef]
- Lipton, Z.C.; Elkan, C.; Narayanaswamy, B. Thresholding classifiers to maximize F1 score. arXiv 2014, arXiv:1402.1892. [Google Scholar]
- Hajian-Tilaki, K. Receiver operating characteristic (ROC) curve analysis for medical diagnostic test evaluation. Casp. J. Intern. Med. 2013, 4, 627–635. [Google Scholar]
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
- Huang, S.; Xu, Z.; Tao, D.; Zhang, Y. Part-stacked CNN for fine-grained visual categorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1173–1182. [Google Scholar]
- Caroprese, L.; Vocaturo, E.; Zumpano, E. Argumentation approaches for explainable AI in medical informatics. Intell. Syst. Appl. 2022, 16, 200109. [Google Scholar] [CrossRef]
- Han, K.; Wang, Y.; Guo, J.; Tang, Y.; Wu, E. Vision GNN: An image is worth graph of nodes. arXiv 2022, arXiv:2206.00272. [Google Scholar]
- Hu, Z.; Dong, Y.; Wang, K.; Chang, K.-W.; Sun, Y. Gpt-gnn: Generative pre-training of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual, 6–10 July 2020; pp. 1857–1867. [Google Scholar]
- Suganthi, M.; Sathiaseelan, J. An exploratory of hybrid techniques on deep learning for image classification. In Proceedings of the 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP), Chennai, India, 28–29 September 2020; pp. 1–4. [Google Scholar]
Ablation study results (cf. Section 4.3): HybridLG compared with its two backbones.

| Model | Accuracy | Precision | Recall | F1 Score | AUC |
|---|---|---|---|---|---|
| MobileViT-Plus | 88.00% | 87.97% | 88.00% | 0.8797 | 0.963 |
| ResNet101 | 81.33% | 81.31% | 81.33% | 0.8123 | 0.943 |
| Our model | 93.67% | 93.71% | 93.67% | 0.9366 | 0.994 |
Comparison study results (cf. Section 4.4): baseline models on the same task.

| Model | Accuracy | Precision | Recall | F1 Score | AUC |
|---|---|---|---|---|---|
| ResNeXt101 | 79.83% | 80.40% | 79.83% | 0.7943 | 0.947 |
| SE-ResNet101 | 86.50% | 86.47% | 86.50% | 0.8640 | 0.963 |
| SE-ResNeXt50 | 90.17% | 90.30% | 90.16% | 0.9020 | 0.973 |
| SENet154 | 88.00% | 88.17% | 88.00% | 0.8796 | 0.961 |
| MobileViT | 87.33% | 87.54% | 87.33% | 0.8721 | 0.967 |