Object Detection in Multispectral Remote Sensing Images Based on Cross-Modal Cross-Attention
Abstract
1. Introduction
The main contributions of this paper are summarized as follows:
1. A dual-stream real-time detector is proposed that performs object detection on paired visible and infrared images, maintaining stable performance in extreme environments such as night and fog.
2. Deepening a network inevitably loses information. In this paper, the features of each modality are filtered and enhanced by cross-modal attention, mitigating the information loss as the network deepens and improving the detector's performance on weak targets (a rough sketch of the idea appears after this list).
3. Features from different modalities often differ substantially, and a single one-shot fusion does not mix them well. A three-stage fusion strategy is therefore designed that fuses the modal features from three perspectives: spatial, channel, and overall. Notably, the cross-modal feature fusion module is trained end to end.
4. Extensive experiments on two datasets show that the proposed method achieves state-of-the-art (SOTA) performance in remote sensing object detection.
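The model itself is described in Section 3. Purely as a hedged illustration of the cross-modal attention idea in contribution 2, the following PyTorch sketch shows two modality streams enhancing each other via cross-attention; the module name, head count, and tensor shapes are our assumptions, not the authors' code.

```python
# Minimal sketch of cross-modal cross-attention feature enhancement.
# All names and shapes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Each modality's features are re-weighted by attending to the other
    modality, so complementary cues (e.g., thermal contrast at night) can
    reinforce weak responses before fusion."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn_vis = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.attn_ir = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm_vis = nn.LayerNorm(channels)
        self.norm_ir = nn.LayerNorm(channels)

    def forward(self, f_vis: torch.Tensor, f_ir: torch.Tensor):
        # f_vis, f_ir: (B, C, H, W) feature maps from the two backbones.
        b, c, h, w = f_vis.shape
        vis = f_vis.flatten(2).transpose(1, 2)  # (B, HW, C) token sequence
        ir = f_ir.flatten(2).transpose(1, 2)

        # Visible queries attend to infrared keys/values, and vice versa.
        vis_enh, _ = self.attn_vis(vis, ir, ir)
        ir_enh, _ = self.attn_ir(ir, vis, vis)

        # Residual connections keep the original modality information.
        vis = self.norm_vis(vis + vis_enh)
        ir = self.norm_ir(ir + ir_enh)

        to_map = lambda t: t.transpose(1, 2).reshape(b, c, h, w)
        return to_map(vis), to_map(ir)


if __name__ == "__main__":
    fem = CrossModalAttention(channels=256)
    f_rgb, f_inf = torch.rand(2, 256, 20, 20), torch.rand(2, 256, 20, 20)
    out_rgb, out_inf = fem(f_rgb, f_inf)
    print(out_rgb.shape, out_inf.shape)  # torch.Size([2, 256, 20, 20]) twice
```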
2. Related Work
2.1. Single Source Remote Sensing Object Detection
2.2. Multimodal Remote Sensing Object Detection
3. Methodology
3.1. Algorithm Overview
3.2. Single-Modal Information Processing Module
3.2.1. C2f Module
3.2.2. SPPF Module
3.3. Cross-Modal Information Processing Module
3.3.1. Cross-Modal Feature Enhancement Module
3.3.2. Cross-Modal Feature Fusion Module
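Section 3.3.2 is only a heading in this outline. As a rough sketch of how a three-stage fusion over spatial, channel, and overall perspectives (contribution 3) could be realized, the module below is an illustrative assumption: the 7×7 spatial convolution and the SE-style channel gate are choices of ours, not taken from the paper.

```python
# Hedged sketch of a three-stage (spatial / channel / overall) fusion of two
# modality feature maps; the staging follows the paper's description, the
# concrete operators are illustrative assumptions.
import torch
import torch.nn as nn


class ThreeStageFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Stage 1: spatial attention from pooled descriptors of both modalities.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # Stage 2: channel attention (squeeze-and-excitation style gate).
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, 1), nn.Sigmoid(),
        )
        # Stage 3: overall 1x1 projection mixing both modalities.
        self.overall = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_vis: torch.Tensor, f_ir: torch.Tensor):
        x = torch.cat([f_vis, f_ir], dim=1)                # (B, 2C, H, W)
        # Spatial stage: weight locations by mean/max activation maps.
        s = torch.cat([x.mean(1, keepdim=True),
                       x.amax(1, keepdim=True)], dim=1)    # (B, 2, H, W)
        x = x * torch.sigmoid(self.spatial(s))
        # Channel stage: re-scale channels of the stacked features.
        x = x * self.channel(x)
        # Overall stage: fuse to a single C-channel map for the detector neck.
        return self.overall(x)
```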
3.4. Loss Function
3.4.1. Binary Cross-Entropy Loss
3.4.2. Bounding Box Regression Loss
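The loss formulas are not reproduced in this outline. The reference list cites the Distance-IoU loss (Zheng et al., AAAI 2020) for box regression; below is a minimal sketch of that standard DIoU formulation for axis-aligned (x1, y1, x2, y2) boxes. It illustrates the cited loss rather than the paper's exact regression term; the classification branch would pair it with a binary cross-entropy term such as torch.nn.BCEWithLogitsLoss.

```python
# Minimal sketch of the standard Distance-IoU (DIoU) loss for axis-aligned
# boxes in (x1, y1, x2, y2) format; illustrative, not the paper's code.
import torch


def diou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    # Intersection area.
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)

    # Union area and IoU.
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared distance between box centers.
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2

    # Squared diagonal of the smallest box enclosing both boxes.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # L_DIoU = 1 - IoU + rho^2 / c^2  (Zheng et al., AAAI 2020).
    return (1 - iou + rho2 / c2).mean()
```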
4. Experiment
4.1. Datasets
4.1.1. DroneVehicle Dataset
4.1.2. VEDAI Dataset
4.2. Implementation Details
4.3. Evaluation Indicators
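The evaluation indicators used in all result tables are the standard detection quantities:

$$
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
\mathrm{AP} = \int_0^1 p(r)\,dr, \qquad
\mathrm{mAP} = \frac{1}{N}\sum_{i=1}^{N}\mathrm{AP}_i,
$$

where $p(r)$ is the precision/recall curve and $N$ is the number of classes. mAP0.5 denotes mAP at an IoU threshold of 0.5, mAP0.75 at 0.75, and mAP0.5:0.95 the average over IoU thresholds from 0.5 to 0.95 in steps of 0.05; FPS reports inference speed in frames per second.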
4.4. Ablation Experiment
4.5. Comparison Experiment
4.5.1. Comparative Experiments on the DroneVehicle Dataset
4.5.2. Comparative Experiments on the VEDAI Dataset
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Han, Y.; Liu, H.; Wang, Y.; Liu, C. A Comprehensive Review for Typical Applications Based Upon Unmanned Aerial Vehicle Platform. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 9654–9666. [Google Scholar] [CrossRef]
- Zhang, R.; Cao, Z.; Yang, S.; Si, L.; Sun, H.; Xu, L.; Sun, F. Cognition-Driven Structural Prior for Instance-Dependent Label Transition Matrix Estimation. IEEE Trans. Neural Netw. Learn. Syst. 2024, 1–14. [Google Scholar] [CrossRef]
- Huang, S.; Ren, S.; Wu, W.; Liu, Q. Discriminative features enhancement for low-altitude UAV object detection. Pattern Recognit. 2024, 147, 110041. [Google Scholar] [CrossRef]
- Burger, W.; Burge, M.J. Scale-Invariant Feature Transform (SIFT). In Digital Image Processing: An Algorithmic Introduction Using Java; Springer: London, UK, 2016; pp. 609–664. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar] [CrossRef]
- Felzenszwalb, P.; McAllester, D.; Ramanan, D. A discriminatively trained, multiscale, deformable part model. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar] [CrossRef]
- Bai, X.; Wang, X.; Liu, X.; Liu, Q.; Song, J.; Sebe, N.; Kim, B. Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments. Pattern Recognit. 2021, 120, 108102. [Google Scholar] [CrossRef]
- Quan, Y.; Chen, Y.; Shao, Y.; Teng, H.; Xu, Y.; Ji, H. Image denoising using complex-valued deep CNN. Pattern Recognit. 2021, 111, 107639. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
- Zhong, Z.; Sun, L.; Huo, Q. Improved localization accuracy by LocNet for Faster R-CNN based text detection in natural scene images. Pattern Recognit. 2019, 96, 106986. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Choi, J.; Chun, D.; Kim, H.; Lee, H.J. Gaussian YOLOv3: An Accurate and Fast Object Detector Using Localization Uncertainty for Autonomous Driving. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 502–511. [Google Scholar] [CrossRef]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. Scaled-YOLOv4: Scaling Cross Stage Partial Network. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13024–13033. [Google Scholar] [CrossRef]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar] [CrossRef]
- Song, K.; Wang, H.; Zhao, Y.; Huang, L.; Dong, H.; Yan, Y. Lightweight multi-level feature difference fusion network for RGB-D-T salient object detection. J. King Saud Univ. Comput. Inf. Sci. 2023, 35, 101702. [Google Scholar] [CrossRef]
- Feng, M.; Su, J. Learning reliable modal weight with transformer for robust RGBT tracking. Knowl.-Based Syst. 2022, 249, 108945. [Google Scholar] [CrossRef]
- Wang, C.; Sun, D.; Yang, J.; Li, Z.; Gao, Q. DFECF-DET: All-Weather Detector Based on Differential Feature Enhancement and Cross-Modal Fusion With Visible and Infrared Sensors. IEEE Sens. J. 2023, 23, 29200–29210. [Google Scholar] [CrossRef]
- Wang, C.; Sun, D.; Gao, Q.; Wang, L.; Yan, Z.; Wang, J.; Wang, E.; Wang, T. MLFFusion: Multi-level feature fusion network with region illumination retention for infrared and visible image fusion. Infrared Phys. Technol. 2023, 134, 104916. [Google Scholar] [CrossRef]
- Zhang, Y.; Xu, C.; Yang, W.; He, G.; Yu, H.; Yu, L.; Xia, G.S. Drone-based RGBT tiny person detection. ISPRS J. Photogramm. Remote Sens. 2023, 204, 61–76. [Google Scholar] [CrossRef]
- An, Z.; Liu, C.; Han, Y. Effectiveness Guided Cross-Modal Information Sharing for Aligned RGB-T Object Detection. IEEE Signal Process. Lett. 2022, 29, 2562–2566. [Google Scholar] [CrossRef]
- Sun, Y.; Cao, B.; Zhu, P.; Hu, Q. Drone-Based RGB-Infrared Cross-Modality Vehicle Detection Via Uncertainty-Aware Learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6700–6713. [Google Scholar] [CrossRef]
- Razakarivony, S.; Jurie, F. Vehicle detection in aerial imagery: A small target detection benchmark. J. Vis. Commun. Image Represent. 2016, 34, 187–203. [Google Scholar] [CrossRef]
- Jia, X.; Zhu, C.; Li, M.; Tang, W.; Zhou, W. LLVIP: A Visible-infrared Paired Dataset for Low-light Vision. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, QC, Canada, 11–17 October 2021; pp. 3489–3497. [Google Scholar] [CrossRef]
- Jiang, S.; Yao, W.; Wong, M.S.; Li, G.; Hong, Z.; Kuc, T.Y.; Tong, X. An Optimized Deep Neural Network Detecting Small and Narrow Rectangular Objects in Google Earth Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1068–1081. [Google Scholar] [CrossRef]
- Haroon, M.; Shahzad, M.; Fraz, M.M. Multisized Object Detection Using Spaceborne Optical Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3032–3046. [Google Scholar] [CrossRef]
- Gao, P.; Tian, T.; Zhao, T.; Li, L.; Zhang, N.; Tian, J. Double FCOS: A Two-Stage Model Utilizing FCOS for Vehicle Detection in Various Remote Sensing Scenes. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4730–4743. [Google Scholar] [CrossRef]
- Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo Algorithm Developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
- Ma, X.; Ji, K.; Xiong, B.; Zhang, L.; Feng, S.; Kuang, G. Light-YOLOv4: An Edge-Device Oriented Target Detection Method for Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10808–10820. [Google Scholar] [CrossRef]
- Liu, Z.; Qiu, S.; Chen, M.; Han, D.; Qi, T.; Li, Q.; Lu, Y. CCH-YOLOX: Improved YOLOX for Challenging Vehicle Detection from UAV Images. In Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023; pp. 1–9. [Google Scholar] [CrossRef]
- Deng, L.; Bi, L.; Li, H.; Chen, H.; Duan, X. Lightweight aerial image object detection algorithm based on improved YOLOv5s. Sci. Rep. 2023, 13, 7817. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z.; Lu, X.; Cao, G.; Yang, Y.; Jiao, L.; Liu, F. ViT-YOLO: Transformer-Based YOLO for Object Detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, QC, Canada, 11–17 October 2021; pp. 2799–2808. [Google Scholar] [CrossRef]
- Hui, Y.; Wang, J.; Li, B. DSAA-YOLO: UAV remote sensing small target recognition algorithm for YOLOV7 based on dense residual super-resolution and anchor frame adaptive regression strategy. J. King Saud Univ. Comput. Inf. Sci. 2024, 36, 101863. [Google Scholar] [CrossRef]
- Gómez-Chova, L.; Tuia, D.; Moser, G.; Camps-Valls, G. Multimodal Classification of Remote Sensing Images: A Review and Future Directions. Proc. IEEE 2015, 103, 1560–1584. [Google Scholar] [CrossRef]
- Fang, Q.; Han, D.; Wang, Z. Cross-Modality Fusion Transformer for Multispectral Object Detection. arXiv 2022, arXiv:2111.00273. [Google Scholar] [CrossRef]
- Bao, C.; Cao, J.; Hao, Q.; Cheng, Y.; Ning, Y.; Zhao, T. Dual-YOLO Architecture from Infrared and Visible Images for Object Detection. Sensors 2023, 23, 2934. [Google Scholar] [CrossRef] [PubMed]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
- Li, X.; Wang, W.; Wu, L.; Chen, S.; Hu, X.; Li, J.; Tang, J.; Yang, J. Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS 2020), Virtual, 6–12 December 2020. [Google Scholar]
- Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression. AAAI Conf. Artif. Intell. 2020, 34, 12993–13000. [Google Scholar] [CrossRef]
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976. [Google Scholar] [CrossRef]
- Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs Beat YOLOs on Real-time Object Detection. arXiv 2023, arXiv:2304.08069. [Google Scholar] [CrossRef]
- Fang, Q.; Wang, Z. Cross-modality attentive feature fusion for object detection in multispectral remote sensing imagery. Pattern Recognit. 2022, 130, 108786. [Google Scholar] [CrossRef]
- Li, C.; Song, D.; Tong, R.; Tang, M. Illumination-aware faster R-CNN for robust multispectral pedestrian detection. Pattern Recognit. 2019, 85, 161–171. [Google Scholar] [CrossRef]
**Table: Experimental environment and training settings.**

| Parameter | Configuration |
|---|---|
| CPU | Intel Xeon E5-2690 v4 |
| GPU | NVIDIA Tesla P100, 16 GB |
| System | Ubuntu 18.04 |
| Deep learning framework | PyTorch 1.9.2 + CUDA 11.4 + cuDNN 11.4 |
| Training epochs | 200 |
| Batch size | 8 |
| Weight decay | 0.0005 |
| Momentum | 0.937 |
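As a hedged illustration only: the table fixes the training hyperparameters but does not name the optimizer. If they map onto a standard SGD setup (our assumption), the configuration would look like the snippet below, with the initial learning rate left as an assumed placeholder since the table does not list it.

```python
# Hypothetical mapping of the table's settings onto a PyTorch SGD optimizer.
# Only momentum, weight decay, epochs, and batch size come from the table;
# the optimizer choice and learning rate are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 16, 3)  # stand-in for the full detector
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,              # assumed initial LR; not specified in the table
    momentum=0.937,       # from the table
    weight_decay=0.0005,  # from the table
)
EPOCHS = 200              # from the table
BATCH_SIZE = 8            # from the table
```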
**Table: Ablation results on the DroneVehicle dataset (FFM: cross-modal feature fusion module; FEM: cross-modal feature enhancement module; M1/M2 are single-modality baselines).**

| Model | M1 (Vis) | M2 (Inf) | M3 | M4 | M5 | M6 |
|---|---|---|---|---|---|---|
| FFM | × | × | × | ✓ | × | ✓ |
| FEM | × | × | × | × | ✓ | ✓ |
| Precision | 0.757 | 0.796 | 0.834 | 0.835 | 0.931 | 0.840 |
| Recall | 0.676 | 0.762 | 0.781 | 0.795 | 0.808 | 0.796 |
| mAP0.5 | 0.717 | 0.804 | 0.825 | 0.839 | 0.838 | 0.840 |
| mAP0.75 | - | - | 0.701 | 0.712 | 0.716 | 0.718 |
| mAP0.5:0.95 | 0.433 | 0.576 | 0.585 | 0.591 | 0.596 | 0.596 |
**Table: Ablation results on the VEDAI dataset (same module abbreviations as above).**

| Model | M1 (Vis) | M2 (Inf) | M3 | M4 | M5 | M6 |
|---|---|---|---|---|---|---|
| FFM | × | × | × | ✓ | × | ✓ |
| FEM | × | × | × | × | ✓ | ✓ |
| Precision | 0.464 | 0.549 | 0.798 | 0.721 | 0.687 | 0.799 |
| Recall | 0.533 | 0.496 | 0.677 | 0.628 | 0.639 | 0.671 |
| mAP0.5 | 0.521 | 0.569 | 0.697 | 0.698 | 0.674 | 0.701 |
| mAP0.75 | - | - | 0.522 | 0.549 | 0.542 | 0.529 |
| mAP0.5:0.95 | 0.310 | 0.353 | 0.429 | 0.437 | 0.425 | 0.439 |
**Table: Comparison with other detectors on the DroneVehicle dataset (per-class AP0.5, overall mAP, and inference speed).**

| Method | Modality | Car | Truck | Bus | Van | mAP0.5 | mAP0.5:0.95 | FPS |
|---|---|---|---|---|---|---|---|---|
| YOLOv3-Tiny [16] | Visible | 0.850 | 0.507 | 0.833 | 0.351 | 0.635 | 0.352 | 166 |
| YOLOv5 [34] | Visible | 0.878 | 0.503 | 0.827 | 0.401 | 0.652 | 0.377 | 75 |
| YOLOv6 [44] | Visible | 0.883 | 0.509 | 0.837 | 0.354 | 0.646 | 0.378 | 84 |
| YOLOv8 | Visible | 0.901 | 0.602 | 0.881 | 0.483 | 0.717 | 0.433 | 89 |
| RT-DETR [45] | Visible | 0.840 | 0.370 | 0.778 | 0.198 | 0.546 | 0.295 | 32 |
| YOLOv3-Tiny [16] | Thermal | 0.956 | 0.670 | 0.907 | 0.489 | 0.755 | 0.519 | 166 |
| YOLOv5 [34] | Thermal | 0.968 | 0.673 | 0.901 | 0.534 | 0.769 | 0.533 | 75 |
| YOLOv6 [44] | Thermal | 0.967 | 0.658 | 0.899 | 0.450 | 0.743 | 0.518 | 84 |
| YOLOv8 | Thermal | 0.973 | 0.738 | 0.919 | 0.584 | 0.804 | 0.576 | 89 |
| RT-DETR [45] | Thermal | 0.951 | 0.593 | 0.873 | 0.327 | 0.686 | 0.470 | 32 |
| CMAFF [46] | Visible + Thermal | 0.975 | 0.762 | 0.941 | 0.604 | 0.820 | 0.576 | 43 |
| CMT [38] | Visible + Thermal | 0.976 | 0.768 | 0.939 | 0.606 | 0.822 | 0.576 | 25 |
| IAW [47] | Visible + Thermal | 0.898 | 0.625 | 0.892 | 0.487 | 0.726 | 0.416 | 50 |
| CFDet [21] | Visible + Thermal | 0.976 | 0.774 | 0.940 | 0.632 | 0.830 | 0.582 | 51 |
| Ours | Visible + Thermal | 0.977 | 0.792 | 0.946 | 0.643 | 0.840 | 0.596 | 53 |
**Table: Comparison with other detectors on the VEDAI dataset (per-class detection accuracy, overall mAP, and inference speed).**

| Method | Modality | Car | Truck | Pickup | Tractor | Camping Car | Boat | Van | Other | mAP0.5 | mAP0.5:0.95 | FPS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv3-Tiny [16] | Visible | 0.847 | 0.501 | 0.730 | 0.692 | 0.805 | 0.454 | 0.513 | 0.543 | 0.565 | 0.297 | 169 |
| YOLOv5 [34] | Visible | 0.761 | 0.308 | 0.563 | 0.430 | 0.699 | 0.170 | 0.489 | 0.443 | 0.428 | 0.250 | 78 |
| YOLOv6 [44] | Visible | 0.663 | 0.221 | 0.504 | 0.266 | 0.539 | 0.378 | 0.337 | 0.330 | 0.360 | 0.214 | 86 |
| YOLOv8 | Visible | 0.824 | 0.406 | 0.706 | 0.669 | 0.745 | 0.419 | 0.441 | 0.481 | 0.521 | 0.310 | 117 |
| RT-DETR [45] | Visible | 0.787 | 0.462 | 0.828 | 0.880 | 0.781 | 0.493 | 0.628 | 0.559 | 0.602 | 0.392 | 31 |
| YOLOv3-Tiny [16] | Thermal | 0.823 | 0.218 | 0.698 | 0.563 | 0.620 | 0.323 | 0.541 | 0.301 | 0.454 | 0.261 | 169 |
| YOLOv5 [34] | Thermal | 0.789 | 0.321 | 0.730 | 0.502 | 0.626 | 0.397 | 0.505 | 0.398 | 0.474 | 0.271 | 78 |
| YOLOv6 [44] | Thermal | 0.786 | 0.296 | 0.733 | 0.587 | 0.717 | 0.149 | 0.348 | 0.407 | 0.447 | 0.263 | 86 |
| YOLOv8 | Thermal | 0.876 | 0.546 | 0.852 | 0.779 | 0.696 | 0.456 | 0.540 | 0.371 | 0.569 | 0.353 | 117 |
| RT-DETR [45] | Thermal | 0.798 | 0.502 | 0.688 | 0.575 | 0.687 | 0.404 | 0.697 | 0.249 | 0.520 | 0.327 | 31 |
| CMAFF [46] | Visible + Thermal | 0.917 | 0.566 | 0.908 | 0.887 | 0.895 | 0.591 | 0.793 | 0.564 | 0.680 | 0.426 | 41 |
| CMT [38] | Visible + Thermal | 0.902 | 0.586 | 0.857 | 0.962 | 0.889 | 0.594 | 0.764 | 0.556 | 0.679 | 0.409 | 25 |
| IAW [47] | Visible + Thermal | 0.920 | 0.622 | 0.917 | 0.843 | 0.882 | 0.653 | 0.729 | 0.597 | 0.685 | 0.422 | 45 |
| CFDet [21] | Visible + Thermal | 0.908 | 0.575 | 0.853 | 0.960 | 0.835 | 0.680 | 0.747 | 0.610 | 0.685 | 0.428 | 47 |
| Ours | Visible + Thermal | 0.900 | 0.593 | 0.910 | 0.931 | 0.907 | 0.637 | 0.787 | 0.565 | 0.692 | 0.437 | 51 |