Radar Signal Processing and Its Impact on Deep Learning-Driven Human Activity Recognition
Abstract
1. Introduction
- Evaluation of radar 2D domain techniques: we empirically evaluated range-FFT-based time-range (TR) maps and time-Doppler (TD) maps generated using the short-time Fourier transform (STFT) and the smoothed pseudo-Wigner–Ville distribution (SPWVD), quantifying their computational efficiency in real-time HAR systems.
- Optimizing models with transfer learning (TL): we evaluated the performance of state-of-the-art CNN architectures, including VGG-16, VGG-19, ResNet-50, and MobileNetV2, to improve the accuracy of the proposed HAR system using TL methods.
- Performance and computational analysis of model-domain pairs: We conducted a comprehensive analysis of 12 model-domain pairs, focusing on real-time performance to optimize the balance between accuracy and computational efficiency (preprocessing, training, and inference times). The analysis is also extended to performance metrics beyond accuracy, such as recall, precision, and F1 score, which are critical to evaluating effectiveness in real-world applications.
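To make the first contribution concrete, the sketch below shows how the two 2D radar domains can be produced from raw FMCW data using NumPy/SciPy. The radar parameters (128 chirps × 256 fast-time samples, 1 kHz slow-time rate, range bin 10) are made-up illustrative values, not the paper's; SciPy has no SPWVD routine, so the STFT stands in for both TD variants here.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# Hypothetical FMCW data cube: 128 chirps (slow time) x 256 fast-time samples.
n_chirps, n_samples = 128, 256
raw = rng.standard_normal((n_chirps, n_samples)) + 1j * rng.standard_normal((n_chirps, n_samples))

# 1) Range FFT along fast time -> time-range (TR) map (chirp index vs. range bin).
tr_map = np.abs(np.fft.fft(raw, axis=1))[:, : n_samples // 2]

# 2) STFT along slow time at one (illustrative) range bin -> time-Doppler (TD) map.
slow_time = raw[:, 10]
f, t, Zxx = signal.stft(slow_time, fs=1000, nperseg=32, noverlap=24)
td_map = np.abs(Zxx)

print(tr_map.shape, td_map.shape)
```

Both maps are then treated as images and fed to the pre-trained CNNs; the SPWVD variant differs only in the time-frequency transform applied to the slow-time signal.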
2. Related Work
3. Radar-Based HAR System
3.1. Data Acquisition
FMCW Radar Principle
3.2. Data Preprocessing
3.2.1. Range-FFT-Based Time-Range (TR) Maps
3.2.2. STFT-Based TD Maps
3.2.3. SPWVD-Based TD Maps
3.3. Training Pipeline and Optimization
3.3.1. Data Preparation
3.3.2. CNN Pre-Trained Models
3.3.3. Model Optimization
3.3.4. Model Training
3.3.5. Performance Evaluation
3.3.6. Computational Efficiency
- Training time: This is the time required to train the model using a particular radar-based domain. Training time is an important parameter because extended training can be difficult in cases where models need frequent updates or computing resources are limited. Achieving fast training times improves the utility of the model in a range of applications.
- Inference time: To measure the model's inference time, we adopted the simple method from [53], focusing on the time required to perform a single inference pass over the test set. Specifically, the per-sample inference time is obtained by dividing the total time for this pass by the number of test samples.
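A minimal sketch of this per-sample timing measurement (the `predict` stand-in below is an illustrative placeholder, not the paper's CNN, and [53]'s exact procedure may differ in details such as warm-up runs):

```python
import time
import numpy as np

def measure_inference_time(predict_fn, test_set):
    """Average wall-clock time per sample over one full pass of the test set."""
    start = time.perf_counter()
    for x in test_set:
        predict_fn(x)
    elapsed = time.perf_counter() - start
    return elapsed / len(test_set)  # seconds per sample

# Stand-in "model": a fixed linear scoring over a flattened 8x8 input map.
W = np.random.default_rng(1).standard_normal((6, 64))
predict = lambda x: int(np.argmax(W @ x.ravel()))

test_set = [np.random.default_rng(i).standard_normal((8, 8)) for i in range(100)]
t_per_sample = measure_inference_time(predict, test_set)
print(f"{t_per_sample * 1e3:.3f} ms/sample")
```

This is the quantity reported as "Inference Time/Sample (ms)" in the computational-cost comparison.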
3.4. Proposed Radar-Based HAR Algorithm
Algorithm 1: Proposed radar-based HAR system
Require: Raw radar data
Ensure: Classified human activities
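Reading the Require/Ensure lines above as a pipeline, an end-to-end skeleton might look as follows. The function names, the range-FFT-only preprocessing, and the linear "classifier" are illustrative stand-ins for the authors' domain transforms and fine-tuned CNN, not their actual implementation.

```python
import numpy as np

ACTIVITIES = ["walking", "sitting", "standing", "bending", "drinking", "fall"]

def preprocess(raw):
    """Map raw radar data to a normalised 2D representation (TR/TD map)."""
    mag = np.abs(np.fft.fft(raw, axis=1))            # range FFT stands in for all domains here
    return (mag - mag.min()) / (np.ptp(mag) + 1e-12)  # scale to [0, 1] for the CNN input

def classify(feature_map, weights):
    """Placeholder for the fine-tuned CNN: linear scoring over the flattened map."""
    scores = weights @ feature_map.ravel()
    return ACTIVITIES[int(np.argmax(scores))]

rng = np.random.default_rng(0)
raw = rng.standard_normal((32, 64))                   # dummy raw radar frame
weights = rng.standard_normal((len(ACTIVITIES), 32 * 64))
print(classify(preprocess(raw), weights))
```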
3.5. Runtime Environment
4. Results and Discussion
4.1. Performance Comparison of Proposed HAR Models
4.2. Generalization Performance of HAR Models
4.3. Computationally Efficient and Lightweight HAR Model
4.4. Computational Cost Across Radar Domains
4.5. Comprehensive Evaluation of Model–Domain Pairs
4.6. Comparison of Pair M8 with State-of-the-Art Models
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Use of Artificial Intelligence
References
- Jiao, W.; Li, R.; Wang, J.; Wang, D.; Zhang, K. Activity recognition in rehabilitation training based on ensemble stochastic configuration networks. Neural Comput. Appl. 2023, 35, 21229–21245. [Google Scholar] [CrossRef]
- Deotale, D.; Verma, M.; Suresh, P.; Kumar, N. Physiotherapy-based human activity recognition using deep learning. Neural Comput. Appl. 2023, 35, 11431–11444. [Google Scholar] [CrossRef]
- Golestani, N.; Moghaddam, M. Human activity recognition using magnetic induction-based motion signals and deep recurrent neural networks. Nat. Commun. 2020, 11, 1551. [Google Scholar] [CrossRef] [PubMed]
- Ranieri, C.M.; MacLeod, S.; Dragone, M.; Vargas, P.A.; Romero, R.A.F. Activity recognition for ambient assisted living with videos, inertial units and ambient sensors. Sensors 2021, 21, 768. [Google Scholar] [CrossRef]
- Storf, H.; Kleinberger, T.; Becker, M.; Schmitt, M.; Bomarius, F.; Prueckner, S. An event-driven approach to activity recognition in ambient assisted living. In Proceedings of the Ambient Intelligence: European Conference, AmI 2009, Salzburg, Austria, 18–21 November 2009; pp. 123–132. [Google Scholar]
- Zam, A.; Bohlooli, A.; Jamshidi, K. Unsupervised deep domain adaptation algorithm for video-based human activity recognition via recurrent neural networks. Eng. Appl. Artif. Intell. 2024, 136, 108922. [Google Scholar] [CrossRef]
- Cob-Parro, A.C.; Losada-Gutiérrez, C.; Marrón-Romera, M.; Gardel-Vicente, A.; Bravo-Muñoz, I. A new framework for deep learning video-based human action recognition on the edge. Expert Syst. Appl. 2024, 238, 122220. [Google Scholar] [CrossRef]
- Lu, Y.; Zhou, L.; Zhang, A.; Zha, S.; Zhuo, X.; Ge, S. Application of deep learning and intelligent sensing analysis in smart home. Sensors 2024, 24, 953. [Google Scholar] [CrossRef]
- Kumar, E.K.; Kumar, D.A.; Murali, K.; Kiran, P.; Kumar, M. Three stream human action recognition using Kinect. AIP Conf. Proc. 2024, 1, 2512. [Google Scholar]
- Zhang, S.; Li, Y.; Zhang, S.; Shahabi, F.; Xia, S.; Deng, Y.; Alshurafa, N. Deep learning in human activity recognition with wearable sensors: A review on advances. Sensors 2022, 22, 1476. [Google Scholar] [CrossRef] [PubMed]
- Mekruksavanich, S.; Jitpattanakul, A. Device position-independent human activity recognition with wearable sensors using deep neural networks. Appl. Sci. 2024, 14, 2107. [Google Scholar] [CrossRef]
- Krishnan, N.C.; Cook, D.J. Activity recognition on streaming sensor data. Pervasive Mob. Comput. 2014, 10, 138–154. [Google Scholar] [CrossRef] [PubMed]
- Li, X.; He, Y.; Jing, X. A survey of deep learning-based human activity recognition in radar. Remote Sens. 2019, 11, 1068. [Google Scholar] [CrossRef]
- Yao, Y.; Liu, W.; Zhang, G.; Hu, W. Radar-based human activity recognition using hyperdimensional computing. IEEE Trans. Microw. Theory Tech. 2021, 70, 1605–1619. [Google Scholar] [CrossRef]
- Yousaf, J.; Yakoub, S.; Karkanawi, S.; Hassan, T.; Almajali, E.; Zia, H.; Ghazal, M. Through-the-wall human activity recognition using radar technologies: A review. IEEE Open J. Antennas Propag. 2024, 5, 1815–1837. [Google Scholar] [CrossRef]
- Huan, S.; Wu, L.; Zhang, M.; Wang, Z.; Yang, C. Radar human activity recognition with an attention-based deep learning network. Sensors 2023, 23, 3185. [Google Scholar] [CrossRef]
- Yu, R.; Du, Y.; Li, J.; Napolitano, A.; Kernec, J.L. Radar-based human activity recognition using denoising techniques to enhance classification accuracy. IET Radar Sonar Navig. 2024, 18, 277–293. [Google Scholar] [CrossRef]
- Ayaz, F.; Alhumaily, B.; Hussain, S.; Mohjazi, L.; Imran, M.A.; Zoha, A. Integrating millimeter-wave FMCW radar for investigating multi-height vital sign monitoring. In Proceedings of the 2024 IEEE Wireless Communications and Networking Conference (WCNC), Dubai, United Arab Emirates, 21–24 April 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Paterniani, G.; Sgreccia, D.; Davoli, A.; Guerzoni, G.; Viesti, P.D.; Valenti, A.C.; Vitolo, M.; Vitetta, G.M.; Boriani, G. Radar-based monitoring of vital signs: A tutorial overview. Proc. IEEE 2023, 111, 277–317. [Google Scholar] [CrossRef]
- Iyer, S.; Zhao, L.; Mohan, M.P.; Jimeno, J.; Siyal, M.Y.; Alphones, A.; Karim, M.F. mm-Wave radar-based vital signs monitoring and arrhythmia detection using machine learning. Sensors 2022, 22, 3106. [Google Scholar] [CrossRef]
- Qu, L.; Wang, Y.; Yang, T.; Sun, Y. Human activity recognition based on WRGAN-GP-synthesized micro-Doppler spectrograms. IEEE Sens. J. 2022, 22, 8960–8973. [Google Scholar] [CrossRef]
- Li, X.; He, Y.; Fioranelli, F.; Jing, X. Semisupervised human activity recognition with radar micro-Doppler signatures. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5103112. [Google Scholar] [CrossRef]
- Kim, W.-Y.; Seo, D.-H. Radar-based HAR combining range–time–Doppler maps and range-distributed-convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1002311. [Google Scholar] [CrossRef]
- Ding, W.; Guo, X.; Wang, G. Radar-based human activity recognition using hybrid neural network model with multidomain fusion. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 2889–2898. [Google Scholar] [CrossRef]
- Liu, Z.; Xu, L.; Jia, Y.; Guo, S. Human activity recognition based on deep learning with multi-spectrogram. In Proceedings of the 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 23–25 October 2020; pp. 11–15. [Google Scholar]
- Qian, Y.; Chen, C.; Tang, L.; Jia, Y.; Cui, G. Parallel LSTM-CNN network with radar multispectrogram for human activity recognition. IEEE Sens. J. 2022, 23, 1308–1317. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1–9. [Google Scholar] [CrossRef]
- Alkasimi, A.; Pham, A.-V.; Gardner, C.; Funsten, B. Human activity recognition based on 4-domain radar deep transfer learning. In Proceedings of the 2023 IEEE Radar Conference (RadarConf23), San Antonio, TX, USA, 1–5 May 2023; pp. 1–6. [Google Scholar]
- Dixit, A.; Kulkarni, V.; Reddy, V.V. Cross frequency adaptation for radar-based human activity recognition using few-shot learning. IEEE Geosci. Remote Sens. Lett. 2023, 20, 3508604. [Google Scholar] [CrossRef]
- Pavliuk, O.; Mishchuk, M.; Strauss, C. Transfer learning approach for human activity recognition based on continuous wavelet transform. Algorithms 2023, 16, 77. [Google Scholar] [CrossRef]
- Theckedath, D.; Sedamkar, R.R. Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks. SN Comput. Sci. 2020, 1, 79. [Google Scholar] [CrossRef]
- Dey, N.; Zhang, Y.-D.; Rajinikanth, V.; Pugalenthi, R.; Raja, N.S.M. Customized VGG19 architecture for pneumonia detection in chest X-rays. Pattern Recognit. Lett. 2021, 143, 67–74. [Google Scholar] [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
- Kulsoom, F.; Narejo, S.; Mehmood, Z.; Chaudhry, H.N.; Butt, A.; Bashir, A.K. A review of machine learning-based human activity recognition for diverse applications. Neural Comput. Appl. 2022, 34, 18289–18324. [Google Scholar] [CrossRef]
- Biswal, A.; Nanda, S.; Panigrahi, C.R.; Cowlessur, S.K.; Pati, B. Human activity recognition using machine learning: A review. In Proceedings of the Progress in Advanced Computing and Intelligent Engineering: ICACIE 2020, Beau Bassin-Rose Hill, Mauritius, 25–27 June 2020; pp. 323–333. [Google Scholar]
- Zenaldin, M.; Narayanan, R.M. Radar micro-Doppler based human activity classification for indoor and outdoor environments. Radar Sens. Technol. 2016, 9829, 364–373. [Google Scholar]
- Tang, L.; Jia, Y.; Qian, Y.; Yi, S.; Yuan, P. Human activity recognition based on mixed CNN with radar multi-spectrogram. IEEE Sens. J. 2021, 21, 25950–25962. [Google Scholar] [CrossRef]
- Abdu, F.J.; Zhang, Y.; Deng, Z. Activity classification based on feature fusion of FMCW radar human motion micro-Doppler signatures. IEEE Sens. J. 2022, 22, 8648–8662. [Google Scholar] [CrossRef]
- Shrestha, A.; Murphy, C.; Johnson, I.; Anbulselvam, A.; Fioranelli, F.; Kernec, J.L.; Gurbuz, S.Z. Cross-frequency classification of indoor activities with dnn transfer learning. In Proceedings of the 2019 IEEE Radar Conference (RadarConf), Boston, MA, USA, 22–26 April 2019; pp. 1–6. [Google Scholar]
- Sadeghi, Z.A.; Ahmad, F. Whitening-aided learning from radar micro-Doppler signatures for human activity recognition. Sensors 2023, 23, 7486. [Google Scholar] [CrossRef]
- Zhou, X.; Tian, J.; Du, H. A lightweight network model for human activity classification based on pre-trained MobileNetV2. IET Conf. Proc. CP779 2020, 9, 1483–1487. [Google Scholar]
- Shrestha, A.; Li, H.; Kernec, J.L.; Fioranelli, F. Continuous human activity classification from FMCW radar with Bi-LSTM networks. IEEE Sens. J. 2020, 20, 13607–13619. [Google Scholar] [CrossRef]
- Fioranelli, F.; Shah, S.A.; Li, H.; Shrestha, A.; Yang, S.; Le Kernec, J. Radar Signatures of Human Activities; University of Glasgow: Glasgow, UK, 2019; Available online: https://researchdata.gla.ac.uk/848/ (accessed on 15 August 2023).
- Yang, S.; Kernec, J.L.; Romain, O.; Fioranelli, F.; Cadart, P.; Fix, J. The Human Activity Radar Challenge: Benchmarking Based on the ‘Radar Signatures of Human Activities’ Dataset From Glasgow University. IEEE J. Biomed. Health Inform. 2023, 27, 1813–1824. [Google Scholar] [CrossRef]
- Safa, A.; Corradi, F.; Keuninckx, L.; Ocket, I.; Bourdoux, A.; Catthoor, F.; Gielen, G.G.E. Improving the Accuracy of Spiking Neural Networks for Radar Gesture Recognition Through Preprocessing. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 2869–2881. [Google Scholar] [CrossRef]
- Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Faghihi, A.; Fathollahi, M.; Rajabi, R. Diagnosis of skin cancer using VGG16 and VGG19 based transfer learning models. Multimed. Tools Appl. 2024, 83, 57495–57510. [Google Scholar] [CrossRef]
- Wu, Z.; Shen, C.; Hengel, A.V.D. Wider or deeper: Revisiting the resnet model for visual recognition. Pattern Recognit. 2019, 90, 119–133. [Google Scholar] [CrossRef]
- Howard, A.G. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Chakraborty, M.; Kumawat, H.C.; Dhavale, S.V.; A, A.B.R. Application of DNN for radar micro-Doppler signature-based human suspicious activity recognition. Pattern Recognit. Lett. 2022, 162, 1–6. [Google Scholar] [CrossRef]
- Takano, S. Chapter 2—Traditional microarchitectures. In Thinking Machines; Takano, S., Ed.; Academic Press: Cambridge, MA, USA, 2021; pp. 19–47. [Google Scholar]
- Papadopoulos, K.; Jelali, M. A Comparative Study on Recent Progress of Machine Learning-Based Human Activity Recognition with Radar. Appl. Sci. 2023, 13, 12728. [Google Scholar] [CrossRef]
Ref. | Radar Domain: TR | Radar Domain: STFT | Radar Domain: SPWVD | Data Preprocessing Time | CNN/LSTM Methods | TL-Based |
---|---|---|---|---|---|---|
[16] | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ |
[23] | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ |
[24] | ✓ | ✓ | ✗ | ✗ | ✓ | ✗ |
[25] | ✗ | ✓ | ✓ | ✗ | ✗ | ✓ |
[26] | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ |
[38] | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ |
[41] | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ |
[42] | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ |
[43] | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ |
Ours | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Short Name | Activity Description | Samples | Duration |
---|---|---|---|
A1 | Walking back and forth | 312 | 10 s |
A2 | Sitting on a chair | 311 | 5 s |
A3 | Standing | 311 | 5 s |
A4 | Bending to pick up an object | 309 | 5 s |
A5 | Drinking water | 311 | 5 s |
A6 | Fall | 197 | 5 s |
Parameters | VGG-16 | VGG-19 | ResNet-50 | MobileNetV2 |
---|---|---|---|---|
Batch size | 32 | 32 | 32 | 32 |
Dropout | 0.5 | 0.2 | 0.2 | 0.2 |
Learning rate | 10⁻² | 10⁻² | 10⁻² | 10⁻⁴ |
Optimizer | SGD | SGD | SGD | Adam |
Decay | - | 10⁻¹ | 10⁻¹ | - |
Momentum | 0.9 | 0.9 | 0.9 | - |
Epochs/fold | 25 | 25 | 25 | 25 |
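The SGD columns of the table (momentum 0.9, with learning-rate decay) correspond to the standard momentum update rule, sketched below. The learning-rate and decay entries appear to have lost their exponents in extraction, so the values here (lr = 1e-2, decay = 1e-1) and the 1/(1 + decay·t) schedule are assumptions for illustration only.

```python
import numpy as np

def sgd_momentum(grad_fn, w0, lr=1e-2, momentum=0.9, decay=1e-1, steps=200):
    """SGD with momentum and a simple per-step learning-rate decay.
    Hyperparameters mirror the table's SGD column under the stated assumptions."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for t in range(steps):
        lr_t = lr / (1.0 + decay * t)   # one common decay schedule (assumption)
        v = momentum * v - lr_t * grad_fn(w)
        w = w + v
    return w

# Usage: minimise f(w) = ||w - 3||^2 (gradient 2(w - 3)); w should approach 3.
w_star = sgd_momentum(lambda w: 2.0 * (w - 3.0), w0=[0.0])
print(w_star)
```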
MDPs | Radar Domains | Models | Accuracy (%) | Precision | Recall | F1 Score |
---|---|---|---|---|---|---|
M1 | TR | VGG-16 | 95.73 | 0.9576 | 0.9573 | 0.9572 |
M2 | TR | VGG-19 | 94.30 | 0.9436 | 0.9430 | 0.9429 |
M3 | TR | ResNet-50 | 93.73 | 0.9373 | 0.9373 | 0.9368 |
M4 | TR | MobileNetV2 | 92.88 | 0.9307 | 0.9288 | 0.9284 |
M5 | STFT | VGG-16 | 96.87 | 0.9697 | 0.9687 | 0.9687 |
M6 | STFT | VGG-19 | 96.01 | 0.9639 | 0.9624 | 0.9624 |
M7 | STFT | ResNet-50 | 97.15 | 0.9721 | 0.9731 | 0.9721 |
M8 | STFT | MobileNetV2 | 96.30 | 0.9635 | 0.9651 | 0.9642 |
M9 | SPWVD | VGG-16 | 97.44 | 0.9764 | 0.9744 | 0.9745 |
M10 | SPWVD | VGG-19 | 98.01 | 0.9803 | 0.9801 | 0.9801 |
M11 | SPWVD | ResNet-50 | 97.15 | 0.9720 | 0.9715 | 0.9715 |
M12 | SPWVD | MobileNetV2 | 96.01 | 0.9629 | 0.9580 | 0.9600 |
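The precision, recall, and F1 columns above are the standard macro-averaged metrics computed from the confusion matrix. A compact sketch (the 3-class confusion matrix is a toy example, not the paper's results):

```python
import numpy as np

def macro_metrics(cm):
    """Accuracy plus macro-averaged precision, recall, and F1 from a
    confusion matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)  # per-class, over predicted totals
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)     # per-class, over true totals
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()

# Toy 3-class confusion matrix (illustrative only).
cm = [[48, 1, 1],
      [2, 45, 3],
      [0, 2, 48]]
acc, p, r, f1 = macro_metrics(cm)
print(f"acc={acc:.4f} precision={p:.4f} recall={r:.4f} f1={f1:.4f}")
```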
Model–Domain Pairs | Training Time/Epoch (s) | Inference Time/Sample (ms) |
---|---|---|
M1 | 3.40 | 7.16 |
M2 | 3.77 | 8.11 |
M3 | 2.77 | 3.80 |
M4 | 1.79 | 2.78 |
M5 | 3.38 | 7.10 |
M6 | 4.38 | 6.90 |
M7 | 2.74 | 3.54 |
M8 | 1.49 | 2.57 |
M9 | 3.50 | 7.02 |
M10 | 3.76 | 6.88 |
M11 | 2.73 | 3.99 |
M12 | 1.34 | 2.76 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ayaz, F.; Alhumaily, B.; Hussain, S.; Imran, M.A.; Arshad, K.; Assaleh, K.; Zoha, A. Radar Signal Processing and Its Impact on Deep Learning-Driven Human Activity Recognition. Sensors 2025, 25, 724. https://doi.org/10.3390/s25030724