Abstract
In the current era of rapid technological innovation, human activity recognition (HAR) has emerged as a principal research area in the field of multimedia information retrieval. The capacity to monitor people remotely is a main determinant of HAR's central role. Data aggregated from multiple gyroscope and accelerometer sensors can be used to recognise human activities, which is one of the key research objectives of this study. Optimal results are attained by applying deep learning models to the collected data. We propose a hierarchical multi-resolution convolutional neural network combined with gated recurrent units (GRUs). Experiments on the mHealth and UCI datasets demonstrate the effectiveness of the proposed model, which achieved accuracies of 99.35% on the mHealth dataset and 94.50% on the UCI dataset.
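The abstract does not spell out the exact layer configuration, but a minimal sketch of the general idea (parallel convolutional branches at different temporal resolutions feeding a GRU over windowed inertial data) might look like the following. The window length, channel count, number of classes, kernel sizes, and layer widths below are illustrative assumptions, not the authors' reported settings:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed settings: 128-sample windows, 23 inertial channels
# (roughly the mHealth channel count), 12 activity classes.
WINDOW_LEN, NUM_CHANNELS, NUM_CLASSES = 128, 23, 12

inputs = layers.Input(shape=(WINDOW_LEN, NUM_CHANNELS))

# Multi-resolution convolutional branches: parallel Conv1D stacks with
# different kernel sizes capture short- and long-range temporal patterns.
branches = []
for kernel_size in (3, 7, 11):  # assumed kernel sizes
    x = layers.Conv1D(64, kernel_size, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    branches.append(x)

# Fuse the resolutions along the feature axis, then model temporal
# dynamics with a GRU before classifying the window.
merged = layers.Concatenate(axis=-1)(branches)
gru_out = layers.GRU(128)(merged)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(gru_out)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Running several kernel sizes in parallel is one common way to approximate a multi-resolution front end: short kernels respond to abrupt motion changes while longer kernels summarise slower trends, and the GRU then aggregates the fused features across the window.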
Acknowledgements
The authors extend their appreciation to Researchers Supporting Project Number (RSP-2021/34), King Saud University, Riyadh, Saudi Arabia.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Nafea, O., Abdul, W. & Muhammad, G. Multi-sensor human activity recognition using CNN and GRU. Int J Multimed Info Retr 11, 135–147 (2022). https://doi.org/10.1007/s13735-022-00234-9