A Survey on Deep Learning for Human Activity Recognition

Published: 04 October 2021
Abstract

Human activity recognition (HAR) is key to many applications, such as healthcare and smart homes. In this study, we provide a comprehensive survey of recent advances and challenges in HAR with deep learning. Although many surveys on HAR exist, they focus mainly on the taxonomy of HAR and review state-of-the-art systems built with conventional machine learning methods. Several recent works also review studies that use deep models for HAR, but they cover only a few deep models and their variants. A comprehensive and in-depth survey of HAR with recently developed deep learning methods is therefore still needed.
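To make the task concrete, the sketch below shows one minimal form that sensor-based deep HAR commonly takes: fixed-length windows of tri-axial accelerometer samples classified into a small set of activity labels by a 1D convolutional network. This is an illustrative assumption only, not a method from the survey; the model name SimpleHARNet, the layer sizes, the 128-sample window length, and the six output classes are all invented for this example, and PyTorch is assumed to be available.

```python
import torch
import torch.nn as nn

class SimpleHARNet(nn.Module):
    """Toy 1D-CNN that maps a window of tri-axial accelerometer
    samples to activity-class logits (illustrative only)."""
    def __init__(self, n_channels=3, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global average pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x has shape (batch, channels, time), e.g. (8, 3, 128)
        z = self.features(x).squeeze(-1)  # -> (batch, 64)
        return self.classifier(z)         # -> (batch, n_classes)

# Usage sketch: a batch of 8 windows, 3 sensor axes, 128 samples each.
model = SimpleHARNet()
logits = model(torch.randn(8, 3, 128))
print(logits.shape)  # torch.Size([8, 6])
```

The systems reviewed in the survey vary widely in sensing modality and architecture, so this should be read as a toy baseline for orientation rather than a representative method.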

    References

    [1]
    Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Retrieved from https://www.tensorflow.org/.
    [2]
    Heba Abdelnasser, Moustafa Youssef, and Khaled A. Harras. 2015. Wigest: A ubiquitous wifi-based gesture recognition system. In Proceedings of the 2015 IEEE Conference on Computer Communications (INFOCOM’15). IEEE, 1472–1480.
    [3]
    Jake K. Aggarwal and Michael S. Ryoo. 2011. Human activity analysis: A review. ACM Comput. Surv. 43, 3 (2011), 16.
    [4]
    Unaiza Ahsan, Chen Sun, and Irfan Essa. 2017. DiscrimNet: Semi-supervised action recognition from videos using generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops .Women in Computer Vision (WiCV’17).
    [5]
    Bandar Almaslukh, Jalal AlMuhtadi, and Abdelmonim Artoli. 2017. An effective deep autoencoder approach for online smartphone-based human activity recognition. Int. J. Comput. Sci. Netw. Secur. 17, 4 (2017), 160–165.
    [6]
    Mohammad Abu Alsheikh, Ahmed Selim, Dusit Niyato, Linda Doyle, Shaowei Lin, and Hwee-Pink Tan. 2016. Deep activity recognition models with triaxial accelerometers. In Proceedings of the Workshops at the 30th AAAI Conference on Artificial Intelligence.
    [7]
    Kerem Altun, Billur Barshan, and Orkun Tunçel. 2010. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recogn. 43, 10 (2010), 3605–3620.
    [8]
    Arnon Amir, Brian Taba, David Berg, Timothy Melano, Jeffrey McKinstry, Carmelo Di Nolfo, Tapan Nayak, Alexander Andreopoulos, Guillaume Garreau, Marcela Mendoza, et al. 2017. A low power, fully event-based gesture recognition system. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 7243–7252.
    [9]
    Ian Anderson and Henk Muller. 2006. Practical activity recognition using gsm data. In Proceedings of the 5th International Semantic Web Conference (ISWC’06), Vol. 1.
    [10]
    Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, and Jorge Luis Reyes-Ortiz. 2013. A public domain dataset for human activity recognition using smartphones. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN’13).
    [11]
    Ery Arias-Castro, David L. Donoho, et al. 2009. Does median filtering truly preserve edges better than linear filtering?Ann. Stat. 37, 3 (2009), 1172–1206.
    [12]
    Martin Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein gan. arXiv:1701.07875. Retrieved from https://arxiv.org/abs/1701.07875.
    [13]
    Martin Arjovsky, Amar Shah, and Yoshua Bengio. 2016. Unitary evolution recurrent neural networks. In Proceedings of the International Conference on Machine Learning. 1120–1128.
    [14]
    Marc Bachlin, Meir Plotnik, Daniel Roggen, Inbal Maidan, Jeffrey M. Hausdorff, Nir Giladi, and Gerhard Troster. 2009. Wearable assistant for Parkinson’s disease patients with the freezing of gait symptom. IEEE Trans. Inf. Technol. Biomed. 14, 2 (2009), 436–446.
    [15]
    Marc Bachlin, Daniel Roggen, Gerhard Troster, Meir Plotnik, Noit Inbar, Inbal Meidan, Talia Herman, Marina Brozgol, Eliya Shaviv, Nir Giladi, et al. 2009. Potentials of enhanced context awareness in wearable assistants for Parkinson’s disease patients with the freezing of gait syndrome. In Proceedings of the International Symposium on Wearable Computers. IEEE, 123–130.
    [16]
    Oresti Banos, Rafael Garcia, Juan A. Holgado-Terriza, Miguel Damas, Hector Pomares, Ignacio Rojas, Alejandro Saez, and Claudia Villalonga. 2014. mHealthDroid: A novel framework for agile development of mobile health applications. In Proceedings of the International Workshop on Ambient Assisted Living. Springer, 91–98.
    [17]
    Oresti Banos, Mate Toth, Miguel Damas, Hector Pomares, and Ignacio Rojas. 2014. Dealing with the effects of sensor displacement in wearable activity recognition. Sensors 14, 6 (2014), 9995–10023.
    [18]
    Inci M. Baytas, Cao Xiao, Xi Zhang, Fei Wang, Anil K. Jain, and Jiayu Zhou. 2017. Patient subtyping via time-aware LSTM networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 65–74.
    [19]
    Yoshua Bengio et al. 2009. Learning deep architectures for AI. Found. Trends Mach. Learn. 2, 1 (2009), 1–127.
    [20]
    Yoshua Bengio, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. 2013. Advances in optimizing recurrent networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 8624–8628.
    [21]
    Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35, 8 (2013), 1798–1828.
    [22]
    James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: A CPU and GPU math compiler in Python. In Proceedings of the 9th Python in Science Conference, Vol. 1. 3–10.
    [23]
    David Berthelot, Thomas Schumm, and Luke Metz. 2017. Began: Boundary equilibrium generative adversarial networks. arXiv:1703.10717. Retrieved from https://arxiv.org/abs/1703.10717.
    [24]
    Sourav Bhattacharya and Nicholas D. Lane. 2016. From smart to deep: Robust activity recognition on smartwatches using deep learning. In Proceedings of the IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops’16). IEEE, 1–6.
    [25]
    Ronald Newbold Bracewell and Ronald N. Bracewell. 1986. The Fourier Transform and its Applications. Vol. 31999. McGraw–Hill New York.
    [26]
    Jason Brownlee. 2019. How to Evaluate the Skill of Deep Learning Models. Retrieved from https://machinelearningmastery.com/evaluate-skill-deep-learning-models/.
    [27]
    Andreas Bulling, Ulf Blanke, and Bernt Schiele. 2014. A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. 46, 3 (2014), 33.
    [28]
    Andreas Bulling, Ulf Blanke, and Bernt Schiele. 2014. A Tutorial on Human Activity Recognition Using Body-worn Inertial Sensors. Comput. Surv. 46, 3 (2014), 33:1–33:33. https://doi.org/10.1145/2499621
    [29]
    Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 961–970.
    [30]
    Tom Campbell. 1981. Seven Theories of Human Society. Clarendon Press, 169–229.
    [31]
    Joao Carreira, Eric Noland, Chloe Hillier, and Andrew Zisserman. 2019. A short note on the kinetics-700 human action dataset. arXiv:1907.06987. Retrieved from https://arxiv.org/abs/1907.06987.
    [32]
    Eduardo Casilari, Jose A. Santoyo-Ramón, and Jose M. Cano-García. 2017. Umafall: A multisensor dataset for the research on automatic fall detection. Proc. Comput. Sci. 110 (2017), 32–39.
    [33]
    Kaixuan Chen, Lina Yao, Dalin Zhang, Bin Guo, and Zhiwen Yu. 2019. Multi-agent attentional activity recognition. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). 1344–1350.
    [34]
    Liming Chen, Jesse Hoey, Chris D. Nugent, Diane J. Cook, and Zhiwen Yu. 2012. Sensor-based activity recognition. IEEE Trans. Syst. Man Cyberneti. C 42, 6 (2012), 790–808.
    [35]
    Ling Chen, Yi Zhang, and Liangying Peng. 2020. METIER: A deep multi-task learning based activity and user recognition model using wearable sensors. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 4, 1 (2020), 1–18.
    [36]
    Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems.
    [37]
    Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems. 2172–2180.
    [38]
    Yanbei Chen, Xiatian Zhu, and Shaogang Gong. 2018. Semi-supervised deep learning with memory. In Proceedings of the European Conference on Computer Vision (ECCV’18). 268–283.
    [39]
    Zhibo Chen, Wei Zhou, and Weiping Li. 2017. Blind stereoscopic video quality assessment: From depth perception to overall experience. IEEE Trans. Image Process. 27, 2 (2017), 721–734.
    [40]
    Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. 103–111.
    [41]
    Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP’14). 1724–1734.
    [42]
    Wongun Choi, Khuram Shahid, and Silvio Savarese. 2009. What are they doing?: Collective activity classification using spatio-temporal relationship among people. In Proceedings of the IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops’09). IEEE, 1282–1289.
    [43]
    François Chollet et al. 2015. Keras. Retrieved from https://keras.io.
    [44]
    T. Choudhury, G. Borriello, S. Consolvo, D. Haehnel, B. Harrison, B. Hemingway, J. Hightower, P. P. Klasnja, K. Koscher, A. LaMarca, J. A. Landay, L. LeGrand, J. Lester, A. Rahimi, A. Rea, and D. Wyatt. 2008. The mobile sensing platform: An embedded activity recognition system. IEEE Perv. Comput. 7, 2 (Apr. 2008), 32–41. https://doi.org/10.1109/MPRV.2008.39
    [45]
    Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Workshop on Deep Learning (NIPS’14).
    [46]
    Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. 2011. Torch7: A matlab-like environment for machine learning. In Proceedings of the Conference on Neural Information Processing Systems BigLearn Workshop (NIPS BigLearn Workshop’11).
    [47]
    Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, and Anil A. Bharath. 2018. Generative adversarial networks: An overview. IEEE Sign. Process. Mag. 35, 1 (2018), 53–65.
    [48]
    Julien Cumin, Grégoire Lefebvre, Fano Ramparany, and James L. Crowley. 2017. A dataset of routine daily activities in an instrumented home. In Proceedings of the International Conference on Ubiquitous Computing and Ambient Intelligence. Springer, 413–425.
    [49]
    Jason Dai, Yiheng Wang, Xin Qiu, Ding Ding, Yao Zhang, Yanzhang Wang, Xianyan Jia, Cherry Zhang, Yan Wan, Zhichao Li, et al. 2018. Bigdl: A distributed deep learning framework for big data. In Proceedings of the ACM Symposium on Cloud Computing (SoCC’19). 50–60.
    [50]
    Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. 2018. Scaling egocentric vision: The EPIC-KITCHENS dataset. In Proceedings of the European Conference on Computer Vision (ECCV’18).
    [51]
    Fernando De la Torre, Jessica Hodgins, Javier Montano, Sergio Valcarcel, R. Forcada, and J. Macey. 2008. Guide to the Carnegie Mellon University Multimodal Activity (cmu-mmac) Database. Technical Report. Carnegie Mellon University, Pittsburgh, PA.
    [52]
    Li Deng, Dong Yu, et al. 2014. Deep learning: Methods and applications. Foundations and Trends® in Signal Processing 7, 3–4 (2014), 197–387.
    [53]
    Yunbin Deng. 2019. Deep learning on mobile devices: A review. In Proc. SPIE 10993, Mobile Multimedia/Image Processing, Security, and Applications. May 2019. Art. no. 109930.
    [54]
    Marcus Edel and Enrico Köppe. 2016. Binarized-blstm-rnn based human activity recognition. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN’16). IEEE, 1–7.
    [55]
    Shahla Faisal and Gerhard Tutz. 2017. Missing value imputation for gene expression data by tailored nearest neighbors. Stat. Appl. Genet. Molec. Biol. 16, 2 (2017), 95–106.
    [56]
    Xiaoyi Fan, Fangxin Wang, Feng Wang, Wei Gong, and Jiangchuan Liu. 2019. When rfid meets deep learning: Exploring cognitive intelligence for activity identification. IEEE Wireless Commun. 26, 3 (2019), 19–25.
    [57]
    Hongqing Fang and Chen Hu. 2014. Recognizing human activity in smart home using deep learning algorithm. In Proceedings of the 33rd Chinese Control Conference. IEEE, 4716–4720.
    [58]
    Tom Fawcett. 2006. An introduction to ROC analysis. Pattern Recogn. Lett. 27, 8 (2006), 861–874.
    [59]
    Harshala Gammulle, Simon Denman, Sridha Sridharan, and Clinton Fookes. 2018. Multi-level sequence GAN for group activity recognition. In Proceedings of the Asian Conference on Computer Vision. Springer, 331–346.
    [60]
    Salvador García, Julián Luengo, and Francisco Herrera. 2015. Data Preprocessing in Data Mining. Springer.
    [61]
    Enrique Garcia-Ceja and Ramon Brena. 2013. Long-term activity recognition from accelerometer data. Proc. Technol. 7 (2013), 248–256.
    [62]
    Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision. 1440–1448.
    [63]
    Ary L. Goldberger, Luis A. N. Amaral, Leon Glass, Jeffrey M. Hausdorff, Plamen Ch Ivanov, Roger G. Mark, Joseph E. Mietus, George B. Moody, Chung-Kang Peng, and H. Eugene Stanley. 2000. PhysioBank, physiotoolkit, and physionet: Components of a new research resource for complex physiologic signals. Circulation 101, 23 (2000), e215–e220.
    [64]
    Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press.
    [65]
    Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. 2672–2680.
    [66]
    Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv:1308.0850. Retrieved from https://arxiv.org/abs/1308.0850.
    [67]
    Fuqiang Gu, Xuke Hu, Milad Ramezani, Debaditya Acharya, Kourosh Khoshelham, Shahrokh Valaee, and Jianga Shang. 2019. Indoor localization improved by spatial context—A survey. Comput. Surv. 52, 3, Article 64 (Jul. 2019), 35 pages. https://doi.org/10.1145/3322241
    [68]
    Fuqiang Gu, Allison Kealy, Kourosh Khoshelham, and Jianga Shang. 2015. User-independent motion state recognition using smartphone sensors. Sensors 15, 12 (2015), 30636–30652.
    [69]
    Fuqiang Gu, Kourosh Khoshelham, and Shahrokh Valaee. 2017. Locomotion activity recognition: A deep learning approach. In Proceedings of the IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC’17). IEEE, 1–5.
    [70]
    Fuqiang Gu, Kourosh Khoshelham, Shahrokh Valaee, Jianga Shang, and Rui Zhang. 2018. Locomotion activity recognition using stacked denoising autoencoders. IEEE IoT J. 5, 3 (2018), 2085–2093.
    [71]
    Fuqiang Gu, Kourosh Khoshelham, Chunyang Yu, and Jianga Shang. 2018. Accurate step length estimation for pedestrian dead reckoning localization using stacked autoencoders. IEEE Trans. Instrum. Meas. 68, 8 (2018), 2705–2713.
    [72]
    Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Gang Wang, Jianfei Cai, et al. 2018. Recent advances in convolutional neural networks. Pattern Recogn. 77 (2018), 354–377.
    [73]
    Yu Guan and Thomas Plötz. 2017. Ensembles of deep lstm learners for activity recognition using wearables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 1, 2 (2017), 1–28.
    [74]
    Guodong Guo and Alice Lai. 2014. A survey on still image based human action recognition. Pattern Recogn. 47, 10 (2014), 3343–3361.
    [75]
    Sojeong Ha and Seungjin Choi. 2016. Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors. In Proceedings of the International Joint Conference on Neural Networks (IJCNN’16). IEEE, 381–388.
    [76]
    Eldad Haber, Keegan Lensink, Eran Triester, and Lars Ruthotto. 2019. IMEXnet: Aforward stable deep neural network. In Proceedings of the 36th International Conference on Machine Learning (ICML’19). PMLR 97:2525–2534.
    [77]
    Eldad Haber and Lars Ruthotto. 2017. Stable architectures for deep neural networks. Inverse Probl. 34, 1 (2017), 014004.
    [78]
    Mahmudul Hasan and Amit K. Roy-Chowdhury. 2015. A continuous learning framework for activity recognition using deep hybrid feature models. IEEE Trans. Multimedia 17, 11 (2015), 1909–1922.
    [79]
    Mohammed Mehedi Hassan, Md Zia Uddin, Amr Mohamed, and Ahmad Almogren. 2018. A robust human activity recognition system using smartphone sensors and deep learning. Fut. Gener. Comput. Syst. 81 (2018), 307–313.
    [80]
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
    [81]
    Marko Helen and Tuomas Virtanen. 2005. Separation of drums from polyphonic music using non-negative matrix factorization and support vector machine. In Proceedings of the 13th European Signal Processing Conference. IEEE, 1–4.
    [82]
    Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. 2018. Deep reinforcement learning that matters. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
    [83]
    Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. In Proceedings of the International Conference on Learning Representations, Vol. 3.
    [84]
    Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Brian Kingsbury, et al. 2012. Deep neural networks for acoustic modeling in speech recognition. IEEE Sign. Process. Mag. 29 (2012).
    [85]
    Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. 2006. A fast learning algorithm for deep belief nets. Neural Comput. 18, 7 (2006), 1527–1554.
    [86]
    Geoffrey E. Hinton and Ruslan R. Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science 313, 5786 (2006), 504–507.
    [87]
    Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput. 9, 8 (1997), 1735–1780.
    [88]
    Mohammad Hossin and M. N. Sulaiman. 2015. A review on evaluation metrics for data classification evaluations. Int. J. Data Min. Knowl. Manage. Process 5, 2 (2015), 1.
    [89]
    Yuhuang Hu, Hongjie Liu, Michael Pfeiffer, and Tobi Delbruck. 2016. DVS benchmark datasets for object tracking, action recognition, and object recognition. Front. Neurosci. 10 (2016), 405.
    [90]
    Dana Hughes and Nikolaus Correll. 2018. Distributed convolutional neural networks for human activity recognition in wearable robotics. In Distributed Autonomous Robotic Systems. Springer, 619–631.
    [91]
    Tâm Huynh, Mario Fritz, and Bernt Schiele. 2008. Discovery of activity patterns using topic models. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp’08), Vol. 8., 10–19.
    [92]
    Mostafa S. Ibrahim, Srikanth Muralidharan, Zhiwei Deng, Arash Vahdat, and Greg Mori. 2016. A hierarchical deep temporal model for group activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1971–1980.
    [93]
    Ozlem Durmaz Incel, Mustafa Kose, and Cem Ersoy. 2013. A review and taxonomy of activity recognition on mobile phones. BioNanoScience 3, 2 (2013), 145–171.
    [94]
    Masaya Inoue, Sozo Inoue, and Takeshi Nishida. 2018. Deep recurrent neural network for mobile human activity recognition with high throughput. Artif. Life Robot. 23, 2 (2018), 173–185.
    [95]
    Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia. ACM, 675–678.
    [96]
    Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljacic, and Yoshua Bengio. 2019. Gated orthogonal recurrent units: On learning to forget. Neural Comput. 31, 4 (2019), 765–783.
    [97]
    Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018. Progressive growing of gans for improved quality. In Sixth International Conference on Learning Representations (ICLR’18).
    [98]
    Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 4401–4410.
    [99]
    Nobuo Kawaguchi, Nobuhiro Ogawa, Yohei Iwasaki, Katsuhiko Kaji, Tsutomu Terada, Kazuya Murao, Sozo Inoue, Yoshihiro Kawahara, Yasuyuki Sumi, and Nobuhiko Nishio. 2011. HASC Challenge: Gathering large scale human activity corpus for the real-world activity understandings. In Proceedings of the 2nd Augmented Human International Conference. ACM, 27.
    [100]
    Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. 2017. The kinetics human action video dataset. arXiv:1705.06950. Retrieved from https://arxiv.org/abs/1705.06950.
    [101]
    Agnan Kessy, Alex Lewin, and Korbinian Strimmer. 2018. Optimal whitening and decorrelation. Am. Stat. 72, 4 (2018), 309–314.
    [102]
    Ji-Hyun Kim. 2009. Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap. Comput. Stat. Data Anal. 53, 11 (2009), 3735–3745.
    [103]
    Diederik P. Kingma and Max Welling. 2013. Auto-encoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR’13).
    [104]
    Kris M. Kitani, Takahiro Okabe, Yoichi Sato, and Akihiro Sugimoto. 2011. Fast unsupervised ego-action learning for first-person sports videos. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR’11). IEEE, 3241–3248.
    [105]
    Alex Krizhevsky and Geoffrey Hinton. 2009. Learning Multiple Layers of Features from Tiny Images. Master’s thesis. Department of Computer Science, University of Toronto.
    [106]
    Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. 1097–1105.
    [107]
    Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. 2011. HMDB: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision. IEEE, 2556–2563.
    [108]
    Jennifer R. Kwapisz, Gary M. Weiss, and Samuel A. Moore. 2011. Activity recognition using cell phone accelerometers. ACM SigKDD Explor. Newslett. 12, 2 (2011), 74–82.
    [109]
    Nicholas D. Lane, Petko Georgiev, and Lorena Qendro. 2015. DeepEar: robust smartphone audio sensing in unconstrained acoustic environments using deep learning. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 283–294.
    [110]
    Oscar D. Lara and Miguel A. Labrador. 2012. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 15, 3 (2012), 1192–1209.
    [111]
    Quoc V. Le, Jiquan Ngiam, Zhenghao Chen, Daniel Chia, Pang W. Koh, and Andrew Y. Ng. 2010. Tiled convolutional neural networks. In Advances in Neural Information Processing Systems. 1279–1287.
    [112]
    Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436.
    [113]
    Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. 1998. Gradient-based learning applied to document recognition. Proc. IEEE 86, 11 (1998), 2278–2324.
    [114]
    Chen-Yu Lee, Patrick Gallagher, and Zhuowen Tu. 2017. Generalizing pooling functions in cnns: Mixed, gated, and tree. IEEE Trans. Pattern Anal. Mach. Intell. 40, 4 (2017), 863–875.
    [115]
    Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. 2009. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 609–616.
    [116]
    Dan Li, Jitender Deogun, William Spaulding, and Bill Shuart. 2004. Towards missing data imputation: a study of fuzzy k-means clustering method. In Proceedings of the International Conference on Rough Sets and Current Trends in Computing. Springer, 573–579.
    [117]
    Fei-Fei Li, Justin Johnson, and Serena Yeung. 2019. CS231n Convolutional Neural Networks for Visual Recognition. Retrieved from http://cs231n.github.io/neural-networks-2/.
    [118]
    Xinyu Li, Yanyi Zhang, Ivan Marsic, Aleksandra Sarcevic, and Randall S. Burd. 2016. Deep learning for rfid-based activity recognition. In Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. 164–175.
    [119]
    Xinyu Li, Yanyi Zhang, Jianyu Zhang, Yueyang Chen, Huangcan Li, Ivan Marsic, and Randall S. Burd. 2017. Region-based activity recognition using conditional GAN. In Proceedings of the 25th ACM International Conference on Multimedia. ACM, 1059–1067.
    [120]
    Yin Li, Miao Liu, and James M. Rehg. 2018. In the eye of beholder: Joint learning of gaze and actions in first person video. In Proceedings of the European Conference on Computer Vision (ECCV’18). 619–635.
    [121]
    Yongmou Li, Dianxi Shi, Bo Ding, and Dongbo Liu. 2014. Unsupervised feature learning for human activity recognition using smartphone sensors. In Mining Intelligence and Knowledge Exploration. Springer, 99–107.
    [122]
    Zuhe Li, Yangyu Fan, and Weihua Liu. 2015. The effect of whitening transformation on pooling operations in convolutional autoencoders. EURASIP J. Adv. Sign. Process. 2015, 1 (2015), 37.
    [123]
    Ming Liang and Xiaolin Hu. 2015. Recurrent convolutional neural network for object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3367–3375.
    [124]
    Lin Liao, Dieter Fox, and Henry A. Kautz. 2005. Location-Based Activity Recognition using Relational Markov Networks. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI’05), Vol. 5. 773–778.
    [125]
    Kang Ling, Haipeng Dai, Yuntang Liu, Alex X. Liu, Wei Wang, and Qing Gu. 2020. Ultragesture: Fine-grained gesture sensing and recognition. IEEE Trans. Mobile Comput. (2020).
    [126]
    Si Liu, Yao Sun, Defa Zhu, Renda Bao, Wei Wang, Xiangbo Shu, and Shuicheng Yan. 2017. Face aging with contextual generative adversarial nets. In Proceedings of the 25th ACM International Conference on Multimedia. ACM, 82–90.
    [127]
    Yunhao Liu, Yiyang Zhao, Lei Chen, Jian Pei, and Jinsong Han. 2011. Mining frequent trajectory patterns for activity monitoring using radio frequency tag arrays. IEEE Trans. Parallel Distrib. Syst. 23, 11 (2011), 2138–2149.
    [128]
    Hong Lu, A. J. Bernheim Brush, Bodhi Priyantha, Amy K. Karlson, and Jie Liu. 2011. Speakersense: Energy efficient unobtrusive speaker identification on mobile phones. In Proceedings of the International Conference on Pervasive Computing. Springer, 188–205.
    [129]
    Hong Lu, Denise Frauendorfer, Mashfiqui Rabbi, Marianne Schmid Mast, Gokul T. Chittaranjan, Andrew T. Campbell, Daniel Gatica-Perez, and Tanzeem Choudhury. 2012. Stresssense: Detecting stress in unconstrained acoustic environments using smartphones. In Proceedings of the ACM Conference on Ubiquitous Computing. ACM, 351–360.
    [130]
    Hong Lu, Jun Yang, Zhigang Liu, Nicholas D. Lane, Tanzeem Choudhury, and Andrew T. Campbell. 2010. The Jigsaw continuous sensing engine for mobile phone applications. In Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems. ACM, 71–84.
    [131]
    Haojie Ma, Wenzhong Li, Xiao Zhang, Songcheng Gao, and Sanglu Lu. 2019. AttnSense: multi-level attention mechanism for multimodal human activity recognition. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press, 3109–3115.
    [132]
    Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang, and Stephen Paul Smolley. 2017. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision. 2794–2802.
    [133]
    Mathworks. 2016. Deep Learning Toolbox. Retrieved from https://www.mathworks.com/products/deep-learning.html.
    [134]
    Shu Miao, Guang Chen, Xiangyu Ning, Yang Zi, Kejia Ren, Zhenshan Bing, and Alois Knoll. 2019. Neuromorphic vision datasets for pedestrian detection, action recognition, and fall detection. Front. Neurorobot. 13 (2019), 38.
    [135]
    Daniela Micucci, Marco Mobilio, and Paolo Napoletano. 2017. Unimib shar: A dataset for human activity recognition using acceleration data from smartphones. Appl. Sci. 7, 10 (2017), 1101.
    [136]
    Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc’Aurelio Ranzato. 2014. Learning longer memory in recurrent neural networks. arXiv:1412.7753. Retrieved from https://arxiv.org/abs/1412.7753.
    [137]
    Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets. arXiv:1411.1784. Retrieved from https://arxiv.org/abs/1411.1784.
    [138]
    Thomas B. Moeslund, Adrian Hilton, and Volker Krüger. 2006. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Understand. 104, 2–3 (2006), 90–126.
    [139]
    Mohammad Moghimi, Pablo Azagra, Luis Montesano, Ana C. Murillo, and Serge Belongie. 2014. Experiments on an rgb-d wearable vision system for egocentric activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 597–603.
    [140]
    Abdel-rahman Mohamed, George Dahl, and Geoffrey Hinton. 2009. Deep belief networks for phone recognition. In Proceedings of the NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, Vol. 1. Vancouver, Canada, 39.
    [141]
    Subhas Chandra Mukhopadhyay. 2014. Wearable sensors for human activity monitoring: A review. IEEE Sens. J. 15, 3 (2014), 1321–1330.
    [142]
    Andreas C. Müller, Sarah Guido, et al. 2016. Introduction to Machine Learning with Python: A Guide for Data Scientists. O’Reilly Media, Inc.
    [143]
    Abdulmajid Murad and Jae-Young Pyun. 2017. Deep recurrent neural networks for human activity recognition. Sensors 17, 11 (2017), 2556.
    [144]
    Steve Mutuvi. 2019. Introduction to Machine Learning Model Evaluation. Retrieved from https://heartbeat.fritz.ai/introduction-to-machine-learning-model-evaluation-fa859e1b2d7f.
    [145]
    Tushar Nagarajan, Yanghao Li, Christoph Feichtenhofer, and Kristen Grauman. 2020. EGO-TOPO: Environment Affordances from Egocentric Video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 163–172.
    [146]
    Woonhyun Nam, Piotr Dollár, and Joon Hee Han. 2014. Local decorrelation for improved pedestrian detection. In Advances in Neural Information Processing Systems. 424–432.
    [147]
    Andrew Ng. 2019. UFLDL Tutorial: PCA Whitening. Retrieved from http://ufldl.stanford.edu/tutorial/unsupervised/PCAWhitening/.
    [148]
    Andrew Ng et al. 2011. Sparse autoencoder. CS294A Lect. Not. 72, 2011 (2011), 1–19.
    [149]
    Trung Thanh Ngo, Yasushi Makihara, Hajime Nagahara, Yasuhiro Mukaigawa, and Yasushi Yagi. 2015. Similar gait action recognition using an inertial sensor. Pattern Recogn. 48, 4 (2015), 1289–1301.
    [150]
    Kai Niu, Fusang Zhang, Zhaoxin Chang, and Daqing Zhang. 2018. A Fresnel Diffraction Model Based Human Respiration Detection System Using COTS Wi-Fi Devices. In Proceedings of the ACM International Joint Conference and International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers. ACM, 416–419.
    [151]
    Henry Friday Nweke, Ying Wah Teh, Mohammed Ali Al-Garadi, and Uzoma Rita Alo. 2018. Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Syst. Appl. 105 (2018), 233–261.
    [152]
    Augustus Odena, Christopher Olah, and Jonathon Shlens. 2017. Conditional image synthesis with auxiliary classifier gans. In Proceedings of the 34th International Conference on Machine Learning-Volume 70. 2642–2651.
    [153]
    Ferda Ofli, Rizwan Chaudhry, Gregorij Kurillo, René Vidal, and Ruzena Bajcsy. 2013. Berkeley mhad: A comprehensive multimodal human action database. In Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV’13). IEEE, 53–60.
    [154]
    Sangmin Oh, Anthony Hoogs, Amitha Perera, Naresh Cuntoor, Chia-Chih Chen, Jong Taek Lee, Saurajit Mukherjee, J. K. Aggarwal, Hyungtae Lee, Larry Davis, et al. 2011. A large-scale benchmark dataset for event recognition in surveillance video. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR’11). IEEE, 3153–3160.
    [155]
    Paul Over, George Awad, Martial Michel, Jonathan Fiscus, Greg Sanders, Barbara Shaw, Wessel Kraaij, Alan F. Smeaton, and Georges Quéot. 2013. TRECVID 2012-an overview of the goals, tasks, data, evaluation mechanisms and metrics. TRECVID Publications. Retrieved August 22, 2021 http://www-nlpir.nist.gov/projects/tvpubs/tv.pubs.org.html.
    [156]
    Filippo Palumbo, Claudio Gallicchio, Rita Pucci, and Alessio Micheli. 2016. Human activity recognition using multisensor data fusion based on reservoir computing. J. Ambient Intell. Smart Environ. 8, 2 (2016), 87–107.
    [157]
    Zhaoqing Pan, Weijie Yu, Xiaokai Yi, Asifullah Khan, Feng Yuan, and Yuhui Zheng. 2019. Recent progress on generative adversarial networks (GANs): A survey. IEEE Access 7 (2019), 36322–36333.
    [158]
    Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
    [159]
    Ling Pei, Jingbin Liu, Robert Guinness, Yuwei Chen, Heidi Kuusniemi, and Ruizhi Chen. 2012. Using LS-SVM based motion recognition for smartphone indoor wireless positioning. Sensors 12, 5 (2012), 6155–6175.
    [160]
    Ling Pei, Songpengcheng Xia, Lei Chu, Fanyi Xiao, Qi Wu, Wenxian Yu, and Robert Qiu. 2021. MARS: Mixed Virtual and Real Wearable Sensors for Human Activity Recognition with Multi-Domain Deep Learning Model. IEEE IoT J. (2021).
    [161]
    Cuong Pham and Patrick Olivier. 2009. Slice&dice: Recognizing food preparation activities using embedded accelerometers. In Proceedings of the European Conference on Ambient Intelligence. Springer, 34–43.
    [162]
    Thomas Plötz, Nils Y. Hammerla, and Patrick L. Olivier. 2011. Feature learning for activity recognition in ubiquitous computing. In Proceedings of the 22d International Joint Conference on Artificial Intelligence.
    [163]
    Yair Poleg, Chetan Arora, and Shmuel Peleg. 2014. Temporal segmentation of egocentric videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2537–2544.
    [164]
    Ronald Poppe. 2010. A survey on vision-based human action recognition. Image Vis. Comput. 28, 6 (2010), 976–990.
    [165]
    David Martin Powers. 2011. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation.
    [166]
    Mengshi Qi, Jie Qin, Annan Li, Yunhong Wang, Jiebo Luo, and Luc Van Gool. 2018. stagnet: An attentive semantic RNN for group activity recognition. In Proceedings of the European Conference on Computer Vision (ECCV’18). 101–117.
    [167]
    Xin Qi, Gang Zhou, Yantao Li, and Ge Peng. 2012. Radiosense: Exploiting wireless communication patterns for body sensor network activity recognition. In Proceedings of the IEEE 33rd Real-Time Systems Symposium. IEEE, 95–104.
    [168]
    Kiran K. Rachuri, Mirco Musolesi, Cecilia Mascolo, Peter J. Rentfrow, Chris Longworth, and Andrius Aucinas. 2010. EmotionSense: a mobile phones based adaptive platform for experimental social psychology research. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing. ACM, 281–290.
    [169]
    Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434. Retrieved from https://arxiv.org/abs/1511.06434.
    [170]
    Valentin Radu and Maximilian Henne. 2019. Vision2Sensor: Knowledge transfer across sensing modalities for human activity recognition. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 3, 3 (2019), 84.
    [171]
    Valentin Radu, Panagiota Katsikouli, Rik Sarkar, and Mahesh K. Marina. 2014. A semi-supervised learning approach for robust indoor-outdoor detection with smartphones. In Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems. ACM, 280–294.
    [172]
    Valentin Radu, Nicholas D. Lane, Sourav Bhattacharya, Cecilia Mascolo, Mahesh K. Marina, and Fahim Kawsar. 2016. Towards multimodal deep learning for activity recognition on mobile devices. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. ACM, 185–188.
    [173]
    Valentin Radu, Catherine Tong, Sourav Bhattacharya, Nicholas D. Lane, Cecilia Mascolo, Mahesh K. Marina, and Fahim Kawsar. 2018. Multimodal deep learning for activity and context recognition. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 1, 4 (2018), 157.
    [174]
    Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. 2017. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv:1711.05225. Retrieved from https://arxiv.org/abs/1711.05225.
    [175]
    Nishkam Ravi, Nikhil Dandekar, Preetham Mysore, and Michael L. Littman. 2005. Activity recognition from accelerometer data. In Proceedings of the AAAI Annual Conference on Artificial Intelligence (AAAI’05), Vol. 5. 1541–1546.
    [176]
    Kishore K. Reddy and Mubarak Shah. 2013. Recognizing 50 human action categories of web videos. Mach. Vis. Appl. 24, 5 (2013), 971–981.
    [177]
    Attila Reiss and Didier Stricker. 2012. Introducing a new benchmarked dataset for activity monitoring. In Proceedings of the 16th International Symposium on Wearable Computers. IEEE, 108–109.
    [178]
    Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems. 91–99.
    [179]
    Malcolm Reynolds, Gabriel Barth-Maron, Frederic Besse, Diego de Las Casas, Andreas Fidjeland, Tim Green, Adrià Puigdomènech, Sébastien Racanière, Jack Rae, and Fabio Viola. 2017. Open Sourcing Sonnet—A New Library for Constructing Neural Networks. Retrieved from https://deepmind.com/blog/open-sourcing-sonnet/.
    [180]
    Daniel Roggen, Alberto Calatroni, Mirco Rossi, Thomas Holleczek, Kilian Förster, Gerhard Tröster, Paul Lukowicz, David Bannach, Gerald Pirkl, Alois Ferscha, et al. 2010. Collecting complex activity datasets in highly rich networked sensor environments. In Proceedings of the 7th International Conference on Networked Sensing Systems (INSS’10). IEEE, 233–240.
    [181]
    Charissa Ann Ronao and Sung-Bae Cho. 2015. Deep convolutional neural networks for human activity recognition with smartphone sensors. In Proceedings of the International Conference on Neural Information Processing. Springer, 46–53.
    [182]
    Charissa Ann Ronao and Sung-Bae Cho. 2016. Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst. Appl. 59 (2016), 235–244.
    [183]
    David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back-propagating errors. Nature 323 (1986), 533–536.
    [184]
    Ruslan Salakhutdinov. 2015. Learning deep generative models. Annu. Rev. Stat. Appl. 2 (2015), 361–385.
    [185]
    Ruslan Salakhutdinov and Geoffrey Hinton. 2009. Deep boltzmann machines. In Artificial Intelligence and Statistics. 448–455.
    [186]
    Hojjat Salehinejad, Sharan Sankar, Joseph Barfett, Errol Colak, and Shahrokh Valaee. 2017. Recent advances in recurrent neural networks. arXiv:1801.01078. Retrieved from https://arxiv.org/abs/1801.01078.
    [187]
    Philip Schmidt, Attila Reiss, Robert Duerichen, Claus Marberger, and Kristof Van Laerhoven. 2018. Introducing wesad, a multimodal dataset for wearable stress and affect detection. In Proceedings of the 20th ACM International Conference on Multimodal Interaction. 400–408.
    [188]
    Markus Scholz, Till Riedel, Mario Hock, and Michael Beigl. 2013. Device-free and device-bound activity recognition using radio signal strength. In Proceedings of the 4th Augmented Human International Conference. ACM, 100–107.
    [189]
    Christian Schuldt, Ivan Laptev, and Barbara Caputo. 2004. Recognizing human actions: A local SVM approach. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR’04), Vol. 3. IEEE, 32–36.
    [190]
    Frank Seide and Amit Agarwal. 2016. CNTK: Microsoft’s open-source deep-learning toolkit. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2135–2135.
    [191]
    Ervin Sejdic, Igor Djurovic, et al. 2008. Quantitative performance analysis of scalogram as instantaneous frequency estimator. IEEE Trans. Sign. Process. 56, 8 (2008), 3837–3845.
    [192]
    Mehmet Saygın Seyfioğlu, Ahmet Murat Özbayoğlu, and Sevgi Zubeyde Gürbüz. 2018. Deep convolutional autoencoder for radar-based classification of similar aided and unaided human activities. IEEE Trans. Aerospace Electron. Syst. 54, 4 (2018), 1709–1723.
    [193]
    Jianga Shang, Fuqiang Gu, Xuke Hu, and Allison Kealy. 2015. Apfiloc: An infrastructure-free indoor localization method fusing smartphone inertial sensors, landmarks and map information. Sensors 15, 10 (2015), 27251–27272.
    [194]
    Shuyu Shi, Stephan Sigg, and Yusheng Ji. 2012. Passive detection of situations from ambient fm-radio signals. In Proceedings of the ACM Conference on Ubiquitous Computing. ACM, 1049–1053.
    [195]
    Shuyu Shi, Stephan Sigg, Wei Zhao, and Yusheng Ji. 2014. Monitoring attention using ambient FM radio signals. IEEE Perv. Comput. 13, 1 (2014), 30–36.
    [196]
    Muhammad Shoaib, Stephan Bosch, Ozlem Incel, Hans Scholten, and Paul Havinga. 2015. A survey of online activity recognition using mobile phones. Sensors 15, 1 (2015), 2059–2085.
    [197]
    Stephan Sigg, Ulf Blanke, and Gerhard Tröster. 2014. The telepathic phone: Frictionless activity recognition from wifi-rssi. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications (PERCOM’14). IEEE, 148–155.
    [198]
    Jae Mun Sim, Yonnim Lee, and Ohbyung Kwon. 2015. Acoustic sensor based recognition of human activity in everyday life for smart home services. Int. J. Distrib. Sens. Netw. 11, 9 (2015), 679123.
    [199]
    Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations (ICLR’15).
    [200]
    Timothy Sohn, Alex Varshavsky, Anthony LaMarca, Mike Y. Chen, Tanzeem Choudhury, Ian Smith, Sunny Consolvo, Jeffrey Hightower, William G. Griswold, and Eyal De Lara. 2006. Mobility detection using everyday GSM traces. In Proceedings of the International Conference on Ubiquitous Computing. Springer, 212–224.
    [201]
    Cuiling Lan Junliang Xing Wenjun Zeng Song, Sijie and Jiaying Liu. 2017. An end-to-end spatio-temporal attention model for human action recognition from skeleton data. In The Proceedings of the 31st AAAI Conference on Artificial Intelligence. 4263–4270.
    [202]
    Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. UCF101: A Dataset of 101 Human Action Classes From Videos in The Wild, CRCV-TR-12-01, November, 2012.
    [203]
    Allan Stisen, Henrik Blunck, Sourav Bhattacharya, Thor Siiger Prentow, Mikkel Baun Kjærgaard, Anind Dey, Tobias Sonne, and Mads Møller Jensen. 2015. Smart devices are different: Assessing and mitigatingmobile sensing heterogeneities for activity recognition. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems. ACM, 127–140.
    [204]
    Ke Sun, Ting Zhao, Wei Wang, and Lei Xie. 2018. Vskin: Sensing touch gestures on surfaces of mobile devices using acoustic signals. In Proceedings of the 24th Annual International Conference on Mobile Computing and Networking. 591–605.
    [205]
    Mohammed Sunasra. 2019. Performance Metrics for Classification problems in Machine Learning. Retrieved from https://medium.com/thalus-ai/performance-metrics-for-classification-problems-in-machine-learning-part-i-b085d432082b.
    [206]
    Ilya Sutskever, James Martens, and Geoffrey E. Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML’11). 1017–1024.
    [207]
    Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. 2017. Efficient processing of deep neural networks: A tutorial and survey. Proc. IEEE 105, 12 (2017), 2295–2329.
    [208]
    Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. 2017. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence.
    [209]
    Qu Tang, Dinesh John, Binod Thapa-Chhetry, Diego Jose Arguello, and Stephen Intille. 2020. Posture and physical activity detection: Impact of number of sensors and feature type.Med. Sci. Sports and Exercise (2020).
    [210]
    Yansong Tang, Yi Tian, Jiwen Lu, Jianjiang Feng, and Jie Zhou. 2017. Action recognition in rgb-d egocentric videos. In Proceedings of the IEEE International Conference on Image Processing (ICIP’17). IEEE, 3410–3414.
    [211]
    Yansong Tang, Zian Wang, Jiwen Lu, Jianjiang Feng, and Jie Zhou. 2018. Multi-stream deep neural networks for rgb-d egocentric action recognition. IEEE Trans. Circ. Syst. Vid. Technol. 29, 10 (2018), 3001–3015.
    [212]
    Eclipse Deeplearning4j Development Team. Deeplearning4j: Open-source Distributed Deep Learning for the jvm. Retreived from http://deeplearning4j.org.
    [213]
    Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: A next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in the 29th Annual Conference on Neural Information Processing Systems (NIPS’15), Vol. 5. 1–6.
    [214]
    Toan Tran, Thanh-Toan Do, Ian Reid, and Gustavo Carneiro. 2019. Bayesian generative active deep learning. In Proceedings of the 36th International Conference on Machine Learning (ICML’19).
    [215]
    Olga Troyanskaya, Michael Cantor, Gavin Sherlock, Pat Brown, Trevor Hastie, Robert Tibshirani, David Botstein, and Russ B. Altman. 2001. Missing value estimation methods for DNA microarrays. Bioinformatics 17, 6 (2001), 520–525.
    [216]
    Michael Tschannen, Olivier Bachem, and Mario Lucic. 2018. Recent advances in autoencoder-based representation learning. In Third workshop on Bayesian Deep Learning (NeurIPS’18).
    [217]
    Md Zia Uddin, Mohammad Mehedi Hassan, Ahmad Almogren, Atif Alamri, Majed Alrubaian, and Giancarlo Fortino. 2017. Facial expression recognition utilizing local direction-based robust features and deep belief network. IEEE Access 5 (2017), 4525–4536.
    [218]
    Tim Van Kasteren, Athanasios Noulas, Gwenn Englebienne, and Ben Kröse. 2008. Accurate activity recognition in a home setting. In Proceedings of the 10th International Conference on Ubiquitous Computing. ACM, 1–9.
    [219]
    Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 11, (Dec.2010), 3371–3408.
    [220]
    Kishor H. Walse, Rajiv V. Dharaskar, and Vilas M. Thakare. 2016. Pca based optimal ann classifiers for human activity recognition using mobile sensors data. In Proceedings of 1st International Conference on Information and Communication Technology for Intelligent Systems: Volume 1. Springer, 429–436.
    [221]
    Fei Wang, Jianwei Feng, Yinliang Zhao, Xiaobin Zhang, Shiyuan Zhang, and Jinsong Han. 2019. Joint Activity Recognition and Indoor Localization with WiFi Fingerprints. IEEE Access 7 (2019), 80058–80068.
    [222]
    Fangxin Wang, Wei Gong, and Jiangchuan Liu. 2018. On spatial diversity in WiFi-based human activity recognition: A deep learning-based approach. IEEE IoT J. 6, 2 (2018), 2035–2047.
    [223]
    Fangxin Wang, Wei Gong, Jiangchuan Liu, and Kui Wu. 2018. Channel selective activity recognition with WiFi: A deep learning approach exploring wideband information. IEEE Trans. Netw. Sci. Eng. 7, 1 (2018), 181–192.
    [224]
    Jindong Wang, Yiqiang Chen, Shuji Hao, Xiaohui Peng, and Lisha Hu. 2019. Deep learning for sensor-based activity recognition: A survey. Pattern Recogn. Lett. 119 (2019), 3–11.
    [225]
    Jie Wang, Xiao Zhang, Qinhua Gao, Hao Yue, and Hongyu Wang. 2016. Device-free wireless localization and activity recognition: A deep learning approach. IEEE Trans. Vehic. Technol. 66, 7 (2016), 6258–6267.
    [226]
    Lukun Wang. 2016. Recognition of human activities using continuous autoencoders with wearable sensors. Sensors 16, 2 (2016), 189.
    [227]
    Shuangquan Wang and Gang Zhou. 2015. A review on radio based activity recognition. Dig. Commun. Netw. 1, 1 (2015), 20–29.
    [228]
    Weijie Wang, Gaopeng Zhang, Luming Yang, V. S. Balaji, V. Elamaran, and N. Arunkumar. 2019. Revisiting signal processing with spectrogram analysis on EEG, ECG and speech signals. Fut. Gener. Comput. Syst. 98 (2019), 227–232.
    [229]
    Wei-zhong Wang, Yan-wei Guo, Bang-yu Huang, Guo-ru Zhao, Bo-qiang Liu, and Lei Wang. 2011. Analysis of filtering methods for 3D acceleration signals in body sensor network. In Proceedings of the International Symposium on Bioelectronics and Bioinformations. IEEE, 263–266.
    [230]
    Xuanhan Wang, Lianli Gao, Jingkuan Song, Xiantong Zhen, Nicu Sebe, and Heng Tao Shen. 2018. Deep appearance and motion learning for egocentric activity recognition. Neurocomputing 275 (2018), 438–447.
    [231]
    Xiaohan Wang, Yu Wu, Linchao Zhu, and Yi Yang. 2020. Symbiotic attention with privileged information for ego-centric action recognition. In the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI’20).
    [232]
    Yanwen Wang, Jiaxing Shen, and Yuanqing Zheng. 2020. Push the limit of acoustic gesture recognition. IEEE Trans. Mobile Comput. (2020).
    [233]
    Yanwen Wang and Yuanqing Zheng. 2018. Modeling RFID signal reflection for contact-free activity recognition. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 2, 4 (2018), 1–22.
    [234]
    Zhiguang Wang and Tim Oates. 2015. Encoding time series as images for visual inspection and classification using tiled convolutional neural networks. In Proceedings of the Workshops at the 29th AAAI Conference on Artificial Intelligence.
    [235]
    Zhengwei Wang, Qi She, and Tomás E. Ward. 2021. Generative adversarial networks in computer vision: A survey and taxonomy. ACM Comput. Surv. 54, 2, Article 37 (February 2021), 38 pages. https://doi.org/10.1145/3439723
    [236]
    Dan Wu, Daqing Zhang, Chenren Xu, Hao Wang, and Xiang Li. 2017. Device-free WiFi human sensing: From pattern-based to model-based approaches. IEEE Commun. Mag. 55, 10 (2017), 91–97.
    [237]
    Fu Xiao, Jing Chen, Xiaohui Xie, Linqing Gui, Lijuan Sun, and Ruchuan Wang. 2018. SEARE: A system for exercise activity recognition and quality evaluation based on green sensing. IEEE Trans. Emerg. Top. Comput. 8, 3 (2018), 752–761.
    [238]
    Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In Proceedings of the 31st AAAI Conference on Artificial Intelligence.
    [239]
    Yanyang Yan, Wenqi Ren, and Xiaochun Cao. 2018. Recolored image detection via a deep discriminative model. IEEE Trans. Inf. Forens. Secur. 14, 1 (2018), 5–17.
    [240]
    Allen Y. Yang, Roozbeh Jafari, S. Shankar Sastry, and Ruzena Bajcsy. 2009. Distributed recognition of human actions using wearable motion sensor networks. J. Ambient Intell. Smart Environ. 1, 2 (2009), 103–115.
    [241]
    Jianbo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiao Li Li, and Shonali Krishnaswamy. 2015. Deep convolutional neural networks on multichannel time series for human activity recognition. In Proceedings of the 24th International Joint Conference on Artificial Intelligence.
    [242]
    Jianfei Yang, Han Zou, Yuxun Zhou, and Lihua Xie. 2019. Learning gestures from WiFi: A siamese recurrent convolutional architecture. IEEE IoT J. 6, 6 (2019), 10763–10772.
    [243]
    Haibo Ye, Tao Gu, Xianping Tao, and Jian Lu. 2016. Scalable floor localization using barometer on smartphone. Wireless Commun. Mobile Comput. 16, 16 (2016), 2557–2571.
    [244]
    Siamak Yousefi, Hirokazu Narui, Sankalp Dayal, Stefano Ermon, and Shahrokh Valaee. 2017. A survey on behavior recognition using wifi channel state information. IEEE Commun. Mag. 55, 10 (2017), 98–104.
    [245]
    Piero Zappi, Clemens Lombriser, Thomas Stiefmeier, Elisabetta Farella, Daniel Roggen, Luca Benini, and Gerhard Tröster. 2008. Activity recognition from on-body sensors: Accuracy-power trade-off by dynamic sensor selection. In Proceedings of the European Conference on Wireless Sensor Networks. Springer, 17–33.
    [246]
    Ming Zeng, Le T. Nguyen, Bo Yu, Ole J. Mengshoel, Jiang Zhu, Pang Wu, and Joy Zhang. 2014. Convolutional neural networks for human activity recognition using mobile sensors. In Proceedings of the 6th International Conference on Mobile Computing, Applications and Services. IEEE, 197–205.
    [247]
    Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. 2019. Self-attention generative adversarial networks. In the 36th International Conference on Machine Learning (ICML’19).
    [248]
    Jin Zhang, Fuxiang Wu, Bo Wei, Qieshi Zhang, Hui Huang, Syed W. Shah, and Jun Cheng. 2020. Data augmentation and dense-LSTM for human activity recognition using WiFi signal. IEEE IoT J. (2020).
    [249]
    Mi Zhang and Alexander A. Sawchuk. 2012. USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing. ACM, 1036–1043.
    [250]
    Xiang Zhang, Lina Yao, Xianzhi Wang, Jessica Monaghan, and David Mcalpine. 2020. A survey on deep learning based brain computer interface: Recent advances and new frontiers. J. Neural Eng. Epub ahead of print. PMID: 33171452.
    [251]
    Xiang Zhang, Lina Yao, Dalin Zhang, Xianzhi Wang, Quan Z. Sheng, and Tao Gu. 2017. Multi-person brain activity recognition via comprehensive EEG signal analysis. In Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. ACM, 28–37.
    [252]
    Xiao-Lei Zhang and Ji Wu. 2012. Deep belief networks based voice activity detection. IEEE Trans. Aud. Speech Lang. Process. 21, 4 (2012), 697–710.
    [253]
    Junbo Zhao, Michael Mathieu, and Yann LeCun. 2017. Energy-based generative adversarial network. In Proceedings of the 5th International Conference on Learning Representations (ICLR’17).
    [254]
    Fang Zheng, Guoliang Zhang, and Zhanjiang Song. 2001. Comparison of different implementations of MFCC. J. Comput. Sci. Technol. 16, 6 (2001), 582–589.
    [255]
    Weilong Zheng, Jiayi Zhu, and Baoliang Lu. 2018. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans. Affect. Comput. (2018). https://doi.org/10.1109/TAFFC.2017.2712143
    [256]
    Wei-Long Zheng, Jia-Yi Zhu, Yong Peng, and Bao-Liang Lu. 2014. EEG-based emotion classification using deep belief networks. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME’14). IEEE, 1–6.
    [257]
    Yu Zheng, Yukun Chen, Quannan Li, Xing Xie, and Wei-Ying Ma. 2010. Understanding transportation modes based on GPS data for web applications. ACM Trans. Web 4, 1 (2010), 1.
    [258]
    Yu Zheng, Quannan Li, Yukun Chen, Xing Xie, and Wei-Ying Ma. 2008. Understanding mobility based on GPS data. In Proceedings of the 10th International Conference on Ubiquitous Computing. 312–321.
    [259]
    Yu Zheng, Quannan Li, Yukun Chen, Xing Xie, and Wei-Ying Ma. 2008. Understanding mobility based on GPS data. In Proceedings of the 10th International Conference on Ubiquitous Computing (UbiComp’08). 312–321. https://doi.org/10.1145/1409635.1409677
    [260]
    Baoding Zhou, Jun Yang, and Qingquan Li. 2019. Smartphone-based activity recognition for indoor localization using a convolutional neural network. Sensors 19, 3 (2019), 621.
    [261]
    Pengfei Zhou, Yuanqing Zheng, Zhenjiang Li, Mo Li, and Guobin Shen. 2012. IODetector: A generic service for indoor outdoor detection. In Proceedings of the 10th ACM Conference on Embedded Network Sensor Systems (SenSys’12). ACM, 113–126. https://doi.org/10.1145/2426656.2426668
    [262]
    Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision. 2223–2232.
    [263]
    Xiaodan Zhu, Parinaz Sobihani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of the International Conference on Machine Learning. 1604–1612.
    [264]
    Han Zou, Yuxun Zhou, Jianfei Yang, Hao Jiang, Lihua Xie, and Costas J. Spanos. 2018. DeepSense: Device-free human activity recognition via autoencoder long-term recurrent convolutional network. In Proceedings of the IEEE International Conference on Communications (ICC’18). IEEE, 1–6.

      Published In

      ACM Computing Surveys, Volume 54, Issue 8
      November 2022
      754 pages
      ISSN: 0360-0300
      EISSN: 1557-7341
      DOI: 10.1145/3481697
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 04 October 2021
      Accepted: 01 June 2021
      Revised: 01 June 2021
      Received: 01 June 2020
      Published in CSUR Volume 54, Issue 8

      Author Tags

      1. Machine learning
      2. deep learning
      3. activity recognition
      4. mobile sensing
      5. deep models

      Qualifiers

      • Survey
      • Refereed

      Funding Sources

      • National Natural Science Foundation of China
      • National Key Research and Development Program of China
      • Guangdong Basic and Applied Basic Research Foundation
      • Shenzhen Scientific Research and Development Funding Program
