Public Access

AMIR: Active Multimodal Interaction Recognition from Video and Network Traffic in Connected Environments

Published: 28 March 2023

Abstract

Activity recognition using video data is widely adopted for elder care, safety and security monitoring, and home automation. Unfortunately, video-based activity recognition can be brittle: models trained on video are often not robust to environmental changes such as shifts in camera angle and lighting. Meanwhile, network-connected devices have proliferated in home environments. Interactions with these smart devices generate network activity, making network traffic a potential signal for recognizing device interactions. This paper advocates synthesizing video and network data for robust interaction recognition in connected environments. We consider machine learning-based approaches to activity recognition in which each labeled activity is associated with both a video capture and an accompanying network traffic trace. We develop AMIR (Active Multimodal Interaction Recognition), a simple but effective framework that trains independent models for video-based and network-based activity recognition and then combines their predictions using a meta-learning framework. Whether in the lab or at home, this approach reduces the number of "paired" demonstrations, in which network and video data are collected simultaneously, needed for accurate activity recognition. Specifically, our method requires up to 70.83% fewer samples than random data collection to achieve an 85% F1 score, and improves accuracy by 17.76% given the same number of samples.
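The late-fusion idea described above — train one model per modality, then learn a meta-classifier over their stacked predictions — can be sketched on synthetic data. Everything below is illustrative, not the paper's implementation: the two "unimodal" classifiers are simulated as noisy probability outputs rather than real video and traffic models, and the meta-learner is a small multinomial logistic regression trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def unimodal_probs(y, n_classes, noise_std, rng):
    """Simulate one modality's classifier: class probabilities derived
    from the true label plus independent per-modality noise."""
    logits = 2.0 * np.eye(n_classes)[y] + rng.normal(0.0, noise_std, (len(y), n_classes))
    return softmax(logits)

def fit_meta(Z, y, n_classes, lr=0.5, steps=2000):
    """Meta-learner: multinomial logistic regression over the stacked
    unimodal probability vectors, trained by batch gradient descent."""
    W = np.zeros((Z.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(steps):
        W -= lr * Z.T @ (softmax(Z @ W) - Y) / len(y)
    return W

n_classes, n_train, n_test = 3, 600, 1000
y_tr = rng.integers(0, n_classes, n_train)
y_te = rng.integers(0, n_classes, n_test)

# Stand-ins for the video and network-traffic models; their errors are
# independent, which is what makes fusing the two modalities pay off.
v_tr = unimodal_probs(y_tr, n_classes, 1.5, rng)
t_tr = unimodal_probs(y_tr, n_classes, 1.5, rng)
v_te = unimodal_probs(y_te, n_classes, 1.5, rng)
t_te = unimodal_probs(y_te, n_classes, 1.5, rng)

# Stack per-modality probabilities and train the meta-learner on them.
W = fit_meta(np.hstack([v_tr, t_tr]), y_tr, n_classes)
fused = softmax(np.hstack([v_te, t_te]) @ W).argmax(axis=1)

acc_v = float((v_te.argmax(axis=1) == y_te).mean())
acc_t = float((t_te.argmax(axis=1) == y_te).mean())
acc_f = float((fused == y_te).mean())
print(f"video-only {acc_v:.2f}  traffic-only {acc_t:.2f}  fused {acc_f:.2f}")
```

Because each simulated modality errs independently, the fused classifier typically outperforms either modality alone, which is the intuition behind combining video and network traffic in the first place.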




Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies  Volume 7, Issue 1
March 2023
1243 pages
EISSN:2474-9567
DOI:10.1145/3589760
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 28 March 2023
Published in IMWUT Volume 7, Issue 1


Author Tags

  1. activity recognition
  2. datasets
  3. multimodal learning

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (Last 12 months)433
  • Downloads (Last 6 weeks)59
Reflects downloads up to 02 Feb 2025

Cited By

  • (2024) ChatIoT: Zero-code Generation of Trigger-action Based IoT Programs. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1-29. https://doi.org/10.1145/3678585. Online publication date: 9-Sep-2024.
  • (2024) Toolkit Design for Building Camera Sensor-Driven DIY Smart Homes. Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 256-261. https://doi.org/10.1145/3675094.3678363. Online publication date: 5-Oct-2024.
  • (2024) G-VOILA: Gaze-Facilitated Information Querying in Daily Scenarios. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(2), 1-33. https://doi.org/10.1145/3659623. Online publication date: 15-May-2024.
  • (2024) NetDiffusion: Network Data Augmentation Through Protocol-Constrained Traffic Generation. Proceedings of the ACM on Measurement and Analysis of Computing Systems 8(1), 1-32. https://doi.org/10.1145/3639037. Online publication date: 21-Feb-2024.
  • (2024) Deep Heterogeneous Contrastive Hyper-Graph Learning for In-the-Wild Context-Aware Human Activity Recognition. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(4), 1-23. https://doi.org/10.1145/3631444. Online publication date: 12-Jan-2024.
  • (2024) HyperTracking. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(4), 1-26. https://doi.org/10.1145/3631434. Online publication date: 12-Jan-2024.
  • (2024) Who Should Hold Control? Rethinking Empowerment in Home Automation among Cohabitants through the Lens of Co-Design. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-19. https://doi.org/10.1145/3613904.3642866. Online publication date: 11-May-2024.
  • (2024) Beyond the Dimensions: A Structured Evaluation of Multivariate Time Series Distance Measures. 2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW), 107-112. https://doi.org/10.1109/ICDEW61823.2024.00020. Online publication date: 13-May-2024.
  • (2024) An Interactive Dive into Time-Series Anomaly Detection. 2024 IEEE 40th International Conference on Data Engineering (ICDE), 5382-5386. https://doi.org/10.1109/ICDE60146.2024.00409. Online publication date: 13-May-2024.
  • (2024) Deep learning for computer vision based activity recognition and fall detection of the elderly: a systematic review. Applied Intelligence 54(19), 8982-9007. https://doi.org/10.1007/s10489-024-05645-1. Online publication date: 8-Jul-2024.

