
IMUTube: Automatic Extraction of Virtual on-body Accelerometry from Video for Human Activity Recognition

Published: 04 September 2020

Abstract

The lack of large-scale, labeled datasets impedes progress in developing robust and generalized predictive models for on-body, sensor-based human activity recognition (HAR). Labeled data for HAR is scarce: sensor data collection is expensive, and annotation is time-consuming and error-prone. To address this problem, we introduce IMUTube, an automated processing pipeline that integrates existing computer vision and signal processing techniques to convert videos of human activity into virtual streams of IMU data. These virtual IMU streams represent accelerometry at a wide variety of locations on the human body. We show how the virtually generated IMU data improves the performance of a variety of models on known HAR datasets. Our initial results are very promising, but the greater promise of this work lies in a collective effort by the computer vision, signal processing, and activity recognition communities to extend it in the ways we outline. This should make on-body, sensor-based HAR yet another success story of large-dataset breakthroughs in recognition.
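The core technical step the abstract describes, turning motion extracted from video into virtual accelerometer readings, can be illustrated with a short sketch. The Python snippet below is a hypothetical, simplified illustration under stated assumptions, not the authors' released pipeline: it takes the 3D positions of a single body location (e.g., a wrist joint recovered by a monocular 3D pose estimator), double-differentiates them to estimate acceleration, and adds the gravity component a real accelerometer would report. IMUTube itself additionally handles camera calibration and motion, sensor orientation tracking, and signal calibration, all omitted here.

import numpy as np

def virtual_accelerometry(positions, fs, add_gravity=True):
    """Estimate a virtual accelerometer stream from a 3D trajectory.

    positions : (T, 3) array of world-frame positions in meters for one
                body location, e.g. from 3D pose estimation on video.
    fs        : sampling rate of the pose sequence in Hz.
    Returns a (T-2, 3) array of accelerations in m/s^2.
    """
    dt = 1.0 / fs
    # Second-order central difference:
    # a[t] ~ (p[t+1] - 2*p[t] + p[t-1]) / dt^2
    accel = (positions[2:] - 2.0 * positions[1:-1] + positions[:-2]) / dt ** 2
    if add_gravity:
        # An accelerometer measures specific force (motion acceleration
        # minus gravity); with a z-up world frame this adds +9.81 m/s^2
        # on z, so a stationary sensor reads (0, 0, 9.81).
        accel = accel + np.array([0.0, 0.0, 9.81])
    return accel

# Toy usage: a wrist oscillating vertically at 1 Hz, "filmed" at 30 fps.
fs = 30.0
t = np.arange(0, 2, 1 / fs)
wrist = np.stack([np.zeros_like(t), np.zeros_like(t),
                  0.05 * np.sin(2 * np.pi * t)], axis=1)
print(virtual_accelerometry(wrist, fs).shape)  # (58, 3)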




Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 4, Issue 3
September 2020
1061 pages
EISSN: 2474-9567
DOI: 10.1145/3422862

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 04 September 2020
Published in IMWUT Volume 4, Issue 3


Author Tags

  1. Activity Recognition
  2. Data Collection
  3. Machine Learning

Qualifiers

  • Research-article
  • Research
  • Refereed


Cited By

  • (2024) CrossHAR: Generalizing Cross-dataset Human Activity Recognition via Hierarchical Self-Supervised Pretraining. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 2, 1-26. DOI: 10.1145/3659597. Online publication date: 15-May-2024.
  • (2024) Physical Non-inertial Poser (PNP): Modeling Non-inertial Effects in Sparse-inertial Human Motion Capture. ACM SIGGRAPH 2024 Conference Papers, 1-11. DOI: 10.1145/3641519.3657436. Online publication date: 13-Jul-2024.
  • (2024) ModifyAug: Data Augmentation for Virtual IMU Signal based on 3D Motion Modification Used for Real Activity Recognition. Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, 1-7. DOI: 10.1145/3613905.3650806. Online publication date: 11-May-2024.
  • (2024) Enhancing the Applicability of Sign Language Translation. IEEE Transactions on Mobile Computing 23, 9, 8634-8648. DOI: 10.1109/TMC.2024.3350111. Online publication date: Sep-2024.
  • (2024) Midas++: Generating Training Data of mmWave Radars From Videos for Privacy-Preserving Human Sensing With Mobility. IEEE Transactions on Mobile Computing 23, 6, 6650-6666. DOI: 10.1109/TMC.2023.3325399. Online publication date: Jun-2024.
  • (2024) CROMOSim: A Deep Learning-Based Cross-Modality Inertial Measurement Simulator. IEEE Transactions on Mobile Computing 23, 1, 302-312. DOI: 10.1109/TMC.2022.3230370. Online publication date: Jan-2024.
  • (2024) Physical Exercise Classification from Body Keypoints Using Machine Learning Techniques. 2024 3rd International Conference on Applied Artificial Intelligence and Computing (ICAAIC), 504-510. DOI: 10.1109/ICAAIC60222.2024.10575612. Online publication date: 5-Jun-2024.
  • (2024) mTanaaw: A System for Assessment and Analysis of Mental Health with Wearables. 2024 16th International Conference on COMmunication Systems & NETworkS (COMSNETS), 105-110. DOI: 10.1109/COMSNETS59351.2024.10427432. Online publication date: 3-Jan-2024.
  • (2024) Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition. 2024 International Conference on Activity and Behavior Computing (ABC), 1-8. DOI: 10.1109/ABC61795.2024.10652200. Online publication date: 29-May-2024.
  • (2024) Neural network algorithm for predicting human speed based on computer vision and machine learning. ITM Web of Conferences 59, 03003. DOI: 10.1051/itmconf/20245903003. Online publication date: 25-Jan-2024.
