Let there be IMU data: generating training data for wearable, motion sensor based activity recognition from monocular RGB videos

VF Rey, P Hevesi, O Kovalenko… - Adjunct Proceedings of the 2019 ACM International Joint Conference on …, 2019 - dl.acm.org
Recent advances in Machine Learning, in particular Deep Learning, have been driving rapid progress in fields such as computer vision and natural language processing. Human activity recognition (HAR) using wearable sensors, which has been a thriving research field for the last 20 years, has benefited much less from such advances. This is largely due to the lack of adequate amounts of labeled training data. In this paper we propose a method to mitigate the labeled data problem in wearable HAR by generating wearable motion data from monocular RGB videos, which can be collected from popular video platforms such as YouTube. Our method works by extracting 2D poses from video frames and then using a regression model to map them to sensor signals. We have validated it on fitness exercises as the target domain for activity recognition and shown that we can improve a HAR model using data produced from a YouTube video.
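To make the pose-to-sensor mapping concrete, below is a minimal sketch of the regression step the abstract describes. It is not the authors' implementation: the sliding-window size, the Ridge regressor, and all names (make_windows, poses, accel) are assumptions made for illustration, and the data is synthetic. A real pipeline would use 2D joint tracks from a pose estimator such as OpenPose, supervised by IMU recordings time-synchronized with the video.

```python
# Hypothetical sketch: map windows of 2D pose coordinates to a wearable
# sensor channel with a simple regression model. Window size, model choice,
# and all variable names are illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.linear_model import Ridge

def make_windows(pose_seq, window):
    """Slice a (T, J*2) array of per-frame 2D joint coordinates into
    overlapping windows of shape (T - window + 1, window * J * 2)."""
    T, D = pose_seq.shape
    idx = np.arange(window)[None, :] + np.arange(T - window + 1)[:, None]
    return pose_seq[idx].reshape(-1, window * D)

# --- Synthetic stand-in data (replace with pose tracks + synced IMU logs) ---
rng = np.random.default_rng(0)
T, J = 500, 13                       # frames, tracked 2D joints
poses = rng.normal(size=(T, J * 2))  # (x, y) per joint per frame
# Pretend one accelerometer axis is an unknown function of the current pose:
true_w = rng.normal(size=(J * 2,))
accel = poses @ true_w + 0.1 * rng.normal(size=T)

window = 9                           # assumed temporal context (frames)
X = make_windows(poses, window)
y = accel[window - 1:]               # align target with each window's last frame

# Fit the regression model that maps pose windows to the sensor channel.
model = Ridge(alpha=1.0).fit(X, y)
print("train R^2:", model.score(X, y))

# At inference time, the same mapping turns poses extracted from, e.g., a
# YouTube video into synthetic IMU signals usable as HAR training data.
```

Regressing from a short temporal window, rather than a single frame, is one plausible way to capture the motion dynamics an IMU responds to; a simpler baseline would be finite differences of joint positions as an acceleration proxy.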