Handling annotation uncertainty in human activity recognition

Published: 09 September 2019

Abstract

Developing systems for Human Activity Recognition (HAR) using wearables typically relies on datasets that have been manually annotated by human experts with the precise timings of instances of relevant activities. However, obtaining such annotations is often very challenging in the predominantly mobile scenarios of Human Activity Recognition. As a result, labels often carry a degree of uncertainty, referred to as label jitter, with regard to: i) the correct temporal alignment of activity boundaries; and ii) the correctness of the actual label provided by the human annotator. In this work, we present a scheme that explicitly incorporates label jitter into the model training process. We demonstrate the effectiveness of the proposed method through a systematic experimental evaluation on standard recognition tasks, for which our method leads to significant increases in mean F1 scores.
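
The abstract summarizes the approach only at a high level. As a rough, hypothetical sketch of how boundary-related label jitter could be folded into training, the snippet below converts framewise hard labels into soft targets around annotated activity boundaries; the function name soften_boundary_labels, the window size, and the fixed probability split are illustrative assumptions, not the formulation evaluated in the paper.

import numpy as np

def soften_boundary_labels(hard_labels, num_classes, jitter_radius=2, boundary_split=0.5):
    """Turn a framewise hard-label sequence into soft (probabilistic) targets.

    Frames within `jitter_radius` frames of an annotated activity boundary share
    probability mass between the two adjacent classes, reflecting uncertainty about
    the exact boundary position. Illustrative sketch only, not the paper's scheme.
    Assumes integer class labels in the range 0 .. num_classes - 1.
    """
    hard_labels = np.asarray(hard_labels)
    T = len(hard_labels)
    soft = np.zeros((T, num_classes), dtype=np.float32)
    soft[np.arange(T), hard_labels] = 1.0  # one-hot targets away from boundaries

    # Boundaries are the frame indices where the annotated label changes.
    boundaries = np.where(np.diff(hard_labels) != 0)[0] + 1
    for b in boundaries:
        prev_cls, next_cls = hard_labels[b - 1], hard_labels[b]
        for t in range(max(0, b - jitter_radius), min(T, b + jitter_radius)):
            soft[t] = 0.0
            soft[t, prev_cls] = boundary_split        # mass kept on the outgoing class
            soft[t, next_cls] = 1.0 - boundary_split  # mass moved to the incoming class
    return soft

# Example: a 10-frame sequence with one boundary between class 0 and class 1.
targets = soften_boundary_labels([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], num_classes=2)

The resulting per-frame distributions can replace one-hot targets in a standard cross-entropy loss, so that predictions falling on either side of an uncertain annotation boundary are penalized less harshly.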

Published In

ISWC '19: Proceedings of the 2019 ACM International Symposium on Wearable Computers
September 2019
355 pages
ISBN: 9781450368704
DOI: 10.1145/3341163

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. activity recognition
  2. label jitter
  3. machine learning

Qualifiers

  • Research-article

Conference

UbiComp '19

Acceptance Rates

Overall Acceptance Rate 38 of 196 submissions, 19%
