Research article
DOI: 10.1145/2968219.2968290

Exploring human activity annotation using a privacy preserving 3D model

Published: 12 September 2016 Publication History

Abstract

Annotating activity recognition datasets is a very time-consuming process. Using lay annotators (e.g. via crowd-sourcing) has been suggested as a way to speed this up. However, this requires preserving the privacy of users, which may preclude relying on video for annotation. We investigate to what extent a 3D human model, animated from the data of inertial sensors placed on the limbs, allows for the annotation of human activities. We animate the upper body of the 3D model with the data from 5 inertial measurement sensors obtained from the OPPORTUNITY dataset. The animated model is shown to 6 people in a suite of experiments in order to understand to what extent it can be used for labelling. We present 3 experiments investigating the use of the 3D model for i) activity segmentation, ii) "open-ended" annotation, where users freely describe the activity they see on screen, and iii) traditional annotation, where users pick one activity from a pre-defined list. In the latter case, results show that users recognise the model's activities with 56% accuracy when picking from 11 possible activities.
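The core idea described above, driving a 3D model's upper body from limb-worn inertial sensors, can be illustrated with a minimal forward-kinematics sketch. This is not the paper's implementation (the authors used jMonkeyEngine); it is a hypothetical illustration in which each IMU is assumed to report an absolute orientation quaternion for its limb segment, and the segment lengths, rest pose, and function names are all invented for the example.

```python
# Hypothetical sketch: positioning one arm of a 3D avatar from two IMU
# orientation quaternions (upper arm, forearm). Segment lengths and the
# rest-pose direction are illustrative assumptions, not from the paper.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    r = (x, y, z)
    t = tuple(2.0 * c for c in cross(r, v))   # t = 2 (r x v)
    u = cross(r, t)                           # u = r x t
    return tuple(v[i] + w * t[i] + u[i] for i in range(3))

def arm_positions(shoulder, q_upper, q_fore, l_upper=0.30, l_fore=0.25):
    """Forward kinematics: each bone vector is the rest-pose direction
    (arm hanging straight down) rotated by its segment's IMU quaternion."""
    rest = (0.0, -1.0, 0.0)
    d_upper = quat_rotate(q_upper, rest)
    elbow = tuple(shoulder[i] + l_upper * d_upper[i] for i in range(3))
    d_fore = quat_rotate(q_fore, rest)
    wrist = tuple(elbow[i] + l_fore * d_fore[i] for i in range(3))
    return elbow, wrist
```

Animating the model then amounts to re-evaluating this chain for every incoming sensor sample and posing the avatar's skeleton accordingly; annotators only ever see the resulting stick-figure motion, never the raw video.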

Supplementary Material

ZIP File (p803-ciliberto.zip)
Supplemental material.


Cited By

  • (2021) Opportunity++: A Multimodal Dataset for Video- and Wearable, Object and Ambient Sensors-Based Human Activity Recognition. Frontiers in Computer Science 3. DOI: 10.3389/fcomp.2021.792065. Online publication date: 20-Dec-2021.
  • (2020) Annotation Performance for multi-channel time series HAR Dataset in Logistics. 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 1-6. DOI: 10.1109/PerComWorkshops48775.2020.9156170. Online publication date: Mar-2020.
  • (2019) Semantic human activity annotation tool using skeletonized surveillance videos. Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, 312-315. DOI: 10.1145/3341162.3343807. Online publication date: 9-Sep-2019.
  • (2018) Fundamental Concept of University Living Laboratory for Appropriate Feedback. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, 1454-1461. DOI: 10.1145/3267305.3267511. Online publication date: 8-Oct-2018.

    Published In

    UbiComp '16: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct
    September 2016
    1807 pages
    ISBN:9781450344623
    DOI:10.1145/2968219
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. 3D human model
    2. activity recognition
    3. annotation
    4. wearable technologies


    Funding Sources

    • EPSRC

    Conference

    UbiComp '16

    Acceptance Rates

    Overall Acceptance Rate 764 of 2,912 submissions, 26%

