DOI: 10.1145/3371382.3378270

The Unexpected Daily Situations (UDS) Dataset: A New Benchmark for Socially-Aware Assistive Robots

Published: 01 April 2020

Abstract

This article presents progress in building a new dataset of 'unexpected daily situations' (e.g., someone tripping over a box while carrying a tray to the kitchen, or someone burning themselves with hot water and dropping a mug). Each situation involves one or two humans in a familiar, structured environment (e.g., a kitchen, a living room) with rich semantics. Correctly interpreting a situation (including recognising an error, undesired effect or incongruity when it occurs, as well as selecting the best repair action) requires beyond-state-of-the-art spatio-temporal, semantic and socio-cognitive modelling. The aim of the dataset is therefore to offer (i) a realistic source of data with which to train and test such novel algorithms and (ii) a new benchmark against which they can be demonstrated.
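As an illustration of how a benchmark of this kind might be consumed, the minimal Python sketch below loads per-clip annotations and scores a single-label "situation" prediction per scene. The file name, column names (clip_id, scene, situation) and label values are hypothetical assumptions made for illustration; they are not part of the published dataset specification or its evaluation protocol.

    # Minimal sketch of evaluating predictions on a UDS-style benchmark.
    # All file names, column names and labels below are hypothetical.
    import csv
    from collections import defaultdict
    from pathlib import Path

    def load_annotations(csv_path):
        # One row per clip: clip_id, scene, situation (ground-truth label).
        with open(csv_path, newline="") as f:
            return list(csv.DictReader(f))

    def per_scene_accuracy(predictions, annotations, key="situation"):
        # predictions: dict mapping clip_id -> predicted situation label.
        correct, total = defaultdict(int), defaultdict(int)
        for ann in annotations:
            scene = ann["scene"]  # e.g. "kitchen", "living_room"
            total[scene] += 1
            if predictions.get(ann["clip_id"]) == ann[key]:
                correct[scene] += 1
        return {scene: correct[scene] / total[scene] for scene in total}

    if __name__ == "__main__":
        anns = load_annotations(Path("uds_annotations.csv"))   # hypothetical file
        preds = {a["clip_id"]: "trip_over_box" for a in anns}  # trivial constant baseline
        print(per_scene_accuracy(preds, anns))

A real evaluation would replace the constant baseline with a video model's outputs; the point here is only the shape of the data and the per-scene breakdown.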



Published In

HRI '20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
March 2020
702 pages
ISBN: 9781450370578
DOI: 10.1145/3371382
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. assistive robot
  2. benchmark
  3. dataset
  4. error detection
  5. human-robot collaboration
  6. intention recognition
  7. joint action

Qualifiers

  • Abstract

Conference

HRI '20

Acceptance Rates

Overall Acceptance Rate 192 of 519 submissions, 37%

