- Article, July 2024
Egocentric Behaviour Analysis Based on Object Relationship Extraction for Cognitive Rehabilitation Support
- Adnan Rachmat Anom Besari,
- Syadza Atika Rahmah,
- Fernando Ardilla,
- Azhar Aulia Saputra,
- Takenori Obo,
- Naoyuki Kubota
Abstract: Human behaviour recognition plays a significant role in early intervention for the cognitive rehabilitation of older people. While existing methods focus on improving third-person vision, human visual attention has been largely ignored in ...
- Keynote, October 2020
Human-Centric Object Interactions - A Fine-Grained Perspective from Egocentric Videos
HuMA '20: Proceedings of the 1st International Workshop on Human-centric Multimedia Analysis, Page 1. https://doi.org/10.1145/3422852.3423569
Abstract: This talk aims to argue for a fine(r)-grained perspective onto human-object interactions. Motivation: Observe a person chopping some parsley. Can you detect the moment at which the parsley was first chopped? Whether the parsley was chopped coarsely or ...
- Research article, April 2019
Egocentric Visitors Localization in Cultural Sites
Journal on Computing and Cultural Heritage (JOCCH), Volume 12, Issue 2, Article No. 11, Pages 1–19. https://doi.org/10.1145/3276772
Abstract: We consider the problem of localizing visitors in a cultural site from egocentric (first-person) images. Localization information can be useful both to assist the user during his visit (e.g., by suggesting where to go and what to see next) and to ...
- Research article, June 2018
Visual features for ego-centric activity recognition: a survey
WearSys '18: Proceedings of the 4th ACM Workshop on Wearable Systems and Applications, Pages 48–53. https://doi.org/10.1145/3211960.3211978
Abstract: Wearable cameras, which are becoming common mobile sensing platforms to capture the environment surrounding a person, can also be used to infer activities of the wearer. In this paper we critically discuss features for ego-centric activity recognition ...
- Research article, January 2018
Using context from inside‐out vision for improved activity recognition
IET Computer Vision (CVI2), Volume 12, Issue 3, Pages 276–287. https://doi.org/10.1049/iet-cvi.2017.0141
Abstract: The authors propose a method to improve activity recognition by including the contextual information from first person vision (FPV). Adding the context, i.e. objects seen while performing an activity, increases the activity recognition precision. This is ...
- Research article, June 2017
Wearable for Wearable: A Social Signal Processing Perspective for Clothing Analysis using Wearable Devices
WearMMe '17: Proceedings of the 2017 Workshop on Wearable MultiMedia, Pages 5–9. https://doi.org/10.1145/3080538.3080540
Abstract: Clothing conveys a strong communicative message in terms of social signals, influencing the impression and behaviour of others towards a person; unfortunately, the nature of this message is not completely clear, and social signal processing approaches ...
- Research article, December 2016
EgoCap: egocentric marker-less motion capture with two fisheye cameras
- Helge Rhodin,
- Christian Richardt,
- Dan Casas,
- Eldar Insafutdinov,
- Mohammad Shafiei,
- Hans-Peter Seidel,
- Bernt Schiele,
- Christian Theobalt
ACM Transactions on Graphics (TOG), Volume 35, Issue 6, Article No. 162, Pages 1–11. https://doi.org/10.1145/2980179.2980235
Abstract: Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often create discomfort with marker suits, and their recording volume is ...
- Research article, March 2012
Automatic acquisition of a 3D eye model for a wearable first-person vision device
ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications, Pages 213–216. https://doi.org/10.1145/2168556.2168597
Abstract: A wearable gaze tracking device can work with users in daily life. For long periods of use, a non-active method that does not employ an infrared illumination system is desirable from a safety standpoint. It is well known that the eye model constraints ...
- Article, November 2011
Attention prediction in egocentric video using motion and visual saliency
PSIVT '11: Proceedings of the 5th Pacific Rim Conference on Advances in Image and Video Technology - Volume Part I, Pages 277–288. https://doi.org/10.1007/978-3-642-25367-6_25
Abstract: We propose a method of predicting human egocentric visual attention using bottom-up visual saliency and egomotion information. Computational models of visual saliency are often employed to predict human attention; however, its mechanism and ...