DOI: 10.1145/3279778.3279795

EagleView: A Video Analysis Tool for Visualising and Querying Spatial Interactions of People and Devices

Published: 19 November 2018

Abstract

To study and understand group collaborations involving multiple handheld devices and large interactive displays, researchers frequently analyse video recordings of interaction studies to interpret people's interactions with each other and/or devices. Advances in ubicomp technologies allow researchers to record spatial information through sensors in addition to video material. However, the volume of video data and the high number of coding parameters involved in such an interaction analysis make this a time-consuming and labour-intensive process. We designed EagleView, which provides analysts with real-time visualisations during playback of videos and an accompanying data-stream of tracked interactions. Real-time visualisations take into account key proxemic dimensions, such as distance and orientation. Overview visualisations show people's position and movement over longer periods of time. EagleView also allows the user to query people's interactions with an easy-to-use visual interface. Results are highlighted on the video player's timeline, enabling quick review of relevant instances. Our evaluation with expert users showed that EagleView is easy to learn and use, and the visualisations allow analysts to gain insights into collaborative activities.

Supplementary Material

suppl.mov (iss1099.zip): supplemental video
MP4 File (p61-brudy.mp4)


Published In

ISS '18: Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces
November 2018
499 pages
ISBN:9781450356947
DOI:10.1145/3279778
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Badges

  • Honorable Mention

Author Tags

  1. cross-device interaction analysis
  2. group collaboration
  3. information visualisation
  4. interaction analysis
  5. spatial interaction
  6. video analysis

Qualifiers

  • Research-article

Conference

ISS '18

Acceptance Rates

ISS '18 Paper Acceptance Rate: 28 of 105 submissions, 27%
Overall Acceptance Rate: 147 of 533 submissions, 28%

Article Metrics

  • Downloads (last 12 months): 43
  • Downloads (last 6 weeks): 3
Reflects downloads up to 20 Feb 2025

Cited By

  • (2024) Less Typing, More Tagging: Investigating Tag-based Interfaces in Online Accommodation Review Creation and Perception. Proceedings of the 13th Nordic Conference on Human-Computer Interaction, 1-13. DOI: 10.1145/3679318.3685369. Online publication date: 13-Oct-2024.
  • (2024) PilotAR: Streamlining Pilot Studies with OHMDs from Concept to Insight. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1-35. DOI: 10.1145/3678576. Online publication date: 9-Sep-2024.
  • (2024) RealityEffects: Augmenting 3D Volumetric Videos with Object-Centric Annotation and Dynamic Visual Effects. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1248-1261. DOI: 10.1145/3643834.3661631. Online publication date: 1-Jul-2024.
  • (2024) Lifelogging in Mixed Reality. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-8. DOI: 10.1145/3613905.3650897. Online publication date: 11-May-2024.
  • (2024) Practice-informed Patterns for Organising Large Groups in Distributed Mixed Reality Collaboration. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-18. DOI: 10.1145/3613904.3642502. Online publication date: 11-May-2024.
  • (2023) Tesseract: Querying Spatial Design Recordings by Manipulating Worlds in Miniature. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-16. DOI: 10.1145/3544548.3580876. Online publication date: 19-Apr-2023.
  • (2023) AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-23. DOI: 10.1145/3544548.3580760. Online publication date: 19-Apr-2023.
  • (2023) Pearl: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3544548.3580715. Online publication date: 19-Apr-2023.
  • (2022) Semi-automated Analysis of Collaborative Interaction: Are We There Yet? Proceedings of the ACM on Human-Computer Interaction 6(ISS), 354-380. DOI: 10.1145/3567724. Online publication date: 14-Nov-2022.
  • (2022) Classroom Dandelions: Visualising Participant Position, Trajectory and Body Orientation Augments Teachers' Sensemaking. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-17. DOI: 10.1145/3491102.3517736. Online publication date: 29-Apr-2022.
