DOI: 10.1145/2388676.2388716

Integrating PAMOCAT in the research cycle: linking motion capturing and conversation analysis

Published: 22 October 2012

Abstract

In order to understand and model the non-verbal communicative conduct of humans, it seems fruitful to combine qualitative (Conversation Analysis [6] [10] [11]) and quantitative (motion capture) analytical methods. Tools for data visualization and annotation are important here, as they constitute a central interface between different research approaches and methodologies. With this aim we have developed the pre-annotation tool PAMOCAT (Pre-Annotation Motion Capture Analysis Tool), which detects different phenomena in two categories: phenomena of a single person and phenomena spanning several participants. Included are functions for analyzing which objects the head is oriented toward, hand activity, the activity of individual degrees of freedom (DOF), postures, and intrusions into a co-participant's space. The detected phenomena are displayed frame by frame in an overview and can be selected to search for specific constellations among them. A sophisticated user interface allows the annotating person to easily find correlations between different joints and phenomena, to analyze the corresponding 3D pose in a reconstructed virtual environment, and to export combined qualitative and quantitative annotations to standard annotation tools. Using this technique we are able to examine complex setups with three participants engaged in conversation. In this paper we propose how PAMOCAT can be integrated into the research cycle by presenting a concrete PAMOCAT-based micro-analysis of a multimodal phenomenon: kinetic procedures for claiming the floor.
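The abstract's core mechanism can be illustrated with a minimal sketch. This is not PAMOCAT's actual API; the function names, the velocity threshold, and the toy joint-angle traces are all hypothetical. It shows the general idea of turning a per-DOF joint-angle time series into activity intervals and then intersecting intervals from two phenomena to find frames where a "constellation" of co-occurring phenomena holds.

```python
# Hypothetical sketch (not PAMOCAT's real interface): detect per-DOF activity
# intervals from a joint-angle time series, then intersect intervals from two
# phenomena to find the frames where both co-occur.

def activity_intervals(angles, threshold=0.5):
    """Return (start, end) frame intervals where the absolute frame-to-frame
    change of one DOF's angle exceeds `threshold` (an assumed unit of
    degrees per frame)."""
    intervals, start = [], None
    for i in range(1, len(angles)):
        active = abs(angles[i] - angles[i - 1]) > threshold
        if active and start is None:
            start = i
        elif not active and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(angles)))
    return intervals

def intersect(a, b):
    """Frames where two phenomena overlap: a simple 'constellation' search."""
    out = []
    for s1, e1 in a:
        for s2, e2 in b:
            s, e = max(s1, s2), min(e1, e2)
            if s < e:
                out.append((s, e))
    return out

# Toy joint-angle traces for two DOFs (e.g., head yaw and wrist flexion).
head = [0, 0, 1, 2, 3, 3, 3, 3]
hand = [0, 0, 0, 1, 2, 3, 3, 3]
print(intersect(activity_intervals(head), activity_intervals(hand)))
# → [(3, 5)]
```

In this toy run the head DOF is active over frames (2, 5) and the hand DOF over (3, 6), so their constellation spans frames (3, 5). A real system would of course operate on captured skeleton data and many phenomenon types at once.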

References

[1] Pitsch, K., Brüning, B., Schnier, C. and Wachsmuth, S., 2010. Linking Conversation Analysis and Motion Capturing: "How to robustly track multiple participants?". In Proceedings of the Workshop on Multimodal Corpora, LREC.
[2] Auer, E., Russel, A., Sloetjes, H., Wittenburg, P., Schreer, O., Masneri, S., Schneider, D. and Tschöpel, S., 2010. ELAN as flexible annotation framework for sound and image processing detectors. In Proceedings of LREC.
[3] Heloir, A., Neff, M. and Kipp, M., 2010. Exploiting Motion Capture for Virtual Human Animation: Data Collection and Annotation Visualization. In Proceedings of the LREC Workshop on Multimodal Corpora, ELDA.
[4] Schnier, C., 2010. Turn-Taking: Interaktive Projektionsleistungen über Kinesische Displays [Turn-taking: interactive projection via kinesic displays]. Master's thesis, Bielefeld University, p. 93.
[5] Mondada, L., 2007. Multimodal resources for turn-taking: pointing and the emergence of possible next speakers. Discourse Studies, 9(2), pp. 194--225.
[6] Sacks, H., Schegloff, E. A. and Jefferson, G., 1974. A Simplest Systematics for the Organization of Turn-Taking for Conversation. Language, 50, pp. 696--735.
[7] Brüning, B., Schnier, C., Pitsch, K. and Wachsmuth, S., 2012. PAMOCAT: Automatic retrieval of specified postures. In Proceedings of the Workshop on Multimodal Corpora, LREC.
[8] Chilton, P., 2009. Get and the grasp schema. In Evans, V. and Pourcel, S. (Eds.), New Directions in Cognitive Linguistics, pp. 333--337. Amsterdam/Philadelphia: John Benjamins.
[9] Sacks, H., 1992. Lectures on Conversation. Malden; Oxford: Blackwell.
[10] ten Have, P., 1999. Doing Conversation Analysis: A Practical Guide. London: Sage.
[11] Duncan, S., 1972. Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23(2), pp. 283--292.
[12] Goodwin, C., 1981. Conversational Organization: Interaction between Speakers and Hearers. New York, NY: Academic Press.

Cited By

  • (2023) AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. DOI: 10.1145/3544548.3580760. Online publication date: 19-Apr-2023.
  • (2022) ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. DOI: 10.1145/3491102.3517550. Online publication date: 29-Apr-2022.
  • (2021) MIRIA: A Mixed Reality Toolkit for the In-Situ Visualization and Analysis of Spatio-Temporal Interaction Data. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. DOI: 10.1145/3411764.3445651. Online publication date: 6-May-2021.
  • (2017) Analysis of Small Groups. Social Signal Processing. DOI: 10.1017/9781316676202.025. Online publication date: 13-Jul-2017.


      Published In

      ICMI '12: Proceedings of the 14th ACM international conference on Multimodal interaction
      October 2012
      636 pages
      ISBN:9781450314671
      DOI:10.1145/2388676
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. annotation
      2. motion analysis
      3. motion capturing

      Qualifiers

      • Research-article

Conference

ICMI '12: International Conference on Multimodal Interaction
October 22--26, 2012
Santa Monica, California, USA

      Acceptance Rates

      Overall Acceptance Rate 453 of 1,080 submissions, 42%

