DOI: 10.1145/1031607.1031687
Article

Action as language in a shared visual space

Published: 06 November 2004

Abstract

A shared visual workspace allows multiple people to see similar views of objects and environments. Prior empirical literature demonstrates that visual information helps collaborators understand the current state of their task and enables them to communicate and ground their conversations efficiently. We present an empirical study that demonstrates how action replaces explicit verbal instruction in a shared visual workspace. Pairs performed a referential communication task with and without a shared visual space. A detailed sequential analysis of the communicative content reveals that pairs with a shared workspace were less likely to explicitly verify their actions with speech. Rather, they relied on visual information to provide the necessary communicative and coordinative cues.
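
The abstract names a sequential analysis of coded communicative content as the core method. As an illustration only (not the authors' actual coding scheme or data), the sketch below shows what a lag-1 transition-probability analysis over coded utterance/action categories might look like; the category labels and example sequence are hypothetical.

```python
# Illustrative sketch (not from the paper): a lag-1 sequential analysis that
# estimates how likely one coded communicative act is to follow another.
# Category labels and the example sequence below are hypothetical.
from collections import Counter, defaultdict

def transition_probabilities(codes):
    """Return P(next_code | current_code) estimated from one coded sequence."""
    pair_counts = defaultdict(Counter)
    for current, nxt in zip(codes, codes[1:]):
        pair_counts[current][nxt] += 1
    probs = {}
    for current, counter in pair_counts.items():
        total = sum(counter.values())
        probs[current] = {nxt: n / total for nxt, n in counter.items()}
    return probs

# Hypothetical coded transcript: instructions (I), verbal acknowledgements (A),
# and visible actions in the shared workspace (V).
coded_sequence = ["I", "V", "I", "V", "A", "I", "V", "V", "I", "A"]

for current, dist in transition_probabilities(coded_sequence).items():
    print(current, "->", dist)
```

In this kind of analysis, a lower probability of a verbal acknowledgement following an instruction in the shared-workspace condition would correspond to the paper's claim that visible action substitutes for explicit verbal verification.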


Information & Contributors

Information

Published In

CSCW '04: Proceedings of the 2004 ACM conference on Computer supported cooperative work
November 2004
644 pages
ISBN:1581138105
DOI:10.1145/1031607
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. communication
  2. empirical studies
  3. language
  4. sequential analysis
  5. shared visual space

Qualifiers

  • Article

Conference

CSCW '04: Computer Supported Cooperative Work
November 6 - 10, 2004
Chicago, Illinois, USA

Acceptance Rates

CSCW '04 paper acceptance rate: 53 of 176 submissions, 30%
Overall acceptance rate: 2,235 of 8,521 submissions, 26%

Cited By

  • (2024) Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions. Attention, Perception, &amp; Psychophysics, 86(8), 2761-2777. DOI: 10.3758/s13414-024-02978-4
  • (2024) Field Trial of a Tablet-based AR System for Intergenerational Connections through Remote Reading. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1-28. DOI: 10.1145/3653696
  • (2024) The Jamais Vu Effect: Understanding the Fragile Illusion of Co-Presence in Mixed Reality. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2227-2246. DOI: 10.1145/3643834.3661574
  • (2023) Measuring and Comparing Collaborative Visualization Behaviors in Desktop and Augmented Reality Environments. Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology, 1-11. DOI: 10.1145/3611659.3615691
  • (2023) UnMapped: Leveraging Experts’ Situated Experiences to Ease Remote Guidance in Collaborative Mixed Reality. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-20. DOI: 10.1145/3544548.3581444
  • (2023) Volumetric Mixed Reality Telepresence for Real-time Cross Modality Collaboration. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3544548.3581277
  • (2023) Task-related gaze behaviour in face-to-face dyadic collaboration: Toward an interactive theory? Visual Cognition, 31(4), 291-313. DOI: 10.1080/13506285.2023.2250507
  • (2022) Probing the Potential of Extended Reality to Connect Experts and Novices in the Garden. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 1-30. DOI: 10.1145/3555211
  • (2022) I See You: Examining the Role of Spatial Information in Human-Agent Teams. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 1-27. DOI: 10.1145/3555099
  • (2022) A Study of the Effects of Network Latency on Visual Task Performance in Video Conferencing. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1-7. DOI: 10.1145/3491101.3519678
