DOI: 10.1007/978-3-031-21707-4_21

I’m Only Human: The Effects of Trust Dampening by Anthropomorphic Agents

Published: 26 June 2022

Abstract

The increasing prevalence of automated and autonomous systems necessitates design that facilitates user trust calibration. Trust repair and trust dampening have been suggested as behaviors with which a system can correct inappropriate states of user trust, yet trust dampening has received less attention in the literature. This paper addresses that gap with a 2 (agent anthropomorphism: low, high) × 3 (message: control, apology, trust dampening) between-subjects experiment examining the effects of trust dampening messages delivered by anthropomorphic interface agents. Agent stimuli were selected based on a pretest with 58 participants, after which the main experiment was conducted online with 225 participants. Results indicate that trust dampening increased perceptions of system integrity and may improve trust appropriateness, suggesting that lowering expectations via trust dampening messages is a viable approach for automated system designers.


      Published In

      HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence: 24th International Conference on Human-Computer Interaction, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings
June 2022
629 pages
ISBN: 978-3-031-21706-7
DOI: 10.1007/978-3-031-21707-4
Editors: Jessie Y. C. Chen, Gino Fragomeni, Helmut Degen, Stavroula Ntoa

      Publisher

Springer-Verlag, Berlin, Heidelberg


      Author Tags

Trust, Dampening, Calibration, Appropriateness, Computers are social actors, Anthropomorphism
