
Understanding the Influences of Past Experience on Trust in Human-agent Teamwork

Published: 13 September 2019

Abstract

People use knowledge acquired from past experiences to assess the trustworthiness of a trustee. At a time when agents are increasingly accepted as partners in collaborative efforts and activities, it is critical to understand all aspects of how humans develop trust in agent partners. For human-agent virtual ad hoc teams to be effective, humans must be able to trust their agent counterparts. To earn that trust, agents need to quickly develop an understanding of their human team members’ expectations and adapt accordingly. This study empirically investigates the impact of past experience on human trust in, and reliance on, agent teammates. To do so, we developed a team coordination game, the Game of Trust (GoT), in which two players repeatedly cooperate to complete team tasks without prior assignment of subtasks. The effects of past experience on human trust are evaluated through an extensive set of controlled experiments with participants recruited from Amazon Mechanical Turk, a crowdsourcing marketplace. We collect both teamwork performance data and survey responses that gauge participants’ trust in their agent teammates. The results show that positive (negative) past experience increases (decreases) human trust in agent teammates; that a lack of past experience leads to higher trust levels than positive past experience; that positive (negative) past experience facilitates (hinders) reliance on agent teammates; and that trust in and reliance on agent teammates are not always correlated. These findings provide clear and significant evidence of the influence of key factors on human trust in virtual agent teammates and enhance our understanding of how human trust in peer-level agent teammates changes with past experience.
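The study’s core mechanic is a repeated two-player coordination task in which subtasks are not pre-assigned, so the human must decide each round whether to rely on the agent to cover the complementary work. As an illustration only, the sketch below is a minimal, hypothetical simulation of that kind of setup: it is not the authors’ Game of Trust, and the condition labels, agent-reliability values, success rule, and trust-update heuristic are all assumptions introduced here to show how positive, negative, and absent past experience could be encoded and how trust and reliance might then be compared across conditions.

```python
# Hypothetical sketch (not the authors' Game of Trust implementation):
# a repeated two-player coordination game in which a human and an agent
# each pick a subtask without prior assignment, and the team succeeds
# only when their picks do not collide. Condition names, reliability
# values, and the trust-update rule are illustrative assumptions.
import random

SUBTASKS = ["A", "B"]  # two complementary subtasks per round

# Assumed agent reliability for each past-experience condition.
CONDITIONS = {"positive": 0.9, "none": 0.7, "negative": 0.4}


def play_round(agent_reliability: float, human_trust: float):
    """Play one round; return (team_success, human_relied_on_agent)."""
    relied = random.random() < human_trust        # simple reliance heuristic
    human_pick = "A" if relied else random.choice(SUBTASKS)
    # A reliable agent covers the complementary subtask; otherwise it
    # may duplicate the human's pick and the round fails.
    agent_pick = "B" if random.random() < agent_reliability else human_pick
    return human_pick != agent_pick, relied


def run_condition(label: str, rounds: int = 20, seed: int = 0) -> dict:
    """Simulate one participant under a given past-experience condition."""
    random.seed(seed)
    trust = 0.5                                   # neutral starting trust
    successes = reliance = 0
    for _ in range(rounds):
        success, relied = play_round(CONDITIONS[label], trust)
        successes += success
        reliance += relied
        # Trust drifts up after successful coordination, down after failure.
        trust = min(1.0, trust + 0.05) if success else max(0.0, trust - 0.05)
    return {"condition": label,
            "success_rate": successes / rounds,
            "reliance_rate": reliance / rounds,
            "final_trust": round(trust, 2)}


if __name__ == "__main__":
    for label in CONDITIONS:
        print(run_condition(label))
```

In the study itself, trust is measured with participant surveys and reliance is inferred from observed teamwork behavior, not from a simulated heuristic like the one above.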

    Published In

    ACM Transactions on Internet Technology, Volume 19, Issue 4
    Special Section on Trust and AI and Regular Papers
    November 2019, 201 pages
    ISSN: 1533-5399
    EISSN: 1557-6051
    DOI: 10.1145/3362102
    Editor: Ling Liu

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 13 September 2019
    Accepted: 01 April 2019
    Revised: 01 March 2019
    Received: 01 November 2018
    Published in TOIT Volume 19, Issue 4

    Author Tags

    1. Human-agent teamwork
    2. past experience
    3. reliance
    4. trust

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • University of Tulsa, Bellwether (doctoral) Fellowship and Graduate Student Research Program

    Article Metrics

    • Downloads (last 12 months): 58
    • Downloads (last 6 weeks): 7
    Reflects downloads up to 03 Oct 2024

    Cited By

    • (2024) Low-rank human-like agents are trusted more and blamed less in human-autonomy teaming. Frontiers in Artificial Intelligence 7. DOI: 10.3389/frai.2024.1273350. Online publication date: 29-Apr-2024.
    • (2024) A Meta-Analysis of Vulnerability and Trust in Human–Robot Interaction. ACM Transactions on Human-Robot Interaction 13, 3, 1–25. DOI: 10.1145/3658897. Online publication date: 29-Apr-2024.
    • (2024) Principal–agent trust and adoption of digital financial services: Evidence from India. Economic Notes 53, 3. DOI: 10.1111/ecno.12247. Online publication date: 3-Sep-2024.
    • (2024) What I Don’t Like about You?: A Systematic Review of Impeding Aspects for the Usage of Conversational Agents. Interacting with Computers 36, 5, 293–312. DOI: 10.1093/iwc/iwae018. Online publication date: 31-May-2024.
    • (2023) Experience, Trust, eWOM Engagement and Usage Intention of AI Enabled Services in Hospitality and Tourism Industry: Moderating Mediating Analysis. Journal of Quality Assurance in Hospitality & Tourism, 1–29. DOI: 10.1080/1528008X.2023.2167762. Online publication date: 20-Jan-2023.
    • (2023) If machines outperform humans: status threat evoked by and willingness to interact with sophisticated machines in a work-related context. Behaviour & Information Technology 43, 7, 1348–1364. DOI: 10.1080/0144929X.2023.2210688. Online publication date: 11-May-2023.
    • (2022) Robot service failure: the double-edged sword effect of emotional labor in service recovery. Journal of Service Theory and Practice 33, 1, 72–88. DOI: 10.1108/JSTP-03-2022-0048. Online publication date: 28-Dec-2022.
    • (2022) Impact of cognitive workload and situation awareness on clinicians’ willingness to use an artificial intelligence system in clinical practice. IISE Transactions on Healthcare Systems Engineering 13, 2, 89–100. DOI: 10.1080/24725579.2022.2127035. Online publication date: 3-Oct-2022.
    • (2021) Trait Based Trustworthiness Assessment in Human-Agent Collaboration Using Multi-Layer Fuzzy Inference Approach. IEEE Access 9, 73561–73574. DOI: 10.1109/ACCESS.2021.3079838. Online publication date: 2021.
    • (2021) The role of domain expertise in trusting and following explainable AI decision support systems. Journal of Decision Systems 32, 1, 110–138. DOI: 10.1080/12460125.2021.1958505. Online publication date: 11-Aug-2021.
