DOI: 10.1145/3580585.3607171
Research Article

Effects of Automation Transparency on Trust: Evaluating HMI in the Context of Fully Autonomous Driving

Published: 18 September 2023

Abstract

Automation transparency offers a promising way for users to calibrate their trust in autonomous vehicles. However, it remains unclear what information should be provided in driving scenarios with different levels of risk, and how this information affects user trust. Driving scenarios with different levels of risk, and human-machine interfaces (HMIs) with different levels of transparency based on the Situation Awareness–Based Agent Transparency (SAT) model, were developed to investigate the impact of risk and transparency on user trust across nine simulated fully autonomous drives in a static driving simulator. Results showed that lower-risk driving scenarios and higher-transparency HMIs increased users' trust-related beliefs and intention to use, and that perceived reliability and trust fully mediated the effects of risk and transparency on intention to use. These findings provide insights into how HMI transparency under different driving scenarios may impact user trust.


Cited By

  • (2024) The Impact of Transparency on Driver Trust and Reliance in Highly Automated Driving: Presenting Appropriate Transparency in Automotive HMI. Applied Sciences 14(8), 3203. DOI: 10.3390/app14083203. Online publication date: 11-Apr-2024.
  • (2024) Shared eHMI: Bridging Human–Machine Understanding in Autonomous Wheelchair Navigation. Applied Sciences 14(1), 463. DOI: 10.3390/app14010463. Online publication date: 4-Jan-2024.
  • (2024) Investigating the factors influencing user trust and driving performance in level 3 automated driving from the perspective of perceived benefits. Transportation Research Part F: Traffic Psychology and Behaviour 105, 58–72. DOI: 10.1016/j.trf.2024.06.013. Online publication date: Aug-2024.
  • (2024) Effects of agent transparency and situation criticality upon human-autonomy trust and risk perception in decision-making. Cognition, Technology & Work. DOI: 10.1007/s10111-024-00782-6. Online publication date: 1-Nov-2024.


    Published In

    AutomotiveUI '23: Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
    September 2023
    352 pages
    ISBN:9798400701054
    DOI:10.1145/3580585
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Fully autonomous driving
    2. Risk
    3. Transparency
    4. Trust

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    AutomotiveUI '23

    Acceptance Rates

    Overall Acceptance Rate 248 of 566 submissions, 44%


    Article Metrics

    • Downloads (last 12 months): 299
    • Downloads (last 6 weeks): 26
    Reflects downloads up to 29 Jan 2025
