DOI: 10.1145/3610978.3640693
Short paper (open access)

Operationally Realistic Human-Autonomy Teaming Task Simulation to Study Multi-Dimensional Trust

Published: 11 March 2024

Abstract

Autonomous systems are increasingly integrated into the performance of critical tasks, including in the space domain. Humans need to appropriately trust their autonomous teammate if the team is to perform effectively. Historically, many trust studies have used obtrusive surveys to measure trust as a one-dimensional, static construct, when it is in fact dynamic and multi-dimensional. Furthermore, some previous work uses trust tasks that lack operational validity. This paper presents an operationally realistic task for studying the dynamic, multiple dimensions of operator trust. It enables collecting and time-synchronizing biosignals and embedded measures with sub-millisecond precision. Our results show that this task simulation can collect unobtrusive, multi-dimensional trust-relevant signals, with the end goal of improving human-autonomy teaming performance.
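
A common tool for this kind of synchronization, and one the paper cites, is Lab Streaming Layer (LSL). The following is a minimal sketch, not the authors' implementation, of how embedded task-event measures could be published over a pylsl marker stream so they can later be time-aligned with biosignal streams on the shared LSL clock; the stream name, source id, and event label are hypothetical.

```python
# Sketch (assumption, not the authors' code): publish embedded task-event
# markers over Lab Streaming Layer so they can be time-aligned offline with
# biosignal streams (e.g., EEG, GSR, eye tracking) on the shared LSL clock.
from pylsl import StreamInfo, StreamOutlet, local_clock

# Describe an irregular-rate marker stream carrying one string channel.
info = StreamInfo(
    name="TrustTaskMarkers",     # hypothetical stream name
    type="Markers",
    channel_count=1,
    nominal_srate=0,             # 0 = irregular rate, i.e., event markers
    channel_format="string",
    source_id="trust_task_sim",  # hypothetical unique source id
)
outlet = StreamOutlet(info)

def log_operator_event(event: str) -> None:
    """Push an embedded trust-relevant event (e.g., the operator accepting or
    overriding the autonomy's recommendation) with an explicit LSL timestamp."""
    outlet.push_sample([event], local_clock())

# Example usage inside the task loop:
log_operator_event("autonomy_recommendation_accepted")
```

On the acquisition side, each biosignal stream would be read through a StreamInlet whose timestamps come from the same LSL clock, so events and signals can be aligned offline without relying on system wall-clock time.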

Supplemental Material

MP4 File
Supplemental video


    Published In

HRI '24: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
March 2024, 1408 pages
ISBN: 9798400703232
DOI: 10.1145/3610978
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 11 March 2024


    Author Tags

    1. autonomous agents
    2. human-autonomy teaming
    3. trust in automation

    Qualifiers

• Short paper

    Conference

    HRI '24

    Acceptance Rates

Overall acceptance rate: 268 of 1,124 submissions (24%)
