DOI: 10.1145/2876456.2879479

Poster · Public Access

Human-Autonomy Teaming and Agent Transparency

Published: 07 March 2016

Abstract

We developed user interfaces for two Human-Robot Interaction (HRI) tasking environments: dismounted infantry interacting with a ground robot (Autonomous Squad Member) and a human operator working with an intelligent agent to manage a team of heterogeneous robotic vehicles (IMPACT). Both interfaces were designed according to the Situation awareness-based Agent Transparency (SAT) model. User testing showed that as agent transparency increased, overall human-agent team performance improved and participants calibrated their trust in the agent more appropriately.
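The abstract does not detail the SAT model, but in the underlying literature it organizes the information an agent communicates to its operator into three levels that roughly parallel Endsley's levels of situation awareness: (1) the agent's current actions, plans, and goals; (2) the reasoning behind them; and (3) its projections of future outcomes, including uncertainty. The following is a minimal Python sketch of how a status display might expose these levels incrementally; the class, fields, and example values are hypothetical illustrations, not taken from the paper.

    from dataclasses import dataclass

    # Hypothetical sketch: one agent status message structured by the three
    # SAT levels. Names and values are illustrative, not from the paper.

    @dataclass
    class SATStatus:
        # SAT Level 1: what the agent is doing (current action and goal)
        current_action: str
        goal: str
        # SAT Level 2: why it is doing it (reasoning behind the plan)
        rationale: str
        # SAT Level 3: what it expects to happen (projection plus uncertainty)
        projected_outcome: str
        confidence: float  # 0.0-1.0; exposing this supports trust calibration

        def render(self, transparency_level: int) -> str:
            """Render the status up to the requested SAT level (1-3)."""
            lines = [f"Doing: {self.current_action} (goal: {self.goal})"]
            if transparency_level >= 2:
                lines.append(f"Because: {self.rationale}")
            if transparency_level >= 3:
                lines.append(f"Expecting: {self.projected_outcome} "
                             f"(confidence {self.confidence:.0%})")
            return "\n".join(lines)

    # Example: a ground robot reporting at full (Level 3) transparency.
    status = SATStatus(
        current_action="reroute to waypoint B",
        goal="maintain formation with the squad",
        rationale="obstacle detected on the primary route",
        projected_outcome="arrive about 40 s behind schedule",
        confidence=0.8,
    )
    print(status.render(transparency_level=3))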




Published In

IUI '16 Companion: Companion Publication of the 21st International Conference on Intelligent User Interfaces
March 2016, 446 pages
ISBN: 9781450341400
DOI: 10.1145/2876456
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. agent transparency
2. autonomy
3. human-agent teaming
4. human-robot interaction
5. situation awareness

Qualifiers

• Poster


Conference

IUI'16

Acceptance Rates

IUI '16 Companion paper acceptance rate: 49 of 194 submissions (25%)
Overall acceptance rate: 746 of 2,811 submissions (27%)
