DOI: 10.1145/3173574.3174136
research-article
Public Access

How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games

Published: 21 April 2018

Abstract

How should an AI-based explanation system explain an agent's complex behavior to ordinary end users who have no background in AI? Answering this question is an active research area, for if an AI-based explanation system could effectively explain intelligent agents' behavior, it could enable the end users to understand, assess, and appropriately trust (or distrust) the agents attempting to help them. To provide insights into this question, we turned to human expert explainers in the real-time strategy domain --"shoutcasters"-- to understand (1) how they foraged in an evolving strategy game in real time, (2) how they assessed the players' behaviors, and (3) how they constructed pertinent and timely explanations out of their insights and delivered them to their audience. The results provided insights into shoutcasters' foraging strategies for gleaning information necessary to assess and explain the players; a characterization of the types of implicit questions shoutcasters answered; and implications for creating explanations by using the patterns and abstraction levels these human experts revealed.

Supplementary Material

ZIP File (pn4411-file4.zip)
MP4 File (pn4411.mp4)




      Information

      Published In

      CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
      April 2018
      8489 pages
      ISBN:9781450356206
      DOI:10.1145/3173574
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 21 April 2018


      Author Tags

      1. explainable ai
      2. information foraging
      3. intelligent agents
      4. rts games
      5. starcraft

      Qualifiers

      • Research-article

      Conference

      CHI '18

      Acceptance Rates

      CHI '18 Paper Acceptance Rate: 666 of 2,590 submissions, 26%
      Overall Acceptance Rate: 5,713 of 24,194 submissions, 24%

      Bibliometrics & Citations

      Bibliometrics

      Article Metrics

      • Downloads (Last 12 months)188
      • Downloads (Last 6 weeks)21
      Reflects downloads up to 18 Aug 2024

      Citations

      Cited By

      • (2023) The Effects of Explanations on Automation Bias. Artificial Intelligence, Article 103952. https://doi.org/10.1016/j.artint.2023.103952. Online publication date: Jun-2023.
      • (2023) Explainable artificial intelligence in information systems: A review of the status quo and future research directions. Electronic Markets 33:1. https://doi.org/10.1007/s12525-023-00644-5. Online publication date: 27-May-2023.
      • (2023) How to Explain It to a Model Manager? Artificial Intelligence in HCI, 209-242. https://doi.org/10.1007/978-3-031-35891-3_14. Online publication date: 9-Jul-2023.
      • (2022) Finding AI's Faults with AAR/AI: An Empirical Study. ACM Transactions on Interactive Intelligent Systems 12:1, 1-33. https://doi.org/10.1145/3487065. Online publication date: 4-Mar-2022.
      • (2021) After-Action Review for AI (AAR/AI). ACM Transactions on Interactive Intelligent Systems 11:3-4, 1-35. https://doi.org/10.1145/3453173. Online publication date: 3-Sep-2021.
      • (2021) AutoPreview: A Framework for Autopilot Behavior Understanding. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 1-6. https://doi.org/10.1145/3411763.3451591. Online publication date: 8-May-2021.
      • (2021) Wait, But Why?: Assessing Behavior Explanation Strategies for Real-Time Strategy Games. Proceedings of the 26th International Conference on Intelligent User Interfaces, 32-42. https://doi.org/10.1145/3397481.3450699. Online publication date: 14-Apr-2021.
      • (2021) The Shoutcasters, the Game Enthusiasts, and the AI: Foraging for Explanations of Real-time Strategy Players. ACM Transactions on Interactive Intelligent Systems 11:1, 1-46. https://doi.org/10.1145/3396047. Online publication date: 15-Mar-2021.
      • (2021) Review of Research in the Field of Developing Methods to Extract Rules From Artificial Neural Networks. Journal of Computer and Systems Sciences International 60:6, 966-980. https://doi.org/10.1134/S1064230721060046. Online publication date: 17-Dec-2021.
      • (2021) Human-XAI Interaction: A Review and Design Principles for Explanation User Interfaces. Human-Computer Interaction – INTERACT 2021, 619-640. https://doi.org/10.1007/978-3-030-85616-8_36. Online publication date: 26-Aug-2021.
