Abstract
Interest in collective robotic systems has increased rapidly due to the potential benefits, such as increased safety and support, that they can offer to operators who perform challenging tasks in high-risk environments. The limited human-collective transparency research has focused on how the design of models (i.e., algorithms), visualizations, and control mechanisms influences human-collective behaviors. Traditional collective visualizations show every individual entity composing a collective, which can become problematic as collectives scale in size and heterogeneity and tasks become more demanding. Human operators can become overloaded with information, which degrades their understanding of the collective’s current state and overall behaviors and can cause poor teaming performance. This manuscript contributes to the human-collective domain by analyzing how visualization transparency influences the remote supervision of collectives. The visualization transparency analysis expands traditional transparency assessments by examining how operators with different individual capabilities are affected, their comprehension, the interface usability, and the human-collective team’s performance. Metrics that effectively assess the visualization transparency of collectives are identified, and the resulting design guidance can inform future real-world human-collective system designs. Individual agent and abstract screen-based visualizations were analyzed during remote supervision of sequential best-of-n decision-making tasks involving four collectives, each composed of 200 entities (800 in total). The abstract visualization provided better transparency by enabling operators with different individual capabilities to perform comparably, and it promoted higher human-collective performance.
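The best-of-n decision-making task the operators supervised can be illustrated with a minimal sketch: a collective of agents, each holding an opinion about one of n candidate options of differing quality, converges toward the better option through local opinion exchange. The quality-weighted voter rule below is a hypothetical simplification for illustration only, not the collective algorithm used in the study; `best_of_n`, `num_agents`, and `qualities` are illustrative names.

```python
import random

def best_of_n(num_agents=200, qualities=(0.6, 0.9), rounds=20000, seed=1):
    """Toy best-of-n consensus: opinions spread via a quality-weighted
    voter rule, so higher-quality options are advertised more often and
    therefore tend to win. Illustrative only, not the study's model."""
    rng = random.Random(seed)
    n = len(qualities)
    opinions = [rng.randrange(n) for _ in range(num_agents)]
    for _ in range(rounds):
        listener = rng.randrange(num_agents)
        speaker = rng.randrange(num_agents)
        # The speaker advertises its current option with probability equal
        # to that option's quality; the listener then adopts it.
        if rng.random() < qualities[opinions[speaker]]:
            opinions[listener] = opinions[speaker]
        if len(set(opinions)) == 1:  # full consensus reached
            break
    return [opinions.count(k) for k in range(n)]

counts = best_of_n()
print(counts)  # final distribution of the 200 agents over the two options
```

In the study, four such collectives of 200 entities each were supervised sequentially, so the operator had to track several concurrent decision processes of this kind at once.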
Acknowledgements
This effort was supported by the US Office of Naval Research under Awards N000141613025, N000141210987, and N00014161302. The work of Jason R. Cody was fully supported by the United States Military Academy and the United States Army Advanced Civil Schooling program.
Ethics declarations
Conflict of interest
The authors declare they have no conflict of interest.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary material 1 (mp4 32283 KB)
Cite this article
Roundtree, K.A., Cody, J.R., Leaf, J. et al. Human-collective visualization transparency. Swarm Intell 15, 237–286 (2021). https://doi.org/10.1007/s11721-021-00194-6