DOI: 10.1145/3377325.3377508
Research article

Investigating the intelligibility of a computer vision system for blind users

Published: 17 March 2020

Abstract

Computer vision systems to help blind users are becoming increasingly common, yet these systems are often not intelligible. Our work investigates the intelligibility of a wearable computer vision system that helps blind users locate and identify people in their vicinity. Because the system provides a continuous stream of information, it allows us to explore intelligibility through interaction and instructions, going beyond studies of intelligibility that focus on explaining a decision a computer vision system might make. In a study with 13 blind users, we explored whether varying the instructions (either basic or enhanced) about how the system worked would change blind users' experience of the system. We found that offering a more detailed set of instructions affected neither how successful users were in using the system nor their perceived workload. We did, however, find evidence of significant differences in what users knew about the system, and we found that they employed different, and potentially more effective, use strategies. Our findings have important implications for researchers and designers of computer vision systems for blind users, as well as more general implications for understanding what it means to make interactive computer vision systems intelligible.




        Published In

        IUI '20: Proceedings of the 25th International Conference on Intelligent User Interfaces
March 2020, 607 pages
ISBN: 9781450371186
DOI: 10.1145/3377325

        Publisher

Association for Computing Machinery, New York, NY, United States


        Author Tags

        1. assistive technology
        2. blind users
        3. computer vision
        4. intelligibility

        Qualifiers

        • Research-article

        Conference

        IUI '20

        Acceptance Rates

Overall Acceptance Rate: 746 of 2,811 submissions, 27%


        Cited By

• (2024) Enhancing Scene Understanding in VR for Visually Impaired Individuals with High-Frame Videos and Event Overlays. 2024 IEEE International Conference on Consumer Electronics (ICCE), 1-5. DOI: 10.1109/ICCE59016.2024.10444301. Online publication date: 6-Jan-2024.
• (2022) Intelligence and Usability Empowerment of Smartphone Adaptive Features. Applied Sciences 12(23), 12245. DOI: 10.3390/app122312245. Online publication date: 30-Nov-2022.
• (2022) Shared Privacy Concerns of the Visually Impaired and Sighted Bystanders with Camera-Based Assistive Technologies. ACM Transactions on Accessible Computing 15(2), 1-33. DOI: 10.1145/3506857. Online publication date: 19-May-2022.
• (2022) Recent trends in computer vision-driven scene understanding for VI/blind users: a systematic mapping. Universal Access in the Information Society 22(3), 983-1005. DOI: 10.1007/s10209-022-00868-w. Online publication date: 6-Feb-2022.
• (2021) Interdependence in Action. Proceedings of the ACM on Human-Computer Interaction 5(CSCW1), 1-33. DOI: 10.1145/3449143. Online publication date: 22-Apr-2021.
• (2021) Accessing Passersby Proxemic Signals through a Head-Worn Camera: Opportunities and Limitations for the Blind. Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 1-15. DOI: 10.1145/3441852.3471232. Online publication date: 17-Oct-2021.
• (2021) Disability-first Dataset Creation: Lessons from Constructing a Dataset for Teachable Object Recognition with Blind and Low Vision Data Collectors. Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 1-12. DOI: 10.1145/3441852.3471225. Online publication date: 17-Oct-2021.
• (2020) A distributed cognitive approach in cybernetic modelling of human vision in a robotic swarm. Bio-Algorithms and Med-Systems 16(3). DOI: 10.1515/bams-2020-0025. Online publication date: 21-Jul-2020.
• (2020) Privacy Considerations of the Visually Impaired with Camera Based Assistive Technologies: Misrepresentation, Impropriety, and Fairness. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 1-14. DOI: 10.1145/3373625.3417003. Online publication date: 26-Oct-2020.
