"If it is easy to understand then it will have value": Examining Perceptions of Explainable AI with Community Health Workers in Rural India

Published: 26 April 2024
Abstract

    AI-driven tools are increasingly deployed to support low-skilled community health workers (CHWs) in hard-to-reach communities in the Global South. This paper examines how CHWs in rural India engage with and perceive AI explanations, and how we might design explainable AI (XAI) interfaces that are more understandable to them. We conducted semi-structured interviews with CHWs who interacted with a design probe that predicts neonatal jaundice, in which AI recommendations are accompanied by explanations. We (1) identify how CHWs interpreted AI predictions and the associated explanations, (2) unpack the benefits and pitfalls they perceived in the explanations, and (3) detail how different design elements of the explanations impacted their understanding of the AI. Our findings demonstrate that while CHWs struggled to understand the AI explanations, they nevertheless expressed a strong preference for explanations to be integrated into AI-driven tools and perceived several benefits of them, such as helping CHWs learn new skills and improving patient trust in AI tools and in CHWs. We conclude by discussing which elements of AI need to be made explainable to novice AI users like CHWs, and outline concrete design recommendations to improve the utility of XAI for novice AI users in non-Western contexts.

    [130]
    William Raynor. 1999. The International Dictionary of Artificial Intelligence.
    [131]
    Susan Wyche. 2019. Using Cultural Probes In New Contexts: Exploring the Benefits of Probes in HCI4D/ICTD. In Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing. 423--427.
    [132]
    Yao Xie, Melody Chen, David Kao, Ge Gao, and Xiang 'Anthony' Chen. 2020. CheXplain: enabling physicians to explore and understand data-driven, AI-enabled medical imaging analysis. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1--13.
    [133]
    Deepika Yadav, Anushka Bhandari, and Pushpendra Singh. 2019. LEAP: Scaffolding Collaborative Learning of Community Health Workers in India. Proceedings of the ACM on Human-Computer Interaction, Vol. 3, CSCW (2019), 1--27.
    [134]
    Deepika Yadav, Prerna Malik, Kirti Dabas, and Pushpendra Singh. 2021. Illustrating the Gaps and Needs in the Training Support of Community Health Workers in India. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 231, 16 pages. https://doi.org/10.1145/3411764.3445111
    [135]
    Fumeng Yang, Zhuanyi Huang, Jean Scholtz, and Dustin L Arendt. 2020. How do visual explanations foster end users' appropriate trust in machine learning?. In Proceedings of the 25th International Conference on Intelligent User Interfaces. 189--201.
    [136]
    Hugo Zylberajch, Piyawat Lertvittayakumjorn, and Francesca Toni. 2021. HILDIF: Interactive debugging of NLI models using influence functions. In Proceedings of the First Workshop on Interactive Learning for Natural Language Processing. 1--6.

          Published In

          Proceedings of the ACM on Human-Computer Interaction  Volume 8, Issue CSCW1
          CSCW
          April 2024
          6294 pages
          EISSN:2573-0142
          DOI:10.1145/3661497
          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          Published: 26 April 2024
          Published in PACMHCI Volume 8, Issue CSCW1

          Author Tags

          1. HCI4D
          2. ICTD
          3. XAI4D
          4. artificial intelligence
          5. community health workers
          6. explainability
          7. global south
          8. machine learning
          9. mobile health
