DOI: 10.1145/3630106.3658985
Research article | Open access

Transparency in the Wild: Navigating Transparency in a Deployed AI System to Broaden Need-Finding Approaches

Published: 05 June 2024

Abstract

Transparency is a critical component when building artificial intelligence (AI) decision-support tools, especially in contexts where AI outputs affect people or policy. Effectively identifying and addressing user transparency needs in practice remains a challenge. While a number of guidelines and processes for identifying transparency needs have emerged, existing methods tend to approach need-finding with a limited focus, centered on a narrow set of stakeholders and transparency techniques. To broaden this perspective, we employ multiple need-finding methods to investigate transparency mechanisms in a widely deployed AI decision-support tool developed by a wildlife conservation non-profit. Throughout our 5-month case study, we conducted need-finding through semi-structured interviews with end-users, analysis of the tool’s community forum, experiments with the tool’s ML model, and analysis of training documents created by end-users. We also held regular meetings with the tool’s product and machine learning teams. By approaching transparency need-finding through a broad lens, we uncover insights into end-users’ transparency needs as well as unexpected uses of, and challenges with, current transparency mechanisms. Our study is among the first to incorporate such diverse perspectives, yielding a rich and unbiased view of transparency needs. Lastly, we offer the FAccT community recommendations for broadening transparency need-finding approaches, contributing to the evolving field of transparency research.


Published In

FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024
2580 pages
ISBN: 9798400704505
DOI: 10.1145/3630106
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. AI-Supported Decision-Making
  2. Case Study
  3. Computer Vision
  4. Need-Finding
  5. Transparency Mechanisms

Conference

FAccT '24
