DOI: 10.1145/3617694.3623239
Research article · Open access

FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines

Published: 30 October 2023

Abstract

As machine learning (ML) pipelines affect an increasing array of stakeholders, there is a growing need to document how input from stakeholders is recorded and incorporated. We propose FeedbackLogs, addenda to existing documentation of ML pipelines, to track the input of multiple stakeholders. Each log records important details about the feedback collection process, the feedback itself, and how the feedback is used to update the ML pipeline. In this paper, we introduce and formalise a process for collecting a FeedbackLog. We also provide concrete use cases where FeedbackLogs can be employed as evidence for algorithmic auditing and as a tool to record updates based on stakeholder feedback.
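The abstract describes the three components a FeedbackLog captures: how feedback was collected, what the feedback was, and how it changed the pipeline. As a minimal sketch of how one entry in such a log could be represented in code, the Python below is our own illustration; every class and field name (FeedbackRecord, elicitation_method, incorporation, and so on) is an assumption made for this example, not the schema the paper defines.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class FeedbackRecord:
    """One entry in a FeedbackLog: who gave feedback, how it was
    elicited, what was said, and how the pipeline changed in response.
    Field names are illustrative, not the paper's formal schema."""
    stakeholder: str          # e.g. "loan officer", "patient advocate"
    collected_on: date        # when the feedback was elicited
    elicitation_method: str   # e.g. "semi-structured interview"
    feedback: str             # the feedback itself, verbatim or summarised
    incorporation: str        # how the ML pipeline was (or was not) updated
    rationale: str = ""       # why the team acted, or declined to act

@dataclass
class FeedbackLog:
    """An addendum to existing pipeline documentation that accumulates
    stakeholder feedback records over the life of the pipeline."""
    pipeline_id: str
    records: List[FeedbackRecord] = field(default_factory=list)

    def add(self, record: FeedbackRecord) -> None:
        self.records.append(record)

# Usage: log one round of stakeholder feedback and the resulting update.
log = FeedbackLog(pipeline_id="credit-scoring-v2")
log.add(FeedbackRecord(
    stakeholder="loan officer",
    collected_on=date(2023, 5, 12),
    elicitation_method="semi-structured interview",
    feedback="The model flags thin-file applicants too aggressively.",
    incorporation="Re-weighted training data and added a calibration check.",
    rationale="The feedback matched an error pattern seen in validation.",
))
```

Keeping each entry as structured data in this way would let a log double as machine-readable evidence for algorithmic auditing, one of the use cases the abstract mentions.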


Cited By

  • Challenges of responsible AI in practice: scoping review and recommended actions. AI & SOCIETY (2024). https://doi.org/10.1007/s00146-024-01880-9
  • Democratization is a Process, not a Destination: Operationalizing Ethics and Democratization in a Cyberinfrastructure for AI Project. In AI for People, Democratizing AI (2024), 29–45. https://doi.org/10.1007/978-3-031-71304-0_3

Published In

EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization
October 2023, 498 pages
ISBN: 9798400703812
DOI: 10.1145/3617694

This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

EAAMO '23

