DOI: 10.1145/3531146.3533213
research-article

Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem

Published: 20 June 2022

Abstract

Algorithmic audits (or ‘AI audits’) are an increasingly popular mechanism for algorithmic accountability; however, they remain poorly defined. Without a clear understanding of audit practices, let alone widely used standards or regulatory guidance, claims that an AI product or system has been audited, whether by first-, second-, or third-party auditors, are difficult to verify and may exacerbate, rather than mitigate, bias and harm. To address this knowledge gap, we provide the first comprehensive field scan of the AI audit ecosystem. We share a catalog of individuals (N=438) and organizations (N=189) who engage in algorithmic audits or whose work is directly relevant to algorithmic audits; conduct an anonymous survey of the group (N=152); and interview industry leaders (N=10). We identify emerging best practices as well as methods and tools that are becoming commonplace, and enumerate common barriers to leveraging algorithmic audits as effective accountability mechanisms. We outline policy recommendations to improve the quality and impact of these audits, and highlight proposals with wide support from algorithmic auditors as well as areas of debate. Our recommendations have implications for lawmakers, regulators, internal company policymakers, and standards-setting bodies, as well as for auditors. They are: 1) require the owners and operators of AI systems to engage in independent algorithmic audits against clearly defined standards; 2) notify individuals when they are subject to algorithmic decision-making systems; 3) mandate disclosure of key components of audit findings for peer review; 4) consider real-world harm in the audit process, including through standardized harm incident reporting and response mechanisms; 5) directly involve the stakeholders most likely to be harmed by AI systems in the algorithmic audit process; and 6) formalize evaluation and, potentially, accreditation of algorithmic auditors.

Published In

FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
June 2022, 2351 pages
ISBN: 9781450393522
DOI: 10.1145/3531146

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. AI audit
2. AI bias
3. AI harm
4. AI policy
5. algorithm audit
6. algorithmic accountability
7. audit
8. ethical AI

Qualifiers

• Research-article
• Research
• Refereed limited

Conference

FAccT '22
