Research article | Open access

Inherent Limitations of AI Fairness

Published: 18 January 2024 (Online First)

Abstract

AI fairness should not be considered a panacea: It may have the potential to make society more fair than ever, but it needs critical thought and outside help to make it happen.

Supplementary Material

Additional References for "Inherent Limitations of AI Fairness" (p48-buyl-supp.pdf)


Cited By

  • Knowledge Management at a Precipice: The Precarious State of Ideas, Facts, and Truth in the AI Renaissance. SSRN Electronic Journal (2024). DOI: 10.2139/ssrn.4754610
  • FEIR: Quantifying and Reducing Envy and Inferiority for Fair Recommendation of Limited Resources. ACM Transactions on Intelligent Systems and Technology (3 February 2024). DOI: 10.1145/3643891
  • Policy advice and best practices on bias and fairness in AI. Ethics and Information Technology 26, 2 (29 April 2024). DOI: 10.1007/s10676-024-09746-w
  • Unnatural: Artificial Selection as Flawed Metaphor for Organizational Change. SSRN Electronic Journal. DOI: 10.2139/ssrn.3836974


Published In

Communications of the ACM  Volume 67, Issue 2
February 2024
110 pages
EISSN:1557-7317
DOI:10.1145/3641526
Editor: James Larus
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 January 2024
Online First: 18 January 2024
Published in CACM Volume 67, Issue 2



Article Metrics

  • Downloads (last 12 months): 5,766
  • Downloads (last 6 weeks): 297
Reflects downloads up to 21 Sep 2024

