Research article | Open access

What Should We Do when Our Ideas of Fairness Conflict?

Published: 21 December 2023

Abstract

Standards for fair decision making could help us develop algorithms that comport with our consensus views; however, algorithmic fairness has its limits.
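
The conflict alluded to here is easiest to see with numbers. Below is a minimal, hypothetical sketch (toy scores, outcomes, and a threshold chosen purely for illustration, not drawn from the article) that computes two widely used group-fairness criteria, false positive rate parity and predictive parity (precision among flagged cases), for two groups with different base rates of the outcome.

    # A hypothetical illustration (not from the article): compare two group-fairness
    # criteria on toy risk scores and observed outcomes for two groups.

    def false_positive_rate(scores, labels, threshold=0.5):
        """Among truly negative cases, the fraction flagged as high risk."""
        negatives = [s for s, y in zip(scores, labels) if y == 0]
        return sum(s >= threshold for s in negatives) / len(negatives)

    def precision_among_flagged(scores, labels, threshold=0.5):
        """Among cases flagged as high risk, the fraction that are truly positive."""
        flagged = [y for s, y in zip(scores, labels) if s >= threshold]
        return sum(flagged) / len(flagged)

    # Toy groups with different base rates of the outcome (entirely made up).
    groups = {
        "A": {"scores": [0.9, 0.8, 0.7, 0.4, 0.3, 0.2], "labels": [1, 1, 0, 1, 0, 0]},
        "B": {"scores": [0.9, 0.6, 0.6, 0.4, 0.2, 0.1], "labels": [1, 0, 0, 0, 0, 0]},
    }

    for name, g in groups.items():
        fpr = false_positive_rate(g["scores"], g["labels"])
        prec = precision_among_flagged(g["scores"], g["labels"])
        print(f"group {name}: false positive rate = {fpr:.2f}, "
              f"precision among flagged = {prec:.2f}")

With these made-up numbers, neither criterion comes out equal across the two groups, and well-known impossibility results show that when base rates differ, no nontrivial risk score can equalize both criteria at once. That tension is the kind of limit the abstract refers to.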


Published In

Communications of the ACM, Volume 67, Issue 1 (January 2024), 122 pages
EISSN: 1557-7317
DOI: 10.1145/3638509
Editor: James Larus
Publisher: Association for Computing Machinery, New York, NY, United States