Review article · Open access

A snapshot of the frontiers of fairness in machine learning

Published: 20 April 2020

Abstract

A group of industry, academic, and government experts convenes in Philadelphia to explore the roots of algorithmic bias.

References

[1]
Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J. and Wallach, H. A reductions approach to fair classification. In Proceedings of the 35th Intern. Conf. Machine Learning. ICML, JMLR Workshop and Conference Proceedings, 2018, 2569---2577.
[2]
Bastani, H., Bayati, M. and Khosravi, K. Exploiting the natural exploration in contextual bandits. arXiv preprint, 2017, arXiv:1704.09011.
[3]
Becker, G.S. The Economics of Discrimination. University of Chicago Press, 2010.
[4]
Berk, R., Heidari, H., Jabbari, S., Kearns, M. and Roth, A. Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research (2018), 0(0):0049124118782533.
[5]
Beutel, A., Chen, J., Zhao, Z. and Chi, E.H. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint, 2017, arXiv:1707.00075.
[6]
Bird, S., Barocas, S., Crawford, K., Diaz, F. and Wallach, H. Exploring or exploiting? Social and ethical implications of autonomous experimentation in AI. In Proceedings of Workshop on Fairness, Accountability, and Transparency in Machine Learning. ACM, 2016.
[7]
Bolukbasi, T., Chang, K-W., Zou, J.Y., Saligrama, V. and Kalai, A.T. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 2016, 4349---4357.
[8]
Bower, A. et al. Fair pipelines. arXiv preprint, 2017, arXiv:1707.00391.
[9]
Buolamwini, J. and Gebru, T. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the Conference on Fairness, Accountability and Transparency. ACM, 2018, 77---91.
[10]
Calders, T. and Verwer, S. Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery 21, 2 (2010), 277---292.
[11]
Caliskan, A., Bryson, J.J. and Narayanan, A. Semantics derived automatically from language corpora contain human-like biases. Science 356, 6334 (2017), 183---186.
[12]
Celis, L.E., Straszak, D. and Vishnoi, N.K. Ranking with fairness constraints. In Proceedings of the 45th Intern. Colloquium on Automata, Languages, and Programming. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
[13]
Celis, L.E. and Vishnoi, N.K. Fair personalization. arXiv preprint, 2017, arXiv:1707.02260.
[14]
Chen, I., Johansson, F.D. and Sontag, D. Why is my classifier discriminatory? Advances in Neural Information Processing Systems, 2018, 3539---3550.
[15]
Chouldechova, A. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5, 2 (2017), 153---163.
[16]
Coate, S. and Loury, G.C. Will affirmative-action policies eliminate negative stereotypes? The American Economic Review, 1993, 1220---1240.
[17]
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S. and Huq, A. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD Intern. Conf. Knowledge Discovery and Data Mining. ACM, 2017, 797---806.
[18]
Doroudi, S., Thomas, P.S. and Brunskill, E. Importance sampling for fair policy selection. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2017.
[19]
Dwork, C., Hardt, M., Pitassi, T., Reingold, O. and Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conf. ACM, 2012, 214---226.
[20]
Dwork, C. and Ilvento, C. Fairness under composition. Manuscript, 2018.
[21]
Dwork, C., McSherry, F., Nissim, K. and Smith, A. Calibrating noise to sensitivity in private data analysis. In Proceedings of Theory of Cryptography Conference. Springer, 2006, 265---284.
[22]
Dwork, C. and Roth, A. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science 9, 3---4 (2014), 211---407.
[23]
Edwards, H. and Storkey, A. Censoring representations with an adversary. arXiv preprint, 2015, arXiv:1511.05897.
[24]
Ensign, D., Friedler, S.A., Neville, S., Scheidegger, C. and Venkatasubramanian, S. Runaway feedback loops in predictive policing. In Proceedings of 1st Conf. Fairness, Accountability and Transparency in Computer Science. ACM, 2018.
[25]
Feldman, M., Friedler, S.A., Moeller, J., Scheidegger, C. and Venkatasubramanian, S. Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD Intern. Conf. Knowledge Discovery and Data Mining. ACM, 2015.
[26]
Foster, D.P. and Vohra, R.A. An economic argument for affirmative action. Rationality and Society 4, 2 (1992), 176---188.
[27]
Friedler, S.A., Scheidegger, C. and Venkatasubramanian, S. On the (im)possibility of fairness. arXiv preprint, 2016, arXiv:1609.07236.
[28]
Gillen, S., Jung, C., Kearns, M. and Roth, A. Online learning with an unknown fairness metric. Advances in Neural Information Processing Systems, 2018.
[29]
Hardt, M., Price, E. and Srebro, N. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 2016, 3315---3323.
[30]
Hébert-Johnson, U., Kim, M.P., Reingold, O. and Rothblum, G.N. Calibration for the (computationally identifiable) masses. In Proceedings of the 35th Intern. Conf. Machine Learning 80. ICML, JMLR Workshop and Conference Proceedings, 2018, 2569---2577.
[31]
Hu, L. and Chen, Y. A short-term intervention for long-term fairness in the labor market. In Proceedings of the 2018 World Wide Web Conference on World Wide Web. P.A. Champin, F.L. Gandon, M. Lalmas, and P.G. Ipeirotis, eds. ACM, 2018, 1389---1398.
[32]
Jabbari, S., Joseph, M., Kearns, M., Morgenstern, J.H. and Roth, A. Fairness in reinforcement learning. In Proceedings of the Intern. Conf. Machine Learning, 2017, 1617---1626.
[33]
Johndrow, J.E. and Lum, K. An algorithm for removing sensitive information: Application to race-independent recidivism prediction. The Annals of Applied Statistics 13, 1 (2019), 189---220.
[34]
Joseph, M., Kearns, M., Morgenstern, J.H., Neel, S. and Roth, A. Fair algorithms for infinite and contextual bandits. In Proceedings of AAAI/ACM Conference on AI, Ethics, and Society, 2018.
[35]
Joseph, M., Kearns, M., Morgenstern, J.H. and Roth, A. Fairness in learning: Classic and contextual bandits. Advances in Neural Information Processing Systems, 2016, 325---333.
[36]
Kamishima, T., Akaho, S. and Sakuma, J. Fairness-aware learning through regularization approach. In Proceedings of the IEEE 11th Intern. Conf. Data Mining Workshops. IEEE, 2011, 643---650.
[37]
Kannan, S. et al. Fairness incentives for myopic agents. In Proceedings of the 2017 ACM Conference on Economics and Computation. ACM, 2017, 369---386.
[38]
Kannan, S., Morgenstern, J., Roth, A., Waggoner, B. and Wu, Z.S. A smoothed analysis of the greedy algorithm for the linear contextual bandit problem. Advances in Neural Information Processing Systems, 2018.
[39]
Kannan, S., Roth, A. and Ziani, J. Downstream effects of affirmative action. In Proceedings of the Conf. Fairness, Accountability, and Transparency. ACM, 2019, 240---248.
[40]
Kearns, M.J., Neel, S., Roth, A. and Wu, Z.S. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Proceedings of the 35th International Conference on Machine Learning. J.G. Dy and A. Krause, eds. JMLR Workshop and Conference Proceedings, ICML, 2018. 2569---2577.
[41]
Kearns, M., Neel, S., Roth, A. and Wu, Z.S. An empirical study of rich subgroup fairness for machine learning. In Proceedings of the Conf. Fairness, Accountability, and Transparency. ACM, 2019, 100---109.
[42]
Kearns, M., Roth, A. and Wu, Z.S. Meritocratic fairness for cross-population selection. In Proceedings of International Conference on Machine Learning, 2017, 1828---1836.
[43]
Kilbertus, N. et al. Avoiding discrimination through causal reasoning. Advances in Neural Information Processing Systems, 2017, 656---666.
[44]
Kim, M.P., Ghorbani, A. and Zou, J. Multiaccuracy: Blackbox postprocessing for fairness in classification. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. ACM, 2019, 247---254.
[45]
Kim, M.P., Reingold, O. and Rothblum, G.N. Fairness through computationally bounded awareness. Advances in Neural Information Processing Systems, 2018.
[46]
Kleinberg, J.M., Mullainathan, S. and Raghavan, M. Inherent trade-offs in the fair determination of risk scores. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference, 2017.
[47]
Kleinberg, J. and Raghavan, M. Selection problems in the presence of implicit bias. In Proceedings of the 9th Innovations in Theoretical Computer Science Conference 94, 2018, 33. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
[48]
Kusner, M.J., Loftus, J., Russell, C. and Silva, R. Counterfactual fairness. Advances in Neural Information Processing Systems, 2017, 4069---4079.
[49]
Liu, L.T., Dean, S., Rolf, E., Simchowitz, M. and Hardt, M. Delayed impact of fair machine learning. In Proceedings of the 35th Intern. Conf. Machine Learning. ICML, 2018.
[50]
Liu, Y., Radanovic, G., Dimitrakakis, C., Mandal, D. and Parkes, D.C. Calibrated fairness in bandits. arXiv preprint, 2017, arXiv:1707.01875.
[51]
Lum, K. and Isaac, W. To predict and serve? Significance 13, 5 (2016), 14---19.
[52]
Madras, D., Creager, E., Pitassi, T. and Zemel, R. Learning adversarially fair and transferable representations. In Proceedings of Intern. Conf. Machine Learning, 2018, 3381---3390.
[53]
Madras, D., Pitassi, T. and Zemel, R.S. Predict responsibly: Increasing fairness by learning to defer. CoRR, 2017, abs/1711.06664.
[54]
McNamara, D., Ong, C.S. and Williamson, R.C. Provably fair representations. arXiv preprint, 2017, arXiv:1710.04394.
[55]
Nabi, R. and Shpitser, I. Fair inference on outcomes. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018, 1931.
[56]
Pedreshi, D., Ruggieri, S. and Turini, F. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD Intern. Conf. Knowledge Discovery and Data Mining. ACM, 2008, 560---568.
[57]
Raghavan, M., Slivkins, A., Wortman Vaughan, J. and Wu, Z.S. The unfair externalities of exploration. Conference on Learning Theory, 2018.
[58]
Rothblum, G.N. and Yona, G. Probably approximately metric-fair learning. In Proceedings of the 35th Intern. Conf. Machine Learning. JMLR Workshop and Conference Proceedings, ICML 80 (2018), 2569---2577.
[59]
Rothwell, J. How the war on drugs damages black social mobility. The Brookings Institution, Sept. 30, 2014.
[60]
Sweeney, L. Discrimination in online ad delivery. Queue 11, 3 (2013), 10.
[61]
Woodworth, B., Gunasekar, S., Ohannessian, M.I. and Srebro, N. Learning non-discriminatory predictors. In Proceedings of Conf. Learning Theory, 2017, 1920---1953.
[62]
Yang, K. and Stoyanovich, J. Measuring fairness in ranked outputs. In Proceedings of the 29th Intern. Conf. Scientific and Statistical Database Management. ACM, 2017, 22.
[63]
Zafar, M.B., Valera, I., Gomez-Rodriguez, M. and Gummadi, K.P. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th Intern. Conf. World Wide Web. ACM, 2017, 1171---1180.
[64]
Zemel, R., Wu, Y., Swersky, K., Pitassi, T. and Dwork, C. Learning fair representations. In Proceedings of ICML, 2013.
[65]
Zhang, B.H., Lemoine, B. and Mitchell, M. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conf. AI, Ethics, and Society. ACM, 2018, 335---340.



Published In

Communications of the ACM, Volume 63, Issue 5 (May 2020), 101 pages.
ISSN: 0001-0782
EISSN: 1557-7317
DOI: 10.1145/3396208

Publisher: Association for Computing Machinery, New York, NY, United States
