Research article · DOI: 10.1145/3630106.3658929

Algorithmic Fairness in Performative Policy Learning: Escaping the Impossibility of Group Fairness

Published: 05 June 2024

Abstract

In many prediction problems, the predictive model affects the distribution of the prediction target. This phenomenon, known as performativity, often arises from the behavior of individuals with a vested interest in the outcome of the predictive model. Although performativity is generally problematic because it manifests as distribution shift, we develop algorithmic fairness practices that leverage performativity to achieve stronger group fairness guarantees in social classification problems than are achievable in non-performative settings. In particular, we exploit the policymaker's ability to steer the population in order to remedy inequities in the long term. A crucial benefit of this approach is that it makes it possible to resolve the incompatibilities between conflicting group fairness definitions.
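The core phenomenon the abstract describes, a deployed model shifting the distribution it predicts, can be illustrated with a toy simulation. The sketch below is not from the paper: it assumes a simple linear response in which the outcome distribution's mean depends on the deployed prediction theta via mean(theta) = a + b*theta, and shows that repeated retraining settles at the performatively stable point a / (1 - b) when |b| < 1. The names a, b, and sample_outcomes are illustrative choices, not notation from the paper.

```python
import random

# Toy performative prediction (illustrative sketch, not the paper's model):
# deploying prediction theta changes the outcome distribution to
# N(a + b * theta, 1). Repeated retraining on data drawn under the
# current deployment converges to theta* = a / (1 - b) when |b| < 1.
a, b = 1.0, 0.5
random.seed(0)

def sample_outcomes(theta, n=50_000):
    """Draw outcomes from the distribution induced by deploying theta."""
    return [random.gauss(a + b * theta, 1.0) for _ in range(n)]

theta = 0.0
for _ in range(20):
    data = sample_outcomes(theta)
    theta = sum(data) / len(data)  # refit a constant predictor to the shifted data

print(theta)  # approaches a / (1 - b) = 2.0
```

The fixed point differs from the non-performative optimum (a = 1.0): the model's own deployment has moved the population, which is the lever the paper proposes a policymaker can steer deliberately.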




    Published In

    FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency
June 2024 · 2580 pages
ISBN: 9798400704505
DOI: 10.1145/3630106

    Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. Group Fairness
    2. Impossibility Theorems
    3. Long Term Fairness
    4. Performative Prediction

    Qualifiers

    • Research-article
    • Research
    • Refereed limited


