
Fairness constraints: a flexible approach for fair classification

Published: 01 January 2019

Abstract

Algorithmic decision making is employed in an increasing number of real-world applications to aid human decision making. While it has shown considerable promise in terms of improved decision accuracy, in some scenarios, its outcomes have also been shown to impose an unfair (dis)advantage on people from certain social groups (e.g., women, blacks). In this context, there is a need for computational techniques to limit unfairness in algorithmic decision making. In this work, we take a step forward to fulfill that need and introduce a flexible constraint-based framework to enable the design of fair margin-based classifiers. The main technical innovation of our framework is a general and intuitive measure of decision boundary unfairness, which serves as a tractable proxy to several of the most popular computational definitions of unfairness from the literature. Leveraging our measure, we can reduce the design of fair margin-based classifiers to adding tractable constraints on their decision boundaries. Experiments on multiple synthetic and real-world datasets show that our framework is able to successfully limit unfairness, often at a small cost in terms of accuracy.
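As a concrete illustration of the constraint-based framework the abstract describes, the following minimal sketch (not the authors' reference implementation) trains a logistic-regression classifier while bounding the empirical covariance between a binary sensitive attribute z and the signed distance to the decision boundary, theta^T x, the kind of tractable boundary-unfairness proxy the abstract refers to. The synthetic data, the covariance budget c, and all names below are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data (assumed for illustration): features x with an intercept
# column, labels y in {0, 1}, and a binary sensitive attribute z.
n, d = 500, 3
x = np.hstack([rng.normal(size=(n, d)), np.ones((n, 1))])
z = rng.integers(0, 2, size=n).astype(float)
y = (x[:, 0] + 0.8 * z + rng.normal(scale=0.5, size=n) > 0).astype(float)

def logistic_loss(theta):
    # Average logistic loss; np.logaddexp keeps it numerically stable.
    scores = x @ theta
    return np.mean(np.logaddexp(0.0, -scores) + (1.0 - y) * scores)

def boundary_covariance(theta):
    # Empirical covariance between z and the signed distance to the
    # decision boundary: the proxy measure of boundary unfairness.
    return np.mean((z - z.mean()) * (x @ theta))

c = 0.05  # unfairness budget (an assumed value); smaller means a fairer boundary

# |Cov(z, theta^T x)| <= c, written as two fun(theta) >= 0 constraints;
# SciPy dispatches to SLSQP when constraints are supplied.
constraints = [
    {"type": "ineq", "fun": lambda t: c - boundary_covariance(t)},
    {"type": "ineq", "fun": lambda t: c + boundary_covariance(t)},
]

result = minimize(logistic_loss, x0=np.zeros(d + 1), constraints=constraints)
print("boundary covariance at optimum:", boundary_covariance(result.x))
```

Because the covariance proxy is linear in theta, both constraints are convex, so adding them keeps training tractable for convex margin-based losses; tightening c trades accuracy for a fairer boundary, consistent with the accuracy cost reported in the abstract.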




Published In

The Journal of Machine Learning Research  Volume 20, Issue 1
January 2019
3071 pages
ISSN: 1532-4435
EISSN: 1533-7928

Publisher

JMLR.org

Publication History

Received: 01 May 2018
Revised: 01 December 2018
Accepted: 01 March 2019
Published: 01 January 2019
Published in JMLR Volume 20, Issue 1

Author Tags

  1. discrimination
  2. disparate impact
  3. fairness
  4. margin-based classifiers
  5. supervised learning

Qualifiers

  • Article


Article Metrics

  • Downloads (last 12 months): 80
  • Downloads (last 6 weeks): 14
Reflects downloads up to 16 Oct 2024


Cited By

  • (2024) A Survey on Trustworthy Recommender Systems. ACM Transactions on Recommender Systems. DOI: 10.1145/3652891. Online publication date: 13-Apr-2024.
  • (2024) Fairness without Sensitive Attributes via Knowledge Sharing. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1897-1906. DOI: 10.1145/3630106.3659014. Online publication date: 3-Jun-2024.
  • (2024) Operationalizing the Search for Less Discriminatory Alternatives in Fair Lending. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 377-387. DOI: 10.1145/3630106.3658912. Online publication date: 3-Jun-2024.
  • (2024) A Taxation Perspective for Fair Re-ranking. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1494-1503. DOI: 10.1145/3626772.3657766. Online publication date: 10-Jul-2024.
  • (2024) Fair and green hyperparameter optimization via multi-objective and multiple information source Bayesian optimization. Machine Learning, 113(5), 2701-2731. DOI: 10.1007/s10994-024-06515-0. Online publication date: 1-May-2024.
  • (2023) Oracle complexity of single-loop switching subgradient methods for non-smooth weakly convex functional constrained optimization. Proceedings of the 37th International Conference on Neural Information Processing Systems, 61327-61340. DOI: 10.5555/3666122.3668801. Online publication date: 10-Dec-2023.
  • (2023) Adapting fairness interventions to missing values. Proceedings of the 37th International Conference on Neural Information Processing Systems, 59388-59409. DOI: 10.5555/3666122.3668717. Online publication date: 10-Dec-2023.
  • (2023) Aleatoric and epistemic discrimination. Proceedings of the 37th International Conference on Neural Information Processing Systems, 27040-27062. DOI: 10.5555/3666122.3667298. Online publication date: 10-Dec-2023.
  • (2023) Fairly recommending with social attributes. Proceedings of the 37th International Conference on Neural Information Processing Systems, 21454-21465. DOI: 10.5555/3666122.3667059. Online publication date: 10-Dec-2023.
  • (2023) ReHLine. Proceedings of the 37th International Conference on Neural Information Processing Systems, 14366-14386. DOI: 10.5555/3666122.3666753. Online publication date: 10-Dec-2023.
