DOI: 10.1145/3038912.3052660

Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

Published: 03 April 2017
Abstract

Automated, data-driven decision-making systems are increasingly being used to assist, or even replace, humans in many settings. These systems learn from historical decisions, often made by humans. To maximize the utility of these systems (or classifiers), training involves minimizing errors (misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., a higher misclassification rate for females than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
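The notion at the heart of the paper can be stated directly: a classifier suffers disparate mistreatment when its misclassification rates differ across groups defined by a sensitive attribute z, e.g., when P(yhat != y | z = 0) != P(yhat != y | z = 1), either overall or separately for false positives and false negatives. As a minimal sketch of how the quantities named in the abstract can be measured, not the authors' implementation, the following Python function (all names are illustrative) computes per-group false positive and false negative rates and their gaps for a binary classifier and a binary sensitive attribute:

import numpy as np

def mistreatment_gaps(y_true, y_pred, group):
    """Gaps in false positive/negative rates between two groups.

    y_true, y_pred: arrays with values in {0, 1}.
    group: array of binary sensitive-attribute values in {0, 1}.
    """
    rates = {}
    for g in (0, 1):
        in_g = group == g
        # False positive rate: P(y_pred = 1 | y_true = 0, group = g)
        fpr = (y_pred[in_g & (y_true == 0)] == 1).mean()
        # False negative rate: P(y_pred = 0 | y_true = 1, group = g)
        fnr = (y_pred[in_g & (y_true == 1)] == 0).mean()
        rates[g] = (fpr, fnr)
    return abs(rates[0][0] - rates[1][0]), abs(rates[0][1] - rates[1][1])

A classifier free of disparate mistreatment would drive both gaps to (approximately) zero; the paper's contribution is to impose such conditions during training, as convex-concave constraints on decision boundary-based classifiers, rather than auditing for them after the fact.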




    Published In

    WWW '17: Proceedings of the 26th International Conference on World Wide Web
    April 2017
    1678 pages
ISBN: 9781450349130

    Sponsors

    • IW3C2: International World Wide Web Conference Committee

    Publisher

    International World Wide Web Conferences Steering Committee

    Republic and Canton of Geneva, Switzerland

    Publication History

    Published: 03 April 2017

    Author Tags

    1. algorithmic decision making
    2. discrimination in decision making
    3. fair classification
    4. fair decision making
    5. machine learning and law

    Qualifiers

    • Research-article

    Conference

    WWW '17
    Sponsor:
    • IW3C2

    Acceptance Rates

WWW '17 paper acceptance rate: 164 of 966 submissions (17%)
Overall acceptance rate: 1,899 of 8,196 submissions (23%)

Article Metrics

• Downloads (last 12 months): 531
• Downloads (last 6 weeks): 36


    Cited By

• (2024) Explainable AI for Cybersecurity. In Advances in Explainable AI Applications for Smart Cities, pp. 31-97. DOI: 10.4018/978-1-6684-6361-1.ch002. Online publication date: 18-Jan-2024.
• (2024) Building Non-Discriminatory Algorithms in Selected Data. SSRN Electronic Journal. DOI: 10.2139/ssrn.4825988. Online publication date: 2024.
• (2024) FairHash: A Fair and Memory/Time-efficient Hashmap. Proceedings of the ACM on Management of Data 2(3), pp. 1-29. DOI: 10.1145/3654939. Online publication date: 30-May-2024.
• (2024) A Survey on Trustworthy Recommender Systems. ACM Transactions on Recommender Systems. DOI: 10.1145/3652891. Online publication date: 13-Apr-2024.
• (2024) Fair Feature Selection: A Causal Perspective. ACM Transactions on Knowledge Discovery from Data 18(7), pp. 1-23. DOI: 10.1145/3643890. Online publication date: 3-Feb-2024.
• (2024) Do Crowdsourced Fairness Preferences Correlate with Risk Perceptions? In Proceedings of the 29th International Conference on Intelligent User Interfaces, pp. 304-324. DOI: 10.1145/3640543.3645209. Online publication date: 18-Mar-2024.
• (2024) Fairness-Driven Private Collaborative Machine Learning. ACM Transactions on Intelligent Systems and Technology 15(2), pp. 1-30. DOI: 10.1145/3639368. Online publication date: 22-Feb-2024.
• (2024) "It's the most fair thing to do but it doesn't make any sense": Perceptions of Mathematical Fairness Notions by Hiring Professionals. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), pp. 1-35. DOI: 10.1145/3637360. Online publication date: 26-Apr-2024.
• (2024) Gender Representation Across Online Retail Products. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 947-957. DOI: 10.1145/3630106.3658947. Online publication date: 3-Jun-2024.
• (2024) A Preprocessing Shapley Value-Based Approach to Detect Relevant and Disparity-Prone Features in Machine Learning. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 279-289. DOI: 10.1145/3630106.3658905. Online publication date: 3-Jun-2024.
