DOI: 10.1145/3511808.3557424 · CIKM Conference Proceedings · Research article

RAGUEL: Recourse-Aware Group Unfairness Elimination

Published: 17 October 2022

Abstract

While machine learning and ranking-based systems are in widespread use for sensitive decision-making processes (e.g., determining job candidates, assigning credit scores), they are rife with concerns over unintended biases in their outcomes, which makes algorithmic fairness (e.g., demographic parity, equal opportunity) an objective of interest. 'Algorithmic recourse' offers feasible recovery actions to change unwanted outcomes through the modification of attributes. We introduce the notion of ranked group-level recourse fairness, and develop a 'recourse-aware ranking' solution that satisfies ranked recourse fairness constraints while minimizing the cost of suggested modifications. Our solution suggests interventions that can reorder the ranked list of database records and mitigate group-level unfairness; specifically, disproportionate representation of sub-groups and recourse cost imbalance. This re-ranking identifies the minimum modifications to data points, with these attribute modifications weighted according to their ease of recourse. We then present an efficient block-based extension that enables re-ranking at any granularity (e.g., multiple brackets of bank loan interest rates, multiple pages of search engine results). Evaluation on real datasets shows that, while existing methods may even exacerbate recourse unfairness, our solution – RAGUEL – significantly improves recourse-aware fairness. RAGUEL outperforms alternatives at improving recourse fairness, through a combined process of counterfactual generation and re-ranking, whilst remaining efficient for large-scale datasets.
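The abstract's notion of "recourse cost imbalance" between groups can be illustrated with a minimal sketch. Everything here is an illustrative assumption — the attribute weights ("ease of recourse"), the linear cost model, and the toy data are not the paper's actual counterfactual-generation or re-ranking method:

```python
# Illustrative sketch of group-level recourse cost imbalance.
# Weights and the cost model are assumptions, not RAGUEL's implementation.

def recourse_cost(record, counterfactual, weights):
    """Weighted sum of attribute changes needed to reach the counterfactual."""
    return sum(
        weights[attr] * abs(counterfactual[attr] - record[attr])
        for attr in weights
    )

def group_cost_gap(records, counterfactuals, groups, weights):
    """Gap between the highest and lowest mean recourse cost across groups."""
    costs = {g: [] for g in set(groups)}
    for rec, cf, g in zip(records, counterfactuals, groups):
        costs[g].append(recourse_cost(rec, cf, weights))
    means = {g: sum(c) / len(c) for g, c in costs.items()}
    return max(means.values()) - min(means.values())

# Hypothetical ease-of-recourse weights and toy loan records.
weights = {"income": 0.5, "credit_lines": 1.0}
records = [{"income": 30, "credit_lines": 2}, {"income": 40, "credit_lines": 3}]
cfs     = [{"income": 36, "credit_lines": 2}, {"income": 41, "credit_lines": 3}]
groups  = ["A", "B"]

print(group_cost_gap(records, cfs, groups, weights))  # 2.5 on this toy data
```

A recourse-aware fair ranking would aim to drive such a gap down while keeping the total (weighted) modification cost small; the paper's block-based extension applies the same idea at coarser granularities (e.g., interest-rate brackets or result pages) rather than to individual rank positions.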


Cited By

• (2024) Mathematical optimization modelling for group counterfactual explanations. European Journal of Operational Research 319:2, 399–412. DOI: 10.1016/j.ejor.2024.01.002

Published In

CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management
October 2022, 5274 pages
ISBN: 9781450392365
DOI: 10.1145/3511808
General Chairs: Mohammad Al Hasan, Li Xiong

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. algorithmic recourse
2. classification
3. fairness
4. machine learning
5. ranking
6. recourse-aware ranking
Acceptance Rates

CIKM '22 paper acceptance rate: 621 of 2,257 submissions (28%)
Overall acceptance rate: 1,861 of 8,427 submissions (22%)
