DOI: 10.1145/3447548.3467258

Explaining Algorithmic Fairness Through Fairness-Aware Causal Path Decomposition

Published: 14 August 2021

Abstract

Algorithmic fairness has recently attracted considerable interest in the data mining and machine learning communities. Existing research has mostly focused on developing quantitative metrics that measure algorithmic disparities across protected groups, and on approaches that adjust algorithm outputs to reduce such disparities. In this paper, we instead study the problem of identifying the sources of model disparities. Unlike existing interpretation methods, which typically learn feature importance, we consider the causal relationships among feature variables and propose a novel framework that decomposes the disparity into a sum of contributions from fairness-aware causal paths, i.e., paths on the causal graph linking the sensitive attribute to the final predictions. We also handle the scenario in which the directions of certain edges within those paths cannot be determined. Our framework is model-agnostic and applicable to a variety of quantitative disparity measures. Empirical evaluations on both synthetic and real-world data sets show that our method provides precise and comprehensive explanations of model disparities.
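The core idea of the abstract, decomposing a disparity measure into additive contributions of causal paths linking the sensitive attribute to the prediction, can be illustrated with a minimal sketch. The toy graph and the uniform attribution below are hypothetical stand-ins for illustration only; they are not the paper's actual estimator.

```python
# Minimal sketch: enumerate directed paths from sensitive attribute A to
# prediction Yhat in a toy causal DAG, then check that a path-level
# attribution sums back to the total disparity (additivity).

# Toy causal DAG as an adjacency list (hypothetical).
graph = {
    "A": ["X1", "X2", "Yhat"],
    "X1": ["Yhat"],
    "X2": ["Yhat"],
    "Yhat": [],
}

def causal_paths(graph, src, dst, path=None):
    """Depth-first enumeration of all directed paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    found = []
    for nxt in graph.get(src, []):
        found.extend(causal_paths(graph, nxt, dst, path))
    return found

paths = causal_paths(graph, "A", "Yhat")
# Three fairness-aware paths: A->X1->Yhat, A->X2->Yhat, and the direct A->Yhat.

# Stand-in attribution: split a measured disparity evenly across the paths.
# (The paper estimates each path's contribution; the even split is a placeholder.)
total_disparity = 0.12
contributions = {tuple(p): total_disparity / len(paths) for p in paths}

# The decomposition is additive: path contributions sum to the disparity.
assert abs(sum(contributions.values()) - total_disparity) < 1e-9
```

In the paper's setting, each path's contribution is estimated from the causal model rather than split evenly, and paths whose edge directions cannot be determined are handled explicitly.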

Supplementary Material

MP4 File (explaining_algorithmic_fairness_through_fairnessaware-weishen_pan-sen_cui-38957832-zpeH.mp4)
Presentation video




Published In

KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
August 2021
4259 pages
ISBN:9781450383325
DOI:10.1145/3447548

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. causal graph
  2. explanation
  3. fairness

Qualifiers

  • Research-article

Acceptance Rates

Overall acceptance rate: 1,133 of 8,635 submissions (13%)


Cited By

  • (2024)A Critical Survey on Fairness Benefits of Explainable AIProceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency10.1145/3630106.3658990(1579-1595)Online publication date: 3-Jun-2024
  • (2024)On Explaining Unfairness: An Overview2024 IEEE 40th International Conference on Data Engineering Workshops (ICDEW)10.1109/ICDEW61823.2024.00035(226-236)Online publication date: 13-May-2024
  • (2024)Enhancing neuro-oncology care through equity-driven applications of artificial intelligenceNeuro-Oncology10.1093/neuonc/noae127Online publication date: 19-Aug-2024
  • (2024)Fair Federated Learning with Opposite GANKnowledge-Based Systems10.1016/j.knosys.2024.111420287:COnline publication date: 16-May-2024
  • (2024)Modular Debiasing of Latent User Representations in Prototype-Based Recommender SystemsMachine Learning and Knowledge Discovery in Databases. Research Track10.1007/978-3-031-70341-6_4(56-72)Online publication date: 22-Aug-2024
  • (2023)“How Biased are Your Features?”: Computing Fairness Influence Functions with Global Sensitivity AnalysisProceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency10.1145/3593013.3593983(138-148)Online publication date: 12-Jun-2023
  • (2023)Quantification of Racial Disparity on Urinary Tract Infection Recurrence and Treatment Resistance in Florida using Algorithmic Fairness Methods2023 IEEE 11th International Conference on Healthcare Informatics (ICHI)10.1109/ICHI57859.2023.00043(261-267)Online publication date: 26-Jun-2023
  • (2023)Algorithmic fairness in artificial intelligence for medicine and healthcareNature Biomedical Engineering10.1038/s41551-023-01056-87:6(719-742)Online publication date: 28-Jun-2023
  • (2022)Collaboration Equilibrium in Federated LearningProceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining10.1145/3534678.3539237(241-251)Online publication date: 14-Aug-2022
  • (2022)Explainable Fairness in RecommendationProceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval10.1145/3477495.3531973(681-691)Online publication date: 6-Jul-2022
