Research Article · Public Access
DOI: 10.1145/3514221.3517841

Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification

Published: 11 June 2022

Abstract

Classification, a heavily studied data-driven machine learning task, drives an increasing number of prediction systems involving critical human decisions such as loan approval and criminal risk assessment. However, classifiers often demonstrate discriminatory behavior, especially when presented with biased data, and fairness in classification has consequently emerged as a high-priority research area. Data management research shows an increasing presence and interest in topics related to data and algorithmic fairness, including fair classification. Interdisciplinary efforts in fair classification, with machine learning research having the largest presence, have produced a large number of fairness notions and a wide range of approaches that have not been systematically evaluated and compared. In this paper, we contribute a broad analysis of 13 fair classification approaches and additional variants over their correctness, fairness, efficiency, scalability, robustness to data errors, sensitivity to the underlying ML model, data efficiency, and stability, using a variety of metrics and real-world datasets. Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance. We also discuss general principles for choosing approaches suitable for different practical settings, and identify areas where data-management-centric solutions are likely to have the most impact.
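As a minimal illustration of the kind of group-fairness metrics an evaluation like this one reports (this is a sketch, not code from the paper; the names `y_pred`, `group`, and the function names are hypothetical), here is how statistical parity difference and the disparate impact ratio are typically computed from binary predictions and a protected attribute:

```python
# Illustrative sketch only (not from the paper): two widely reported
# group-fairness metrics. `y_pred` holds binary predictions and `group`
# holds protected-attribute values; both names are hypothetical.

def positive_rate(y_pred, group, g):
    """Fraction of positive predictions within group g."""
    members = [p for p, a in zip(y_pred, group) if a == g]
    return sum(members) / len(members)

def statistical_parity_difference(y_pred, group, privileged, protected):
    """P(Y_hat = 1 | protected) - P(Y_hat = 1 | privileged); 0 means parity."""
    return (positive_rate(y_pred, group, protected)
            - positive_rate(y_pred, group, privileged))

def disparate_impact_ratio(y_pred, group, privileged, protected):
    """Ratio form of the same comparison; the '80% rule' flags values < 0.8."""
    return (positive_rate(y_pred, group, protected)
            / positive_rate(y_pred, group, privileged))

# Toy data: group "a" receives positives at rate 0.75, group "b" at 0.25.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(statistical_parity_difference(y_pred, group, "a", "b"))  # -0.5
print(disparate_impact_ratio(y_pred, group, "a", "b"))         # ~0.333, fails the 80% rule
```

Toolkits such as AI Fairness 360 (cited in the paper) expose equivalent metrics; the point here is only that both measures reduce to a comparison of per-group positive prediction rates.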



        Published In

        SIGMOD '22: Proceedings of the 2022 International Conference on Management of Data
        June 2022, 2597 pages
        ISBN: 9781450392495
        DOI: 10.1145/3514221

        Publisher

        Association for Computing Machinery, New York, NY, United States

        Author Tags

        1. algorithmic fairness
        2. classifiers
        3. empirical study

        Qualifiers

        • Research-article

        Conference

        SIGMOD/PODS '22

        Acceptance Rates

        Overall Acceptance Rate: 785 of 4,003 submissions, 20%


        Cited By

        • (2025) Is it still fair? A comparative evaluation of fairness algorithms through the lens of covariate drift. Machine Learning 114:1. DOI: 10.1007/s10994-024-06698-6 (14 Jan 2025)
        • (2024) How far can fairness constraints help recover from biased data? In Proceedings of the 41st International Conference on Machine Learning, 44515-44544. DOI: 10.5555/3692070.3693882 (21 Jul 2024)
        • (2024) OTClean: Data Cleaning for Conditional Independence Violations using Optimal Transport. Proceedings of the ACM on Management of Data 2:3, 1-26. DOI: 10.1145/3654963 (30 May 2024)
        • (2024) Automated Data Cleaning can Hurt Fairness in Machine Learning-Based Decision Making. IEEE Transactions on Knowledge and Data Engineering 36:12, 7368-7379. DOI: 10.1109/TKDE.2024.3365524 (Dec 2024)
        • (2024) FairCR: An Evaluation and Recommendation System for Fair Classification Algorithms. In 2024 IEEE 40th International Conference on Data Engineering (ICDE), 5473-5476. DOI: 10.1109/ICDE60146.2024.00431 (13 May 2024)
        • (2024) Explainable Disparity Compensation for Efficient Fair Ranking. In 2024 IEEE 40th International Conference on Data Engineering (ICDE), 2192-2204. DOI: 10.1109/ICDE60146.2024.00174 (13 May 2024)
        • (2024) Non-Invasive Fairness in Learning Through the Lens of Data Drift. In 2024 IEEE 40th International Conference on Data Engineering (ICDE), 2164-2178. DOI: 10.1109/ICDE60146.2024.00172 (13 May 2024)
        • (2024) Mitigating Subgroup Unfairness in Machine Learning Classifiers: A Data-Driven Approach. In 2024 IEEE 40th International Conference on Data Engineering (ICDE), 2151-2163. DOI: 10.1109/ICDE60146.2024.00171 (13 May 2024)
        • (2024) Enforcing Conditional Independence for Fair Representation Learning and Causal Image Generation. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 103-112. DOI: 10.1109/CVPRW63382.2024.00015 (17 Jun 2024)
        • (2023) Consistent Range Approximation for Fair Predictive Modeling. Proceedings of the VLDB Endowment 16:11, 2925-2938. DOI: 10.14778/3611479.3611498 (1 Jul 2023)
