DOI: 10.1145/3357384.3357974

research-article
AdaFair: Cumulative Fairness Adaptive Boosting

Published: 03 November 2019

Abstract

    The widespread use of ML-based decision making in domains with high societal impact, such as recidivism prediction, job hiring, and loan credit, has raised many concerns about potential discrimination. In particular, it has been observed that ML algorithms can produce different decisions based on sensitive attributes such as gender or race, and can therefore lead to discrimination. Although several fairness-aware ML approaches have been proposed, their focus has largely been on preserving overall classification accuracy while improving fairness in predictions for both protected and non-protected groups (defined based on the sensitive attribute(s)). Overall accuracy, however, is not a good indicator of performance in the case of class imbalance, as it is biased towards the majority class. As we show in our experiments, many fairness-related datasets suffer from class imbalance, so tackling fairness also requires tackling the imbalance problem. To this end, we propose AdaFair, a fairness-aware classifier based on AdaBoost that further updates the weights of the instances in each boosting round, taking into account a cumulative notion of fairness based upon all current ensemble members, while explicitly tackling class imbalance by optimizing the number of ensemble members for balanced classification error. Our experiments show that our approach can achieve parity in true positive and true negative rates for both protected and non-protected groups, while significantly outperforming existing fairness-aware methods by up to 25% in terms of balanced error.
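    As a rough illustration of the idea described above (an AdaBoost loop whose per-round reweighting also carries a cumulative fairness cost derived from all ensemble members so far), the following Python sketch trains decision stumps and additionally boosts misclassified instances of whichever group the cumulative ensemble currently disadvantages in true positive or true negative rate. This is a minimal sketch, not the paper's algorithm: the names (`adafair_sketch`, `fit_stump`) and the simple form of the fairness cost `u` are illustrative assumptions, and the paper's confidence-based costs and its optimization of the number of ensemble members for balanced error are omitted.

```python
import numpy as np

def stump_predict(X, feat, thr, sign):
    # One-feature threshold stump predicting in {+1, -1}.
    return sign * np.where(X[:, feat] > thr, 1, -1)

def fit_stump(X, y, w):
    # Exhaustive search for the stump with minimal weighted error.
    best_params, best_err = None, np.inf
    for feat in range(X.shape[1]):
        for thr in np.unique(X[:, feat]):
            for sign in (1, -1):
                pred = stump_predict(X, feat, thr, sign)
                err = w[pred != y].sum()
                if err < best_err:
                    best_params, best_err = (feat, thr, sign), err
    return best_params, best_err

def fairness_gaps(y, pred, prot):
    # Signed TPR/TNR gaps: non-protected rate minus protected rate.
    def rate(mask, cls):
        m = mask & (y == cls)
        return (pred[m] == cls).mean() if m.any() else 0.0
    return rate(~prot, 1) - rate(prot, 1), rate(~prot, -1) - rate(prot, -1)

def adafair_sketch(X, y, prot, T=10, eps=1e-3):
    n = len(y)
    w = np.ones(n) / n
    ensemble, F = [], np.zeros(n)        # F: cumulative ensemble margin
    for _ in range(T):
        (feat, thr, sign), err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, feat, thr, sign)
        ensemble.append((alpha, feat, thr, sign))
        F += alpha * pred
        ens_pred = np.sign(F).astype(int)
        d_tpr, d_tnr = fairness_gaps(y, ens_pred, prot)
        # Fairness cost u: extra boost for misclassified instances of
        # the group the cumulative ensemble currently disadvantages.
        u = np.zeros(n)
        if abs(d_tpr) > eps:
            grp = prot if d_tpr > 0 else ~prot
            u[grp & (y == 1) & (ens_pred != y)] = abs(d_tpr)
        if abs(d_tnr) > eps:
            grp = prot if d_tnr > 0 else ~prot
            u[grp & (y == -1) & (ens_pred != y)] = abs(d_tnr)
        # Standard AdaBoost update, scaled by the fairness cost.
        w = w * np.exp(-alpha * y * pred) * (1 + u)
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    F = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.sign(F)
```

    Note that, in keeping with the abstract, the fairness cost is cumulative: it is computed from the ensemble built so far rather than from the current weak learner alone, so the reweighting reacts to the discrimination of the whole model.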




    Published In

    CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management
    November 2019
    3373 pages
    ISBN:9781450369763
    DOI:10.1145/3357384
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. boosting
    2. class imbalance
    3. fairness-aware classification

    Qualifiers

    • Research-article

    Funding Sources

    • DFG

    Conference

    CIKM '19

    Acceptance Rates

    CIKM '19 paper acceptance rate: 202 of 1,031 submissions (20%).
    Overall acceptance rate: 1,861 of 8,427 submissions (22%).

    Article Metrics

    • Downloads (last 12 months): 120
    • Downloads (last 6 weeks): 11
    Reflects downloads up to 29 Jul 2024

    Cited By

    • Fair-CMNB: Advancing Fairness-Aware Stream Learning with Naïve Bayes and Multi-Objective Optimization. Big Data and Cognitive Computing 8(2), 16. DOI: 10.3390/bdcc8020016 (31 Jan 2024)
    • Falcon: Fair Active Learning Using Multi-Armed Bandits. Proceedings of the VLDB Endowment 17(5), 952-965. DOI: 10.14778/3641204.3641207 (2 May 2024)
    • An improved method to detect arrhythmia using ensemble learning-based model in multi lead electrocardiogram (ECG). PLOS ONE 19(4), e0297551. DOI: 10.1371/journal.pone.0297551 (9 Apr 2024)
    • Group Fairness via Group Consensus. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1788-1808. DOI: 10.1145/3630106.3659006 (3 Jun 2024)
    • Fairness in Machine Learning: A Survey. ACM Computing Surveys 56(7), 1-38. DOI: 10.1145/3616865 (9 Apr 2024)
    • A survey on social-physical sensing: An emerging sensing paradigm that explores the collective intelligence of humans and machines. Collective Intelligence 2(2). DOI: 10.1177/26339137231170825 (25 Apr 2023)
    • Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey. ACM Journal on Responsible Computing 1(2), 1-52. DOI: 10.1145/3631326 (1 Nov 2023)
    • Fair and Private Data Preprocessing through Microaggregation. ACM Transactions on Knowledge Discovery from Data 18(3), 1-24. DOI: 10.1145/3617377 (9 Dec 2023)
    • Disentangling and Operationalizing AI Fairness at LinkedIn. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1213-1228. DOI: 10.1145/3593013.3594075 (12 Jun 2023)
    • Multi-dimensional Discrimination in Law and Machine Learning - A Comparative Overview. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 89-100. DOI: 10.1145/3593013.3593979 (12 Jun 2023)
