
Taking Advantage of Multitask Learning for Fair Classification

Published: 27 January 2019
  • Abstract

    A central goal of algorithmic fairness is to reduce bias in automated decision making. An unavoidable tension exists between the accuracy gains obtained by using sensitive information as part of a statistical model and any commitment to protect these characteristics. Often, due to biases present in the data, using the sensitive information in the functional form of a classifier improves classification accuracy. In this paper we show how it is possible to get the best of both worlds: optimize model accuracy and fairness without explicitly using the sensitive feature in the functional form of the model, thereby treating different individuals equally. Our method is based on two key ideas. On the one hand, we propose to use Multitask Learning (MTL), enhanced with fairness constraints, to jointly learn group-specific classifiers that leverage information between sensitive groups. On the other hand, since learning group-specific models might not be permitted, we propose to first predict the sensitive features by any learning method and then to use the predicted sensitive feature to train MTL with fairness constraints. This enables us to tackle fairness with a three-pronged approach: increasing accuracy on each group, enforcing measures of fairness during training, and protecting sensitive information during testing. Experimental results on two real datasets support our proposal, showing substantial improvements in both accuracy and fairness.
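
    The abstract outlines a two-stage pipeline: first predict the sensitive attribute from the remaining features, then jointly train group-specific classifiers via multitask learning under a fairness constraint. The sketch below is one hypothetical way such a pipeline could look; it is not the authors' implementation. The linear model form (a shared weight vector plus per-group offsets, in the style of regularized MTL), the squared-gap relaxation of an equal-opportunity constraint, and all function names (predict_sensitive, fit_fair_mtl, predict) are illustrative assumptions.

    ```python
    # Hypothetical sketch of the two-stage pipeline described in the
    # abstract. NOT the authors' implementation; model form, penalty,
    # and names are assumptions made for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression


    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))


    def predict_sensitive(X_train, s_train, X_test):
        """Stage 1: predict the sensitive attribute from the other
        features, so the final classifier never consumes the true
        attribute at test time."""
        aux = LogisticRegression(max_iter=1000).fit(X_train, s_train)
        return aux.predict(X_test)


    def fit_fair_mtl(X, y, s, lam=0.1, gamma=1.0, lr=0.5, epochs=500):
        """Stage 2: regularized MTL with a shared weight vector w0 and a
        per-group offset v_g (one 'task' per sensitive group), trained
        with logistic loss plus a soft penalty on the gap between the
        groups' mean scores on positive examples (a relaxed
        equal-opportunity constraint). Expects y, s in {0, 1}."""
        n, d = X.shape
        t = 2.0 * y - 1.0                               # labels in {-1, +1}
        w0 = np.zeros(d)
        v = {0: np.zeros(d), 1: np.zeros(d)}
        pos = {g: (y == 1) & (s == g) for g in (0, 1)}  # positives per group
        for _ in range(epochs):
            scores = np.where(s == 1, X @ (w0 + v[1]), X @ (w0 + v[0]))
            g_loss = -t * sigmoid(-t * scores)          # dLoss_i / dscore_i
            grad_w0 = X.T @ g_loss / n + 2 * lam * w0
            grad_v = {g: X[s == g].T @ g_loss[s == g] / n + 2 * lam * v[g]
                      for g in (0, 1)}
            # Fairness term gamma * gap**2, where gap is the difference in
            # group-conditional mean scores over the true positives.
            mu = {g: X[pos[g]].mean(axis=0) for g in (0, 1)}
            gap = scores[pos[1]].mean() - scores[pos[0]].mean()
            grad_w0 += 2 * gamma * gap * (mu[1] - mu[0])
            grad_v[1] += 2 * gamma * gap * mu[1]
            grad_v[0] -= 2 * gamma * gap * mu[0]
            w0 -= lr * grad_w0
            for g in (0, 1):
                v[g] -= lr * grad_v[g]
        return w0, v


    def predict(X, s_hat, w0, v):
        """Route each example to its (predicted-)group-specific model."""
        scores = np.where(s_hat == 1, X @ (w0 + v[1]), X @ (w0 + v[0]))
        return (scores > 0).astype(int)


    # Illustrative usage on synthetic data: the true attribute s is used
    # only during training; at test time only the prediction s_hat is used.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 5))
    s = rng.integers(0, 2, size=400)
    y = ((X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=400)) > 0).astype(int)

    s_hat = predict_sensitive(X, s, X)   # in practice, use a held-out split
    w0, v = fit_fair_mtl(X, y, s)
    y_pred = predict(X, s_hat, w0, v)
    ```

    The per-group offsets capture what the paper calls group-specific classifiers, the shared vector is what lets the groups borrow statistical strength from one another, and routing test examples by the predicted attribute is what keeps the sensitive feature out of the deployed model's functional form.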





    Information

    Published In

    AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
    January 2019
    577 pages
    ISBN:9781450363242
    DOI:10.1145/3306618


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. classification
    2. fairness
    3. multitask learning

    Qualifiers

    • Research-article

    Conference

    AIES '19: AAAI/ACM Conference on AI, Ethics, and Society
    January 27-28, 2019
    Honolulu, HI, USA

    Acceptance Rates

    Overall acceptance rate: 61 of 162 submissions (38%)


    Bibliometrics

    Article Metrics

    • Downloads (last 12 months): 85
    • Downloads (last 6 weeks): 10

    Cited By
    • (2024) Fairness in Machine Learning: A Survey. ACM Computing Surveys 56(7), 1-38. DOI 10.1145/3616865. Online publication date: 9-Apr-2024.
    • (2023) Responsible AI (RAI) games and ensembles. Proceedings of the 37th International Conference on Neural Information Processing Systems, 72717-72749. DOI 10.5555/3666122.3669301. Online publication date: 10-Dec-2023.
    • (2023) Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey. ACM Journal on Responsible Computing 1(2), 1-52. DOI 10.1145/3631326. Online publication date: 1-Nov-2023.
    • (2023) Faire: Repairing Fairness of Neural Networks via Neuron Condition Synthesis. ACM Transactions on Software Engineering and Methodology 33(1), 1-24. DOI 10.1145/3617168. Online publication date: 23-Nov-2023.
    • (2023) FairMask: Better Fairness via Model-Based Rebalancing of Protected Attributes. IEEE Transactions on Software Engineering 49(4), 2426-2439. DOI 10.1109/TSE.2022.3220713. Online publication date: 1-Apr-2023.
    • (2023) Fair Robust Active Learning by Joint Inconsistency. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 3624-3633. DOI 10.1109/ICCVW60793.2023.00390. Online publication date: 2-Oct-2023.
    • (2023) On the Fairness of Multitask Representation Learning. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1-5. DOI 10.1109/ICASSP49357.2023.10095627. Online publication date: 4-Jun-2023.
    • (2023) An adversarial training framework for mitigating algorithmic biases in clinical machine learning. npj Digital Medicine 6(1). DOI 10.1038/s41746-023-00805-y. Online publication date: 29-Mar-2023.
    • (2023) Bipol: A novel multi-axes bias evaluation metric with explainability for NLP. Natural Language Processing Journal 4, 100030. DOI 10.1016/j.nlp.2023.100030. Online publication date: Sep-2023.
    • (2023) Multi-task learning with dynamic re-weighting to achieve fairness in healthcare predictive modeling. Journal of Biomedical Informatics 143, 104399. DOI 10.1016/j.jbi.2023.104399. Online publication date: Jul-2023.
