DOI: 10.1145/3531146.3534632

Promoting Fairness in Learned Models by Learning to Active Learn under Parity Constraints

Published: 20 June 2022
Abstract

    Machine learning models can have consequential effects when used to automate decisions, and disparities between groups of people in the error rates of those decisions can lead to harms suffered more by some groups than others. Past algorithmic approaches aim to enforce parity across groups given a fixed set of training data; instead, we ask: what if we can gather more data to mitigate disparities? We develop a meta-learning algorithm for parity-constrained active learning that learns a policy to decide which labels to query so as to maximize accuracy subject to parity constraints. To optimize the active learning policy, our proposed algorithm formulates the parity-constrained active learning task as a bi-level optimization problem. The inner level corresponds to training a classifier on a subset of labeled examples. The outer level corresponds to updating the selection policy choosing this subset to achieve a desired fairness and accuracy behavior on the trained classifier. To solve this constrained bi-level optimization problem, we employ the Forward-Backward Splitting optimization method. Empirically, across several parity metrics and classification tasks, our approach outperforms alternatives by a large margin.
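    To make the formulation above concrete, here is a minimal sketch in notation assumed for illustration (the symbols \phi, S(\phi), \theta, \Delta, and \epsilon are not taken from the paper): let \pi_{\phi} denote the selection policy with parameters \phi, S(\phi) the subset of examples it queries labels for, \theta the classifier parameters, \Delta(\cdot) a group-disparity measure, and \epsilon a tolerance. The bi-level problem can then be written as

    \min_{\phi}\; \mathcal{L}_{\mathrm{val}}\big(\theta^{*}(\phi)\big)
    \quad \text{s.t.} \quad \Delta\big(\theta^{*}(\phi)\big) \le \epsilon,
    \qquad \text{where} \quad
    \theta^{*}(\phi) \in \operatorname*{arg\,min}_{\theta}\; \mathcal{L}_{\mathrm{train}}\big(\theta;\, S(\phi)\big).

    Forward-Backward Splitting is the standard scheme for minimizing a composite objective f + g, with f smooth (here, the accuracy objective) and g nonsmooth (here, a term encoding the parity constraint); it alternates a gradient step on f with a proximal step on g:

    x^{k+1} = \operatorname{prox}_{\gamma g}\big(x^{k} - \gamma \nabla f(x^{k})\big).

    When g is the indicator function of the parity-feasible set, the proximal step reduces to a projection onto that set. How the paper instantiates f, g, and the policy update is specified in the full text; the above shows only the generic scheme.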

          Published In

          FAccT '22: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
          June 2022
          2351 pages
          ISBN:9781450393522
          DOI:10.1145/3531146
          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Author Tags

          1. active learning
          2. meta-learning

          Qualifiers

          • Research-article
          • Research
          • Refereed limited

          Conference

          FAccT '22

          Cited By

          • (2024) Mutual Information-Based Fair Active Learning. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4965–4969. https://doi.org/10.1109/ICASSP48485.2024.10446623. Online publication date: 14-Apr-2024.
          • (2024) FAL-CUR. Expert Systems with Applications: An International Journal 242:C. https://doi.org/10.1016/j.eswa.2023.122842. Online publication date: 16-May-2024.
          • (2023) Measures of Disparity and their Efficient Estimation. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 927–938. https://doi.org/10.1145/3600211.3604697. Online publication date: 8-Aug-2023.
          • (2023) Fair Robust Active Learning by Joint Inconsistency. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 3624–3633. https://doi.org/10.1109/ICCVW60793.2023.00390. Online publication date: 2-Oct-2023.
          • (2023) Active learning with fairness-aware clustering for fair classification considering multiple sensitive attributes. Information Sciences: an International Journal 647:C. https://doi.org/10.1016/j.ins.2023.119521. Online publication date: 1-Nov-2023.
          • (2022) Fairness-Aware Active Learning for Decoupled Model. 2022 International Joint Conference on Neural Networks (IJCNN), 1–9. https://doi.org/10.1109/IJCNN55064.2022.9892608. Online publication date: 18-Jul-2022.
