DOI: 10.1145/3341105.3374090

Constraining deep representations with a noise module for fair classification

Published: 30 March 2020

Abstract

The recent surge of interest in Deep Learning, motivated by its exceptional performance on longstanding problems, has made Neural Networks a very appealing tool for many actors in our society. One issue raised by this shift is that Neural Networks are very opaque objects, and it is often hard to make sense of their predictions.
In this context, research efforts have focused on building fair representations of data, which display little to no correlation with a sensitive attribute s. In this paper we build on a domain adaptation neural model by augmenting it with a "noise conditioning" mechanism, which we show is instrumental in obtaining fair (i.e. uncorrelated with s) representations. We provide experiments on standard datasets showing the effectiveness of the noise conditioning mechanism in helping the network ignore the sensitive attribute.
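The abstract only sketches the method at a high level; the snippet below is a minimal PyTorch sketch of the general pattern it describes, not the authors' implementation: a domain-adversarial encoder/classifier in the style of Ganin & Lempitsky's gradient-reversal model, whose adversarial head tries to predict the sensitive attribute s, and whose encoder is additionally conditioned on a random noise vector. The layer sizes, the concatenation-based noise injection, and the single training step are all illustrative assumptions.

import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    # Gradient reversal: identity on the forward pass, multiplies the gradient by -lambd on the way back.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class FairNoiseNet(nn.Module):
    # Hypothetical architecture: a noise-conditioned encoder, a task head for the label y,
    # and an adversarial head (behind gradient reversal) for the sensitive attribute s.
    def __init__(self, in_dim, noise_dim=8, hidden=64, repr_dim=32):
        super().__init__()
        self.noise_dim = noise_dim
        self.encoder = nn.Sequential(
            nn.Linear(in_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, repr_dim), nn.ReLU(),
        )
        self.task_head = nn.Linear(repr_dim, 2)   # predicts the target label y
        self.adv_head = nn.Linear(repr_dim, 2)    # tries to recover s from the representation

    def forward(self, x, lambd=1.0):
        # "Noise conditioning" here is simply concatenating a fresh Gaussian vector to the input.
        z = torch.randn(x.size(0), self.noise_dim, device=x.device)
        h = self.encoder(torch.cat([x, z], dim=1))
        return self.task_head(h), self.adv_head(GradReverse.apply(h, lambd))

# One illustrative training step on random data.
model = FairNoiseNet(in_dim=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y, s = torch.randn(16, 20), torch.randint(0, 2, (16,)), torch.randint(0, 2, (16,))
y_logits, s_logits = model(x)
loss = nn.functional.cross_entropy(y_logits, y) + nn.functional.cross_entropy(s_logits, s)
opt.zero_grad()
loss.backward()
opt.step()

Because the adversary's gradient is negated before it reaches the encoder, minimising the adversary's loss over its own weights simultaneously pushes the representation away from anything predictive of s, while the noise input gives the encoder an extra source of variation with which to decorrelate the representation from s.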

References

[1]
Ganin, Y., Lempitsky, V.: Unsupervised Domain Adaptation by Backpropagation. In: Proceedings of the 32nd International Conference on Machine Learning (2015)
[2]
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.: Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17(1), 2096--2030 (2016)
[3]
Julia Angwin, Jeff Larson, S.M., Kirchner, L.: Machine Bias (2016)
[4]
LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436 (2015)
[5]
Louizos, C., Swersky, K., Li, Y., Welling, M., Zemel, R.S.: The Variational Fair Autoencoder. CoRR abs/1511.00830 (2015)
[6]
Newman, C.B.D., Merz, C.: UCI repository of machine learning databases (1998), http://archive.ics.uci.edu/ml/index.php
[7]
Nitin Mittal, David Kuder, S.H.: AI-fueled organizations: Reaching AI's full potential in the enterprise. Deloitte Insights (January 2019)
[8]
Xie, Q., Dai, Z., Du, Y., Hovy, E., Neubig, G.: Controllable invariance through adversarial feature learning. In: Advances in Neural Information Processing Systems 30, pp. 585--596 (2017)
[9]
Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web (2017)
[10]
Zemel, R., Wu, Y., Swersky, K., Pitassi, T., Dwork, C.: Learning fair representations. In: International Conference on Machine Learning. pp. 325--333 (2013)

Cited By

  • Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods. Big Data and Cognitive Computing 7(1), 15 (2023). DOI: 10.3390/bdcc7010015
  • Fair pairwise learning to rank. In: 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), pp. 729-738 (2020). DOI: 10.1109/DSAA49011.2020.00083


      Published In

      SAC '20: Proceedings of the 35th Annual ACM Symposium on Applied Computing
      March 2020
      2348 pages
      ISBN:9781450368667
      DOI:10.1145/3341105
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 30 March 2020


      Qualifiers

      • Poster

      Conference

SAC '20: The 35th ACM/SIGAPP Symposium on Applied Computing
      March 30 - April 3, 2020
      Brno, Czech Republic

      Acceptance Rates

      Overall Acceptance Rate 1,650 of 6,669 submissions, 25%


