
Adversarial Representation Learning with Closed-Form Solvers

  • Conference paper
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12976)


Abstract

Adversarial representation learning aims to learn data representations for a target task while simultaneously removing unwanted sensitive information. Existing methods learn model parameters iteratively through stochastic gradient descent-ascent, which is often unstable and unreliable in practice. To overcome this challenge, we adopt closed-form solvers for the adversary and the target task. We model them as kernel ridge regressors and analytically determine an upper bound on the optimal dimensionality of the representation. Our solution, dubbed OptNet-ARL, reduces to a stable one-shot optimization problem that can be solved reliably and efficiently. OptNet-ARL generalizes easily to the case of multiple target tasks and sensitive attributes. Numerical experiments on both small- and large-scale datasets show that, from an optimization perspective, OptNet-ARL is stable and converges three to five times faster. Performance-wise, when the target and sensitive attributes are dependent, OptNet-ARL learns representations that offer a better trade-off front between (a) utility and bias for fair classification and (b) utility and privacy by mitigating leakage of private information, compared to existing solutions.

Code is available at https://github.com/human-analysis.
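To make the abstract's central idea concrete: a kernel ridge regressor has an analytic, closed-form solution, so the adversary's and the target predictor's best responses to a fixed representation can be computed with a single linear solve instead of an inner gradient-ascent loop. The sketch below is illustrative only; the function names and the choice of an RBF kernel are our assumptions, not taken from the paper (the authors' released code is at the repository above).

```python
import numpy as np

def rbf_kernel(Z1, Z2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of Z1 and Z2."""
    sq_dists = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def fit_kernel_ridge(Z, Y, lam=1e-3, sigma=1.0):
    """Closed-form kernel ridge fit: alpha = (K + lam * I)^{-1} Y.

    One linear solve replaces the iterative inner optimization that
    stochastic gradient descent-ascent would otherwise require.
    """
    K = rbf_kernel(Z, Z, sigma)
    return np.linalg.solve(K + lam * np.eye(len(Z)), Y)

def predict_kernel_ridge(Z_train, Z_test, alpha, sigma=1.0):
    """Predict targets for Z_test from the fitted dual coefficients."""
    return rbf_kernel(Z_test, Z_train, sigma) @ alpha
```

Because both the adversary and the target regressor admit this analytic form, the outer problem over the representation becomes a one-shot optimization rather than an unstable descent-ascent game, which is the stability argument the abstract makes.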


Notes

  1. \(k(\boldsymbol{z}, \boldsymbol{z}^\prime )=\exp \left (-\frac{\Vert \boldsymbol{z}-\boldsymbol{z}^\prime \Vert ^2}{2\sigma ^2}\right )\).

  2. \(k(\boldsymbol{z}, \boldsymbol{z}^\prime )=\frac{1}{\sqrt{\Vert \boldsymbol{z}-\boldsymbol{z}^\prime \Vert ^2+c^2}}\).
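The two kernels defined in the footnotes (the Gaussian kernel and the inverse multiquadric kernel) translate directly into code; this is a minimal sketch, with function names of our choosing:

```python
import numpy as np

def gaussian_kernel(z, z_prime, sigma=1.0):
    """Footnote 1: k(z, z') = exp(-||z - z'||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((z - z_prime) ** 2) / (2.0 * sigma ** 2))

def inverse_multiquadric_kernel(z, z_prime, c=1.0):
    """Footnote 2: k(z, z') = 1 / sqrt(||z - z'||^2 + c^2)."""
    return 1.0 / np.sqrt(np.sum((z - z_prime) ** 2) + c ** 2)
```

Both are radial kernels: they depend on z and z' only through the distance between them, attain their maximum at z = z', and decay as the points move apart.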


Acknowledgements

This work was performed under the following financial assistance award 60NANB18D210 from U.S. Department of Commerce, National Institute of Standards and Technology.

Author information

Correspondence to Vishnu Naresh Boddeti.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Sadeghi, B., Wang, L., Boddeti, V.N. (2021). Adversarial Representation Learning with Closed-Form Solvers. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2021. Lecture Notes in Computer Science, vol 12976. Springer, Cham. https://doi.org/10.1007/978-3-030-86520-7_45


  • DOI: https://doi.org/10.1007/978-3-030-86520-7_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86519-1

  • Online ISBN: 978-3-030-86520-7

  • eBook Packages: Computer Science (R0)
