Assumed Density Filtering Methods for Learning Bayesian Neural Networks

Authors

  • Soumya Ghosh, Disney Research
  • Francesco Delle Fave, Disney Research
  • Jonathan Yedidia, Disney Research

DOI

https://doi.org/10.1609/aaai.v30i1.10296

Abstract

Buoyed by the success of deep multilayer neural networks, there is renewed interest in scalable learning of Bayesian neural networks. Here, we study algorithms that use recent advances in Bayesian inference to efficiently learn distributions over network weights. In particular, we focus on recently proposed methods for learning Bayesian neural networks based on assumed density filtering (ADF): Expectation Backpropagation (EBP) and Probabilistic Backpropagation (PBP). Apart from scaling to large datasets, these techniques deal seamlessly with non-differentiable activation functions and learn without tunable hyperparameters such as learning rates or momentum terms. In this paper, we first rigorously compare the two algorithms and, in the process, develop several extensions, including a version of EBP for continuous regression problems and a PBP variant for binary classification. Next, we extend both algorithms to handle multiclass classification and count regression problems. On a variety of diverse real-world benchmarks, we find our extensions to be effective, achieving results competitive with the state of the art.
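
To make the ADF idea behind both algorithms concrete, the sketch below applies the standard Gaussian moment-matching update (driven by the derivatives of log Z, the log marginal likelihood of a new observation) to the simplest possible case: a single weight with a linear-Gaussian likelihood, where the update happens to be exact. This is a minimal illustration of the mechanism, not the paper's EBP or PBP algorithm; the function name `adf_update`, the noise level, and the toy data are our own assumptions.

```python
def adf_update(m, v, x, y, noise_var):
    """One assumed density filtering step for a single Gaussian weight.

    Prior: w ~ N(m, v). Observation model: y ~ N(w * x, noise_var).
    ADF moment-matches the updated posterior using the derivatives of
    log Z, the log marginal likelihood of the new observation.
    """
    # Marginal of y with w integrated out: N(y | m * x, s).
    s = v * x ** 2 + noise_var
    dlogz_dm = x * (y - m * x) / s
    dlogz_dv = x ** 2 * (-0.5 / s + (y - m * x) ** 2 / (2.0 * s ** 2))

    # Gaussian moment-matching identities for the new mean and variance.
    m_new = m + v * dlogz_dm
    v_new = v - v ** 2 * (dlogz_dm ** 2 - 2.0 * dlogz_dv)
    return m_new, v_new


# Process observations one at a time, as a filtering method does.
m, v = 0.0, 1.0                      # prior N(0, 1) over the weight
noise_var = 0.25                     # assumed observation noise
for x, y in [(0.5, 1.1), (1.0, 2.2), (1.5, 2.9), (2.0, 4.1)]:
    m, v = adf_update(m, v, x, y, noise_var)
print(f"posterior over w: mean = {m:.3f}, variance = {v:.4f}")
```

In EBP and PBP the same two moment-matching identities drive the weight updates, but log Z is no longer available in closed form: it is approximated by propagating means and variances through the layers of the network.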

Published

2016-02-21

How to Cite

Ghosh, S., Delle Fave, F., & Yedidia, J. (2016). Assumed Density Filtering Methods for Learning Bayesian Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10296

Issue

Vol. 30 No. 1 (2016)

Section

Technical Papers: Machine Learning Methods