In this paper, we present a nonparametric Bayesian approach to one-hidden-layer feedforward neural networks. Our approach is based on a random selection of the weights of the synapses between the input and the hidden layer neurons, and a Bayesian marginalization over the weights of the connections between the hidden layer neurons and the output neurons, giving rise to a kernel-based nonparametric Bayesian inference procedure for feedforward neural networks. Compared to existing approaches, our method presents a number of advantages, the most significant being: (i) it offers a significant improvement in generalization performance; (ii) being a nonparametric Bayesian learning approach, it entails inference instead of fitting to data, thus resolving the overfitting issues of non-Bayesian approaches; and (iii) it yields a full predictive posterior distribution, thus naturally providing a measure of uncertainty on the generated predictions (expressed by means of the variance of the predictive distribution), without the need to apply computationally intensive methods such as the bootstrap. We exhibit the merits of our approach by applying it to two challenging multimedia content classification tasks, namely content-based semantic characterization of audio scenes and yearly song classification, as well as to a set of benchmark classification and regression tasks.
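To make the general idea concrete, the following is a minimal sketch, not the paper's actual (kernel-based, nonparametric) algorithm: input-to-hidden weights are drawn at random and kept fixed, while the hidden-to-output weights are marginalized out under a Gaussian prior, yielding a closed-form Gaussian predictive posterior whose variance quantifies predictive uncertainty. All function names and the prior/noise precisions (alpha, beta) are illustrative assumptions, not taken from the paper.

```python
# Sketch: random hidden-layer weights + Bayesian marginalization of the
# output weights. Hyperparameters alpha (weight prior precision) and
# beta (noise precision) are assumed values for illustration only.
import numpy as np

def random_hidden_features(X, W, b):
    """Hidden-layer activations with randomly drawn weights W and biases b."""
    return np.tanh(X @ W + b)

def bayesian_output_layer(H, y, alpha=1.0, beta=25.0):
    """Posterior over output weights under a N(0, 1/alpha) Gaussian prior
    and Gaussian observation noise with precision beta."""
    D = H.shape[1]
    S_inv = alpha * np.eye(D) + beta * H.T @ H   # posterior precision
    S = np.linalg.inv(S_inv)                     # posterior covariance
    m = beta * S @ H.T @ y                       # posterior mean
    return m, S

def predict(H_star, m, S, beta=25.0):
    """Gaussian predictive posterior: mean and variance for new inputs."""
    mean = H_star @ m
    var = 1.0 / beta + np.sum((H_star @ S) * H_star, axis=1)
    return mean, var

# Toy regression example.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(50)

W = rng.standard_normal((1, 30))   # random input-to-hidden weights, never trained
b = rng.standard_normal(30)

H = random_hidden_features(X, W, b)
m, S = bayesian_output_layer(H, y)

X_star = np.linspace(-3, 3, 5).reshape(-1, 1)
mean, var = predict(random_hidden_features(X_star, W, b), m, S)
print(mean, np.sqrt(var))          # predictions with uncertainty estimates
```

Note how the predictive variance comes for free from the marginalization, in line with advantage (iii) above: no bootstrap resampling or other computationally intensive procedure is needed to quantify uncertainty.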