We have presented a scalable, distributed algorithm based on Hessian-free optimization for state-level minimum Bayes risk training of deep neural network ...
... The objective of this paper is to explore speeding up DNN training using second-order optimization, which lends itself readily to parallelization, on BG/Q. We ...
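To make the second-order idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the core of Hessian-free optimization: a truncated-Newton method whose inner conjugate-gradient (CG) loop needs only Hessian-vector products, never the explicit Hessian. In the distributed setting described above, the gradient and curvature-product computations would be sharded across workers and reduced; here everything runs in one process on a toy quadratic objective.

```python
# Sketch of Hessian-free (truncated-Newton) optimization.
# Assumptions: the objective is smooth; Hessian-vector products are
# approximated by central finite differences of the gradient.
import numpy as np

def hessian_vector_product(grad_fn, w, v, eps=1e-6):
    """Approximate H @ v via finite differences of the gradient."""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)

def conjugate_gradient(hvp, b, iters=50, tol=1e-10):
    """Solve H x = b using only Hessian-vector products (CG)."""
    x = np.zeros_like(b)
    r = b.copy()          # residual b - H x (x starts at zero)
    d = r.copy()          # search direction
    rs = r @ r
    for _ in range(iters):
        Hd = hvp(d)
        alpha = rs / (d @ Hd)
        x += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

def hessian_free_minimize(grad_fn, w0, outer_iters=10):
    """Outer loop: each step solves H p = -g approximately with CG."""
    w = w0.copy()
    for _ in range(outer_iters):
        g = grad_fn(w)
        if np.linalg.norm(g) < 1e-6:   # converged; avoid degenerate CG
            break
        hvp = lambda v: hessian_vector_product(grad_fn, w, v)
        p = conjugate_gradient(hvp, -g)
        w = w + p
    return w

# Toy strictly convex quadratic standing in for the training loss:
# loss(w) = 0.5 * w' A w - b' w, so grad(w) = A w - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda w: A @ w - b

w_star = hessian_free_minimize(grad, np.zeros(2))
```

Because the toy objective is quadratic, a single Newton step solved by CG already lands on the minimizer; on a real DNN loss the CG solve is truncated early and repeated, which is what makes the method practical at scale.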
Dec 1, 2012 · Scalable minimum Bayes risk training of deep neural network acoustic models using distributed Hessian-free optimization, for INTERSPEECH 2012 ...
Scalable Minimum Bayes Risk Training of Deep Neural Network Acoustic Models Using Distributed Hessian-free Optimization. B. Kingsbury, T. N. Sainath, H. Soltau.
Scalable minimum Bayes risk training of deep neural network acoustic models using distributed Hessian-free optimization. In INTERSPEECH. ISCA, 2012.
Abstract. This paper presents a novel natural gradient and Hessian-free (NGHF) optimisation framework for neural network training that can operate efficiently ...