The recently proposed context-dependent deep neural network hidden Markov models (CD-DNN-HMMs) have proven highly promising for large vocabulary speech recognition.
Evaluation on Switchboard tasks indicates that DTNNs can outperform the already high-performing DNNs with 4–5% and 3% relative word error reduction, ...
In this paper, we extend DNNs to deep tensor neural networks (DTNNs) in which one or more layers are double-projection and tensor layers. The basic idea of the ...
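The snippet above names the key architectural idea, a double-projection layer whose two outputs are combined through a tensor (bilinear) product, but gives no detail. As a rough, minimal sketch only, assuming sigmoid activations and illustrative weight names (W1, b1, W2, b2) that are not taken from the cited paper, one such layer could look like this in Python:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def double_projection_tensor_layer(v, W1, b1, W2, b2):
    # Project the previous layer's output v into two smaller subspaces.
    h1 = sigmoid(W1 @ v + b1)
    h2 = sigmoid(W2 @ v + b2)
    # The vectorized outer (tensor) product of the two projections is what
    # the next layer consumes, capturing bilinear interactions between them.
    return np.outer(h1, h2).ravel()

# Toy usage with made-up dimensions.
rng = np.random.default_rng(0)
v = rng.standard_normal(128)
W1, b1 = 0.1 * rng.standard_normal((32, 128)), np.zeros(32)
W2, b2 = 0.1 * rng.standard_normal((32, 128)), np.zeros(32)
out = double_projection_tensor_layer(v, W1, b1, W2, b2)  # shape (1024,)

The outer-product formulation is one common way to realize a tensor layer; whether it matches the exact parameterization used in the paper is an assumption here.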
In this paper we report results of a DBN-pretrained context-dependent ANN/HMM system trained on two datasets that are much larger than any reported previously.
Recently, we proposed and developed the context-dependent deep neural network hidden Markov models (CD-DNN-HMMs) for large vocabulary speech recognition and ...
Dong Y. et al. [16] propose the deep tensor neural network (DTNN) for large vocabulary continuous speech recognition (LVCSR), which has achieved good ...
Deep neural network (DNN) acoustic models have driven tremendous improvements in large vocabulary continuous speech recognition (LVCSR) in recent years.
I would like to build a language model for CMU Sphinx, but my corpus has more than 1000 words so I cannot use the online tool.
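The question above asks how to build a language model offline once the corpus outgrows the online tool. As a generic starting point only, not the CMU Sphinx toolchain itself (which the snippet does not describe), the sketch below counts word n-grams from a plain-text corpus; turning such counts into an ARPA-format model that Sphinx can load still requires a language-modeling toolkit.

from collections import Counter

def ngram_counts(corpus_path, n=2):
    # Count word n-grams in a plain-text corpus, one sentence per line,
    # padding each sentence with start/end markers.
    counts = Counter()
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            tokens = ["<s>"] + line.split() + ["</s>"]
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    return counts

# Hypothetical usage; "corpus.txt" is a placeholder path.
# trigrams = ngram_counts("corpus.txt", n=3)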