Feb 12, 2021 · We first develop custom sequence loss training approaches to make use of non-differentiable, arbitrary risk values or losses on the entire sequence of outputs. We show that custom sequence loss training achieves state-of-the-art results on open SLU datasets and leads to a 6% relative improvement in both ASR and NLU performance.
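The core idea in the abstract — optimizing a risk defined on the whole output sequence that cannot be differentiated token-by-token — is commonly realized with REINFORCE-style (score-function) training. Below is a minimal sketch under that assumption; the toy per-position categorical model and all names are illustrative, not the paper's implementation.

```python
# Hedged sketch, not the paper's code: REINFORCE-style training, one
# standard way to optimize a NON-differentiable sequence-level risk
# (here, edit distance against a reference transcript).
import math
import random

def edit_distance(a, b):
    """Levenshtein distance: a non-differentiable whole-sequence risk."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            cur = min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
            prev, dp[j] = dp[j], cur
    return dp[-1]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(probs, rng):
    r, acc = rng.random(), 0.0
    for k, p in enumerate(probs):
        acc += p
        if r <= acc:
            return k
    return len(probs) - 1  # guard against float rounding

def reinforce_step(logits, reference, vocab, rng, lr=0.5, n_samples=32):
    """One update: sample sequences, score each with the non-differentiable
    risk, and move the logits toward low-risk samples (reward = -risk)."""
    samples = [[sample_token(softmax(pos), rng) for pos in logits]
               for _ in range(n_samples)]
    rewards = [-edit_distance([vocab[t] for t in s], reference)
               for s in samples]
    baseline = sum(rewards) / len(rewards)  # variance-reduction baseline
    for seq, rew in zip(samples, rewards):
        adv = rew - baseline
        for pos, tok in enumerate(seq):
            probs = softmax(logits[pos])
            for k in range(len(vocab)):
                # d log p(tok) / d logit_k for a softmax categorical
                grad = (1.0 if k == tok else 0.0) - probs[k]
                logits[pos][k] += lr * adv * grad / n_samples
    return baseline
```

Training a three-position toy model against a reference sequence for a few hundred steps drives the greedy decode toward the reference, even though the edit-distance risk itself is never differentiated — only the log-probabilities of sampled sequences are.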
Do as I Mean, Not as I Say: Sequence Loss Training for Spoken Language Understanding
M. Rao, P. Dheram, G. Tiwari, A. Raju, J. Droppo, A. Rastrow, and A. Stolcke. In ICASSP 2021, IEEE International Conference on Acoustics, Speech and Signal Processing. 12 Feb 2021. No code implementations available.