May 26, 2017 · Abstract: For conversational large-vocabulary continuous speech recognition (LVCSR) tasks, up to about two thousand hours of audio is ...
Utilizing the colossal scale of our unlabeled telephony dataset, this work proposes a technique to construct a modern, high-quality conversational speech training corpus on the order of hundreds of millions of utterances ...
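The corpus construction described above relies on semi-supervised labeling of unlabeled audio. A minimal self-training (pseudo-labeling) sketch of that general idea follows: a seed model labels the unlabeled pool, and only confident automatic labels are kept for retraining. The toy 1-D features, class names, and the threshold `tau` are illustrative assumptions, not the paper's actual setup.

```python
def fit(xs, ys):
    # Class means of 1-D features serve as a stand-in for a trained model.
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(y, []).append(x)
    return {y: sum(v) / len(v) for y, v in groups.items()}

def confident_label(model, x, tau):
    # Accept a pseudo-label only when the margin between the two
    # nearest class means exceeds tau; otherwise discard the example.
    ranked = sorted((abs(x - m), y) for y, m in model.items())
    if len(ranked) > 1 and ranked[1][0] - ranked[0][0] < tau:
        return None
    return ranked[0][1]

# Seed model from a small labeled set.
labeled_x, labeled_y = [0.0, 1.0], ["a", "b"]
model = fit(labeled_x, labeled_y)

# Pseudo-label the unlabeled pool and keep confident examples only.
unlabeled = [0.05, 0.55, 0.95]
for x in unlabeled:
    y = confident_label(model, x, tau=0.5)
    if y is not None:
        labeled_x.append(x)
        labeled_y.append(y)

model = fit(labeled_x, labeled_y)  # retrain on the enlarged corpus
```

The ambiguous point (0.55) is rejected by the margin test, so only confidently labeled audio enters the enlarged training corpus.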
Aug 9, 2023 · Co-training: train two separate models on different views or subsets of the features in the labeled data, then use each model to label unlabeled data for ...
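The co-training recipe above can be sketched with toy data: two "views" of each example are given to two independent nearest-centroid classifiers, and each model's confident predictions label unlabeled points for the other. The 1-D features, the margin-based confidence, and the threshold `tau` are hypothetical choices for illustration, not a specific paper's method.

```python
def train_centroids(examples, labels):
    # Compute one centroid per class from 1-D feature values.
    sums, counts = {}, {}
    for x, y in zip(examples, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_with_margin(centroids, x):
    # Return (label, confidence): confidence is the gap between the
    # nearest and second-nearest class centroid.
    ranked = sorted(centroids, key=lambda y: abs(x - centroids[y]))
    best = ranked[0]
    margin = (abs(x - centroids[ranked[1]]) - abs(x - centroids[best])
              if len(ranked) > 1 else float("inf"))
    return best, margin

def co_train(view_a, view_b, labels, unlabeled_a, unlabeled_b,
             rounds=3, tau=0.5):
    # Each view keeps its own growing training set, seeded with the
    # shared labels of the labeled pool.
    la, ya = list(view_a), list(labels)
    lb, yb = list(view_b), list(labels)
    pool = list(range(len(unlabeled_a)))
    for _ in range(rounds):
        model_a = train_centroids(la, ya)
        model_b = train_centroids(lb, yb)
        remaining = []
        for i in pool:
            pa, ca = predict_with_margin(model_a, unlabeled_a[i])
            pb, cb = predict_with_margin(model_b, unlabeled_b[i])
            if ca >= tau:        # model A confidently labels for model B
                lb.append(unlabeled_b[i]); yb.append(pa)
            elif cb >= tau:      # model B confidently labels for model A
                la.append(unlabeled_a[i]); ya.append(pb)
            else:
                remaining.append(i)
        pool = remaining
    return train_centroids(la, ya), train_centroids(lb, yb)

# Tiny labeled pool plus two unlabeled points seen in both views.
model_a, model_b = co_train(
    view_a=[0.0, 1.0], view_b=[10.0, 20.0], labels=["x", "y"],
    unlabeled_a=[0.1, 0.9], unlabeled_b=[11.0, 19.0])
```

Here model A is confident on both unlabeled points, so its predictions expand model B's training set; in richer settings the exchange runs in both directions across rounds.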
In this paper, we explore various approaches for semi-supervised learning in an end-to-end automatic speech recognition (ASR) framework.
S. Walker et al., "Semi-supervised model training for unbounded conversational speech recognition," arXiv, 2017. [25] V. Manohar et al., "Semi-supervised ...
The proposed approach significantly reduces ASR errors compared to the baseline model, and assumes that the diversity of automatically generated transcripts ...
Semi-supervised model training for unbounded conversational speech recognition. S Walker, M Pedersen, I Orife, J Flaks. arXiv preprint arXiv:1705.09724, 2017.
Dec 19, 2023 · Jason Flaks shares his experience developing conversational AI and NLP products and solving the challenges that come with them.