
Unsupervised Training for Large Vocabulary Translation Using Sparse Lexicon and Word Classes

Yunsu Kim, Julian Schamper, Hermann Ney


Abstract
We address for the first time unsupervised training for a translation task with hundreds of thousands of vocabulary words. We scale up the expectation-maximization (EM) algorithm to learn a large translation table without any parallel text or seed lexicon. First, we solve the memory bottleneck and enforce the sparsity with a simple thresholding scheme for the lexicon. Second, we initialize the lexicon training with word classes, which efficiently boosts the performance. Our methods produced promising results on two large-scale unsupervised translation tasks.
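The following is a minimal sketch, not the authors' implementation, of the kind of lexicon thresholding the abstract describes: after each EM iteration the translation table is renormalized and entries whose probability falls below a cutoff are dropped, which both bounds memory use and enforces sparsity. The cutoff value, data layout, and function name are illustrative assumptions.

from collections import defaultdict

def renormalize_and_prune(expected_counts, threshold=1e-4):
    # expected_counts: dict mapping (f, e) -> expected count from the E-step
    # (in the unsupervised setting these come from monolingual data, not bitext).
    # Returns a pruned lexicon dict mapping (f, e) -> p(f|e).
    totals = defaultdict(float)
    for (f, e), c in expected_counts.items():
        totals[e] += c

    lexicon = {}
    for (f, e), c in expected_counts.items():
        p = c / totals[e]
        if p >= threshold:  # enforce sparsity: drop low-probability entries
            lexicon[(f, e)] = p
    return lexicon

In such a scheme, only the surviving entries need to be stored between iterations, which is what makes EM training feasible for a vocabulary of hundreds of thousands of words.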
Anthology ID:
E17-2103
Volume:
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Month:
April
Year:
2017
Address:
Valencia, Spain
Editors:
Mirella Lapata, Phil Blunsom, Alexander Koller
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
650–656
URL:
https://aclanthology.org/E17-2103
Cite (ACL):
Yunsu Kim, Julian Schamper, and Hermann Ney. 2017. Unsupervised Training for Large Vocabulary Translation Using Sparse Lexicon and Word Classes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 650–656, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Training for Large Vocabulary Translation Using Sparse Lexicon and Word Classes (Kim et al., EACL 2017)
PDF:
https://aclanthology.org/E17-2103.pdf