Dec 9, 2022 · The main contributions of this paper include a hot-start-based transfer learning framework. Building on transfer learning, we ...
Specifically, we iteratively perform bidirectional knowledge transfer between the translation model and the text style transfer model through knowledge distillation.
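The snippet above describes mutual (bidirectional) knowledge distillation. Below is a minimal PyTorch sketch of one such round, assuming two models that map a batch to per-token logits over a shared vocabulary; `model_a`, `model_b`, and the optimizers are hypothetical placeholders, not the paper's actual architecture or training loop:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label KL divergence between teacher and student, scaled by T^2."""
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T

def bidirectional_round(model_a, model_b, batch, opt_a, opt_b):
    """One iteration of mutual distillation: each model is updated toward
    the other's frozen predictions on the same batch."""
    with torch.no_grad():                      # B teaches A
        teacher_logits = model_b(batch)
    loss_a = distillation_loss(model_a(batch), teacher_logits)
    opt_a.zero_grad()
    loss_a.backward()
    opt_a.step()

    with torch.no_grad():                      # A teaches B
        teacher_logits = model_a(batch)
    loss_b = distillation_loss(model_b(batch), teacher_logits)
    opt_b.zero_grad()
    loss_b.backward()
    opt_b.step()
```

Freezing the teacher with `torch.no_grad()` in each direction is what makes the transfer alternate rather than collapse into joint training.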
TL;DR: This work proposes a simple solution for using a single Neural Machine Translation (NMT) model to translate between multiple languages via a shared ...
Hot-Start Transfer Learning Combined with Approximate Distillation for Mongolian-Chinese Neural Machine Translation. P. Wang, H. Hou, S. Sun, N. Wu, W. Jian ...
When parallel training data is scarce, neural machine translation quality degrades. For low-resource neural machine translation (NMT), transfer learning is a natural remedy.
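In this setting, a hot start usually means initializing the low-resource (child) model from a trained high-resource (parent) model before fine-tuning. A minimal sketch under that assumption, where the checkpoint is presumed to store a plain `state_dict` and the name-and-shape matching rule is illustrative rather than taken from the paper:

```python
import torch

def hot_start_init(child_model, parent_ckpt_path):
    """Warm-start a low-resource (child) NMT model from a high-resource
    (parent) checkpoint: copy every parameter whose name and shape match;
    anything else (e.g. embeddings over a different vocabulary) keeps its
    fresh initialization."""
    parent_state = torch.load(parent_ckpt_path, map_location="cpu")
    child_state = child_model.state_dict()
    transferred = {
        name: tensor
        for name, tensor in parent_state.items()
        if name in child_state and child_state[name].shape == tensor.shape
    }
    child_state.update(transferred)
    child_model.load_state_dict(child_state)
    return sorted(transferred)  # names of the warm-started parameters
```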
Jun 21, 2024 · Hot-Start Transfer Learning Combined with Approximate Distillation for Mongolian-Chinese Neural Machine Translation. In Machine Translation ...
Nov 2, 2023 · Comparing the algorithm in this paper against pre-training alone and pre-training combined with fine-tuning, the BLEU of pre-training alone is only ...
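BLEU comparisons like the one quoted above are conventionally computed with the sacrebleu library; here is a toy sketch with hypothetical stand-in sentences (not the paper's test set or scores):

```python
import sacrebleu  # pip install sacrebleu

# Hypothetical stand-ins for decoded test-set translations.
references = [["the cat sat on the mat"]]           # one list per reference set
systems = {
    "pre-training only": ["a cat sat on mat"],
    "pre-training + fine-tuning": ["the cat sat on the mat"],
}

for name, hypotheses in systems.items():
    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"{name}: BLEU = {bleu.score:.2f}")
```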
- Hot-Start Transfer Learning Combined with Approximate Distillation for Mongolian-Chinese Neural Machine Translation
- Review-based Curriculum Learning ...