Unsupervised domain adaptation of machine translation, which adapts a pre-trained translation model to a specific domain without in-domain parallel data, has drawn extensive attention in recent years. However, most existing methods focus on fine-tuning-based techniques, which are not extensible. In this paper, we propose a new method that performs unsupervised domain adaptation in a non-parametric manner. Our method resorts only to in-domain monolingual data, and we jointly perform nearest neighbour inference in both the forward and backward translation directions. The forward translation model creates a nearest neighbour datastore for the backward direction, and vice versa, so the two strengthen each other in an iterative fashion. Experiments on multi-domain datasets demonstrate that our method significantly improves in-domain translation performance and achieves state-of-the-art results among non-parametric methods.
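To make the non-parametric inference concrete, below is a minimal sketch, in plain NumPy, of the kNN-style interpolation that such methods build on: decoder hidden states are matched against a datastore of (hidden state, target token) pairs built from in-domain text, and the retrieved neighbours are mixed with the model's own next-token distribution. The names (`knn_probs`, `temperature`, `lambda_knn`) and the brute-force search are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of kNN-MT-style nearest neighbour inference (illustrative only).
import numpy as np

def knn_probs(query, keys, values, vocab_size, k=8, temperature=10.0):
    """Turn the k nearest datastore entries into a distribution over the vocab.

    query:  (d,)   decoder hidden state at the current step
    keys:   (N, d) datastore hidden states
    values: (N,)   target-token ids paired with each key
    """
    dists = np.linalg.norm(keys - query, axis=1)   # L2 distance to every key
    idx = np.argsort(dists)[:k]                    # k nearest neighbours
    weights = np.exp(-dists[idx] / temperature)    # softer with higher temperature
    weights /= weights.sum()
    probs = np.zeros(vocab_size)
    for w, v in zip(weights, values[idx]):
        probs[v] += w                              # aggregate weight per target token
    return probs

def interpolate(model_probs, knn_p, lambda_knn=0.5):
    """Final next-token distribution: mix model and retrieval distributions."""
    return (1.0 - lambda_knn) * model_probs + lambda_knn * knn_p
```

In the iterative scheme the abstract describes, the datastore for one translation direction would be populated from outputs of the model in the opposite direction, then refreshed as that model improves.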
Previous work on multimodal machine translation (MMT) has focused on how to incorporate vision features into translation, but little attention has been paid to the quality of the vision models themselves. In this work, we investigate the impact of vision models on MMT. Given that Transformers are becoming popular in computer vision, we experiment with various strong models (such as the Vision Transformer) and enhanced features (such as object detection and image captioning). We develop a selective attention model to study the patch-level contribution of an image to MMT. Through detailed probing tasks, we find that stronger vision models are helpful for learning translation from the visual modality. Our results also suggest the need to carefully examine MMT models, especially since current benchmarks are small-scale and biased.
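A rough sketch of what such a selective attention layer could look like in PyTorch follows; the shapes, the gating mechanism, and all parameter names are assumptions for illustration, not the paper's code. Each text state attends over Vision Transformer patch features, and a learned gate decides how much visual context to mix in.

```python
# Illustrative sketch of selective attention over image patches (not the paper's code).
import torch
import torch.nn as nn

class SelectiveAttention(nn.Module):
    def __init__(self, d_model=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.gate = nn.Linear(2 * d_model, 1)

    def forward(self, text, patches):
        # text:    (batch, src_len, d_model)   textual hidden states
        # patches: (batch, n_patches, d_model) ViT patch features
        vision, _ = self.attn(query=text, key=patches, value=patches)
        g = torch.sigmoid(self.gate(torch.cat([text, vision], dim=-1)))
        return text + g * vision   # gated fusion of textual and visual context
```

The per-position gate is what makes the attention "selective": words that need visual grounding can draw on patch features while others pass through nearly unchanged.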
2021
The NiuTrans Machine Translation Systems for WMT21
Shuhan Zhou | Tao Zhou | Binghao Wei | Yingfeng Luo | Yongyu Mu | Zefan Zhou | Chenglong Wang | Xuanjun Zhou | Chuanhao Lv | Yi Jing | Laohu Wang | Jingnan Zhang | Canan Huang | Zhongxiang Yan | Chi Hu | Bei Li | Tong Xiao | Jingbo Zhu
Proceedings of the Sixth Conference on Machine Translation
This paper describes the NiuTrans neural machine translation systems for the WMT 2021 news translation tasks. We made submissions in 9 language directions, including English↔{Chinese, Japanese, Russian, Icelandic} and English→Hausa. Our primary systems are built on several effective Transformer variants, e.g., Transformer-DLCL and the ODE-Transformer. We also use back-translation, knowledge distillation, post-ensemble, and iterative fine-tuning to further enhance model performance.
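For context, ensemble decoding, one of the techniques listed above, can be summarised in a few lines. This is a generic sketch under the usual formulation (uniform averaging of next-token distributions), not the NiuTrans system's code; `models` and the greedy choice are illustrative simplifications.

```python
# Generic sketch of ensemble decoding (illustrative, not the submitted system).
import numpy as np

def ensemble_step(models, prefix):
    """models: callables mapping a target prefix to a next-token distribution."""
    probs = np.mean([m(prefix) for m in models], axis=0)  # uniform model average
    return int(np.argmax(probs))                           # greedy choice, for brevity
```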
2020
The NiuTrans System for the WMT20 Quality Estimation Shared Task
Chi Hu | Hui Liu | Kai Feng | Chen Xu | Nuo Xu | Zefan Zhou | Shiqin Yan | Yingfeng Luo | Chenglong Wang | Xia Meng | Tong Xiao | Jingbo Zhu
Proceedings of the Fifth Conference on Machine Translation
This paper describes the submissions of the NiuTrans Team to the WMT 2020 Quality Estimation Shared Task. We participated in all tasks and all language pairs, exploring combinations of transfer learning, multi-task learning, and model ensembling. Results on multiple tasks show that deep Transformer machine translation models and multilingual pretraining methods significantly improve translation quality estimation performance. Our systems achieved remarkable results on tasks at multiple levels; e.g., our submissions obtained the best results on all tracks of the sentence-level Direct Assessment task.
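As an illustration of the sentence-level Direct Assessment setting, a hedged sketch of a QE regression head in PyTorch is given below; the pooling scheme, dimensions, and class name are assumptions, not the system's actual architecture. Pooled features from a pretrained multilingual encoder are mapped to a scalar DA score, and further heads could be attached for multi-task learning on word- or document-level QE.

```python
# Hedged sketch of sentence-level quality estimation as regression (illustrative).
import torch
import torch.nn as nn

class SentenceQE(nn.Module):
    def __init__(self, d_model=768):
        super().__init__()
        self.score_head = nn.Sequential(nn.Linear(d_model, 256), nn.Tanh(),
                                        nn.Linear(256, 1))

    def forward(self, encoder_states, mask):
        # encoder_states: (batch, len, d_model) from a pretrained encoder
        # mask:           (batch, len) float, 1.0 for real tokens, 0.0 for padding
        denom = mask.sum(dim=1, keepdim=True).clamp(min=1.0)
        pooled = (encoder_states * mask.unsqueeze(-1)).sum(dim=1) / denom  # mean pool
        return self.score_head(pooled).squeeze(-1)  # predicted DA score per sentence
```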