Parallel attention mechanisms in neural machine translation

JR Medina, J Kalita - 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018 - ieeexplore.ieee.org
Recent papers in neural machine translation have proposed the strict use of attention mechanisms over previous standards such as recurrent and convolutional neural networks (RNNs and CNNs). We propose that by running the traditionally stacked encoding branches of encoder-decoder attention-focused architectures in parallel, even more sequential operations can be removed from the model, thereby decreasing training time. In particular, we modify the recently published attention-based architecture called the Transformer by Google, replacing sequential attention modules with parallel ones, reducing training time and substantially improving BLEU scores at the same time. Experiments on the English-to-German and English-to-French translation tasks show that our model establishes a new state of the art.
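
The abstract only describes the modification at a high level. The following Python sketch (using PyTorch) illustrates the general idea of replacing a stack of sequential self-attention blocks with independent branches that all attend to the same input; the class names, residual connections, and the mean combination of branch outputs are assumptions made for illustration, not the authors' published implementation.

import torch
import torch.nn as nn


class StackedEncoder(nn.Module):
    """Baseline: self-attention blocks applied one after another, as in the original Transformer."""

    def __init__(self, d_model: int, n_heads: int, n_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True) for _ in range(n_layers)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for attn in self.layers:
            out, _ = attn(x, x, x)  # each block must wait for the previous one
            x = x + out             # residual connection
        return x


class ParallelEncoder(nn.Module):
    """Sketch of the parallel variant: branches attend to the same input independently."""

    def __init__(self, d_model: int, n_heads: int, n_branches: int):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True) for _ in range(n_branches)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No branch depends on another, so the branches can run concurrently.
        outs = [attn(x, x, x)[0] for attn in self.branches]
        return x + torch.stack(outs).mean(dim=0)  # assumed combination: mean over branches


if __name__ == "__main__":
    x = torch.randn(2, 10, 64)                 # (batch, sequence length, model dimension)
    print(ParallelEncoder(64, 8, 4)(x).shape)  # torch.Size([2, 10, 64])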