

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

Mike Lewis*, Yinhan Liu*, Naman Goyal*, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer
Facebook AI
{mikelewis,yinhanliu,naman}@fb.com

arXiv:1910.13461v1 [cs.CL] 29 Oct 2019

Abstract

We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.

1 Introduction

Self-supervised methods have achieved remarkable success in a wide range of NLP tasks (Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2019; Joshi et al., 2019; Yang et al., 2019; Liu et al., 2019). The most successful approaches have been variants of masked language models, which are denoising autoencoders that are trained to reconstruct text where a random subset of the words has been masked out. Recent work has shown gains by improving the distribution of masked tokens (Joshi et al., 2019), the order in which masked tokens are predicted (Yang et al., 2019), and the available context for replacing masked tokens (Dong et al., 2019). However, these methods typically focus on particular types of end tasks (e.g. span prediction, generation, etc.), limiting their applicability.

In this paper, we present BART, which pre-trains a model combining Bidirectional and Auto-Regressive Transformers. BART is a denoising autoencoder built with a sequence-to-sequence model that is applicable to a very wide range of end tasks. Pretraining has two stages: (1) text is corrupted with an arbitrary noising function, and (2) a sequence-to-sequence model is learned to reconstruct the original text. BART uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes (see Figure 1).

A key advantage of this setup is the noising flexibility; arbitrary transformations can be applied to the original text, including changing its length. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where arbitrary length spans of text (including zero length) are replaced with a single mask token. This approach generalizes the original word masking and next sentence prediction objectives in BERT by forcing the model to reason more about overall sentence length and make longer range transformations to the input.

BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa (Liu et al., 2019) with comparable training resources on GLUE (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016), and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks. For example, it improves performance by 6 ROUGE over previous work on XSum (Narayan et al., 2018).

BART also opens up new ways of thinking about fine-tuning. We present a new scheme for machine translation where a BART model is stacked above a few additional transformer layers. These layers are trained to essentially translate the foreign language to noised English, by propagation through BART, thereby using BART as a pre-trained target-side language model. This approach improves performance over a strong back-translation MT baseline by 1.1 BLEU on the WMT Romanian-English benchmark.

To better understand these effects, we also report an ablation analysis that replicates other recently proposed training objectives. This study allows us to carefully control for a number of factors, including data and optimization parameters, which have been shown to be as important for overall performance as the selection of training objectives (Liu et al., 2019). We find that BART exhibits the most consistently strong performance across the full range of tasks we consider.
(a) BERT: Random tokens are replaced with masks, and the document is encoded bidirectionally. Missing tokens are predicted independently, so BERT cannot easily be used for generation.
(b) GPT: Tokens are predicted auto-regressively, meaning GPT can be used for generation. However, words can only condition on leftward context, so it cannot learn bidirectional interactions.
(c) BART: Inputs to the encoder need not be aligned with decoder outputs, allowing arbitrary noise transformations. Here, a document has been corrupted by replacing spans of text with mask symbols. The corrupted document (left) is encoded with a bidirectional model, and then the likelihood of the original document (right) is calculated with an autoregressive decoder. For fine-tuning, an uncorrupted document is input to both the encoder and decoder, and we use representations from the final hidden state of the decoder.

Figure 1: A schematic comparison of BART with BERT (Devlin et al., 2019) and GPT (Radford et al., 2018).

2 Model

BART is a denoising autoencoder that maps a corrupted document to the original document it was derived from. It is implemented as a sequence-to-sequence model with a bidirectional encoder over corrupted text and a left-to-right autoregressive decoder. For pre-training, we optimize the negative log likelihood of the original document.

2.1 Architecture

BART uses the standard sequence-to-sequence Transformer architecture from Vaswani et al. (2017), except, following GPT, that we modify ReLU activation functions to GeLUs (Hendrycks & Gimpel, 2016) and initialise parameters from N(0, 0.02). For our base model, we use 6 layers in the encoder and decoder, and for our large model we use 12 layers in each. The architecture is closely related to that used in BERT, with the following differences: (1) each layer of the decoder additionally performs cross-attention over the final hidden layer of the encoder (as in the transformer sequence-to-sequence model); and (2) BERT uses an additional feed-forward network before word-prediction, which BART does not. In total, BART contains roughly 10% more parameters than the equivalently sized BERT model.
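To make this configuration concrete, the following is a minimal sketch (ours, not the released BART code) of a base- or large-sized encoder-decoder in plain PyTorch, with GeLU activations and N(0, 0.02) initialization as described above; the attention-head counts and feed-forward width are our own assumptions, since they are not specified in this section.

import torch.nn as nn

def build_bart_like_transformer(large: bool = False) -> nn.Transformer:
    """Illustrative BART-like seq2seq Transformer (not the authors' implementation)."""
    num_layers = 12 if large else 6          # 6/6 layers for base, 12/12 for large
    d_model = 1024 if large else 768         # hidden sizes reported in Sections 4 and 5.1
    model = nn.Transformer(
        d_model=d_model,
        nhead=16 if large else 12,           # assumed; head counts are not given here
        num_encoder_layers=num_layers,
        num_decoder_layers=num_layers,
        dim_feedforward=4 * d_model,         # assumed standard 4x feed-forward width
        activation="gelu",                   # ReLU replaced by GeLU, following GPT
    )
    # Initialise weight matrices from N(0, 0.02); biases keep their default init.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.normal_(p, mean=0.0, std=0.02)
    return model

Note that nn.Transformer's decoder layers cross-attend over the encoder's final hidden states, matching difference (1) above; token embeddings and the output projection are omitted for brevity.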
2.2 Pre-training BART

BART is trained by corrupting documents and then optimizing a reconstruction loss—the cross-entropy between the decoder's output and the original document. Unlike existing denoising autoencoders, which are tailored to specific noising schemes, BART allows us to apply any type of document corruption. In the extreme case, where all information about the source is lost, BART is equivalent to a language model.

We experiment with several previously proposed and novel transformations, but we believe there is significant potential for development of other new alternatives. The transformations we used are summarized below, and examples are shown in Figure 2.

Token Masking  Following BERT (Devlin et al., 2019), random tokens are sampled and replaced with [MASK] elements.

Token Deletion  Random tokens are deleted from the input. In contrast to token masking, the model must decide which positions are missing inputs.
Figure 2: Transformations for noising the input that we experiment with (token masking, token deletion, text infilling, sentence permutation, document rotation). These transformations can be composed.

Text Infilling  A number of text spans are sampled, with span lengths drawn from a Poisson distribution (λ = 3). Each span is replaced with a single [MASK] token. 0-length spans correspond to the insertion of [MASK] tokens. Text infilling is inspired by SpanBERT (Joshi et al., 2019), but SpanBERT samples span lengths from a different (clamped geometric) distribution, and replaces each span with a sequence of [MASK] tokens of exactly the same length. Text infilling teaches the model to predict how many tokens are missing from a span.

Sentence Permutation  A document is divided into sentences based on full stops, and these sentences are shuffled in a random order.

Document Rotation  A token is chosen uniformly at random, and the document is rotated so that it begins with that token. This task trains the model to identify the start of the document.
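To make the five corruptions concrete, here is an illustrative re-implementation over a simple token list, under our own assumptions (a literal "[MASK]" string, whitespace-free sentence splitting on full stops, and an assumed per-position span-start probability for infilling); only the stated parameters, such as Poisson(λ = 3) span lengths, come from the text.

import random

MASK = "[MASK]"

def sample_poisson(lam: float = 3.0) -> int:
    # Sample a span length from Poisson(lam) by inversion (avoids extra dependencies).
    threshold, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def token_mask(tokens, ratio=0.15):
    """Replace a random subset of tokens with [MASK]."""
    out = list(tokens)
    for i in random.sample(range(len(out)), int(ratio * len(out))):
        out[i] = MASK
    return out

def token_delete(tokens, ratio=0.15):
    """Delete random tokens; the model must infer which positions are missing."""
    drop = set(random.sample(range(len(tokens)), int(ratio * len(tokens))))
    return [t for i, t in enumerate(tokens) if i not in drop]

def text_infill(tokens, mask_ratio=0.3, lam=3.0):
    """Replace spans (lengths ~ Poisson(lam), possibly 0) with a single [MASK] each."""
    out, budget, i = [], int(mask_ratio * len(tokens)), 0
    while i < len(tokens):
        if budget > 0 and random.random() < 0.2:   # assumed span-start probability
            span = sample_poisson(lam)
            out.append(MASK)                        # one mask, regardless of span length
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return out

def sentence_permute(text):
    """Split on full stops and shuffle the sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    random.shuffle(sentences)
    return ". ".join(sentences) + "."

def document_rotate(tokens):
    """Rotate the document so it starts at a uniformly chosen token."""
    start = random.randrange(len(tokens))
    return tokens[start:] + tokens[:start]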
3 Fine-tuning BART

The representations produced by BART can be used in several ways for downstream applications.

3.1 Sequence Classification Tasks

For sequence classification tasks, the same input is fed into the encoder and decoder, and the final hidden state of the final decoder token is fed into a new multi-class linear classifier. This approach is related to the CLS token in BERT; however, we add the additional token to the end so that the representation for the token in the decoder can attend to decoder states from the complete input (Figure 3a).
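A minimal sketch of this setup is shown below, assuming a generic seq2seq module (a hypothetical `seq2seq` object that returns per-position decoder states) rather than the released BART code; the key point is that the same tokens go to both encoder and decoder, and the hidden state of the final appended token feeds a fresh linear classifier.

import torch
import torch.nn as nn

class BartStyleClassificationHead(nn.Module):
    """Linear classifier over the final decoder token's hidden state (illustrative only)."""

    def __init__(self, seq2seq: nn.Module, hidden_size: int, num_classes: int):
        super().__init__()
        self.seq2seq = seq2seq                    # assumed to return (batch, seq, hidden)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # The same (uncorrupted) input is fed to both the encoder and the decoder.
        decoder_states = self.seq2seq(src=input_ids, tgt=input_ids)
        # input_ids are assumed to end with the appended end-of-sequence token, whose
        # decoder state has attended to the complete input.
        final_token_state = decoder_states[:, -1, :]
        return self.classifier(final_token_state)  # (batch, num_classes) logits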
3.2 Token Classification Tasks

For token classification tasks, such as answer endpoint classification for SQuAD, we feed the complete document into the encoder and decoder, and use the top hidden state of the decoder as a representation for each word. This representation is used to classify the token.

3.3 Sequence Generation Tasks

Because BART has an autoregressive decoder, it can be directly fine-tuned for sequence generation tasks such as abstractive question answering and summarization. In both of these tasks, information is copied from the input but manipulated, which is closely related to the denoising pre-training objective. Here, the encoder input is the input sequence, and the decoder generates outputs autoregressively.

3.4 Machine Translation

We also explore using BART to improve machine translation decoders for translating into English. Previous work (Edunov et al., 2019) has shown that models can be improved by incorporating pre-trained encoders, but gains from using pre-trained language models in decoders have been limited. We show that it is possible to use the entire BART model (both encoder and decoder) as a single pretrained decoder for machine translation, by adding a new set of encoder parameters that are learned from bitext (see Figure 3b).

More precisely, we replace BART's encoder embedding layer with a new randomly initialized encoder. The model is trained end-to-end, which trains the new encoder to map foreign words into an input that BART can de-noise to English. The new encoder can use a separate vocabulary from the original BART model.

We train the source encoder in two steps, in both cases backpropagating the cross-entropy loss from the output of the BART model. In the first step, we freeze most of BART's parameters and only update the randomly initialized source encoder, the BART positional embeddings, and the self-attention input projection matrix of BART's encoder first layer. In the second step, we train all model parameters for a small number of iterations.
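The two-step schedule can be expressed as a simple freezing recipe. The sketch below uses hypothetical attribute names (`new_source_encoder`, `bart.encoder.embed_positions`, `bart.encoder.layers[0].self_attn_in_proj`) and illustrative learning rates; the real parameter names depend on the implementation, so treat this as an outline of the schedule described above rather than the authors' training code.

import itertools
import torch

def configure_step_one(bart, new_source_encoder):
    """Step 1: freeze most of BART; train only the new source encoder, BART's positional
    embeddings, and the self-attention input projection of BART's first encoder layer."""
    for p in bart.parameters():
        p.requires_grad = False
    trainable = itertools.chain(
        new_source_encoder.parameters(),
        bart.encoder.embed_positions.parameters(),              # assumed attribute name
        bart.encoder.layers[0].self_attn_in_proj.parameters(),  # assumed attribute name
    )
    params = []
    for p in trainable:
        p.requires_grad = True
        params.append(p)
    return torch.optim.Adam(params, lr=3e-5)   # illustrative hyper-parameter

def configure_step_two(bart, new_source_encoder):
    """Step 2: briefly train all model parameters end-to-end."""
    params = list(bart.parameters()) + list(new_source_encoder.parameters())
    for p in params:
        p.requires_grad = True
    return torch.optim.Adam(params, lr=1e-5)   # illustrative hyper-parameter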
4 Comparing Pre-training Objectives

BART supports a much wider range of noising schemes during pre-training than previous work. We compare a range of options using base-size models (6 encoder and 6 decoder layers, with a hidden size of 768), evaluated on a representative subset of the tasks we will consider for the full large-scale experiments in §5.
(a) To use BART for classification problems, the same input is fed into the encoder and decoder, and the representation from the final output is used.
(b) For machine translation, we learn a small additional encoder that replaces the word embeddings in BART. The new encoder can use a disjoint vocabulary.

Figure 3: Fine tuning BART for classification and translation.

4.1 Comparison Objectives

While many pre-training objectives have been proposed, fair comparisons between these have been difficult to perform, at least in part due to differences in training data, training resources, architectural differences between models, and fine-tuning procedures. We re-implement strong pre-training approaches recently proposed for discriminative and generation tasks. We aim, as much as possible, to control for differences unrelated to the pre-training objective. However, we do make minor changes to the learning rate and usage of layer normalisation in order to improve performance (tuning these separately for each objective). For reference, we compare our implementations with published numbers from BERT, which was also trained for 1M steps on a combination of books and Wikipedia data. We compare the following approaches:

Language Model  Similarly to GPT (Radford et al., 2018), we train a left-to-right Transformer language model. This model is equivalent to the BART decoder, without cross-attention.

Permuted Language Model  Based on XLNet (Yang et al., 2019), we sample 1/6 of the tokens, and generate them in a random order autoregressively. For consistency with other models, we do not implement the relative positional embeddings or attention across segments from XLNet.

Masked Language Model  Following BERT (Devlin et al., 2019), we replace 15% of tokens with [MASK] symbols, and train the model to independently predict the original tokens.

Multitask Masked Language Model  As in UniLM (Dong et al., 2019), we train a Masked Language Model with additional self-attention masks. Self-attention masks are chosen randomly with the following proportions: 1/6 left-to-right, 1/6 right-to-left, 1/3 unmasked, and 1/3 with the first 50% of tokens unmasked and a left-to-right mask for the remainder (a sampling sketch is given at the end of this subsection).

Masked Seq-to-Seq  Inspired by MASS (Song et al., 2019), we mask a span containing 50% of tokens, and train a sequence-to-sequence model to predict the masked tokens.

For the Permuted LM, Masked LM and Multitask Masked LM, we use two-stream attention (Yang et al., 2019) to efficiently compute likelihoods of the output part of the sequence (using a diagonal self-attention mask on the output to predict words left-to-right).

We experiment with (1) treating the task as a standard sequence-to-sequence problem, where the source is input to the encoder and the target is the decoder output, or (2) adding the source as a prefix to the target in the decoder, with a loss only on the target part of the sequence. We find the former works better for BART models, and the latter for other models.

To most directly compare our models on their ability to model their fine-tuning objective (the log likelihood of the human text), we report perplexity in Table 1.
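As referenced in the Multitask Masked Language Model description above, the sketch below samples one self-attention mask per sequence with the stated proportions; the boolean convention (True means attention is allowed) and the helper name are our assumptions, not part of the paper.

import random
import torch

def sample_multitask_attention_mask(seq_len: int) -> torch.Tensor:
    """Return a (seq_len, seq_len) boolean mask; entry (i, j) is True if i may attend to j."""
    causal = torch.ones(seq_len, seq_len).tril().bool()     # left-to-right mask
    choice = random.random()
    if choice < 1 / 6:
        return causal                                       # 1/6: left-to-right
    if choice < 2 / 6:
        return causal.transpose(0, 1)                       # 1/6: right-to-left
    if choice < 4 / 6:
        return torch.ones(seq_len, seq_len, dtype=torch.bool)  # 1/3: fully unmasked
    # 1/3: first 50% of tokens unmasked, left-to-right mask for the remainder.
    half = seq_len // 2
    mask = causal.clone()
    mask[:, :half] = True     # every position can see the unmasked prefix
    return mask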
4.2 Tasks

SQuAD (Rajpurkar et al., 2016), an extractive question answering task on Wikipedia paragraphs. Answers are text spans extracted from a given document context. Similar to BERT (Devlin et al., 2019), we use the concatenated question and context as input to the encoder of BART, and additionally pass them to the decoder. The model includes classifiers to predict the start and end indices of each token.

MNLI (Williams et al., 2017), a bitext classification task to predict whether one sentence entails another. The fine-tuned model concatenates the two sentences with an appended EOS token, and passes them to both the BART encoder and decoder. In contrast to BERT, the representation of the EOS token is used to classify the sentence relations.

ELI5 (Fan et al., 2019), a long-form abstractive question answering dataset. Models generate answers conditioned on the concatenation of a question and supporting documents.

XSum (Narayan et al., 2018), a news summarization dataset with highly abstractive summaries.

ConvAI2 (Dinan et al., 2019), a dialogue response generation task, conditioned on context and a persona.

CNN/DM (Hermann et al., 2015), a news summarization dataset. Summaries here are typically closely related to source sentences.

4.3 Results

Results are shown in Table 1. Several trends are clear:
Model SQuAD 1.1 MNLI ELI5 XSum ConvAI2 CNN/DM
F1 Acc PPL PPL PPL PPL
BERT Base (Devlin et al., 2019) 88.5 84.3 - - - -
Masked Language Model 90.0 83.5 24.77 7.87 12.59 7.06
Masked Seq2seq 87.0 82.1 23.40 6.80 11.43 6.19
Language Model 76.7 80.1 21.40 7.00 11.51 6.56
Permuted Language Model 89.1 83.7 24.03 7.69 12.23 6.96
Multitask Masked Language Model 89.2 82.4 23.73 7.50 12.39 6.74
BART Base
w/ Token Masking 90.4 84.1 25.05 7.08 11.73 6.10
w/ Token Deletion 90.4 84.1 24.61 6.90 11.46 5.87
w/ Text Infilling 90.8 84.0 24.26 6.61 11.05 5.83
w/ Document Rotation 77.2 75.3 53.69 17.14 19.87 10.59
w/ Sentence Shuffling 85.4 81.5 41.87 10.93 16.67 7.89
w/ Text Infilling + Sentence Shuffling 90.8 83.8 24.17 6.62 11.12 5.41

Table 1: Comparison of pre-training objectives. All models are of comparable size and are trained for 1M steps
on a combination of books and Wikipedia data. Entries in the bottom two blocks are trained on identical data
using the same code-base, and fine-tuned with the same procedures. Entries in the second block are inspired by
pre-training objectives proposed in previous work, but have been simplified to focus on evaluation objectives (see
§4.1). Performance varies considerably across tasks, but the BART models with text infilling demonstrate the most
consistently strong performance.

Performance of pre-training methods varies significantly across tasks  The effectiveness of pre-training methods is highly dependent on the task. For example, a simple language model achieves the best ELI5 performance, but the worst SQuAD results.

Token masking is crucial  Pre-training objectives based on rotating documents or permuting sentences perform poorly in isolation. The successful methods either use token deletion or masking, or self-attention masks. Deletion appears to outperform masking on generation tasks.

Left-to-right pre-training improves generation  The Masked Language Model and the Permuted Language Model perform less well than others on generation, and are the only models we consider that do not include left-to-right auto-regressive language modelling during pre-training.

Bidirectional encoders are crucial for SQuAD  As noted in previous work (Devlin et al., 2019), a purely left-to-right decoder performs poorly on SQuAD, because future context is crucial in classification decisions. However, BART achieves similar performance with only half the number of bidirectional layers.

The pre-training objective is not the only important factor  Our Permuted Language Model performs less well than XLNet (Yang et al., 2019). Some of this difference is likely due to not including other architectural improvements, such as relative-position embeddings or segment-level recurrence.

Pure language models perform best on ELI5  The ELI5 dataset is an outlier, with much higher perplexities than other tasks, and is the only generation task where other models outperform BART. A pure language model performs best, suggesting that BART is less effective when the output is only loosely constrained by the input.

BART achieves the most consistently strong performance  With the exception of ELI5, BART models using text infilling perform well on all tasks.

5 Large-scale Pre-training Experiments

Recent work has shown that downstream performance can dramatically improve when pre-training is scaled to large batch sizes (Yang et al., 2019; Liu et al., 2019) and corpora. To test how well BART performs in this regime, and to create a useful model for downstream tasks, we trained BART using the same scale as the RoBERTa model.

5.1 Experimental Setup

We pre-train a large model with 12 layers in each of the encoder and decoder, and a hidden size of 1024. Following RoBERTa (Liu et al., 2019), we use a batch size of 8000, and train the model for 500,000 steps. Documents are tokenized with the same byte-pair encoding as GPT-2 (Radford et al., 2019). Based on the results in §4, we use a combination of text infilling and sentence permutation. We mask 30% of tokens in each document, and permute all sentences. Although sentence permutation only shows significant additive gains on the CNN/DM summarization dataset, we hypothesised that larger pre-trained models may be better able to learn from this task. To help the model better fit the data, we disabled dropout for the final 10% of training steps. We use the same pre-training data as Liu et al. (2019), consisting of 160GB of news, books, stories, and web text.
SQuAD 1.1 SQuAD 2.0 MNLI SST QQP QNLI STS-B RTE MRPC CoLA
EM/F1 EM/F1 m/mm Acc Acc Acc Acc Acc Acc Mcc
BERT 84.1/90.9 79.0/81.8 86.6/- 93.2 91.3 92.3 90.0 70.4 88.0 60.6
UniLM -/- 80.5/83.4 87.0/85.9 94.5 - 92.7 - 70.9 - 61.1
XLNet 89.0/94.5 86.1/88.8 89.8/- 95.6 91.8 93.9 91.8 83.8 89.2 63.6
RoBERTa 88.9/94.6 86.5/89.4 90.2/90.2 96.4 92.2 94.7 92.4 86.6 90.9 68.0
BART 88.8/94.6 86.1/89.2 89.9/90.1 96.6 92.5 94.9 91.2 87.0 90.4 62.8

Table 2: Results for large models on SQuAD and GLUE tasks. BART performs comparably to RoBERTa and
XLNet, suggesting that BART’s uni-directional decoder layers do not reduce performance on discriminative tasks.

CNN/DailyMail XSum
R1 R2 RL R1 R2 RL
Lead-3 40.42 17.62 36.67 16.30 1.60 11.95
PTGEN (See et al., 2017) 36.44 15.66 33.42 29.70 9.21 23.24
PTGEN+COV (See et al., 2017) 39.53 17.28 36.38 28.10 8.02 21.72
UniLM 43.33 20.21 40.51 - - -
BERTSUMABS (Liu & Lapata, 2019) 41.72 19.39 38.76 38.76 16.33 31.15
BERTSUMEXTABS (Liu & Lapata, 2019) 42.13 19.60 39.18 38.81 16.50 31.27
BART 44.16 21.28 40.90 45.14 22.27 37.25

Table 3: Results on two standard summarization datasets. BART outperforms previous work on summarization on
two tasks and all metrics, with gains of roughly 6 points on the more abstractive dataset.

5.2 Discriminative Tasks

Table 2 compares the performance of BART with several recent approaches on the well-studied SQuAD and GLUE tasks (Warstadt et al., 2018; Socher et al., 2013; Dolan & Brockett, 2005; Agirre et al., 2007; Williams et al., 2018; Dagan et al., 2006; Levesque et al., 2011). The most directly comparable baseline is RoBERTa, which was pre-trained with the same resources, but a different objective. Overall, BART performs similarly, with only small differences between the models on most tasks, suggesting that BART's improvements on generation tasks do not come at the expense of classification performance.

5.3 Generation Tasks

We also experiment with several text generation tasks. BART is fine-tuned as a standard sequence-to-sequence model from the input to the output text. During fine-tuning we use a label-smoothed cross-entropy loss (Pereyra et al., 2017), with the smoothing parameter set to 0.1. During generation, we set the beam size to 5, remove duplicated trigrams in beam search, and tune min-len, max-len, and the length penalty on the validation set (Fan et al., 2017).
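For concreteness, a label-smoothed cross-entropy of this kind (smoothing 0.1) can be written as below; this is a generic formulation rather than the exact training code, and the decoding settings from the paragraph above are repeated in the docstring only for reference.

import torch
import torch.nn.functional as F

def label_smoothed_cross_entropy(logits: torch.Tensor,
                                 target: torch.Tensor,
                                 smoothing: float = 0.1) -> torch.Tensor:
    """Label-smoothed cross entropy (Pereyra et al., 2017) with smoothing 0.1.

    logits: (batch, seq, vocab) decoder outputs; target: (batch, seq) gold token ids.
    Decoding settings from the text, for reference: beam size 5, duplicated trigrams
    removed during beam search, min/max length and length penalty tuned on validation.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)  # standard NLL term
    smooth = -log_probs.mean(dim=-1)                               # uniform smoothing term
    return ((1.0 - smoothing) * nll + smoothing * smooth).mean()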
Summarization  To provide a comparison with the state-of-the-art in summarization, we present results on two summarization datasets, CNN/DailyMail and XSum, which have distinct properties.

Summaries in CNN/DailyMail tend to resemble source sentences. Extractive models do well here, and even the baseline of the first three source sentences is highly competitive. Nevertheless, BART outperforms all existing work.

In contrast, XSum is highly abstractive, and extractive models perform poorly. BART outperforms the best previous work, which leverages BERT, by roughly 6.0 points on all ROUGE metrics—representing a significant advance in performance on this problem. Qualitatively, sample quality is high (see §6).

Dialogue  We evaluate dialogue response generation on ConvAI2 (Dinan et al., 2019), in which agents must generate responses conditioned on both the previous context and a textually-specified persona. BART outperforms previous work on two automated metrics.

                      ConvAI2
                      Valid F1   Valid PPL
Seq2Seq + Attention   16.02      35.07
Best System           19.09      17.51
BART                  20.72      11.85

Table 4: BART outperforms previous work on conversational response generation. Perplexities are renormalized based on the official tokenizer for ConvAI2.

Abstractive QA  We use the recently proposed ELI5 dataset to test the model's ability to generate long free-form answers. We find BART outperforms the best previous work by 1.2 ROUGE-L, but the dataset remains challenging, because answers are only weakly specified by the question.

                      R1     R2    RL
Best Extractive       23.5   3.1   17.5
Language Model        27.8   4.7   23.1
Seq2Seq               28.3   5.1   22.8
Seq2Seq Multitask     28.9   5.4   23.1
BART                  30.6   6.2   24.3

Table 5: BART achieves state-of-the-art results on the challenging ELI5 abstractive question answering dataset. Comparison models are from Fan et al. (2019).

5.4 Translation

We also evaluated performance on WMT16 Romanian-English, augmented with back-translation data from Sennrich et al. (2016). We use a 6-layer transformer source encoder to map Romanian into a representation that BART is able to de-noise into English, following the approach introduced in §3.4.

Experiment results are presented in Table 6. We compare our results against a baseline Transformer architecture (Vaswani et al., 2017) with Transformer-large settings (the baseline row). We show the performance of both steps of our model in the fixed BART and tuned BART rows. For each row we experiment on the original WMT16 Romanian-English augmented with back-translation data. We use a beam width of 5 and a length penalty of α = 1. Preliminary results suggested that our approach was less effective without back-translation data, and prone to overfitting—future work should explore additional regularization techniques.

              RO-EN (BLEU)
Baseline      36.80
Fixed BART    36.29
Tuned BART    37.96

Table 6: The performance (BLEU) of baseline and BART on WMT'16 RO-EN augmented with back-translation data. BART improves over a strong back-translation (BT) baseline by using monolingual English pre-training.

6 Qualitative Analysis

BART shows large improvements on summarization metrics, of up to 6 points over the prior state-of-the-art. To understand BART's performance beyond automated metrics, we analyse its generations qualitatively.

Table 7 shows example summaries generated by BART. Examples are taken from WikiNews articles published after the creation of the pre-training corpus, to eliminate the possibility of the events described being present in the model's training data. Following Narayan et al. (2018), we remove the first sentence of the article prior to summarizing it, so there is no easy extractive summary of the document.

Unsurprisingly, model output is fluent and grammatical English. However, model output is also highly abstractive, with few phrases copied from the input. The output is also generally factually accurate, and integrates supporting evidence from across the input document with background knowledge (for example, correctly completing names, or inferring that PG&E operates in California). In the first example, inferring that fish are protecting reefs from global warming requires non-trivial inference from the text. However, the claim that the work was published in Science is not supported by the source.

These samples demonstrate that the BART pretraining has learned a strong combination of natural language understanding and generation.
Source Document (abbreviated): The researchers examined three types of coral in reefs off the coast of Fiji ... The researchers found when fish were plentiful, they would eat algae and seaweed off the corals, which appeared to leave them more resistant to the bacterium Vibrio coralliilyticus, a bacterium associated with bleaching. The researchers suggested the algae, like warming temperatures, might render the corals' chemical defenses less effective, and the fish were protecting the coral by removing the algae.
BART Summary: Fisheries off the coast of Fiji are protecting coral reefs from the effects of global warming, according to a study in the journal Science.

Source Document (abbreviated): Sacoolas, who has immunity as a diplomat's wife, was involved in a traffic collision ... Prime Minister Johnson was questioned about the case while speaking to the press at a hospital in Watford. He said, "I hope that Anne Sacoolas will come back ... if we can't resolve it then of course I will be raising it myself personally with the White House."
BART Summary: Boris Johnson has said he will raise the issue of US diplomat Anne Sacoolas' diplomatic immunity with the White House.

Source Document (abbreviated): According to Syrian state media, government forces began deploying into previously SDF controlled territory yesterday. ... On October 6, US President Donald Trump and Turkish President Recep Tayyip Erdoğan spoke on the phone. Then both nations issued statements speaking of an imminent incursion into northeast Syria ... . On Wednesday, Turkey began a military offensive with airstrikes followed by a ground invasion.
BART Summary: Syrian government forces have entered territory held by the US-backed Syrian Democratic Forces (SDF) in response to Turkey's incursion into the region.

Source Document (abbreviated): This is the first time anyone has been recorded to run a full marathon of 42.195 kilometers (approximately 26 miles) under this pursued landmark time. It was not, however, an officially sanctioned world record, as it was not an "open race" of the IAAF. His time was 1 hour 59 minutes 40.2 seconds. Kipchoge ran in Vienna, Austria. It was an event specifically designed to help Kipchoge break the two hour barrier.
BART Summary: Kenyan runner Eliud Kipchoge has run a marathon in less than two hours.

Source Document (abbreviated): PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.
BART Summary: Power has been turned off to millions of customers in California as part of a power shutoff plan.

Table 7: Example summaries from the XSum-tuned BART model on WikiNews articles. For clarity, only relevant
excerpts of the source are shown. Summaries combine information from across the article and prior knowledge.

7 Related Work

Early methods for pretraining were based on language models. GPT (Radford et al., 2018) only models leftward context, which is problematic for some tasks. ELMo (Peters et al., 2018) concatenates left-only and right-only representations, but does not pre-train interactions between these features. Radford et al. (2019) demonstrated that very large language models can act as unsupervised multitask models.

BERT (Devlin et al., 2019) introduced masked language modelling, which allows pre-training to learn interactions between left and right context words. Recent work has shown that very strong performance can be achieved by training for longer (Liu et al., 2019), by tying parameters across layers (Lan et al., 2019), and by masking spans instead of words (Joshi et al., 2019). Predictions are not made auto-regressively, reducing the effectiveness of BERT for generation tasks.

UniLM (Dong et al., 2019) fine-tunes BERT with an ensemble of masks, some of which allow only leftward context. Like BART, this allows UniLM to be used for both generative and discriminative tasks. A difference is that UniLM predictions are conditionally independent, whereas BART's are autoregressive. BART reduces the mismatch between pre-training and generation tasks, because the decoder is always trained on uncorrupted context.

MASS (Song et al., 2019) is perhaps the most similar model to BART. An input sequence where a contiguous span of tokens is masked is mapped to a sequence consisting of the missing tokens. MASS is less effective for discriminative tasks, because disjoint sets of tokens are fed into the encoder and decoder.

XL-Net (Yang et al., 2019) extends BERT by predicting masked tokens auto-regressively in a permuted order. This objective allows predictions to condition on both left and right context. In contrast, the BART decoder works left-to-right during pre-training, matching the setting during generation.

Several papers have explored using pre-trained representations to improve machine translation. The largest improvements have come from pre-training on both source and target languages (Song et al., 2019; Lample & Conneau, 2019), but this requires pre-training on all languages of interest. Other work has shown that encoders can be improved using pre-trained representations (Edunov et al., 2019), but gains in decoders are more limited. We show how BART can be used to improve machine translation decoders.

8 Conclusions

We introduced BART, a pre-training approach that learns to map corrupted documents to the original. BART achieves similar performance to RoBERTa on discriminative tasks, while achieving new state-of-the-art results on a number of text generation tasks. Future work should explore new methods for corrupting documents for pre-training, perhaps tailoring them to specific end tasks.
References

Eneko Agirre, Lluís Màrquez, and Richard Wicentowski (eds.). Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007). Association for Computational Linguistics, Prague, Czech Republic, June 2007.

Ido Dagan, Oren Glickman, and Bernardo Magnini. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment, pp. 177–190. Springer, 2006.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://www.aclweb.org/anthology/N19-1423.

Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. The second conversational intelligence challenge (ConvAI2). arXiv preprint arXiv:1902.00098, 2019.

William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing, 2005.

Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197, 2019.

Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.

Angela Fan, David Grangier, and Michael Auli. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217, 2017.

Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. ELI5: Long form question answering. arXiv preprint arXiv:1907.09190, 2019.

Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.

Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529, 2019.

Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.

Hector J Levesque, Ernest Davis, and Leora Morgenstern. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, pp. 47, 2011.

Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345, 2019.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Shashi Narayan, Shay B Cohen, and Mirella Lapata. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. arXiv preprint arXiv:1808.08745, 2018.

Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.

Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/researchcovers/languageunsupervised/language understanding paper.pdf, 2018.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev,
and Percy Liang. Squad: 100,000+ questions for
machine comprehension of text. arXiv preprint
arXiv:1606.05250, 2016.
Abigail See, Peter J Liu, and Christopher D
Manning. Get to the point: Summarization
with pointer-generator networks. arXiv preprint
arXiv:1704.04368, 2017.
Rico Sennrich, Barry Haddow, and Alexandra Birch.
Edinburgh neural machine translation systems for
WMT 16. In Proceedings of the First Conference
on Machine Translation: Volume 2, Shared Task Pa-
pers, 2016.
Richard Socher, Alex Perelygin, Jean Wu, Jason
Chuang, Christopher D Manning, Andrew Ng, and
Christopher Potts. Recursive deep models for se-
mantic compositionality over a sentiment treebank.
In Proceedings of EMNLP, pp. 1631–1642, 2013.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-
Yan Liu. Mass: Masked sequence to sequence pre-
training for language generation. In International
Conference on Machine Learning, 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you
need. In Advances in neural information processing
systems, pp. 5998–6008, 2017.
Alex Wang, Amanpreet Singh, Julian Michael, Felix
Hill, Omer Levy, and Samuel R Bowman. Glue:
A multi-task benchmark and analysis platform for
natural language understanding. arXiv preprint
arXiv:1804.07461, 2018.
Alex Warstadt, Amanpreet Singh, and Samuel R.
Bowman. Neural network acceptability judgments.
arXiv preprint 1805.12471, 2018.
Adina Williams, Nikita Nangia, and Samuel R Bow-
man. A broad-coverage challenge corpus for
sentence understanding through inference. arXiv
preprint arXiv:1704.05426, 2017.
Adina Williams, Nikita Nangia, and Samuel R. Bow-
man. A broad-coverage challenge corpus for sen-
tence understanding through inference. In Proceed-
ings of NAACL-HLT, 2018.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime
Carbonell, Ruslan Salakhutdinov, and Quoc V
Le. Xlnet: Generalized autoregressive pretrain-
ing for language understanding. arXiv preprint
arXiv:1906.08237, 2019.
