Measuring attribution in natural language generation models

H Rashkin, V Nikolaev, M Lamm, L Aroyo, et al. - Computational Linguistics, 2023 - direct.mit.edu
Abstract
Large neural models have brought a new challenge to natural language generation (NLG): It has become imperative to ensure the safety and reliability of the output of models that generate freely. To this end, we present an evaluation framework, Attributable to Identified Sources (AIS), stipulating that NLG output pertaining to the external world is to be verified against an independent, provided source. We define AIS and a two-stage annotation pipeline for allowing annotators to evaluate model output according to annotation guidelines. We successfully validate this approach on generation datasets spanning three tasks (two conversational QA datasets, a summarization dataset, and a table-to-text dataset). We provide full annotation guidelines in the appendices and publicly release the annotated data at https://github.com/google-research-datasets/AIS.
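As a rough illustration of how the two-stage pipeline described in the abstract might be operationalized, the sketch below aggregates per-output annotator judgments into a dataset-level AIS score. The AISAnnotation schema, its field names, and the decision to count uninterpretable outputs as not attributable are assumptions made for illustration; this is not the paper's released annotation tooling.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AISAnnotation:
    """One annotator judgment for a single system output (hypothetical schema)."""
    interpretable: bool           # Stage 1: is the output understandable on its own?
    attributable: Optional[bool]  # Stage 2: is all of its content supported by the
                                  # provided source? Collected only if stage 1 passes.

def ais_score(annotations: list[AISAnnotation]) -> float:
    """Fraction of outputs judged Attributable to Identified Sources.

    Outputs failing the interpretability stage are counted as not attributable,
    since the attribution question cannot be asked of them (an assumption here).
    """
    if not annotations:
        return 0.0
    positive = sum(1 for a in annotations if a.interpretable and a.attributable)
    return positive / len(annotations)

# Example: three annotated outputs, one uninterpretable, one unsupported.
examples = [
    AISAnnotation(interpretable=True, attributable=True),
    AISAnnotation(interpretable=False, attributable=None),
    AISAnnotation(interpretable=True, attributable=False),
]
print(f"AIS score: {ais_score(examples):.2f}")  # -> 0.33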