Ankita De
2021
Annotation Inconsistency and Entity Bias in MultiWOZ
Kun Qian | Ahmad Beirami | Zhouhan Lin | Ankita De | Alborz Geramifard | Zhou Yu | Chinnadhurai Sankar
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
MultiWOZ (Budzianowski et al., 2018) is one of the most popular multi-domain task-oriented dialog datasets, containing 10K+ annotated dialogs covering eight domains. It has been widely accepted as a benchmark for various dialog tasks, e.g., dialog state tracking (DST), natural language generation (NLG) and end-to-end (E2E) dialog modeling. In this work, we identify an overlooked issue with dialog state annotation inconsistencies in the dataset, where a slot type is tagged inconsistently across similar dialogs, leading to confusion for DST modeling. We propose an automated correction for this issue, which is present in 70% of the dialogs. Additionally, we notice that there is significant entity bias in the dataset (e.g., “cambridge” appears in 50% of the destination cities in the train domain). The entity bias can potentially lead to named entity memorization in generative models, which may go unnoticed as the test set suffers from a similar entity bias as well. We release a new test set with all entities replaced with unseen entities. Finally, we benchmark joint goal accuracy (JGA) of the state-of-the-art DST baselines on these modified versions of the data. Our experiments show that the annotation inconsistency corrections lead to 7-10% improvement in JGA. On the other hand, we observe a 29% drop in JGA when models are evaluated on the new test set with unseen entities.
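For reference, the sketch below shows how joint goal accuracy (JGA) is conventionally computed: the fraction of dialog turns whose full predicted dialog state exactly matches the gold annotation. The per-turn slot-value dictionary format is an assumption for illustration only, not the MultiWOZ release format or the authors' evaluation code.

```python
# Minimal sketch of joint goal accuracy (JGA): the share of turns where the
# entire predicted belief state matches the gold annotation exactly.
# The per-turn dict-of-slots format below is an assumption for illustration.
from typing import Dict, List


def joint_goal_accuracy(
    gold_states: List[Dict[str, str]],
    pred_states: List[Dict[str, str]],
) -> float:
    """Return the fraction of turns where every slot-value pair is correct."""
    assert len(gold_states) == len(pred_states)
    if not gold_states:
        return 0.0
    exact_matches = sum(
        1 for gold, pred in zip(gold_states, pred_states) if gold == pred
    )
    return exact_matches / len(gold_states)


# Example: only the first of two turns matches exactly, so JGA = 0.5.
gold = [{"train-destination": "cambridge"},
        {"train-destination": "ely", "train-day": "monday"}]
pred = [{"train-destination": "cambridge"},
        {"train-destination": "ely"}]
print(joint_goal_accuracy(gold, pred))  # 0.5
```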
2020
Situated and Interactive Multimodal Conversations
Seungwhan Moon | Satwik Kottur | Paul Crook | Ankita De | Shivani Poddar | Theodore Levin | David Whitney | Daniel Difranco | Ahmad Beirami | Eunjoon Cho | Rajen Subba | Alborz Geramifard
Proceedings of the 28th International Conference on Computational Linguistics
Next generation virtual assistants are envisioned to handle multimodal inputs (e.g., vision, memories of previous interactions, and the user’s utterances), and perform multimodal actions (e.g., displaying a route while generating the system’s utterance). We introduce Situated Interactive MultiModal Conversations (SIMMC) as a new direction aimed at training agents that take multimodal actions grounded in a co-evolving multimodal input context in addition to the dialog history. We provide two SIMMC datasets totalling ~13K human-human dialogs (~169K utterances) collected using a multimodal Wizard-of-Oz (WoZ) setup, on two shopping domains: (a) furniture – grounded in a shared virtual environment; and (b) fashion – grounded in an evolving set of images. Datasets include multimodal context of the items appearing in each scene, and contextual NLU, NLG and coreference annotations using a novel and unified framework of SIMMC conversational acts for both user and assistant utterances. Finally, we present several tasks within SIMMC as objective evaluation protocols, such as structural API prediction, response generation, and dialog state tracking. We benchmark a collection of existing models on these SIMMC tasks as strong baselines, and demonstrate rich multimodal conversational interactions. Our data, annotations, and models will be made publicly available.
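As a rough illustration of the kind of record such tasks operate over (a user utterance grounded in the items visible in the scene, paired with a structured assistant action and a response), here is a hypothetical sketch; the field names and the `RotateObject` act are assumptions made for illustration, not the released SIMMC annotation schema.

```python
# Hypothetical illustration of a SIMMC-style dialog turn: a user utterance
# grounded in scene items, with a structured assistant act and a response.
# All field names here are assumptions, not the released SIMMC schema.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SceneItem:
    item_id: str
    attributes: Dict[str, str]      # e.g. {"type": "sofa", "color": "grey"}


@dataclass
class DialogTurn:
    user_utterance: str
    scene_items: List[SceneItem]    # multimodal context co-evolving with the dialog
    assistant_act: str              # structural action, e.g. an API call name
    act_arguments: Dict[str, str]   # arguments referring back to scene items
    assistant_utterance: str


turn = DialogTurn(
    user_utterance="Can you show me that grey sofa from a different angle?",
    scene_items=[SceneItem("obj_12", {"type": "sofa", "color": "grey"})],
    assistant_act="RotateObject",
    act_arguments={"object_id": "obj_12"},
    assistant_utterance="Sure, here is another view of the grey sofa.",
)
```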
Co-authors
- Ahmad Beirami 2
- Alborz Geramifard 2
- Seungwhan Moon 1
- Satwik Kottur 1
- Paul A. Crook 1