Structure-Aware Multimodal Sequential Learning for Visual Dialog

Authors

  • Young-Jin Kim, Hanyang University
  • Min-Jun Kim, Hanyang University
  • Kyunghwan An, Hanyang University
  • Jinwoo Ahn, Hanyang University
  • Jaeseok Kim, KT Corporation
  • Yu-Jung Heo, KT Corporation
  • Du-Seong Chang, KT Corporation
  • Eun-Sol Kim, Hanyang University

DOI:

https://doi.org/10.1609/aaai.v38i12.29219

Keywords:

ML: Multimodal Learning, CV: Multi-modal Vision, ML: Time-Series/Data Streams

Abstract

The ability to collect vast amounts of image and natural language data from the web has driven remarkable advances in large language models (LLMs). This progress has led to chatbots and dialogue systems capable of fluent conversations with humans. As the variety of devices enabling interaction between humans and agents expands and the performance of text-based dialogue systems improves, research on visual dialog has recently been proposed. However, visual dialog requires understanding sequences of image-sentence pairs, making it challenging to gather sufficient web data for training large-scale models. In this paper, we propose a new multimodal learning method that leverages existing large-scale models designed for each modality, enabling visual dialog models to be trained on small visual dialog datasets. The key ideas of our approach are: 1) storing the history, or context, of the ongoing visual dialog in the form of spatiotemporal graphs, and 2) introducing small modulation blocks between the modality-specific models and the graphs to align their semantic spaces. For the implementation, we introduce a novel structure-aware cross-attention method that retrieves image and text knowledge relevant to utterance generation from the pretrained models. In experiments, we achieve new state-of-the-art performance on three visual dialog datasets, including the most challenging one, COMET.
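To make the key mechanism concrete, below is a minimal, hypothetical PyTorch sketch of structure-aware cross-attention as the abstract describes it: nodes of a spatiotemporal dialog-history graph query frozen features from pretrained unimodal encoders to retrieve knowledge for utterance generation. The class name, the neighbor-averaging step, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of structure-aware cross-attention (not the paper's code):
# graph node states (dialog history) query frozen features from pretrained
# image/text encoders; graph structure is injected by neighbor averaging.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureAwareCrossAttention(nn.Module):
    def __init__(self, graph_dim, feat_dim, hidden_dim):
        super().__init__()
        self.q_proj = nn.Linear(graph_dim, hidden_dim)
        self.k_proj = nn.Linear(feat_dim, hidden_dim)
        self.v_proj = nn.Linear(feat_dim, hidden_dim)
        self.scale = hidden_dim ** -0.5

    def forward(self, node_states, pretrained_feats, adjacency):
        # node_states:      (N, graph_dim)  spatiotemporal graph nodes
        # pretrained_feats: (M, feat_dim)   frozen encoder token features
        # adjacency:        (N, N)          graph edges (1 = connected)
        # Mix each node with its neighbors so the attention queries
        # carry graph structure, not just per-node content.
        deg = adjacency.sum(-1, keepdim=True).clamp(min=1)
        structured = (adjacency @ node_states) / deg + node_states

        q = self.q_proj(structured)                       # (N, hidden)
        k = self.k_proj(pretrained_feats)                 # (M, hidden)
        v = self.v_proj(pretrained_feats)                 # (M, hidden)
        attn = F.softmax(q @ k.T * self.scale, dim=-1)    # (N, M)
        return attn @ v        # retrieved knowledge per graph node

# Usage with random tensors standing in for real features.
module = StructureAwareCrossAttention(graph_dim=256, feat_dim=768, hidden_dim=256)
nodes = torch.randn(10, 256)                  # 10 dialog-history graph nodes
feats = torch.randn(50, 768)                  # 50 frozen encoder tokens
adj = (torch.rand(10, 10) > 0.7).float()      # toy spatiotemporal edges
out = module(nodes, feats, adj)               # (10, 256)
```

The linear projections here play the role the abstract assigns to the small modulation blocks: the pretrained encoders stay frozen, and only these lightweight layers are trained to align the graph's semantic space with each modality.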

Published

2024-03-24

How to Cite

Kim, Y.-J., Kim, M.-J., An, K., Ahn, J., Kim, J., Heo, Y.-J., Chang, D.-S., & Kim, E.-S. (2024). Structure-Aware Multimodal Sequential Learning for Visual Dialog. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13193-13201. https://doi.org/10.1609/aaai.v38i12.29219

Issue

Vol. 38 No. 12 (2024)

Section

AAAI Technical Track on Machine Learning III