Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance

Authors

  • Dong Zhang, Soochow University
  • Suzhong Wei, Southeast University
  • Shoushan Li, Soochow University
  • Hanqian Wu, Southeast University
  • Qiaoming Zhu, Soochow University
  • Guodong Zhou, Soochow University

DOI:

https://doi.org/10.1609/aaai.v35i16.17687

Keywords:

Information Extraction

Abstract

Multi-modal named entity recognition (MNER) aims to discover named entities in free text and classify them into pre-defined types with the aid of accompanying images. However, dominant MNER models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have the potential to refine multi-modal representation learning. To deal with this issue, we propose a unified multi-modal graph fusion (UMGF) approach for MNER. Specifically, we first represent the input sentence and image using a unified multi-modal graph, which captures various semantic relationships between multi-modal semantic units (words and visual objects). Then, we stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations. Finally, we obtain an attention-based multi-modal representation for each word and perform entity labeling with a CRF decoder. Experiments on two benchmark datasets demonstrate the superiority of our MNER model.
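The following is a minimal PyTorch sketch of the pipeline the abstract describes, not the authors' implementation: the class names, hidden size, and the use of multi-head cross-attention as a stand-in for graph-based message passing are all assumptions made for illustration, and a plain linear layer stands in for the paper's CRF decoder.

# Illustrative sketch only: module names, dimensions, and the cross-modal
# attention used for "graph fusion" here are assumptions; the paper's actual
# graph construction, fusion layers, and CRF decoder are more elaborate.
import torch
import torch.nn as nn

class GraphFusionLayer(nn.Module):
    # One fusion layer: word nodes attend over visual-object nodes and
    # vice versa, approximating one round of cross-modal interaction.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.txt2img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img2txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_w = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, words, objects):
        w_upd, _ = self.txt2img(words, objects, objects)  # words gather visual cues
        v_upd, _ = self.img2txt(objects, words, words)    # objects gather textual cues
        return self.norm_w(words + w_upd), self.norm_v(objects + v_upd)

class UMGFSketch(nn.Module):
    # Stacks fusion layers, then builds an attention-based multi-modal
    # representation per word; the final linear layer emits per-token label
    # scores that a CRF decoder (as in the paper) would consume.
    def __init__(self, dim=256, num_layers=3, num_labels=9):
        super().__init__()
        self.layers = nn.ModuleList(
            [GraphFusionLayer(dim) for _ in range(num_layers)])
        self.gate = nn.Linear(2 * dim, dim)
        self.emissions = nn.Linear(dim, num_labels)  # stand-in for a CRF decoder

    def forward(self, words, objects):
        for layer in self.layers:                    # iterative semantic interactions
            words, objects = layer(words, objects)
        attn = torch.softmax(words @ objects.transpose(1, 2), dim=-1)
        visual_ctx = attn @ objects                  # per-word visual summary
        fused = self.gate(torch.cat([words, visual_ctx], dim=-1))
        return self.emissions(fused)                 # per-token label scores

# Toy usage: 2 sentences, 10 word nodes, 4 visual-object nodes, 256-d features.
words = torch.randn(2, 10, 256)
objects = torch.randn(2, 4, 256)
print(UMGFSketch()(words, objects).shape)            # torch.Size([2, 10, 9])

In practice the word and object features would come from a pre-trained text encoder and an object detector respectively, as suggested by the abstract's word and visual-object nodes.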

Published

2021-05-18

How to Cite

Zhang, D., Wei, S., Li, S., Wu, H., Zhu, Q., & Zhou, G. (2021). Multi-modal Graph Fusion for Named Entity Recognition with Targeted Visual Guidance. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14347-14355. https://doi.org/10.1609/aaai.v35i16.17687

Issue

Vol. 35 No. 16 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing III