
Low-Rank HOCA: Efficient High-Order Cross-Modal Attention for Video Captioning

Tao Jin, Siyu Huang, Yingming Li, Zhongfei Zhang


Abstract
This paper addresses the challenging task of video captioning, which aims to generate descriptions for video data. Recently, attention-based encoder-decoder structures have been widely used in video captioning. In the existing literature, the attention weights are often built from the information of an individual modality, whereas the association relationships between multiple modalities are neglected. Motivated by this, we propose a video captioning model with High-Order Cross-Modal Attention (HOCA), where the attention weights are calculated from a high-order correlation tensor that sufficiently captures the frame-level cross-modal interaction among different modalities. Furthermore, we introduce Low-Rank HOCA, which adopts tensor decomposition to reduce the extremely large space requirement of HOCA, leading to a practical and efficient implementation in real-world applications. Experimental results on two benchmark datasets, MSVD and MSR-VTT, show that Low-Rank HOCA establishes a new state of the art.
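The paper's full HOCA model is not reproduced here, but the core space-saving idea described in the abstract can be illustrated in isolation: a cross-modal attention score built from a bilinear (second-order) correlation between two modalities needs a full d1×d2 weight tensor, while a rank-r factorization cuts the parameter count to r·(d1+d2). The sketch below uses NumPy with random features and illustrative dimensions; all variable names and sizes are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d1, d2, r = 8, 64, 48, 4  # frames, two feature dims, low rank (illustrative)

X = rng.standard_normal((T, d1))  # e.g. per-frame appearance features
Y = rng.standard_normal((T, d2))  # e.g. per-frame motion features

# Full second-order correlation: score_t = x_t^T W y_t, with W in R^{d1 x d2}
# (d1 * d2 = 3072 parameters for these toy dimensions).
W = rng.standard_normal((d1, d2))
scores_full = np.einsum('td,de,te->t', X, W, Y)

# Low-rank factorization W ~= U V^T reduces the parameter count to
# r * (d1 + d2) = 448, at the cost of approximating the full tensor.
U = rng.standard_normal((d1, r))
V = rng.standard_normal((d2, r))
scores_lr = np.sum((X @ U) * (Y @ V), axis=1)  # x_t^T (U V^T) y_t per frame

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Frame-level cross-modal attention weights over the T frames.
attn = softmax(scores_lr)
```

HOCA generalizes this to higher-order correlation tensors over more than two modalities, where the full tensor's size grows multiplicatively with each added modality, making a low-rank decomposition even more important there than in this two-modality sketch.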
Anthology ID:
D19-1207
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2001–2011
URL:
https://aclanthology.org/D19-1207/
DOI:
10.18653/v1/D19-1207
Cite (ACL):
Tao Jin, Siyu Huang, Yingming Li, and Zhongfei Zhang. 2019. Low-Rank HOCA: Efficient High-Order Cross-Modal Attention for Video Captioning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2001–2011, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Low-Rank HOCA: Efficient High-Order Cross-Modal Attention for Video Captioning (Jin et al., EMNLP-IJCNLP 2019)
PDF:
https://aclanthology.org/D19-1207.pdf
Attachment:
 D19-1207.Attachment.zip
Data
MSVD