MM-ViT: Multi-Modal Video Transformer for Compressed Video Action Recognition

J Chen, CM Ho - Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022

Abstract
This paper presents a pure transformer-based approach, dubbed the Multi-Modal Video Transformer (MM-ViT), for video action recognition. Unlike other schemes that solely utilize the decoded RGB frames, MM-ViT operates exclusively in the compressed video domain and exploits all readily available modalities, i.e., I-frames, motion vectors, residuals, and the audio waveform. To handle the large number of spatiotemporal tokens extracted from multiple modalities, we develop several scalable model variants that factorize self-attention across the space, time, and modality dimensions. In addition, to further explore rich inter-modal interactions and their effects, we develop and compare three distinct cross-modal attention mechanisms that can be seamlessly integrated into the transformer building block. Extensive experiments on three public action recognition benchmarks (UCF-101, Something-Something-v2, Kinetics-600) demonstrate that MM-ViT outperforms state-of-the-art video transformers in both efficiency and accuracy, and performs as well as or better than state-of-the-art CNN counterparts that rely on computationally heavy optical flow.
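To make the factorized design concrete, below is a minimal, hypothetical PyTorch sketch of self-attention factorized across the space, time, and modality axes of a token grid. The tensor layout, the module names (AxisAttention, FactorizedBlock), and the ordering of the three attention passes are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch (not the authors' code): factorized self-attention
# over a (batch, time, space, modality, dim) token grid.
import torch
import torch.nn as nn


class AxisAttention(nn.Module):
    """Pre-norm self-attention applied along a single axis of the grid."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor, axis: int) -> torch.Tensor:
        # Move the attended axis next to the feature dim, then fold all
        # other axes into the batch so attention only mixes tokens that
        # share a position along the remaining axes.
        perm = [a for a in range(4) if a != axis] + [axis]
        x_p = x.permute(*perm, 4)            # (..., axis_len, dim)
        lead, n, d = x_p.shape[:-2], x_p.shape[-2], x_p.shape[-1]
        x_f = x_p.reshape(-1, n, d)
        h = self.norm(x_f)
        h, _ = self.attn(h, h, h, need_weights=False)
        x_f = x_f + h                        # residual connection
        # Restore the original (batch, time, space, modality, dim) layout.
        inv = [0] * 5
        for i, p in enumerate(perm + [4]):
            inv[p] = i
        return x_f.reshape(*lead, n, d).permute(*inv)


class FactorizedBlock(nn.Module):
    """One block attending over space, then time, then modality,
    instead of jointly over all space-time-modality tokens at once."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.space = AxisAttention(dim, num_heads)
        self.time = AxisAttention(dim, num_heads)
        self.modality = AxisAttention(dim, num_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.space(x, axis=2)     # across spatial patches per frame
        x = self.time(x, axis=1)      # across frames per patch location
        x = self.modality(x, axis=3)  # across I-frame/motion/residual/audio
        return x


if __name__ == "__main__":
    block = FactorizedBlock(dim=64)
    # 2 clips, 8 frames, 49 patches, 4 modalities, 64-dim embeddings.
    tokens = torch.randn(2, 8, 49, 4, 64)
    print(block(tokens).shape)        # torch.Size([2, 8, 49, 4, 64])
```

Each pass attends along one axis at a time, so the cost of every attention call grows with the length of that axis alone rather than with the product of all three, which is the efficiency rationale behind factorizing self-attention over a large multi-modal token set.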