Abstract
Temporal action detection (TAD) aims to find action boundaries in untrimmed videos. Video sequences contain multiple actions of widely varying durations, which makes accurate boundary localization challenging. Many existing methods handle such multi-scale variation poorly in complex scenes. Moreover, local information is vital for producing clear boundaries, yet the limited receptive field of traditional convolutions constrains local methods, which also lack frame-level attention. To address these problems, we propose the Parallel Adaptive Graph-Based Network (PAGN), which constructs a multi-branch parallel subnetwork that retains multiple video resolutions and enables information interaction between different levels. The resulting features capture precise location information and rich semantic information simultaneously, making the network more efficient and more adaptable to changes in action scale. Additionally, we propose a novel Complementary Graph Module (CGM) that assigns differential attention to different numbers of neighbors around the current timestamp. Extensive experiments on the challenging ActivityNet-1.3 and THUMOS-14 datasets show that PAGN consistently achieves significantly better performance.
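To make the CGM idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of neighbor-based temporal attention: each timestamp attends over a small and a larger window of temporal neighbors, and the two aggregations are fused residually. The function names (`temporal_graph_attention`, `complementary_graph_module`) and the parameters `k_small` and `k_large` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_graph_attention(feats, k):
    """Aggregate each timestamp's features from up to k temporal
    neighbours on each side, weighted by cosine-similarity attention."""
    T, C = feats.shape
    out = np.zeros_like(feats)
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    for t in range(T):
        lo, hi = max(0, t - k), min(T, t + k + 1)
        idx = [i for i in range(lo, hi) if i != t]
        scores = norm[idx] @ norm[t]      # similarity to the centre frame
        w = softmax(scores)               # differential attention weights
        out[t] = w @ feats[idx]
    return out

def complementary_graph_module(feats, k_small=1, k_large=4):
    """Sketch of a CGM-style fusion: combine a small- and a
    large-neighbourhood aggregation so each timestamp sees both
    fine local context and wider temporal context."""
    local = temporal_graph_attention(feats, k_small)
    wide = temporal_graph_attention(feats, k_large)
    return feats + 0.5 * (local + wide)   # residual fusion

feats = np.random.RandomState(0).randn(16, 8)  # 16 frames, 8-dim features
out = complementary_graph_module(feats)
print(out.shape)  # (16, 8)
```

Using two neighborhood sizes is one plausible reading of "differential attention to different numbers of neighbors": the small window sharpens boundaries while the large window stabilizes semantics.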
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Jiao, Y., Yang, W., Xing, W. (2024). Learning Complementary Instance Representation with Parallel Adaptive Graph-Based Network for Action Detection. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14555. Springer, Cham. https://doi.org/10.1007/978-3-031-53308-2_34
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-53307-5
Online ISBN: 978-3-031-53308-2
eBook Packages: Computer Science, Computer Science (R0)