IoUNet++: Spatial cross‐layer interaction‐based bounding box regression for visual tracking

Published: 16 October 2023

Abstract

Accurate target prediction, especially bounding box estimation, is a key problem in visual tracking. Many recently proposed trackers adopt a refinement module, the IoU predictor, which estimates the bounding box from a high‐level modulation vector. However, because this simple one‐dimensional modulation vector lacks the spatial information that precise box estimation requires, it has limited refinement representation capability. In this study, a novel IoU predictor (IoUNet++) is designed to achieve more accurate bounding box estimation by investigating spatial matching with a spatial cross‐layer interaction model. Rather than using a one‐dimensional modulation vector to represent the candidate bounding box for overlap prediction, the method first extracts and fuses multi‐level features of the target to generate a template kernel with spatial description capability. Then, when aggregating the features of the template and the search region, depthwise separable convolution correlation is adopted to preserve the spatial matching between the target feature and the candidate feature, giving the IoUNet++ network better template representation and feature fusion than the original network. The proposed IoUNet++ module is plug‐and‐play and is applied to a series of strengthened trackers, including DiMP++, SuperDiMP++ and SuperDiMP_AR++, which achieve consistent performance gains. Finally, experiments conducted on six popular tracking benchmarks show that these trackers outperform state‐of‐the‐art trackers with significantly fewer training epochs.
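The depthwise separable convolution correlation mentioned in the abstract can be illustrated with a short sketch (an illustration of the general technique, not the authors' code): each channel of the fused template kernel slides over the matching channel of the search‐region feature map, so the spatial layout of the match is preserved rather than being collapsed into a one‐dimensional modulation vector. The pure‐Python function `depthwise_xcorr` below, with feature maps represented as nested lists, is a minimal, hypothetical example of this per‐channel correlation.

```python
def depthwise_xcorr(search, template):
    """Per-channel (depthwise) cross-correlation.

    search:   C channels, each an Hs x Ws grid (list of lists)
    template: C channels, each an Ht x Wt grid
    Returns C channels of size (Hs-Ht+1) x (Ws-Wt+1): each template
    channel is correlated only with its matching search channel, so
    the spatial response map per channel is kept intact.
    """
    out = []
    for s_ch, t_ch in zip(search, template):
        ht, wt = len(t_ch), len(t_ch[0])
        hs, ws = len(s_ch), len(s_ch[0])
        ch_out = []
        for i in range(hs - ht + 1):
            row = []
            for j in range(ws - wt + 1):
                # sum of elementwise products over one sliding window
                row.append(sum(
                    s_ch[i + di][j + dj] * t_ch[di][dj]
                    for di in range(ht) for dj in range(wt)
                ))
            ch_out.append(row)
        out.append(ch_out)
    return out


# 1-channel example: 3x3 search of ones, 2x2 template of ones;
# every output entry is the sum of a 2x2 window of ones.
print(depthwise_xcorr([[[1] * 3] * 3], [[[1] * 2] * 2]))  # [[[4, 4], [4, 4]]]
```

In practice this operation is implemented efficiently as a grouped 2‐D convolution (with the number of groups equal to the channel count and the template feature acting as the kernel), which is what makes the module cheap enough to plug into existing trackers.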

Graphical Abstract

This paper proposes a novel IoU predictor (IoUNet++) for visual tracking that uses multi‐layer fused spatial template features and depthwise separable convolution correlation to achieve more accurate bounding box estimation. Trackers improved with the IoUNet++ method outperform state‐of‐the‐art trackers on six popular tracking benchmarks.


Published In

IET Computer Vision, Volume 18, Issue 1, February 2024, 189 pages
EISSN: 1751-9640
DOI: 10.1049/cvi2.v18.1
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

        Publisher

        John Wiley & Sons, Inc.

        United States

        Author Tags

        1. computer vision
        2. convolutional neural nets
        3. object tracking

        Qualifiers

        • Research-article
