
A parallel and robust object tracking approach synthesizing adaptive Bayesian learning and improved incremental subspace learning

  • Research Article
  • Published in Frontiers of Computer Science

Abstract

This paper presents a novel tracking algorithm that integrates two complementary trackers. First, we present an improved Bayesian tracker (B-tracker) with an adaptive learning rate. The classification score of the B-tracker reflects tracking reliability, and a low score usually results from a large appearance change. Therefore, when the score is low we decrease the learning rate so that the classifier is updated quickly and the B-tracker can adapt to the variation, and vice versa. In this way, the B-tracker is better suited than its traditional counterpart to handle appearance changes. Second, we present an improved incremental subspace-learning tracker (S-tracker). We propose to calculate the projected coordinates using the maximum a posteriori probability, which yields a more accurate reconstruction error than the traditional subspace-learning tracker. Instead of updating the subspace at every frame, we introduce a stop-strategy to deal with occlusion. Finally, we present an integrated framework (BAST), in which the two trackers run in parallel and return two candidate target states separately. For each candidate state we define a tracking-reliability metric to measure whether the candidate state is reliable, and the reliable candidate state is chosen as the target state at the end of each frame. Experimental results on challenging sequences show that the proposed approach is robust and effective in comparison with state-of-the-art trackers.
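To make the adaptive learning rate and the reliability-based selection concrete, the following is a minimal Python sketch of these two ideas. All names, thresholds, and the particular blending rule (adaptive_learning_rate, select_state, score_thresh, error_thresh, and so on) are illustrative assumptions for exposition, not the implementation used in the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' code): an adaptive learning rate for
# the B-tracker and a reliability-based choice between the two parallel
# trackers. Thresholds, names, and update rules are assumptions.

def adaptive_learning_rate(score, lam_min=0.80, lam_max=0.95, score_ref=0.5):
    """Map the B-tracker classification score to a learning rate lambda.

    A low score indicates a large appearance change, so a smaller lambda is
    returned and the update h <- lam * h_old + (1 - lam) * h_new follows the
    new observation more quickly, and vice versa.
    """
    ratio = float(np.clip(score / score_ref, 0.0, 1.0))
    return lam_min + ratio * (lam_max - lam_min)

def update_classifier(h_old, h_new, score):
    """Blend the old classifier parameters with the newly estimated ones."""
    lam = adaptive_learning_rate(score)
    return lam * h_old + (1.0 - lam) * h_new

def reconstruction_error(patch, mean, basis):
    """Error of a candidate patch reconstructed from the learned subspace."""
    centered = patch - mean
    coeff = basis.T @ centered            # projected coordinates
    return np.linalg.norm(centered - basis @ coeff)

def select_state(state_b, score_b, state_s, error_s,
                 score_thresh=0.3, error_thresh=8.0):
    """Return the candidate state judged reliable at the end of the frame."""
    if score_b >= score_thresh:           # B-tracker result is trustworthy
        return state_b
    if error_s <= error_thresh:           # fall back to the subspace tracker
        return state_s
    return state_b                        # neither passes; keep B-tracker state
```

For instance, with these illustrative defaults a low classification score of 0.1 yields a learning rate of about 0.83, so the updated classifier weights the new observation more heavily than it would after a confident frame with a score near 0.5.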



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61472289) and the National Key Research and Development Project of China (Grant No. 2016YFC0106305).

Author information


Corresponding author

Correspondence to Fazhi He.

Additional information

Kang Li received the MS degree in computer science from Central China Normal University, China in 2012. He is currently a PhD candidate in the School of Computer Science, Wuhan University, China. His research interests include pattern recognition, image processing, and computer vision.

Fazhi He received his PhD degree from Wuhan University of Technology, China. He is currently a professor in the School of Computer Science, Wuhan University, China. His research interests include computer graphics, computer-aided design, image processing, and computer-supported cooperative work.

Haiping Yu received the MS degree from Wuhan University of Science and Technology, China in 2005. She is currently a PhD candidate in the School of Computer Science, Wuhan University, China. Her research interests include medical image processing, computer vision, and social recommendation.

Xiao Chen received the MS degree in computer science from Three Gorges University, China in 2010. He is currently a PhD candidate in the School of Computer Science, Wuhan University, China. His research interests include machine learning, image matting, and computer vision.



About this article


Cite this article

Li, K., He, F., Yu, H. et al. A parallel and robust object tracking approach synthesizing adaptive Bayesian learning and improved incremental subspace learning. Front. Comput. Sci. 13, 1116–1135 (2019). https://doi.org/10.1007/s11704-018-6442-4

