Abstract
In visual target tracking, effective extraction and description of target features is crucial; however, existing methods rely mainly on intensity images and ignore color information. In this paper, an improved compressive tracking algorithm that efficiently fuses adaptive color information is proposed for more robust tracking. First, the Color Names (CN) attribute is used to describe the color appearance of the target; to reduce the computational burden, the two most discriminative color components are extracted adaptively from the 11-channel CN color space by principal component analysis (PCA). To cope with scale changes caused by motion, a scale-invariant normalized rectangular feature is proposed. Because the constant learning rate used to update the naïve Bayes classifier parameters makes the original compressive tracking algorithm less robust, a novel nonlinear parameter-updating strategy based on a double S-shaped function is adopted to adjust the learning rate automatically, yielding higher stability against interference. Finally, the scale-invariant appearance model combined with the adaptive color information is integrated into a particle filter framework to mitigate the negative effects of scale change, occlusion, and illumination change. Experimental results on test sequences demonstrate the strong performance of our method: compared with several well-known tracking algorithms, the average center location error is reduced to 9.88 (versus 23.9 for the second-best tracker), and the average overlap ratio increases by at least 7 percentage points.
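To make the adaptive color step concrete, the following is a minimal illustrative sketch (not the authors' implementation) of extracting the two most discriminative components from an 11-channel CN representation via PCA; the function name and the toy input are assumptions for illustration only.

```python
import numpy as np

def select_top_components(cn_features, n_components=2):
    """Project 11-channel CN color features onto their top principal
    components (illustrative sketch of the PCA selection step).

    cn_features: (n_pixels, 11) array of Color Names responses.
    Returns a (n_pixels, n_components) array of projected features.
    """
    # Center the data, then diagonalize the covariance matrix.
    centered = cn_features - cn_features.mean(axis=0)
    cov = centered.T @ centered / (len(centered) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    # Keep the eigenvectors carrying the most variance.
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return centered @ top

# Toy example: random "CN" features for a 20x20 target patch.
rng = np.random.default_rng(0)
features = rng.random((400, 11))
reduced = select_top_components(features)
print(reduced.shape)  # (400, 2)
```

The projection keeps dimensionality (and hence the classifier's processing burden) low while retaining the color variation that best separates the target from its surroundings.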
Acknowledgements
This work has been supported by the National Natural Science Foundation of China (Grants 61403130 and 61402152), the Foundation of the Key Laboratory of Control Engineering of Henan Province, Henan Polytechnic University (Grant KG2014-17), and the Doctor Foundation of Henan Polytechnic University (Grant B2012-060).
Sun, K., Li, X. & Shi, W. The Fusion of Adaptive Color Attributes for Robust Compressive Tracking. Wireless Pers Commun 102, 879–894 (2018). https://doi.org/10.1007/s11277-017-5111-5