Rapid communication

Weighted multi-view common subspace learning method

Published: 01 November 2021

Highlights

A novel multi-view common subspace learning model is proposed.
The model obtains a unified common subspace by exploiting the discriminant information both within and between views.
The contributions of within-view and between-view discriminant information can be adjusted by a weighting coefficient.
The model adopts the maximum scatter difference criterion as the metric between and within views after projection.

Abstract

How to use multi-view data effectively has become one of the challenging problems in the computer vision community. Existing multi-view learning methods are mainly based on common subspace learning, which aims to explore the discriminative information among multi-view data and to find its underlying common subspace. Most existing multi-view subspace learning methods rely on the within-class and between-class scatter matrices to capture the discriminative information of multiple views. However, these methods merely minimize the within-class distance and maximize the between-class distance in a coarse way, and do not make full use of the intra-view and inter-view information. To address this problem, we propose a weighted common subspace learning method that can effectively adjust the contribution ratio of between-class and within-class information through a weighting parameter, so that an optimized common subspace can be obtained. The maximum scatter difference criterion serves as the metric between and within views after projection. Extensive experiments on public data sets show the superiority of the method.
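The core idea of the criterion described above can be illustrated with a minimal single-view sketch: build the within-class and between-class scatter matrices, form the weighted scatter difference S_b − θ·S_w, and take the leading eigenvectors as the projection. This is a hedged illustration, not the paper's actual multi-view algorithm; the function name `weighted_subspace` and the parameter `theta` are hypothetical.

```python
import numpy as np

def weighted_subspace(X, y, theta=0.5, n_components=2):
    """Sketch of a weighted maximum scatter difference projection.

    Maximizes tr(W^T (S_b - theta * S_w) W), where theta weights the
    within-class term against the between-class term. Unlike ratio-based
    Fisher criteria, the difference form needs no matrix inversion.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_w = np.zeros((d, d))  # within-class scatter
    S_b = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        S_b += len(Xc) * (diff @ diff.T)
    M = S_b - theta * S_w                    # weighted scatter difference
    vals, vecs = np.linalg.eigh(M)           # symmetric eigendecomposition
    order = np.argsort(vals)[::-1]           # largest eigenvalues first
    W = vecs[:, order[:n_components]]        # projection matrix, columns orthonormal
    return X @ W, W

# Toy usage: two well-separated Gaussian classes in 4 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (10, 4)), rng.normal(5.0, 1.0, (10, 4))])
y = np.array([0] * 10 + [1] * 10)
Z, W = weighted_subspace(X, y, theta=0.5, n_components=2)
```

A larger `theta` emphasizes compactness within classes, a smaller one emphasizes separation between class means; the paper's contribution is extending this trade-off across multiple views in a single common subspace.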



Published In

Pattern Recognition Letters, Volume 151, Issue C, November 2021, 362 pages
Publisher: Elsevier Science Inc., United States

      Author Tags

      1. Weighted parameter
      2. Multi-view learning
      3. Common subspace learning
      4. Supervised learning

