Efficient Multi-View K-Means for Image Clustering

Published: 13 December 2023

Abstract

Nowadays, real-world data often come from multiple sources, but most existing multi-view K-Means methods perform poorly on linearly non-separable data and require initializing the cluster centers and calculating the mean, which makes their results unstable and sensitive to outliers. This paper proposes an efficient multi-view K-Means to address these issues. Specifically, our model avoids initializing and computing the cluster centroids of the data. Additionally, our model uses a Butterworth filter function to transform the adjacency matrix into a distance matrix, which makes the model capable of handling linearly inseparable data and insensitive to outliers. To exploit the consistency and complementarity across multiple views, our model constructs a third-order tensor composed of the discrete index matrices of the different views and minimizes the tensor's rank via the tensor Schatten p-norm. Experiments on two artificial datasets verify the superiority of our model on linearly inseparable data, and experiments on several benchmark datasets illustrate its performance.
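The paper specifies the exact filter and norm; the abstract only names them. As a rough, hedged illustration of the two ingredients mentioned above, the Python sketch below maps a similarity (adjacency) matrix to a distance-like matrix with a Butterworth-style transfer function and evaluates a tensor Schatten p-norm over the frontal slices of a stacked indicator tensor in the Fourier domain, following a common t-SVD-based convention (normalization details vary across papers). The function names, cutoff, filter order, and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def butterworth_distance(A, cutoff=0.5, order=2):
    """Map a similarity/adjacency matrix A (entries roughly in [0, 1]) to a
    distance-like matrix with a Butterworth-style transfer function:
    high similarity -> distance near 0, low similarity -> distance near 1.
    The cutoff and order are illustrative, not the paper's settings."""
    A = np.asarray(A, dtype=float)
    return 1.0 / (1.0 + (A / cutoff) ** (2 * order))


def tensor_schatten_p_norm(T, p=0.5):
    """Tensor Schatten p-norm (raised to the p-th power) of a 3-way tensor,
    computed slice-wise in the Fourier domain along the third mode, as in
    common t-SVD-based definitions (normalization conventions differ)."""
    T_f = np.fft.fft(np.asarray(T, dtype=complex), axis=2)
    total = 0.0
    for k in range(T_f.shape[2]):
        s = np.linalg.svd(T_f[:, :, k], compute_uv=False)  # singular values
        total += float(np.sum(s ** p))
    return total


# Toy usage: stack two views' soft cluster-indicator matrices into a tensor.
rng = np.random.default_rng(0)
H1 = rng.random((6, 3))
H1 /= H1.sum(axis=1, keepdims=True)
H2 = rng.random((6, 3))
H2 /= H2.sum(axis=1, keepdims=True)
T = np.stack([H1, H2], axis=2)            # shape (n_samples, n_clusters, n_views)
print(tensor_schatten_p_norm(T, p=0.5))

A = rng.random((6, 6))
A = (A + A.T) / 2.0                       # symmetric similarity matrix
print(butterworth_distance(A))
```

The 1 / (1 + (x/c)^(2n)) form is the squared magnitude response of an order-n Butterworth filter; it is used here only to show how a monotone filter response can turn similarities into bounded, outlier-damped distances.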

Cited By

  • Label Learning Method Based on Tensor Projection, Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 1599–1609, Aug. 2024. DOI: 10.1145/3637528.3671671
  • Dual Consensus Anchor Learning for Fast Multi-View Clustering, IEEE Transactions on Image Processing, vol. 33, pp. 5298–5311, 2024. DOI: 10.1109/TIP.2024.3459651

Published In

IEEE Transactions on Image Processing, Volume 33, 2024 (6889 pages)

Publisher: IEEE Press
