DOI: 10.1145/2939672.2939753

Structural Deep Network Embedding

Published: 13 August 2016

Abstract

Network embedding is an important method for learning low-dimensional representations of vertices in networks that capture and preserve the network structure. Almost all existing network embedding methods adopt shallow models. However, since the underlying network structure is complex, shallow models cannot capture the highly non-linear network structure, resulting in sub-optimal network representations. How to effectively capture the highly non-linear network structure while preserving both the global and local structure therefore remains an open and important problem. To solve this problem, in this paper we propose a Structural Deep Network Embedding method, namely SDNE. More specifically, we first propose a semi-supervised deep model with multiple layers of non-linear functions, which is thereby able to capture the highly non-linear network structure. We then exploit the first-order and second-order proximity jointly to preserve the network structure: the second-order proximity is used by the unsupervised component to capture the global network structure, while the first-order proximity serves as the supervised information in the supervised component to preserve the local network structure. By jointly optimizing them in the semi-supervised deep model, our method preserves both the local and global network structure and is robust to sparse networks. Empirically, we conduct experiments on five real-world networks, including a language network, a citation network, and three social networks. The results show that, compared to the baselines, our method reconstructs the original network significantly better and achieves substantial gains in three applications, i.e., multi-label classification, link prediction, and visualization.
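The joint objective sketched in the abstract — a weighted reconstruction error for second-order proximity plus a Laplacian-style penalty pulling linked vertices together for first-order proximity — can be illustrated as follows. This is a minimal NumPy sketch, not the paper's implementation: the toy adjacency matrix, the stand-in embeddings and reconstructions (which would normally come from the deep autoencoder's encoder and decoder), and the hyperparameters `alpha` and `beta` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy adjacency matrix of a small undirected graph (illustrative only).
S = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

n, d = S.shape[0], 2
Y = rng.normal(size=(n, d))        # stand-in embeddings (normally the encoder output)
S_hat = rng.uniform(size=(n, n))   # stand-in reconstructions (normally the decoder output)

# Second-order proximity: reconstruct each vertex's neighborhood vector,
# penalizing errors on non-zero (observed) entries more heavily (beta > 1).
beta = 5.0
B = np.where(S > 0, beta, 1.0)
loss_2nd = np.sum(((S_hat - S) * B) ** 2)

# First-order proximity: connected vertices should be embedded close together.
loss_1st = sum(S[i, j] * np.sum((Y[i] - Y[j]) ** 2)
               for i in range(n) for j in range(n))

# Equivalent graph-Laplacian form: 2 * tr(Y^T L Y) with L = D - S.
L = np.diag(S.sum(axis=1)) - S
loss_1st_lap = 2.0 * np.trace(Y.T @ L @ Y)

# Joint semi-supervised objective (regularization terms omitted for brevity).
alpha = 0.2
loss = loss_2nd + alpha * loss_1st
print(np.isclose(loss_1st, loss_1st_lap))  # the two first-order forms agree
```

The pairwise sum and the Laplacian trace are algebraically identical for a symmetric adjacency matrix, which is why first-order proximity is often written in either form.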




Published In

KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
August 2016
2176 pages
ISBN:9781450342322
DOI:10.1145/2939672
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. deep learning
  2. network analysis
  3. network embedding

Qualifiers

  • Research-article

Funding Sources

  • National Natural Science Foundation of China
  • National Program on Key Basic Research Project

Conference

KDD '16
Acceptance Rates

KDD '16 paper acceptance rate: 66 of 1,115 submissions (6%).
Overall acceptance rate: 1,133 of 8,635 submissions (13%).



Citations

Cited By

  • (2025) Clustering Enhanced Multiplex Graph Contrastive Representation Learning. IEEE Transactions on Neural Networks and Learning Systems, 36(1):1341-1355. DOI: 10.1109/TNNLS.2023.3334751. Online publication date: Jan 2025.
  • (2025) Anchor-Enhanced Geographical Entity Representation Learning. IEEE Transactions on Neural Networks and Learning Systems, 36(1):924-938. DOI: 10.1109/TNNLS.2023.3329822. Online publication date: Jan 2025.
  • (2025) Distance-Aware Learning for Inductive Link Prediction on Temporal Networks. IEEE Transactions on Neural Networks and Learning Systems, 36(1):978-990. DOI: 10.1109/TNNLS.2023.3328924. Online publication date: Jan 2025.
  • (2025) Data-Centric Graph Learning: A Survey. IEEE Transactions on Big Data, 11(1):1-20. DOI: 10.1109/TBDATA.2024.3489412. Online publication date: Feb 2025.
  • (2025) Research on Optimization of Large-Scale Heterogeneous Combat Network Based on Graph Embedding. IEEE Access, 13:5773-5784. DOI: 10.1109/ACCESS.2025.3526650. Online publication date: 2025.
  • (2025) Continual learning with high-order experience replay for dynamic network embedding. Pattern Recognition, 159:111093. DOI: 10.1016/j.patcog.2024.111093. Online publication date: Mar 2025.
  • (2025) Uncertainty embedding of attribute networks based on multi-view information fusion and multi-order proximity preservation. Neurocomputing, 620:129188. DOI: 10.1016/j.neucom.2024.129188. Online publication date: Mar 2025.
  • (2025) SCGC: Self-supervised contrastive graph clustering. Neurocomputing, 611:128629. DOI: 10.1016/j.neucom.2024.128629. Online publication date: Jan 2025.
  • (2025) Self-supervised dual graph learning for recommendation. Knowledge-Based Systems, 310:112967. DOI: 10.1016/j.knosys.2025.112967. Online publication date: Feb 2025.
  • (2025) A survey of dynamic graph neural networks. Frontiers of Computer Science, 19(6). DOI: 10.1007/s11704-024-3853-2. Online publication date: 1 Jun 2025.
