Abstract
Knowledge graphs are linked data that can be extracted from text as fact triples of the form "entity–relation–entity." Knowledge graph embedding (KGE) models map entities and relations into a continuous vector space under semantic constraints, so that a knowledge graph can be learned from its fact triples. Training a KGE model requires both positive and negative triples, but a knowledge graph stores only positive facts, so negative sampling methods are needed to generate negative samples from the representations of entities and relations. This paper proposes a novel neighborhood knowledge selective adversarial network (NKSGAN), which aggregates neighborhood information into entity representations to generate high-quality negative samples that strengthen the discriminator. Experiments on the widely used benchmark datasets FB15k, FB15k-237, WN18 and WN18RR evaluate the model on the link prediction task. The results show that NKSGAN outperforms the baseline methods, indicating that its negative sampling process is effective at generating high-quality negative samples for boosting KGE models.
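The negative sampling idea underlying this line of work can be illustrated with a minimal sketch. This is not the NKSGAN implementation (whose generator aggregates neighborhood information, omitted here); it only contrasts uniform tail corruption with GAN-style sampling, where a generator scores candidate corruptions and hard negatives are drawn from a softmax over those scores. All names and the toy TransE scorer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding tables: 5 entities, 2 relations, dimension 4.
n_entities, dim = 5, 4
entity_emb = rng.normal(size=(n_entities, dim))
relation_emb = rng.normal(size=(2, dim))

def transe_score(h, r, t):
    """TransE plausibility: larger (less negative) means more plausible."""
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

def sample_negative(h, r, t, generator_scores=None):
    """Corrupt the tail of the positive triple (h, r, t).

    With no generator, pick a replacement tail uniformly. With a
    generator, draw from a softmax over its scores, so plausible
    (hard) negatives are selected more often -- the adversarial idea.
    """
    candidates = [e for e in range(n_entities) if e != t]
    if generator_scores is None:
        return (h, r, rng.choice(candidates))  # uniform baseline
    logits = np.array([generator_scores(h, r, c) for c in candidates])
    probs = np.exp(logits - logits.max())      # numerically stable softmax
    probs /= probs.sum()
    return (h, r, rng.choice(candidates, p=probs))

# Uniform vs. generator-guided negatives for one toy positive triple.
pos = (0, 1, 3)
print(sample_negative(*pos))
print(sample_negative(*pos, generator_scores=transe_score))
```

In a full adversarial setup, the generator's sampling probabilities would themselves be trained (e.g. with policy gradient), while the discriminator is the KGE model being fitted on the resulting positive/negative pairs.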
Change history
25 May 2020
A Correction to this paper has been published: https://doi.org/10.1007/s00521-020-05040-0
Acknowledgements
This work was supported by the National Natural Science Foundation of China (No. 61772146), the Guangdong Natural Science Foundation (Nos. 2016A030313441, 2018A030310051), and the Guangzhou Science Technology and Innovation Commission (No. 201803010063).
Ethics declarations
Conflict of interest
The authors confirm that the content of this article presents no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Liu, H., Hu, K., Wang, FL. et al. Aggregating neighborhood information for negative sampling for knowledge graph embedding. Neural Comput & Applic 32, 17637–17653 (2020). https://doi.org/10.1007/s00521-020-04940-5