Abstract
In this paper, we focus on temporal-aware knowledge graph (TKG) completion, which aims to automatically predict missing links in a TKG by making inferences from the existing temporal facts and the temporal information among them. Existing methods for this task mainly model the temporal ordering of relations in the temporal facts to learn a low-dimensional vector space for the TKG. However, these models either ignore the evolving strength of temporally ordered relations within a relational chain, or give little consideration to revising the candidate predictions produced by the TKG embeddings. To address these two limitations, we propose a novel two-phase framework, TKGFrame, to boost the final performance on this task. Specifically, TKGFrame employs two major models. The first is a relation evolving enhanced model that strengthens the evolving-strength representations of pairwise relations belonging to the same relational chain, yielding more accurate TKG embeddings. The second is a refinement model that revises the candidate predictions from the embeddings and further improves the prediction of missing temporal facts by solving a constrained optimization problem. Experiments on three popular datasets for entity prediction and relation prediction demonstrate that TKGFrame achieves more accurate predictions than several state-of-the-art baselines.
Notes
- 1.
KG completion, also known as link prediction, aims to automatically predict missing links between entities based on the known facts in the KG.
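Translation-based embedding methods such as TransE (cited in the references) instantiate this idea by scoring a candidate triple with the distance \(\Vert h + r - t\Vert \). The sketch below is purely illustrative, using toy embeddings and function names of our own choosing, not the paper's actual model.

```python
import numpy as np

def transe_score(h, r, t):
    # TransE-style plausibility: a lower ||h + r - t||_1 means a more
    # plausible link between head h and tail t under relation r.
    return float(np.sum(np.abs(h + r - t)))

def rank_tails(h, r, entity_embeddings):
    # Rank candidate tail entities for the query (h, r, ?), best first.
    return sorted(entity_embeddings,
                  key=lambda e: transe_score(h, r, entity_embeddings[e]))
```

Link prediction then amounts to returning the top-ranked candidates for an incomplete triple.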
- 2.
The experimental details and source code of the model are publicly available at https://github.com/zjs123/TKGComplt.
- 3.
A relational chain can be constructed by connecting temporal relations that share the same head entity, ordered by their timestamps.
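Following this definition, relational chains can be assembled by grouping temporal facts on their head entity and sorting each group by timestamp. The sketch below assumes facts are (head, relation, tail, timestamp) tuples, which is an illustrative format rather than the paper's exact data layout.

```python
from collections import defaultdict

def build_relational_chains(temporal_facts):
    # Group facts by head entity; each fact is (head, relation, tail, timestamp).
    chains = defaultdict(list)
    for head, relation, tail, ts in temporal_facts:
        chains[head].append((relation, tail, ts))
    # Within each chain, order the relations by their timestamps.
    for head in chains:
        chains[head].sort(key=lambda fact: fact[2])
    return dict(chains)
```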
- 4.
- 5.
The code for TransE and TransH is from https://github.com/thunlp/OpenKE.
- 6.
The code for TTransE is from https://github.com/INK-USC/RE-Net/tree/master/baselines.
- 7.
The code for TA-TransE is from https://github.com/nle-ml/mmkb.
- 8.
The code for HyTE is from https://github.com/malllabiisc/HyTE.
- 9.
We train TransE and TransH on all datasets with embedding dimension \(\textit{d}\) = 100, margin \(\gamma \) = 1.0, learning rate \(l = 10^{-3}\), and the \(\textit{l}_{1}\)-norm. The configuration of TAE-TransE and TAE-TransH is embedding dimension \(\textit{d}\) = 100, margin \(\gamma _{1}\) = \(\gamma _{2}\) = 4, learning rate \(l= 10^{-4}\), regularization hyperparameter \(t = 10^{-3}\), and the \(\textit{l}_{1}\)-norm for the YAGO11k and Wikidata12k datasets, and \(\textit{d}\) = 100, \(\gamma _{1}\) = \(\gamma _{2}\) = 2, \(l = 10^{-5}\), \(t = 10^{-3}\), with the \(\textit{l}_{1}\)-norm for Wikidata11k. We train TA-TransE and TTransE with the same parameter settings as introduced in [11]. For the TA-TransE model, the configuration is embedding dimension \(\textit{d}\) = 100, margin \(\gamma \) = 1, batch size bs = 512, learning rate \(l = 10^{-4}\), and the \(\textit{l}_{1}\)-norm for all datasets. For HyTE, we use the original HyTE parameter settings: embedding dimension \(\textit{d}\) = 128, margin \(\gamma \) = 10, learning rate \(l = 10^{-5}\), negative sampling ratio n = 5, and the \(\textit{l}_{1}\)-norm for all datasets.
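For readability, the baseline settings above can be gathered into a single configuration table. The dictionary below simply restates the note; the key names (dim, margin, lr, etc.) are illustrative, not identifiers from the released code.

```python
# Baseline hyperparameters as reported in the note above.
# Key names are illustrative; values restate the reported settings.
BASELINE_CONFIGS = {
    "TransE/TransH (all datasets)": {
        "dim": 100, "margin": 1.0, "lr": 1e-3, "norm": "l1"},
    "TAE-TransE/TAE-TransH (YAGO11k, Wikidata12k)": {
        "dim": 100, "margin1": 4, "margin2": 4, "lr": 1e-4,
        "reg_t": 1e-3, "norm": "l1"},
    "TAE-TransE/TAE-TransH (Wikidata11k)": {
        "dim": 100, "margin1": 2, "margin2": 2, "lr": 1e-5,
        "reg_t": 1e-3, "norm": "l1"},
    "TA-TransE (all datasets)": {
        "dim": 100, "margin": 1, "batch_size": 512, "lr": 1e-4,
        "norm": "l1"},
    "HyTE (all datasets)": {
        "dim": 128, "margin": 10, "lr": 1e-5, "neg_ratio": 5,
        "norm": "l1"},
}
```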
References
Barbosa, D., Wang, H., Yu, C.: Shallow information extraction for the knowledge web. In: ICDE, pp. 1264–1267 (2013)
Bollacker, K., Evans, C., Paritosh, P., Sturge, T., Taylor, J.: Freebase: a collaboratively created graph database for structuring human knowledge. In: SIGMOD, pp. 1247–1250 (2008)
Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: NIPS, pp. 2787–2795 (2013)
Clarke, J., Lapata, M.: Global inference for sentence compression: an integer linear programming approach. J. Artif. Intell. Res. 31, 399–429 (2008)
Dasgupta, S.S., Ray, S.N., Talukdar, P.: Hyte: hyperplane-based temporally aware knowledge graph embedding. In: EMNLP, pp. 2001–2011 (2018)
Dong, L., Wei, F., Zhou, M., Xu, K.: Question answering over freebase with multi-column convolutional neural networks. In: ACL-IJCNLP (vol. 1: Long Papers), pp. 260–269 (2015)
Erxleben, F., Günther, M., Krötzsch, M., Mendez, J., Vrandečić, D.: Introducing wikidata to the linked data web. In: Mika, P., et al. (eds.) ISWC 2014. LNCS, vol. 8796, pp. 50–65. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11964-9_4
García-Durán, A., Dumančić, S., Niepert, M.: Learning sequence encoders for temporal knowledge graph completion (2018). https://arxiv.org/abs/1809.03202
Jiang, T., et al.: Towards time-aware knowledge graph completion. In: COLING, pp. 1715–1724 (2016)
Jiang, T., et al.: Encoding temporal information for time-aware link prediction. In: EMNLP, pp. 2350–2354 (2016)
Jin, W., et al.: Recurrent event network: global structure inference over temporal knowledge graph (2019). https://arxiv.org/abs/1904.05530
Leblay, J., Chekol, M.W.: Deriving validity time in knowledge graph. In: WWW, pp. 1771–1776 (2018)
Lehmann, J., et al.: DBpedia-a large-scale, multilingual knowledge base extracted from Wikipedia. Semant. Web 6(2), 167–195 (2015)
Lin, Y., Liu, Z., Sun, M., Liu, Y., Zhu, X.: Learning entity and relation embeddings for knowledge graph completion. In: AAAI, pp. 2181–2187 (2015)
Mahdisoltani, F., Biega, J., Suchanek, F.M.: Yago3: a knowledge base from multilingual wikipedias. In: CIDR (2015)
Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multi-relational data. In: ICML, vol. 11, pp. 809–816 (2011)
Suchanek, F.M., Kasneci, G., Weikum, G.: Yago: a core of semantic knowledge. In: WWW, pp. 697–706 (2007)
Sun, Z., Hu, W., Zhang, Q., Qu, Y.: Bootstrapping entity alignment with knowledge graph embedding. In: IJCAI, pp. 4396–4402 (2018)
Trivedi, R., Dai, H., Wang, Y., Song, L.: Know-evolve: deep temporal reasoning for dynamic knowledge graphs. In: ICML, vol. 70, pp. 3462–3471 (2017)
Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph embedding by translating on hyperplanes. In: AAAI, pp. 1112–1119 (2014)
Xiong, C., Callan, J.: Query expansion with freebase. In: ICTIR, pp. 111–120 (2015)
Zhou, X., Zhu, Q., Liu, P., Guo, L.: Learning knowledge embeddings by combining limit-based scoring loss. In: CIKM, pp. 1009–1018 (2017)
Acknowledgments
This work was supported by Major Scientific and Technological Special Project of Guizhou Province (No. 20183002), Sichuan Science and Technology Program (No. 2020YFS0057, No. 2020YJ0038 and No. 2019YFG0535), Fundamental Research Funds for the Central Universities (No. ZYGX2019Z015) and Dongguan Songshan Lake Introduction Program of Leading Innovative and Entrepreneurial Talents. Yongpan Sheng’s research was supported by the National Key Research and Development Project (No. 2018YFB2101200).
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Zhang, J., Sheng, Y., Wang, Z., Shao, J. (2020). TKGFrame: A Two-Phase Framework for Temporal-Aware Knowledge Graph Completion. In: Wang, X., Zhang, R., Lee, YK., Sun, L., Moon, YS. (eds) Web and Big Data. APWeb-WAIM 2020. Lecture Notes in Computer Science(), vol 12317. Springer, Cham. https://doi.org/10.1007/978-3-030-60259-8_16
Print ISBN: 978-3-030-60258-1
Online ISBN: 978-3-030-60259-8