Abstract
Most real-world data arrive as asynchronous event sequences, and over the last decades point processes have accordingly been applied to social networks, electronic medical records, and financial transactions. Early work centered on the Hawkes process and its variants, which model both the self-triggering and mutual-triggering patterns among different events in complex sequences in a clear, quantitative way. Later, with the advances of neural networks, a succession of neural Hawkes processes was proposed, and they gradually became a research hotspot. The transformer Hawkes process (THP) brought a substantial performance improvement and set off a new wave of transformer-based neural Hawkes processes. However, THP does not make full use of the occurrence time and event type information in asynchronous event sequences: it simply adds the event-type encoding and the temporal positional encoding to the source encoding. Moreover, a learner built from a single transformer suffers an unavoidable learning bias. To mitigate these problems, we propose the tri-transformer Hawkes process (Tri-THP) model, in which event and time information are added to the dot-product attention as auxiliary terms, forming a new multi-head attention. The effectiveness of Tri-THP is demonstrated by a series of well-designed experiments on both real-world and synthetic data.
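For context, the classical multivariate Hawkes process expresses these triggering patterns directly in its conditional intensity. A standard textbook formulation with the widely used exponential kernel (not the paper's specific parametrization) is

\lambda_k(t) = \mu_k + \sum_{t_i < t} \alpha_{k, k_i} \, e^{-\beta_{k, k_i}(t - t_i)},

where \mu_k is the base intensity of event type k, and \alpha_{k, k_i} scales how strongly a past event of type k_i at time t_i excites type k: self-triggering when k_i = k, mutual triggering otherwise.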
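To make the modified attention concrete, here is a minimal single-head sketch in PyTorch. All names (augmented_attention, event_emb, time_enc, the weight matrices) are illustrative assumptions; the way the auxiliary event and time terms enter the scores follows the abstract's description rather than the paper's precise equations.

import torch
import torch.nn.functional as F

def augmented_attention(x, event_emb, time_enc, w_q, w_k, w_v):
    # Hypothetical sketch: scaled dot-product attention whose scores
    # receive additive event-type and temporal auxiliary terms.
    d_k = w_q.shape[-1]
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    # Auxiliary information: event and time similarities added to the
    # dot-product scores before the softmax (assumed form).
    scores = scores + event_emb @ event_emb.transpose(-2, -1)
    scores = scores + time_enc @ time_enc.transpose(-2, -1)
    # Causal mask: an event may only attend to its own history.
    L = x.shape[-2]
    mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float('-inf'))
    return F.softmax(scores, dim=-1) @ v

In a full multi-head version each head would carry its own projections; the model's name and its critique of a single-transformer learner suggest an ensemble of three such modules, whose combination damps the bias of any one learner.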
This work was supported by the Science Foundation of China University of Petroleum Beijing (No. 2462020YXZZ023).
References
Bacry, E., Mastromatteo, I., Muzy, J.: Hawkes processes in finance. Mark. Microstruct. Liq. 1, 1550005 (2015)
Reynaud-Bouret, P., Schbath, S.: Adaptive estimation for Hawkes processes; application to genome analysis. Ann. Stat. 38, 2781–2822 (2010)
Ogata, Y.: Space-time point-process models for earthquake occurrences. Ann. Inst. Stat. Math. 50, 379–402 (1998)
Wang, L., Zhang, W., He, X., Zha, H.: Supervised reinforcement learning with recurrent neural network for dynamic treatment recommendation. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2447–2456 (2018)
Zhou, K., Zha, H., Song, L.: Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In: Artificial Intelligence and Statistics, pp. 641–649. PMLR (2013)
Daley, D.J., Vere-Jones, D.: An Introduction to the Theory of Point Processes. Springer Series in Statistics, Springer, New York (1988)
Hawkes, A.: Spectra of some self-exciting and mutually exciting point processes. Biometrika 58, 83–90 (1971)
Ozaki, T.: Maximum likelihood estimation of Hawkes’ self-exciting point processes. Ann. Inst. Stat. Math. 31(1), 145–155 (1979). https://doi.org/10.1007/BF02480272
Veen, A., Schoenberg, F.P.: Estimation of space-time branching process models in seismology using an EM-type algorithm. J. Am. Stat. Assoc. 103, 614–624 (2008)
Lewis, E., Mohler, G.: A nonparametric EM algorithm for multiscale Hawkes processes. J. Nonparametr. Stat. 1, 1–20 (2011)
Marsan, D., Lengliné, O.: Extending earthquakes’ reach through cascading. Science 319, 1076–1079 (2008)
Xu, H., Farajtabar, M., Zha, H.: Learning Granger causality for Hawkes processes. ArXiv abs/1602.04511 (2016)
Graves, A.: Supervised Sequence Labelling with Recurrent Neural Networks. Studies in Computational Intelligence, vol. 385. Springer, Heidelberg (2012)
Du, N., Dai, H., Trivedi, R.S., Upadhyay, U., Gomez-Rodriguez, M., Song, L.: Recurrent marked temporal point processes: embedding event history to vector. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2016)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9, 1735–1780 (1997)
Xiao, S., Yan, J., Yang, X., Zha, H., Chu, S.: Modeling the intensity function of point process via recurrent neural networks. In: AAAI (2017)
Mei, H., Eisner, J.: The neural Hawkes process: a neurally self-modulating multivariate point process. In: NIPS (2017)
Zhang, Q., Lipani, A., Kirnap, O., Yilmaz, E.: Self-attentive Hawkes process. In: ICML, pp. 11183–11193 (2020)
Zuo, S., Jiang, H., Li, Z., Zhao, T., Zha, H.: Transformer Hawkes process. In: ICML, pp. 11692–11702 (2020)
Breiman, L.: Bagging predictors. Mach. Learn. 24, 123–140 (1996)
Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: attentive language models beyond a fixed-length context. In: ACL (2019)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012)
Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. CoRR abs/1412.6980 (2015)
Robert, C., Casella, G.: Monte Carlo Statistical Methods. Springer Texts in Statistics, Springer, Heidelberg (2004). https://doi.org/10.1007/978-1-4757-4145-2
Neumaier, A.: Introduction to Numerical Analysis. Cambridge University Press, Cambridge (2001)
Johnson, A.E.W., et al.: MIMIC-III, a freely accessible critical care database. Sci. Data 3, 1–9 (2016)
Leskovec, J., Krevl, A.: SNAP datasets: Stanford large network dataset collection (2014)
Zhao, Q., Erdogdu, M., He, H.Y., Rajaraman, A., Leskovec, J.: SEISMIC: a self-exciting point process model for predicting tweet popularity. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2015)
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Song, Z.-y., Liu, J.-w., Zhang, L.-n., Han, Y.-n. (2021). Tri-Transformer Hawkes Process: Three Heads are Better Than One. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds.) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol. 13108. Springer, Cham. https://doi.org/10.1007/978-3-030-92185-9_32
DOI: https://doi.org/10.1007/978-3-030-92185-9_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-92184-2
Online ISBN: 978-3-030-92185-9