Document-Level Causal Event Extraction Enhanced by Temporal Relations Using Dual-Channel Neural Network
Abstract
1. Introduction
- Document-level event temporal and causal structure modeling: We employ an event–event temporal relation extraction channel (ETC) to assist the event–event causal relation extraction channel (ECC) in modeling document-level causal events. A single-channel model captures only explicit causal relations and struggles to incorporate temporal information; the dual-channel model instead models temporal and causal relations separately in the ETC and ECC and then integrates them, improving robustness on implicit causal relations. By explicitly incorporating temporal order and using a Gaussian kernel function to adjust causal edge weights (see the first sketch after this list), the model keeps its predicted causal relations consistent with temporal logic.
- ECG modeling based on the Association Link Network (ALN) and graph-based weight assignment: In the ECC, the ALN constructs a richer event association network, enabling causal inference across sentences and paragraphs. The ALN provides a framework for building event association networks, allowing our model to flexibly combine event semantics, temporal order, and causal features and thereby improving the accuracy of causal relation extraction. To determine the direction of edges in the initial event graph, an improved Gaussian kernel function integrates features from both channels, leveraging the temporal information of events to construct a hierarchical and sequential causal association network.
- Dual-level GCN: We adopt a dual-level GCN architecture in the ECC. The first level models and learns explicit causal event pairs using a conventional GCN structure. The second level incorporates Kullback–Leibler (KL) divergence to compute the directional dependency of cross-sentence causal events (see the second sketch after this list), improving the ability to capture document-level implicit causal events. This structure fully exploits the temporal relations between events.
- Our model demonstrates outstanding performance on the MAVEN-ERE and EventStoryLine datasets, achieving significant improvements in extracting relational facts that were unseen in the training set.
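To make the edge-weight adjustment concrete, the sketch below shows one plausible form of the Gaussian-kernel adjustment over predicted event times: semantically similar event pairs that are close in time receive larger causal edge weights, and edges whose direction contradicts the temporal order are penalized. The function interface, the kernel width `sigma`, and the penalty factor `temporal_penalty` are illustrative assumptions, not the exact formulation used in the model.

```python
import torch

def temporal_edge_weights(sim: torch.Tensor,
                          event_times: torch.Tensor,
                          sigma: float = 1.0,
                          temporal_penalty: float = 0.5) -> torch.Tensor:
    """Adjust a semantic-similarity adjacency matrix with temporal information.

    sim:         [N, N] semantic similarity between event pairs (from the ECC).
    event_times: [N] predicted temporal positions of the events (from the ETC).
    Returns an [N, N] weight matrix for directed edges (i -> j), i.e. i causes j.
    """
    # Temporal distance between every event pair: delta[i, j] = t_i - t_j.
    delta = event_times.unsqueeze(1) - event_times.unsqueeze(0)
    # Gaussian kernel: temporally close events get larger weights.
    kernel = torch.exp(-(delta ** 2) / (2 * sigma ** 2))
    # Temporal adjustment: a cause should not occur after its effect,
    # so edges with t_i > t_j are down-weighted.
    direction = torch.where(delta <= 0,
                            torch.ones_like(delta),
                            torch.full_like(delta, temporal_penalty))
    return sim * kernel * direction

# Toy usage: three events, event 0 earliest.
sim = torch.rand(3, 3)
times = torch.tensor([0.0, 1.0, 2.0])
adj = temporal_edge_weights(sim, times)
```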
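The second sketch illustrates one way KL divergence can score edge direction for cross-sentence event pairs: each event representation is converted into a probability distribution, and because KL divergence is asymmetric, the two directions of a candidate edge receive different scores. The softmax construction and the final decision rule are assumptions for illustration rather than the model's exact formulation.

```python
import torch
import torch.nn.functional as F

def directional_scores(event_reprs: torch.Tensor) -> torch.Tensor:
    """event_reprs: [N, d] contextual event embeddings (e.g. from the first GCN level).
    Returns an [N, N] matrix D with D[i, j] = KL(p_i || p_j)."""
    log_p = F.log_softmax(event_reprs, dim=-1)   # treat each event as a distribution
    p = log_p.exp()
    # KL(p_i || p_j) = sum_k p_i[k] * (log p_i[k] - log p_j[k]), computed pairwise.
    kl = (p.unsqueeze(1) * (log_p.unsqueeze(1) - log_p.unsqueeze(0))).sum(-1)
    return kl

reprs = torch.randn(4, 16)                 # four events, 16-dim representations
D = directional_scores(reprs)
# One possible decision rule: keep the direction with the smaller divergence.
direction_i_to_j = D < D.transpose(0, 1)
```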
2. Related Work
2.1. Document-Level Relation Extraction
2.2. Event–Event Causal Relation Extraction
2.3. Event–Event Temporal Relation Extraction
3. Methodology
3.1. Feature Encoder
3.2. Event–Event Causal Relation Extraction Channel
3.2.1. Association Link Network
3.2.2. Dual-Level GCN
3.3. Event–Event Temporal Relation Extraction Channel
3.3.1. Multi-Tasking Auxiliary Structure
3.3.2. Adjustment of Node Weight
4. Experiment
4.1. Datasets
4.2. Baseline Models
- BiLSTM+CRF: This is a fundamental information extraction model that uses a BiLSTM to capture sequential and contextual information in the text and a CRF to decode the resulting label sequence (a minimal sketch of this baseline follows the list).
- CNN+BiLSTM+CRF [43]: The model employs CNN to extract multiple n-gram semantic features and utilizes a BiLSTM+CRF layer to capture dependencies between the features.
- BERT+SCITE [41]: This method integrates multiple prior knowledge sources into the embedding representations to generate hybrid embeddings and employs a multi-head attention mechanism to learn dependencies between causal words. Instead of using hybrid embeddings, we utilize BERT to encode sentences.
- Hierarchical dependency tree (HDT) [61]: A model based on a hierarchical dependency tree (HDT), proposed to efficiently capture global semantics and contextual information across sentences in a document. The document is divided into four layers (document, sentence, context, and entity) to capture multi-granular features. This layered structure enables the model to better capture and exploit the structural causal relations between events.
- CSNN [62]: The model integrates CNN to capture local features, a self-attention mechanism to enhance semantic associations between features, and BiLSTM to handle long dependencies in causal relations. CRF is employed to predict word-level labels, ensuring global consistency in causal relation annotations.
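As a point of reference for the sequence-labeling baselines above, here is a minimal BiLSTM+CRF tagger written against PyTorch and the third-party `pytorch-crf` package; the layer sizes, tag set, and the choice of that package are assumptions for illustration, since the baseline implementations are not reproduced here.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size: int, num_tags: int,
                 embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        feats = self.emissions(self.lstm(self.embed(tokens))[0])
        return -self.crf(feats, tags, mask=mask, reduction='mean')

    def decode(self, tokens, mask):
        feats = self.emissions(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(feats, mask=mask)

# Toy usage: batch of 2 sentences, 10 tokens each, 5 tag types.
model = BiLSTMCRF(vocab_size=5000, num_tags=5)
tokens = torch.randint(1, 5000, (2, 10))
tags = torch.randint(0, 5, (2, 10))
mask = torch.ones(2, 10, dtype=torch.bool)
loss = model.loss(tokens, tags, mask)
predicted_tag_sequences = model.decode(tokens, mask)
```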
4.3. Experimental Settings
4.4. Experimental Results and Analysis
4.5. Ablation Experiment
- w/o BERT: We removed the BERT embedding layer and replaced it with randomly initialized embeddings while keeping the positional encoding unchanged.
- w/o event–event temporal relation extraction channel (ETC): We removed the temporal relation extraction channel while keeping the ECC unchanged.
- w/o event–event causal relation extraction channel (ECC): We removed the causal relation extraction channel while keeping the temporal relation extraction channel unchanged.
- w/o positional encoding: We removed the positional encoding method and directly constructed the graph structure using the original dataset after passing it through the BERT embedding layer.
- w/o dual-level GCN: For the dual-level GCN, we separately removed the first level of GCN and the second level of GCN to evaluate the individual contribution of each level.
- w/o adjustment of node weight: We removed the process of using the Gaussian kernel function to assist in graph structure construction and directly passed the output of ETC to the dual-level GCN.
- w/o Gaussian kernel function: We used semantic similarity to compute causal relations while keeping all other components unchanged.
- w/o temporal adjustment term: We only used the standard Gaussian kernel function, removing the temporal adjustment term while keeping all other components unchanged.
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| ALN | Association Link Network |
| BERT | Bidirectional encoder representations from transformers |
| BiLSTM | Bidirectional Long Short-Term Memory network |
| BRAN | Bi-affine Relation Attention Network |
| CEA | Cause–effect association |
| CNN | Convolutional Neural Network |
| CRF | Conditional random field |
| CSNN | Cascaded multi-structure neural network |
| ECC | Event–event causal relation extraction channel |
| ECE | Event causality extraction |
| ECG | Event Causality Graph |
| ECI | Event causality identification |
| ECRE | Event–event causal relation extraction |
| EoG | Edge-oriented graph |
| ERE | Event relation extraction |
| ETC | Event–event temporal relation extraction channel |
| ETRE | Event–event temporal relation extraction |
| GCN | Graph Convolutional Network |
| GNN | Graph Neural Network |
| HDT | Hierarchical dependency tree |
| IE | Information extraction |
| KEC | Knowledge Enhancement Channel |
| KL | Kullback–Leibler |
| LSTM | Long Short-Term Memory network |
| MCNN | Multi-column Convolutional Neural Network |
| MLM | Masked Language Model |
| NLP | Natural language processing |
| PE | Positional encoding |
| R-GCN | Relational Graph Convolutional Network |
| RNN | Recurrent Neural Network |
| SCITE | Self-attentive BiLSTM-CRF with transferred embeddings |
| TEC | Textual Enhancement Channel |
| TGNN | Temporal Graph Neural Network |
References
- Wang, Y.; Liang, D.; Charlin, L.; Blei, D.M. Causal inference for recommender systems. In Proceedings of the 14th ACM Conference on Recommender Systems, New York, NY, USA, 22–26 September 2020; pp. 426–431. [Google Scholar] [CrossRef]
- van Geloven, N.; Swanson, S.A.; Ramspek, C.L.; Luijken, K.; van Diepen, M.; Morris, T.P.; Groenwold, R.H.H.; van Houwelingen, H.C.; Putter, H.; le Cessie, S. Prediction meets causal inference: The role of treatment in clinical prediction models. Eur. J. Epidemiol. 2020, 35, 619–630. [Google Scholar] [CrossRef] [PubMed]
- Yao, L.; Chu, Z.; Li, S.; Li, Y.; Gao, J.; Zhang, A. A survey on causal inference. ACM Trans. Knowl. Discov. Data 2021, 15, 74. [Google Scholar] [CrossRef]
- Wang, A.; Cho, K.; Lewis, M. Asking and answering questions to evaluate the factual consistency of summaries. arXiv 2020, arXiv:2004.04228. [Google Scholar]
- Guo, J.; Fan, Y.; Pang, L.; Yang, L.; Ai, Q.; Zamani, H.; Wu, C.; Croft, W.B.; Cheng, X. A deep look into neural ranking models for information retrieval. Inf. Process. Manag. 2019, 57, 102067. [Google Scholar] [CrossRef]
- Dong, C.; Li, Y.; Gong, H.; Chen, M.; Li, J.; Shen, Y.; Yang, M. A survey of natural language generation. ACM Comput. Surv. 2022, 55, 173. [Google Scholar] [CrossRef]
- Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 1969, 37, 424–438. [Google Scholar] [CrossRef]
- Khoo, C.S.; Kornfilt, J.; Oddy, R.N.; Myaeng, S.H. Automatic extraction of cause-effect information from newspaper text without knowledge-based inferencing. Lit. Linguistic Comput. 1998, 13, 177–186. [Google Scholar] [CrossRef]
- Blanco, E.; Castell, N.; Moldovan, D. Causal relation extraction. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco, 28–30 May 2008. [Google Scholar]
- Wang, Z.; Wang, H.; Luo, X.; Gao, J. Back to prior knowledge: Joint event-event causal relation extraction via convolutional semantic infusion. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Delhi, India, 11–14 May 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 346–357. [Google Scholar] [CrossRef]
- Schuster, M.; Paliwal, K.K. Bidirectional Recurrent Neural Networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681. [Google Scholar] [CrossRef]
- Collobert, R.; Weston, J.; Bottou, L.; Karlen, M.; Kavukcuoglu, K.; Kuksa, P. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 2011, 12, 2493–2537. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All You Need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
- Zhao, L.; Sun, Q.; Ye, J.; Chen, F.; Lu, C.-T.; Ramakrishnan, N. Multi-task learning for spatio-temporal event forecasting. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; ACM: New York, NY, USA, 2015; pp. 1503–1512. [Google Scholar] [CrossRef]
- Wongsuphasawat, K.; Gotz, D. Exploring flow, factors, and outcomes of temporal event sequences with the outflow visualization. IEEE Trans. Vis. Comput. Graph. 2012, 18, 2659–2668. [Google Scholar] [CrossRef]
- Sinha, C.; Sinha, V.D.S.; Zinken, J.; Sampaio, W. When time is not space: The social and linguistic construction of time intervals and temporal event relations in an Amazonian culture. Lang. Cogn. 2011, 3, 137–169. [Google Scholar] [CrossRef]
- O'Gorman, T.; Wright-Bettner, K.; Palmer, M. Richer event description: Integrating event coreference with temporal, causal and bridging annotation. In Proceedings of the CNS, Jeju, Republic of Korea, 2–7 July 2016; pp. 47–56. [Google Scholar] [CrossRef]
- Huang, Y.; Zhang, L.; Zhang, P. A framework for mining sequential patterns from spatio-temporal event data sets. IEEE Trans. Knowl. Data Eng. 2008, 20, 433–448. [Google Scholar] [CrossRef]
- Boogaart, R. Aspect and Temporal Ordering. A Contrastive Analysis of Dutch and English; LOT: Warsaw, Poland, 1999. [Google Scholar]
- Mani, I.; Schiffman, B.; Zhang, J. Inferring temporal ordering of events in news. In Companion Volume of the Proceedings of HLT-NAACL 2003-Short Papers; ACL: Stroudsburg, PA, USA, 2003; pp. 55–57. [Google Scholar]
- Haffar, N.; Ayadi, R.; Hkiri, E.; Zrigui, M. Temporal ordering of events via deep neural networks. In Document Analysis and Recognition, Proceedings of the ICDAR 2021: 16th International Conference, Lausanne, Switzerland, 5–10 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; Part II 16; pp. 762–777. [Google Scholar] [CrossRef]
- Zhou, H.; Zheng, D.; Nisa, I.; Ioannidis, V.; Song, X.; Karypis, G. TGL: A general framework for temporal GNN training on billion-scale graphs. arXiv 2022, arXiv:2203.14883. [Google Scholar] [CrossRef]
- Do, Q.; Chan, Y.S.; Roth, D. Minimally supervised event causality identification. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Edinburgh, UK, 27–29 July 2011; pp. 294–303. [Google Scholar]
- Kim, Y. Convolutional neural networks for sentence classification. In Proceedings of the EMNLP, Doha, Qatar, 25–29 October 2014; pp. 1746–1751. [Google Scholar] [CrossRef]
- Hu, Z.; Li, Z.; Jin, X.; Bai, L.; Guan, S.; Guo, J.; Cheng, X. Semantic structure enhanced event causality identification. arXiv 2023, arXiv:2305.12792. [Google Scholar]
- Javed, U.; Ijaz, K.; Jawad, M.; Khosa, I.; Ansari, E.A.; Zaidi, K.S.; Rafiq, M.N.; Shabbir, N. A novel short receptive field based dilated causal convolutional network integrated with Bidirectional LSTM for short-term load forecasting. Expert Syst. Appl. 2022, 205, 117689. [Google Scholar] [CrossRef]
- Christopoulou, F.; Miwa, M.; Ananiadou, S. Connecting the dots: Document-level neural relation extraction with edge-oriented graphs. arXiv 2019, arXiv:1909.00228. [Google Scholar]
- Guo, Z.; Zhang, Y.; Lu, W. Attention guided graph convolutional networks for relation extraction. arXiv 2019, arXiv:1906.07510. [Google Scholar]
- Huang, K.; Wang, G.; Ma, T.; Huang, J. Entity and evidence guided relation extraction for docred. arXiv 2020, arXiv:2008.12283. [Google Scholar]
- Verga, P.; Strubell, E.; McCallum, A. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. arXiv 2018, arXiv:1802.10569. [Google Scholar]
- Gu, J.; Qian, L.; Zhou, G. Chemical-induced disease relation extraction with various linguistic features. Database 2016, 2016, baw042. [Google Scholar] [CrossRef]
- Zhou, H.; Deng, H.; Chen, L.; Yang, Y.; Jia, C.; Huang, D. Exploiting syntactic and semantics information for chemical–disease relation extraction. Database 2016, 2016, baw048. [Google Scholar] [CrossRef] [PubMed]
- Nan, G.; Guo, Z.; Sekulić, I.; Lu, W. Reasoning with latent structure refinement for document-level relation extraction. arXiv 2020, arXiv:2005.06312. [Google Scholar]
- Girju, R. Automatic detection of causal relations for question answering. In Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering, Sapporo, Japan, 7–12 July 2003; pp. 76–83. [Google Scholar] [CrossRef]
- Khoo, C.S.; Chan, S.; Niu, Y. Extracting causal knowledge from a medical database using graphical patterns. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, Hong Kong, China, 3–6 October 2000; pp. 336–343. [Google Scholar] [CrossRef]
- Kruengkrai, C.; Torisawa, K.; Hashimoto, C.; Kloetzer, J.; Oh, J.H.; Tanaka, M. Improving event causality recognition with multiple background knowledge sources using multi-column convolutional neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar] [CrossRef]
- Li, Z.; Li, Q.; Zou, X.; Ren, J. Causality extraction based on self-attentive BiLSTM-CRF with transferred embeddings. Neurocomputing 2021, 423, 207–219. [Google Scholar] [CrossRef]
- Shang, X.; Ma, Q.; Lin, Z.; Yan, J.; Chen, Z. A span-based dynamic local attention model for sequential sentence classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, 1–6 August 2021; ACL: Stroudsburg, PA, USA, 2021; Volume 2, pp. 198–203. [Google Scholar] [CrossRef]
- Gao, J.; Yu, H.; Zhang, S. Joint event-event causal relation extraction using dual-channel enhanced neural network. Knowl. Based Syst. 2022, 258, 109935. [Google Scholar] [CrossRef]
- Shen, S.; Zhou, H.; Wu, T.; Qi, G. Event causality identification via derivative prompt joint learning. In Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, 12–17 October 2022; pp. 2288–2299. [Google Scholar]
- Liu, J.; Chen, Y.; Zhao, J. Knowledge enhanced event causality identification with mention masking generalizations. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, Yokohama, Japan, 7–15 January 2021; pp. 3608–3614. [Google Scholar] [CrossRef]
- Hashimoto, C. Weakly supervised multilingual causality extraction from Wikipedia. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; ACL: Stroudsburg, PA, USA, 2019; pp. 2988–2999. [Google Scholar] [CrossRef]
- An, H.; Nguyen, M.; Nguyen, T.H. Event causality identification via generation of important context words. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, Seattle, WA, USA, 14–15 July 2022; pp. 323–330. [Google Scholar] [CrossRef]
- Phu, M.T.; Nguyen, T.H. Graph convolutional networks for event causality identification with rich document-level structures. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 6–11 June 2021; pp. 3480–3490. [Google Scholar] [CrossRef]
- Allen, J.F. Maintaining knowledge about temporal intervals. Commun. ACM 1983, 26, 832–843. [Google Scholar] [CrossRef]
- Chambers, N.; Jurafsky, D. Jointly combining implicit constraints improves temporal ordering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2008), Honolulu, HI, USA, 25–27 October 2008; pp. 698–706. [Google Scholar]
- Mani, I.; Wellner, B.; Verhagen, M.; Pustejovsky, J. Three Approaches to Learning TLINKs in TimeML; Technical Report CS-07-268; Brandeis University: Waltham, MA, USA, 2007. [Google Scholar]
- Tatu, M.; Srikanth, M. Experiments with reasoning for temporal relations between events. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), Manchester, UK, 18–22 August 2008; COLING 2008 Organizing Committee: Manchester, UK, 2008; pp. 857–864. [Google Scholar]
- Amigó, E.; Artiles, J.; Li, Q.; Ji, H. An evaluation framework for aggregated temporal information extraction. In Proceedings of the SIGIR 2011 Workshop on Entity-Oriented Search, Beijing, China, 28 July 2011; ACM: New York, NY, USA, 2011. [Google Scholar]
- Han, R.; Ning, Q.; Peng, N. Joint event and temporal relation extraction with shared representations and structured prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; ACL: Stroudsburg, PA, USA, 2019; pp. 434–444. [Google Scholar] [CrossRef]
- Huang, Q.; Hu, Y.; Zhu, S.; Feng, Y.; Liu, C.; Zhao, D. More than classification: A unified framework for event temporal relation extraction. arXiv 2023, arXiv:2305.17607. [Google Scholar]
- Maisonnave, M.; Delbianco, F.; Tohme, F.; Milios, E.; Maguitman, A.G. Causal graph extraction from news: A comparative study of time-series causality learning techniques. PeerJ Comput. Sci. 2022, 8, e1066. [Google Scholar] [CrossRef] [PubMed]
- Wang, X.; Chen, Y.; Ding, N.; Peng, H.; Wang, Z.; Lin, Y.; Han, X.; Hou, L.; Li, J.; Liu, Z.; et al. Maven-ere: A unified large-scale dataset for event coreference, temporal, causal, and subevent relation extraction. arXiv 2022, arXiv:2211.07342. [Google Scholar]
- Liu, Y.; Borhan, N.; Luo, X.; Zhang, H.; He, X. Association link network based core events discovery on the web. In Proceedings of the 2013 IEEE 16th International Conference on Computational Science and Engineering, Sydney, Australia, 3–5 December 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 553–560. [Google Scholar] [CrossRef]
- Caselli, T.; Vossen, P. The EventStoryLine Corpus: A new benchmark for causal and temporal relation extraction. In Proceedings of the Events and Stories in the News Workshop, Vancouver, BC, Canada, 4 August 2017; pp. 77–86. [Google Scholar] [CrossRef]
- Tao, Z.; Jin, Z.; Bai, X.; Zhao, H.; Dou, C.; Zhao, Y.; Wang, F.; Tao, C. SEAG: Structure-aware event causality generation. In Findings of the Association for Computational Linguistics, Proceedings of the ACL 2023, Toronto, ON, Canada, 9–14 July 2023; ACL: Stroudsburg, PA, USA, 2023; pp. 4631–4644. [Google Scholar] [CrossRef]
- Wan, Q.; Du, S.; Liu, Y.; Fang, J.; Wei, L.; Liu, S. Document-level relation extraction with hierarchical dependency tree and bridge path. Knowl. Based Syst. 2023, 278, 110873. [Google Scholar] [CrossRef]
- Jin, X.; Wang, X.; Luo, X.; Huang, S.; Gu, S. Inter-sentence and implicit causality extraction from Chinese corpus. In Proceedings of the PAKDD 2020. LNCS (LNAI), Singapore, 11–14 May 2020; Lauw, H.W., Wong, R.C.-W., Ntoulas, A., Lim, E.-P., Ng, S.-K., Pan, S.J., Eds.; Springer: Cham, Switzerland, 2020; Volume 12084, pp. 739–751. [Google Scholar] [CrossRef]
| Parameters | Value |
|---|---|
| Optimizer | Adam |
| Learning rate | 4 × 10⁻⁶ |
| Batch size | 8 |
| Max length | 341 |
| Weight decay | 0.01 |
| Warmup ratio | 0.1 |
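As a rough illustration of how the settings above could be instantiated, the sketch below wires the optimizer, learning rate, weight decay, and warmup into PyTorch and Hugging Face Transformers. Interpreting the warmup value of 0.1 as a proportion of training steps, as well as the placeholder `model` and `num_training_steps`, are assumptions.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 2)      # placeholder for the actual network
num_training_steps = 10_000          # depends on dataset size and epoch count

optimizer = torch.optim.Adam(model.parameters(), lr=4e-6, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # warmup ratio of 0.1
    num_training_steps=num_training_steps,
)
```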
| Environment | Configuration |
|---|---|
| Operating system | Windows 11 64-bit |
| GPU | NVIDIA RTX 4090 |
| Framework | PyTorch 1.7.0 |
| Dataset | Model | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|
| EventStoryLine | BiLSTM+CRF | 59.32 | 55.68 | 40.28 | 46.74 |
| EventStoryLine | CNN+BiLSTM+CRF | 58.67 | 59.36 | 60.53 | 59.94 |
| EventStoryLine | BERT+SCITE | 61.32 | 55.69 | 53.78 | 54.72 |
| EventStoryLine | HDT | 68.58 | 62.57 | 65.36 | 63.93 |
| EventStoryLine | CSNN | 60.56 | 49.67 | 53.61 | 51.56 |
| EventStoryLine | Our model | 75.58 | 68.32 | 75.66 | 71.80 |
| MAVEN-ERE | BiLSTM+CRF | 58.39 | 42.17 | 32.93 | 36.98 |
| MAVEN-ERE | CNN+BiLSTM+CRF | 89.31 | 69.83 | 37.56 | 48.84 |
| MAVEN-ERE | BERT+SCITE | 61.26 | 59.63 | 58.29 | 58.95 |
| MAVEN-ERE | HDT | 87.84 | 85.61 | 67.82 | 75.68 |
| MAVEN-ERE | CSNN | 62.37 | 53.72 | 58.31 | 55.92 |
| MAVEN-ERE | Our model | 87.32 | 83.46 | 78.70 | 81.01 |
| Model | MAVEN-ERE (F1) | EventStoryLine (F1) |
|---|---|---|
| Our model | 81.01 | 71.80 |
| Our model w/o BERT | 57.41 | 50.58 |
| Our model w/o ETC | 62.46 | 54.39 |
| Our model w/o ECC | - | - |
| Our model w/o positional encoding | 72.69 | 69.03 |
| Our model w/o the first level of GCN | 67.36 | 68.42 |
| Our model w/o the second level of GCN | 62.19 | 59.68 |
| Our model w/o the adjustment of node weight | 61.22 | 61.29 |
| Our model w/o the Gaussian kernel function | 63.41 | 55.23 |
| Our model w/o the temporal adjustment term | 59.64 | 51.83 |