DOI: 10.1007/978-3-031-66538-7_26
Article

Enhancing Abstract Screening Classification in Evidence-Based Medicine: Incorporating Domain Knowledge into Pre-trained Models

Published: 25 July 2024

Abstract

Evidence-based medicine (EBM) represents a cornerstone in medical research, guiding policy and decision-making. However, the rigorous steps involved in EBM, particularly the abstract screening stage, present significant challenges to researchers. Numerous attempts to automate this stage with pre-trained language models (PLMs) are often hindered by domain specificity, particularly in EBM reviews involving animals and humans. This research therefore introduces a state-of-the-art (SOTA) transfer learning approach that enhances abstract screening by incorporating domain knowledge into PLMs without altering their base weights. This is achieved by integrating small neural networks, referred to as knowledge layers, within the PLM architecture. These knowledge layers are trained on domain knowledge sources pertinent to EBM: the PICO entity, PubmedQA, and BioASQ 7B biomedical Q&A benchmark datasets. Furthermore, the study explores a fusion method that combines the trained knowledge layers, thereby leveraging multiple domain knowledge sources at once. Evaluation of the proposed method on four highly imbalanced EBM abstract screening datasets demonstrates its effectiveness in accelerating the screening process and surpassing the performance of strong baseline PLMs.
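The adapter-style "knowledge layer" idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic bottleneck adapter (down-project, nonlinearity, up-project, residual) plus a simple AdapterFusion-style weighted combination, with hypothetical names (`KnowledgeAdapter`, `fuse`) and dimensions chosen for illustration. The base PLM weights are untouched: only the small adapter matrices would be trained, and the zero-initialised up-projection makes each adapter an identity map at the start of training.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class KnowledgeAdapter:
    """Bottleneck adapter inserted inside a frozen PLM layer:
    h -> h + ReLU(h @ W_down) @ W_up (residual connection)."""

    def __init__(self, hidden_dim, bottleneck_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0.0, 0.02, (hidden_dim, bottleneck_dim))
        # Zero-initialised up-projection: the adapter starts as an
        # identity map, so it cannot disturb the frozen PLM at init.
        self.w_up = np.zeros((bottleneck_dim, hidden_dim))

    def forward(self, h):
        return h + relu(h @ self.w_down) @ self.w_up

def fuse(h, adapters):
    """AdapterFusion-style combination: score each adapter's output
    against the PLM hidden state (dot product) and take the
    softmax-weighted sum, mixing several knowledge sources."""
    outs = np.stack([a.forward(h) for a in adapters], axis=1)   # (batch, n_adapters, dim)
    scores = np.einsum('bd,bnd->bn', h, outs)                   # (batch, n_adapters)
    weights = softmax(scores, axis=-1)
    return np.einsum('bn,bnd->bd', weights, outs)               # (batch, dim)

# Example: hidden states from one frozen PLM layer, two knowledge adapters
h = np.ones((4, 768))
pico_adapter = KnowledgeAdapter(hidden_dim=768, bottleneck_dim=64, seed=0)
qa_adapter = KnowledgeAdapter(hidden_dim=768, bottleneck_dim=64, seed=1)
out = pico_adapter.forward(h)
fused = fuse(h, [pico_adapter, qa_adapter])
```

In a real setup each adapter would be trained separately on its source task (e.g. PICO entity recognition, biomedical Q&A) with the PLM frozen, and the fusion weights would then be learned on the target abstract-screening data.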



Published In

Artificial Intelligence in Medicine: 22nd International Conference, AIME 2024, Salt Lake City, UT, USA, July 9–12, 2024, Proceedings, Part I
Jul 2024, 437 pages
ISBN: 978-3-031-66537-0
DOI: 10.1007/978-3-031-66538-7

Publisher

Springer-Verlag, Berlin, Heidelberg


Author Tags

1. Pre-trained Language Models
2. Domain Integration
3. Transfer Learning
4. Evidence-Based Medicine
5. Abstract Text Classification
