Abstract
In this preliminary work, we present a domain fine-tuned large language model (LLM) for radiology. Created through an exploratory application of instruction tuning on a comprehensive dataset of radiological information, the model demonstrates promising performance compared with general-purpose language models such as StableLM, Dolly, and LLaMA, and it exhibits initial versatility in applications related to radiological diagnosis, research, and communication. Our work contributes an early but encouraging step toward the evolution of clinical NLP by implementing a large language model that is local and domain-specific, conforming to stringent privacy norms such as HIPAA. The prospect of building customized, large-scale language models that cater to the distinct requirements of individual medical specialties is a thought-provoking direction, and the blending of conversational fluency with domain-specific knowledge in such models points toward future enhancements in healthcare AI. While still in its early stages, the potential of generative large language models is intriguing and worthy of further exploration. The demonstration code of our domain fine-tuned LLM for radiology can be accessed at https://anonymous.4open.science/r/radiology-llm-demo-C3E2/.
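As a concrete illustration of the adaptation recipe the abstract describes, the sketch below shows one plausible way to instruction-tune an open LLM on radiology data using parameter-efficient LoRA adapters (Hu et al.), which keeps both training and deployment local in line with the privacy goals of the work. This is a minimal sketch, not the paper's actual pipeline: the base model identifier, the Alpaca-style prompt fields (`instruction`/`input`/`output`), the dataset file `radiology_instructions.json`, and all hyperparameters are illustrative assumptions.

```python
# Minimal instruction-tuning sketch with Hugging Face transformers + peft (LoRA).
# All names and hyperparameters are assumptions, not the paper's configuration.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "huggyllama/llama-7b"  # assumed open base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)

# Low-rank adapters train only a small fraction of the weights (cf. Hu et al., LoRA),
# making a local, domain-specific fine-tune feasible.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

def format_example(example):
    # Alpaca-style prompt; field names are assumptions about the
    # radiology instruction dataset (e.g., findings -> impression pairs).
    prompt = (f"### Instruction:\n{example['instruction']}\n\n"
              f"### Input:\n{example['input']}\n\n"
              f"### Response:\n{example['output']}")
    return tokenizer(prompt, truncation=True, max_length=512)

dataset = load_dataset("json", data_files="radiology_instructions.json")["train"]
dataset = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="radiology-llm", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4,
                           fp16=True, logging_steps=50),
    train_dataset=dataset,
    # Causal-LM collator: pads batches and copies input_ids to labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because only the adapter weights are updated, a fine-tune of this kind can run on a single modern GPU and be hosted entirely within a hospital network, which is what makes the HIPAA-conformant, local deployment described above practical.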
References
Databricks: Free Dolly: introducing the world’s first truly open instruction-tuned LLM. https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm. Accessed 09 June 2023
Stanford CRFM: Alpaca: a strong, replicable instruction-following model. https://crfm.stanford.edu/2023/03/13/alpaca.html. Accessed 09 June 2023
Alhendawi, K., Baharudin, A.S.: String matching algorithms (SMAS): survey & empirical analysis. J. Comput. Sci. Manag. (2013)
Anil, R., et al.: PaLM 2 technical report. arXiv preprint arXiv:2305.10403 (2023)
Dai, H., et al.: AD-AutoGPT: an autonomous GPT for Alzheimer’s disease infodemiology. arXiv preprint arXiv:2306.10095 (2023)
Dai, H., et al.: ChatAug: leveraging ChatGPT for text data augmentation. arXiv preprint arXiv:2302.13007 (2023)
Demner-Fushman, D., et al.: Preparing a collection of radiology examinations for distribution and retrieval. J. Am. Med. Inform. Assoc. 23(2), 304–310 (2016)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
Hu, E.J., et al.: LoRA: low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021)
Hu, J., et al.: Word graph guided summarization for radiology findings. In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 4980–4990 (2021)
Islamovic, A.: Stability AI launches the first of its StableLM suite of language models. https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models. Accessed 09 June 2023
Johnson, A.E., et al.: MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data 6(1), 317 (2019)
Liao, W., et al.: Differentiate ChatGPT-generated and human-written medical texts. arXiv preprint arXiv:2304.11567 (2023)
Lin, C.Y.: ROUGE: a package for automatic evaluation of summaries. In: Text Summarization Branches Out, pp. 74–81 (2004)
Liu, Y., et al.: Summary of ChatGPT/GPT-4 research and perspective towards the future of large language models. arXiv preprint arXiv:2304.01852 (2023)
Liu, Z., et al.: Survey on natural language processing in medical image analysis. Zhong nan da xue xue bao. Yi xue ban = J. Central South Univ. Med. Sci. 47(8), 981–993 (2022)
Liu, Z., He, X., Liu, L., Liu, T., Zhai, X.: Context matters: a strategy to pre-train language model for science education. arXiv preprint arXiv:2301.12031 (2023)
Liu, Z., et al.: DeID-GPT: zero-shot medical text de-identification by GPT-4. arXiv preprint arXiv:2303.11032 (2023)
Ma, C., et al.: ImpressionGPT: an iterative optimizing framework for radiology report summarization with ChatGPT. arXiv preprint arXiv:2304.08448 (2023)
OpenAI: GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023)
Ouyang, L., et al.: Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155 (2022)
Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318 (2002)
Rezayi, S., et al.: ClinicalRadioBERT: knowledge-infused few-shot learning for clinical notes named entity recognition. In: Machine Learning in Medical Imaging: 13th International Workshop, MLMI 2022, Held in Conjunction with MICCAI 2022. LNCS, pp. 269–278. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-21014-3_28
Sonn, G.A., et al.: Prostate magnetic resonance imaging interpretation varies substantially across radiologists. Eur. Urol. Focus 5(4), 592–599 (2019)
Vaswani, A., et al.: Attention is all you need. Adv. Neural Inf. Process. Syst. 30 (2017)
Wallis, A., McCoubrie, P.: The radiology report-are we getting the message across? Clin. Radiol. 66(11), 1015–1022 (2011)
Wei, J., et al.: Emergent abilities of large language models. arXiv preprint arXiv:2206.07682 (2022)
Wu, Z., Geiger, A., Potts, C., Goodman, N.D.: Interpretability at scale: identifying causal mechanisms in Alpaca. arXiv preprint arXiv:2305.08809 (2023)
Wu, Z., et al.: Exploring the trade-offs: unified large language models vs. local fine-tuned models for highly-specific radiology NLI task. arXiv preprint arXiv:2304.09138 (2023)
Yan, A., et al.: RadBERT: adapting transformer-based language models to radiology. Radiol. Artif. Intell. 4(4), e210258 (2022)
Zhao, L., et al.: When brain-inspired AI meets AGI. arXiv preprint arXiv:2303.15935 (2023)
Zhong, T., et al.: ChatABL: abductive learning via natural language interaction with ChatGPT. arXiv preprint arXiv:2304.11107 (2023)
Zhou, C., et al.: A comprehensive survey on pretrained foundation models: a history from BERT to ChatGPT. arXiv preprint arXiv:2302.09419 (2023)
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, Z. et al. (2024). Tailoring Large Language Models to Radiology: A Preliminary Approach to LLM Adaptation for a Highly Specialized Domain. In: Cao, X., Xu, X., Rekik, I., Cui, Z., Ouyang, X. (eds) Machine Learning in Medical Imaging. MLMI 2023. Lecture Notes in Computer Science, vol 14348. Springer, Cham. https://doi.org/10.1007/978-3-031-45673-2_46
DOI: https://doi.org/10.1007/978-3-031-45673-2_46
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-45672-5
Online ISBN: 978-3-031-45673-2
eBook Packages: Computer Science (R0)