
D-Rax: Domain-Specific Radiologic Assistant Leveraging Multi-modal Data and eXpert Model Predictions

  • Conference paper
In: Foundation Models for General Medical AI (MedAGI 2024)

Abstract

Large vision language models (VLMs) have progressed rapidly from research artifacts to general-purpose tools. LLaVA-Med, a pioneering large language and vision assistant for biomedicine, can perform multi-modal biomedical image and data analysis, providing a natural language interface for radiologists. While it is highly generalizable and works with multi-modal data, it is currently limited by well-known challenges in the large language model space: hallucinations and imprecise responses can lead to misdiagnosis, which currently hinders the clinical adoption of VLMs. To create precise, user-friendly models for healthcare, we propose D-Rax, a domain-specific, conversational, radiologic assistance tool for gaining insights about a particular radiologic image. In this study, we enhance the conversational analysis of chest X-ray (CXR) images to support radiological reporting, offering comprehensive insights from medical imaging and aiding in the formulation of accurate diagnoses. D-Rax is built by fine-tuning the LLaVA-Med architecture on our curated, enhanced instruction-following data, comprising images, instructions, disease diagnosis and demographic predictions derived from MIMIC-CXR imaging data, CXR-related visual question answering (VQA) pairs, and predictive outcomes from multiple expert AI models. We observe statistically significant improvements in responses when evaluated on both open- and closed-ended conversations. By combining state-of-the-art diagnostic models with VLMs, D-Rax enables clinicians to interact with medical images using natural language, which could streamline their decision-making, enhance diagnostic accuracy, and save time.
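As a rough sketch of the idea (this is not the authors' exact prompt format; the function and field names below are hypothetical), the expert-model predictions can be serialized into the instruction text that accompanies each image before fine-tuning:

```python
def build_enhanced_instruction(question: str, expert: dict) -> str:
    """Prepend expert-model outputs to a VQA question (illustrative sketch only)."""
    context = (
        f"Expert predictions -- diagnosis: {expert['diagnosis']}, "
        f"age: {expert['age']}, race: {expert['race']}, view: {expert['view']}."
    )
    return f"{context}\nQuestion: {question}"

# Example usage with made-up expert outputs:
print(build_enhanced_instruction(
    "Is cardiomegaly present?",
    {"diagnosis": "cardiomegaly (p=0.67)", "age": 63, "race": "White", "view": "PA"},
))
```

The point of such conditioning is that the VLM no longer has to infer diagnosis, demographics, and view from pixels alone; the expert outputs constrain the answer space and can reduce hallucination.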

H. Nisar, S. M. Anwar, and Z. Jiang contributed equally to this work.




Author information

Correspondence to Syed Muhammad Anwar.

Appendices

Expert Enhanced Training

(See Fig. 3)

Fig. 3. Data organization for expert enhanced training, containing the following information: (1) image identifiers, (2) question-answer pairs, (3) diagnostic predictions for 18 medical conditions, (4) the predicted age of the patient, (5) the predicted race of the patient, and (6) the predicted view of the CXR.
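A minimal sketch of what one such training record might look like follows; the field names are hypothetical and simply mirror the six items listed in the Fig. 3 caption.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ExpertEnhancedRecord:
    """One instruction-tuning example (hypothetical field names)."""
    image_id: str                           # (1) image identifier (e.g., a MIMIC-CXR study id)
    qa_pairs: List[Dict[str, str]]          # (2) question-answer pairs
    disease_predictions: Dict[str, float]   # (3) expert probabilities for 18 conditions
    predicted_age: float                    # (4) expert age estimate
    predicted_race: str                     # (5) expert race prediction
    predicted_view: str                     # (6) predicted CXR view (e.g., PA, AP, lateral)

record = ExpertEnhancedRecord(
    image_id="study-0001",  # placeholder id
    qa_pairs=[{"question": "Is there pleural effusion?", "answer": "No effusion is seen."}],
    disease_predictions={"Effusion": 0.08, "Cardiomegaly": 0.61},  # truncated for brevity
    predicted_age=63.0,
    predicted_race="White",
    predicted_view="PA",
)
```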

No Abnormality Questions

(See Table 5)

Table 5. Removing abnormality questions (\(27\%\) of the data) from training. Token recall (%) for open-ended questions (O) and accuracy (%) for closed-ended questions (C) are reported, showing the performance of LLaVA models fine-tuned on the enhanced instruction dataset using \(100\%\) and \(73\%\) of the data, respectively. Each value is the average of three inference runs, with standard deviations in parentheses. Asterisks indicate statistical significance in paired comparisons under the Wilcoxon signed-rank test (* for p-value \(< 0.05\), ** for p-value \(< 0.001\)).
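For context, a minimal sketch of the two ingredients behind this table, token recall for open-ended answers and a paired Wilcoxon signed-rank test, is shown below. The paper's exact tokenization and pairing protocol are not given here, so treat this as an assumption-laden illustration; the scores are made up.

```python
from scipy.stats import wilcoxon

def token_recall(reference: str, prediction: str) -> float:
    """Fraction of reference tokens that appear in the prediction
    (naive whitespace tokenization; an assumption, not the paper's protocol)."""
    ref_tokens = reference.lower().split()
    pred_tokens = set(prediction.lower().split())
    return sum(t in pred_tokens for t in ref_tokens) / max(len(ref_tokens), 1)

# Hypothetical paired per-question scores for the 100% and 73% training runs:
scores_100 = [0.81, 0.77, 0.85, 0.79, 0.88, 0.72]
scores_73 = [0.78, 0.74, 0.86, 0.75, 0.84, 0.70]
stat, p_value = wilcoxon(scores_100, scores_73)
print(f"Wilcoxon signed-rank: statistic={stat:.1f}, p={p_value:.3f}")
```

The Wilcoxon test is appropriate here because the per-question scores from the two models are paired and not assumed to be normally distributed.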

Expert Model Metrics

(See Table 6)

Table 6. Quantitative evaluation of the expert model for disease diagnosis (DenseNet121) on \(20\%\) of MIMIC-CXR. AUC is reported. See https://github.com/mlmed/torchxrayvision/blob/master/BENCHMARKS.md
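A brief sketch of how such an expert model can be loaded and scored with the torchxrayvision library linked above; the specific weights tag and evaluation split here are assumptions, not the paper's exact setup.

```python
import numpy as np
import torch
import torchxrayvision as xrv
from sklearn.metrics import roc_auc_score

# Load a pretrained DenseNet121 CXR classifier from torchxrayvision.
# "densenet121-res224-mimic_ch" is one of the library's published checkpoints;
# the checkpoint actually used in the paper is an assumption here.
model = xrv.models.DenseNet(weights="densenet121-res224-mimic_ch")
model.eval()

def score_batch(images: torch.Tensor) -> np.ndarray:
    """images: (N, 1, 224, 224) tensors, normalized per torchxrayvision conventions.
    Returns per-pathology scores of shape (N, len(model.pathologies))."""
    with torch.no_grad():
        return model(images).numpy()

# Given scores for one pathology and binary ground-truth labels on a held-out
# 20% split, the per-pathology AUC would be computed as:
#   auc = roc_auc_score(labels, scores[:, model.pathologies.index("Cardiomegaly")])
```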


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Nisar, H. et al. (2025). D-Rax: Domain-Specific Radiologic Assistant Leveraging Multi-modal Data and eXpert Model Predictions. In: Deng, Z., et al. Foundation Models for General Medical AI. MedAGI 2024. Lecture Notes in Computer Science, vol 15184. Springer, Cham. https://doi.org/10.1007/978-3-031-73471-7_10


  • DOI: https://doi.org/10.1007/978-3-031-73471-7_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73470-0

  • Online ISBN: 978-3-031-73471-7

  • eBook Packages: Computer Science, Computer Science (R0)
