Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network

L Han, Y Huang, H Dou, S Wang, S Ahamad, H Luo, Q Liu, J Fan, J Zhang
Computer Methods and Programs in Biomedicine, 2020, Elsevier
Background and objective
Automatic segmentation of breast lesions from ultrasound images is a crucial module in computer-aided diagnosis systems in clinical practice. Large-scale collections of breast ultrasound (BUS) images remain unannotated and need to be effectively exploited to improve segmentation quality. To address this, a semi-supervised segmentation network based on generative adversarial networks (GANs) is proposed.
Methods
In this paper, a semi-supervised learning model, denoted BUS-GAN, is proposed, consisting of a segmentation base network (BUS-S) and an evaluation base network (BUS-E). The BUS-S network densely extracts multi-scale features to accommodate the individual variance of breast lesions, thereby enhancing the robustness of segmentation. The BUS-E network adopts a dual-attentive-fusion block with two independent spatial attention paths, applied to the predicted segmentation map and to the corresponding original image, which distill geometrical-level and intensity-level information, respectively, so as to enlarge the difference between the lesion region and the background and thus improve the discriminative ability of the BUS-E network. Through adversarial training, the BUS-GAN model achieves higher segmentation quality because the BUS-E network guides the BUS-S network to generate more accurate segmentation maps whose distribution is closer to that of the ground truth.
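The abstract does not give the exact loss formulation, so the following is only an illustrative sketch of how such a semi-supervised adversarial objective is commonly assembled: annotated images contribute a supervised segmentation term (a soft Dice loss here) plus an adversarial term from the evaluation network's score, while unannotated images contribute only the adversarial term. The function names `dice_loss` and `bus_gan_seg_loss` and the weight `lam` are hypothetical, not taken from the paper.

```python
import math

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a flattened predicted probability map
    and a flattened binary ground-truth mask (both lists of floats)."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def bus_gan_seg_loss(pred, target, critic_score, lam=0.1):
    """Illustrative semi-supervised objective for the segmentation network.

    Annotated images (target given) incur a supervised Dice term plus an
    adversarial term; unannotated images (target is None) incur only the
    adversarial term, which is low when the evaluation network scores the
    prediction as realistic. `lam` is a hypothetical balancing weight.
    """
    adv = -math.log(critic_score + 1e-6)  # small when the critic is fooled
    if target is None:
        return lam * adv
    return dice_loss(pred, target) + lam * adv
```

In this sketch the unannotated images still shape the segmentation network through the adversarial term alone, which is what lets the model exploit the large unlabeled pool.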
Results
The counterpart semi-supervised segmentation methods and the proposed BUS-GAN model were trained with 2000 in-house images, comprising 100 annotated images and 1900 unannotated images, and tested on data from two different sites: 800 in-house images and 163 public images. The results validate that the proposed BUS-GAN model achieves higher segmentation accuracy on both the in-house testing dataset and the public dataset than state-of-the-art semi-supervised segmentation methods.
Conclusions
The developed BUS-GAN model can effectively utilize unannotated breast ultrasound images to improve segmentation quality. In the future, the proposed segmentation method can serve as a potential module for an automatic breast ultrasound diagnosis system, thus relieving the burden of a tedious image annotation process and alleviating the subjective influence of physicians' experience in clinical practice. Our code will be made available at https://github.com/fiy2W/BUS-GAN.