U-Net lung segmentation (Montgomery + Shenzhen)
Eduardo Mineo
https://www.kaggle.com/eduardomineo/u-net-lung-segmentation-montgomery-shenzhen
Contents
1. Overview
2. Data preparation
3. Segmentation training
4. Results
1. Overview
This notebook follows the work of Kevin Mader (https://www.kaggle.com/kmader/training-u-net-on-
tb-images-to-segment-lungs/notebook) for lung segmentation. Our motivation is to automatically
identify lung opacities in chest x-rays for the RSNA Pneumonia Detection Challenge
(https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/leaderboard).
Medical image segmentation is the process of automatically detecting boundaries within images. In
this exercise, we train a convolutional neural network with the U-Net (https://arxiv.org/abs/1505.04597)
architecture, whose training strategy relies heavily on data augmentation to use the available
annotated samples more efficiently.
The training is done with two chest x-ray datasets: Montgomery County and Shenzhen Hospital
(https://ceb.nlm.nih.gov/repositories/tuberculosis-chest-x-ray-image-data-sets/). The Montgomery
County dataset includes manually segmented lung masks, whereas the Shenzhen Hospital dataset was
manually segmented by Stirenko et al. (https://arxiv.org/abs/1803.01199). The lung segmentation
masks were dilated to include lung boundary information during training, and the images were
resized to 512x512 pixels.
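To make the preprocessing concrete, the snippet below is a minimal sketch of the resize-and-dilate step described above (the mask file name and the 15x15 structuring element are illustrative assumptions, not values taken from this notebook); the actual preprocessing is performed cell by cell in section 2.

import cv2
import numpy as np

mask = cv2.imread("lung_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical mask file
mask = cv2.resize(mask, (512, 512))                       # resize to 512x512 pixels
kernel = np.ones((15, 15), np.uint8)                      # kernel size chosen for illustration
mask_dilate = cv2.dilate(mask, kernel, iterations=1)      # thicken the lung boundary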
2. Data preparation
Prepare the input segmentation directory structure.
In [1]:
!mkdir ../input/segmentation
!mkdir ../input/segmentation/test
!mkdir ../input/segmentation/train
!mkdir ../input/segmentation/train/augmentation
!mkdir ../input/segmentation/train/image
!mkdir ../input/segmentation/train/mask
!mkdir ../input/segmentation/train/dilate
In [2]:
import os
from glob import glob   # glob is used throughout the data-preparation cells below

import numpy as np
import cv2
import matplotlib.pyplot as plt
In [3]:
INPUT_DIR = os.path.join("..", "input")

# The SEGMENTATION_* path definitions were cut off in the export and are
# reconstructed here. SEGMENTATION_SOURCE_DIR assumes the
# "pulmonary-chest-xray-abnormalities" Kaggle dataset is attached to the kernel.
SEGMENTATION_SOURCE_DIR = os.path.join(INPUT_DIR, "pulmonary-chest-xray-abnormalities")

SEGMENTATION_DIR = os.path.join(INPUT_DIR, "segmentation")
SEGMENTATION_TEST_DIR = os.path.join(SEGMENTATION_DIR, "test")
SEGMENTATION_TRAIN_DIR = os.path.join(SEGMENTATION_DIR, "train")
SEGMENTATION_AUG_DIR = os.path.join(SEGMENTATION_TRAIN_DIR, "augmentation")
SEGMENTATION_IMAGE_DIR = os.path.join(SEGMENTATION_TRAIN_DIR, "image")
SEGMENTATION_MASK_DIR = os.path.join(SEGMENTATION_TRAIN_DIR, "mask")
SEGMENTATION_DILATE_DIR = os.path.join(SEGMENTATION_TRAIN_DIR, "dilate")

MONTGOMERY_TRAIN_DIR = os.path.join(SEGMENTATION_SOURCE_DIR, "Montgomery", "MontgomerySet")
MONTGOMERY_IMAGE_DIR = os.path.join(MONTGOMERY_TRAIN_DIR, "CXR_png")
MONTGOMERY_LEFT_MASK_DIR = os.path.join(MONTGOMERY_TRAIN_DIR, "ManualMask", "leftMask")
MONTGOMERY_RIGHT_MASK_DIR = os.path.join(MONTGOMERY_TRAIN_DIR, "ManualMask", "rightMask")

# Prod
STEPS_PER_EPOC = 512
EPOCHS = 48
# Dev
# STEPS_PER_EPOC = 64
# EPOCHS = 16
1. Combine the left and right lung segmentation masks of the Montgomery chest x-rays
2. Resize the images to 512x512 pixels
3. Dilate the masks to gain more information on the edges of the lungs
4. Split the images into training and test datasets
5. Write the images to the /segmentation directory
In [4]:
montgomery_left_mask_dir = glob(os.path.join(MONTGOMERY_LEFT_MASK_DIR, '*.png'))
montgomery_test = montgomery_left_mask_dir[0:50]
montgomery_train = montgomery_left_mask_dir[50:]

# The loop body was truncated in the export; it is reconstructed here from
# steps 1-5 above (combine masks, resize, dilate, split, write). The 15x15
# dilation kernel is an assumption.
for left_image_file in montgomery_left_mask_dir:
    base_file = os.path.basename(left_image_file)
    image_file = os.path.join(MONTGOMERY_IMAGE_DIR, base_file)
    right_image_file = os.path.join(MONTGOMERY_RIGHT_MASK_DIR, base_file)

    image = cv2.imread(image_file)
    left_mask = cv2.imread(left_image_file, cv2.IMREAD_GRAYSCALE)
    right_mask = cv2.imread(right_image_file, cv2.IMREAD_GRAYSCALE)

    image = cv2.resize(image, (512, 512))
    left_mask = cv2.resize(left_mask, (512, 512))
    right_mask = cv2.resize(right_mask, (512, 512))

    mask = np.maximum(left_mask, right_mask)   # combine left and right lung masks
    mask_dilate = cv2.dilate(mask, np.ones((15, 15), np.uint8), iterations=1)

    if left_image_file in montgomery_train:
        cv2.imwrite(os.path.join(SEGMENTATION_IMAGE_DIR, base_file), image)
        cv2.imwrite(os.path.join(SEGMENTATION_MASK_DIR, base_file), mask)
        cv2.imwrite(os.path.join(SEGMENTATION_DILATE_DIR, base_file), mask_dilate)
    else:
        filename, fileext = os.path.splitext(base_file)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, base_file), image)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, "%s_mask%s" % (filename, fileext)), mask)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, "%s_dilate%s" % (filename, fileext)), mask_dilate)
In [5]:
# The three overlay helpers of this cell were partly cut off in the export.
# The reconstruction below keeps their apparent intent (paint the mask pixels
# a solid colour and alpha-blend them onto the x-ray); the _overlay helper and
# the 0.7/0.3 blend weights are assumptions introduced here.
def _overlay(image, mask_image, color):
    mask = mask_image.copy()
    mask_coord = np.where(mask != [0, 0, 0])
    mask[mask_coord[0], mask_coord[1], :] = color
    return cv2.addWeighted(image, 0.7, mask, 0.3, 0)

def add_colored_dilate(image, mask_image, dilate_image):
    # dilated mask in blue, original mask in red
    return _overlay(_overlay(image, dilate_image, [0, 0, 255]), mask_image, [255, 0, 0])

def add_colored_mask(image, mask_image):
    # single mask in red (used in the Results section)
    return _overlay(image, mask_image, [255, 0, 0])

def diff_mask(ref_image, mask_image):
    # highlight in red where the two masks differ
    return _overlay(ref_image, cv2.absdiff(ref_image, mask_image), [255, 0, 0])
Show some Montgomery chest x-rays and their lung segmentation masks from the training and test datasets
to verify the procedure above. In the merged image it is possible to distinguish the dilated mask (blue)
from the original mask (red).
In [6]:
# fig/axs and the training-sample file paths were truncated in the export and
# are reconstructed here; the figsize is assumed.
fig, axs = plt.subplots(2, 4, figsize=(15, 8))   # row 0: training sample, row 1: test sample

base_file = os.path.basename(montgomery_train[0])
image_file = os.path.join(SEGMENTATION_IMAGE_DIR, base_file)
mask_image_file = os.path.join(SEGMENTATION_MASK_DIR, base_file)
dilate_image_file = os.path.join(SEGMENTATION_DILATE_DIR, base_file)

image = cv2.imread(image_file)
mask_image = cv2.imread(mask_image_file)
dilate_image = cv2.imread(dilate_image_file)
merged_image = add_colored_dilate(image, mask_image, dilate_image)

axs[0, 0].set_title("X-Ray")
axs[0, 0].imshow(image)
axs[0, 1].set_title("Mask")
axs[0, 1].imshow(mask_image)
axs[0, 2].set_title("Dilate")
axs[0, 2].imshow(dilate_image)
axs[0, 3].set_title("Merged")
axs[0, 3].imshow(merged_image)
base_file = os.path.basename(montgomery_test[0])
filename, fileext = os.path.splitext(base_file)
image_file = os.path.join(SEGMENTATION_TEST_DIR, base_file)
mask_image_file = os.path.join(SEGMENTATION_TEST_DIR, \
"%s_mask%s" % (filename, fileext))
dilate_image_file = os.path.join(SEGMENTATION_TEST_DIR, \
"%s_dilate%s" % (filename, fileext))
image = cv2.imread(image_file)
mask_image = cv2.imread(mask_image_file)
dilate_image = cv2.imread(dilate_image_file)
merged_image = add_colored_dilate(image, mask_image, dilate_image)
axs[1, 0].set_title("X-Ray")
axs[1, 0].imshow(image)
axs[1, 1].set_title("Mask")
axs[1, 1].imshow(mask_image)
axs[1, 2].set_title("Dilate")
axs[1, 2].imshow(dilate_image)
axs[1, 3].set_title("Merged")
axs[1, 3].imshow(merged_image)
Out[6]:
<matplotlib.image.AxesImage at 0x7f7a1f0ed5c0>
In [7]:
# The SHENZHEN_* paths were cut off in the export; the definitions below assume
# the ChinaSet_AllFiles folder of the same Kaggle dataset for the x-rays and the
# Stirenko et al. masks attached as a separate dataset.
SHENZHEN_TRAIN_DIR = os.path.join(SEGMENTATION_SOURCE_DIR, "ChinaSet_AllFiles", "ChinaSet_AllFiles")
SHENZHEN_IMAGE_DIR = os.path.join(SHENZHEN_TRAIN_DIR, "CXR_png")
SHENZHEN_MASK_DIR = os.path.join(INPUT_DIR, "shcxr-lung-mask", "mask", "mask")

shenzhen_mask_dir = glob(os.path.join(SHENZHEN_MASK_DIR, '*.png'))
shenzhen_test = shenzhen_mask_dir[0:50]
shenzhen_train = shenzhen_mask_dir[50:]

# Same steps as for Montgomery (loop body reconstructed; truncated in the export)
for mask_file in shenzhen_mask_dir:
    base_file = os.path.basename(mask_file).replace("_mask", "")
    image_file = os.path.join(SHENZHEN_IMAGE_DIR, base_file)

    image = cv2.imread(image_file)
    mask = cv2.imread(mask_file, cv2.IMREAD_GRAYSCALE)

    image = cv2.resize(image, (512, 512))
    mask = cv2.resize(mask, (512, 512))
    mask_dilate = cv2.dilate(mask, np.ones((15, 15), np.uint8), iterations=1)

    if mask_file in shenzhen_train:
        cv2.imwrite(os.path.join(SEGMENTATION_IMAGE_DIR, base_file), image)
        cv2.imwrite(os.path.join(SEGMENTATION_MASK_DIR, base_file), mask)
        cv2.imwrite(os.path.join(SEGMENTATION_DILATE_DIR, base_file), mask_dilate)
    else:
        filename, fileext = os.path.splitext(base_file)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, base_file), image)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, "%s_mask%s" % (filename, fileext)), mask)
        cv2.imwrite(os.path.join(SEGMENTATION_TEST_DIR, "%s_dilate%s" % (filename, fileext)), mask_dilate)
Show some Shenzhen Hospital chest x-rays and their lung segmentation masks from the training and test
datasets to verify the procedure above. In the merged image it is possible to distinguish the dilated
mask (blue) from the original mask (red).
In [8]:
# fig/axs and the file-path lines were truncated in the export and are
# reconstructed here; the figsize is assumed.
fig, axs = plt.subplots(2, 4, figsize=(15, 8))   # row 0: training sample, row 1: test sample

base_file = os.path.basename(shenzhen_train[0].replace("_mask", ""))
image_file = os.path.join(SEGMENTATION_IMAGE_DIR, base_file)
mask_image_file = os.path.join(SEGMENTATION_MASK_DIR, base_file)
dilate_image_file = os.path.join(SEGMENTATION_DILATE_DIR, base_file)

image = cv2.imread(image_file)
mask_image = cv2.imread(mask_image_file)
dilate_image = cv2.imread(dilate_image_file)
merged_image = add_colored_dilate(image, mask_image, dilate_image)

axs[0, 0].set_title("X-Ray")
axs[0, 0].imshow(image)
axs[0, 1].set_title("Mask")
axs[0, 1].imshow(mask_image)
axs[0, 2].set_title("Dilate")
axs[0, 2].imshow(dilate_image)
axs[0, 3].set_title("Merged")
axs[0, 3].imshow(merged_image)

base_file = os.path.basename(shenzhen_test[0].replace("_mask", ""))
filename, fileext = os.path.splitext(base_file)
image_file = os.path.join(SEGMENTATION_TEST_DIR, base_file)
mask_image_file = os.path.join(SEGMENTATION_TEST_DIR, "%s_mask%s" % (filename, fileext))
dilate_image_file = os.path.join(SEGMENTATION_TEST_DIR, "%s_dilate%s" % (filename, fileext))

image = cv2.imread(image_file)
mask_image = cv2.imread(mask_image_file)
dilate_image = cv2.imread(dilate_image_file)
merged_image = add_colored_dilate(image, mask_image, dilate_image)

axs[1, 0].set_title("X-Ray")
axs[1, 0].imshow(image)
axs[1, 1].set_title("Mask")
axs[1, 1].imshow(mask_image)
axs[1, 2].set_title("Dilate")
axs[1, 2].imshow(dilate_image)
axs[1, 3].set_title("Merged")
axs[1, 3].imshow(merged_image)
Out[8]:
<matplotlib.image.AxesImage at 0x7f7a1c5d5908>
Print the number of images and lung segmentation masks available to train and test the model. Each of
the 100 test x-rays is stored alongside its mask and dilated mask, which is why the test directory
holds 300 files.
In [9]:
(len(glob(os.path.join(SEGMENTATION_TEST_DIR, "*.png"))), \
len(glob(os.path.join(SEGMENTATION_IMAGE_DIR, "*.png"))), \
len(glob(os.path.join(SEGMENTATION_MASK_DIR, "*.png"))), \
len(glob(os.path.join(SEGMENTATION_DILATE_DIR, "*.png"))))
Out[9]:
(300, 604, 604, 604)
3. Segmentation training
References: https://github.com/zhixuhao/unet/ (https://github.com/zhixuhao/unet/),
https://github.com/jocicmarko/ultrasound-nerve-segmentation
(https://github.com/jocicmarko/ultrasound-nerve-segmentation)
In [10]:
# From: https://github.com/zhixuhao/unet/blob/master/data.py
# (ImageDataGenerator import added here; the original import cell was truncated)
from keras.preprocessing.image import ImageDataGenerator

def train_generator(batch_size, train_path, image_folder, mask_folder, aug_dict,
                    image_color_mode="grayscale",
                    mask_color_mode="grayscale",
                    image_save_prefix="image",
                    mask_save_prefix="mask",
                    save_to_dir=None,
                    target_size=(256,256),
                    seed=1):
    '''
    Can generate image and mask at the same time. Use the same seed for
    image_datagen and mask_datagen to ensure the transformation for image
    and mask is the same. If you want to visualize the results of the
    generator, set save_to_dir = "your path".
    '''
    image_datagen = ImageDataGenerator(**aug_dict)
    mask_datagen = ImageDataGenerator(**aug_dict)

    image_generator = image_datagen.flow_from_directory(
        train_path,
        classes = [image_folder],
        class_mode = None,
        color_mode = image_color_mode,
        target_size = target_size,
        batch_size = batch_size,
        save_to_dir = save_to_dir,
        save_prefix = image_save_prefix,
        seed = seed)

    mask_generator = mask_datagen.flow_from_directory(
        train_path,
        classes = [mask_folder],
        class_mode = None,
        color_mode = mask_color_mode,
        target_size = target_size,
        batch_size = batch_size,
        save_to_dir = save_to_dir,
        save_prefix = mask_save_prefix,
        seed = seed)

    # Pair each image batch with its mask batch and normalize them (this tail of
    # the cell was cut off in the export; it follows the referenced data.py).
    train_gen = zip(image_generator, mask_generator)
    for (img, mask) in train_gen:
        img, mask = adjust_data(img, mask)
        yield (img, mask)

def adjust_data(img, mask):
    img = img / 255
    mask = mask / 255
    mask[mask > 0.5] = 1
    mask[mask <= 0.5] = 0
    return (img, mask)
U-net architecture
In [11]:
from keras import backend as keras   # backend and Input imports were in a truncated cell
from keras.layers import Input

# From: https://github.com/jocicmarko/ultrasound-nerve-segmentation/blob/master/train.py
def dice_coef(y_true, y_pred):
    y_true_f = keras.flatten(y_true)
    y_pred_f = keras.flatten(y_pred)
    intersection = keras.sum(y_true_f * y_pred_f)
    return (2. * intersection + 1) / (keras.sum(y_true_f) + keras.sum(y_pred_f) + 1)

def dice_coef_loss(y_true, y_pred):
    # used as the training loss below; minimizing it maximizes the Dice coefficient
    return -dice_coef(y_true, y_pred)

def unet(input_size=(256,256,1)):
    inputs = Input(input_size)
    # ... the rest of the network definition was truncated in the export;
    # see the sketch after this cell for a layout consistent with the
    # model.summary() output printed under In [14].
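The body of unet() was cut off in the export. As a reference, the sketch below builds a network whose layer shapes and parameter counts match the model.summary() output printed under In [14] (7,759,521 parameters). The 3x3 ReLU convolutions with 'same' padding, the 2x2 stride-2 transposed convolutions, the 1x1 sigmoid output and the down/up helpers are assumptions made here, consistent with those counts but not taken verbatim from the original cell.

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate

def unet_sketch(input_size=(512, 512, 1)):
    inputs = Input(input_size)

    def down(x, filters):
        # two 3x3 convolutions; the block output is kept for the skip connection
        x = Conv2D(filters, 3, activation='relu', padding='same')(x)
        return Conv2D(filters, 3, activation='relu', padding='same')(x)

    def up(x, skip, filters):
        # 2x2 stride-2 transposed convolution, concatenate the skip, then two 3x3 convolutions
        x = Conv2DTranspose(filters, (2, 2), strides=(2, 2), padding='same')(x)
        return down(concatenate([x, skip], axis=3), filters)

    c1 = down(inputs, 32)                               # conv2d_1, conv2d_2
    c2 = down(MaxPooling2D(pool_size=(2, 2))(c1), 64)   # conv2d_3, conv2d_4
    c3 = down(MaxPooling2D(pool_size=(2, 2))(c2), 128)  # conv2d_5, conv2d_6
    c4 = down(MaxPooling2D(pool_size=(2, 2))(c3), 256)  # conv2d_7, conv2d_8
    c5 = down(MaxPooling2D(pool_size=(2, 2))(c4), 512)  # conv2d_9, conv2d_10

    u6 = up(c5, c4, 256)                                # conv2d_transpose_1 .. conv2d_12
    u7 = up(u6, c3, 128)                                # conv2d_transpose_2 .. conv2d_14
    u8 = up(u7, c2, 64)                                 # conv2d_transpose_3 .. conv2d_16
    u9 = up(u8, c1, 32)                                 # conv2d_transpose_4 .. conv2d_18

    outputs = Conv2D(1, (1, 1), activation='sigmoid')(u9)   # conv2d_19
    return Model(inputs=[inputs], outputs=[outputs])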
In [12]:
# From: https://github.com/zhixuhao/unet/blob/master/data.py
def test_load_image(test_file, target_size=(256,256)):
    img = cv2.imread(test_file, cv2.IMREAD_GRAYSCALE)
    img = img / 255
    img = cv2.resize(img, target_size)
    img = np.reshape(img, img.shape + (1,))
    img = np.reshape(img,(1,) + img.shape)
    return img

def test_generator(test_files, target_size=(256,256)):
    # yield one preprocessed test image at a time (reconstructed; truncated in the export)
    for test_file in test_files:
        yield test_load_image(test_file, target_size)

def save_result(save_path, npyfile, test_files):
    # write each prediction as <test image name>_predict.png (see add_suffix below)
    for i, item in enumerate(npyfile):
        result_file = add_suffix(os.path.basename(test_files[i]), "predict")
        img = (item[:, :, 0] * 255.).astype(np.uint8)
        cv2.imwrite(os.path.join(save_path, result_file), img)
In [13]:
def add_suffix(base_file, suffix):
    filename, fileext = os.path.splitext(base_file)
    return "%s_%s%s" % (filename, suffix, fileext)

# test_files / validation_data were truncated in the export; reconstructed so the
# counts match Out[13]: 100 test x-rays and one (image, dilated mask) validation pair.
test_files = [test_file for test_file in glob(os.path.join(SEGMENTATION_TEST_DIR, "*.png"))
              if ("_mask" not in test_file and "_dilate" not in test_file
                  and "_predict" not in test_file)]

validation_data = (test_load_image(test_files[0], target_size=(512, 512)),
                   test_load_image(add_suffix(test_files[0], "dilate"), target_size=(512, 512)))

len(test_files), len(validation_data)
Out[13]:
(100, 2)
Prepare the U-Net model and train it. This will take a while...
In [14]:
# Adam / ModelCheckpoint imports were in a cell truncated in the export; added here.
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint

train_generator_args = dict(rotation_range=0.2,
                            width_shift_range=0.05,
                            height_shift_range=0.05,
                            shear_range=0.05,
                            zoom_range=0.05,
                            horizontal_flip=True,
                            fill_mode='nearest')

train_gen = train_generator(2,
                            SEGMENTATION_TRAIN_DIR,
                            'image',
                            'dilate',
                            train_generator_args,
                            target_size=(512,512),
                            save_to_dir=os.path.abspath(SEGMENTATION_AUG_DIR))

model = unet(input_size=(512,512,1))
model.compile(optimizer=Adam(lr=1e-5), loss=dice_coef_loss,
              metrics=[dice_coef, 'binary_accuracy'])
model.summary()

model_checkpoint = ModelCheckpoint('unet_lung_seg.hdf5',
                                   monitor='loss',
                                   verbose=1,
                                   save_best_only=True)

history = model.fit_generator(train_gen,
                              steps_per_epoch=STEPS_PER_EPOC,
                              epochs=EPOCHS,
                              callbacks=[model_checkpoint],
                              validation_data=validation_data)
__________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
==================================================================================================
input_1 (InputLayer)             (None, 512, 512, 1)   0
conv2d_1 (Conv2D)                (None, 512, 512, 32)  320         input_1[0][0]
conv2d_2 (Conv2D)                (None, 512, 512, 32)  9248        conv2d_1[0][0]
max_pooling2d_1 (MaxPooling2D)   (None, 256, 256, 32)  0           conv2d_2[0][0]
conv2d_3 (Conv2D)                (None, 256, 256, 64)  18496       max_pooling2d_1[0][0]
conv2d_4 (Conv2D)                (None, 256, 256, 64)  36928       conv2d_3[0][0]
max_pooling2d_2 (MaxPooling2D)   (None, 128, 128, 64)  0           conv2d_4[0][0]
conv2d_5 (Conv2D)                (None, 128, 128, 128) 73856       max_pooling2d_2[0][0]
conv2d_6 (Conv2D)                (None, 128, 128, 128) 147584      conv2d_5[0][0]
max_pooling2d_3 (MaxPooling2D)   (None, 64, 64, 128)   0           conv2d_6[0][0]
conv2d_7 (Conv2D)                (None, 64, 64, 256)   295168      max_pooling2d_3[0][0]
conv2d_8 (Conv2D)                (None, 64, 64, 256)   590080      conv2d_7[0][0]
max_pooling2d_4 (MaxPooling2D)   (None, 32, 32, 256)   0           conv2d_8[0][0]
conv2d_9 (Conv2D)                (None, 32, 32, 512)   1180160     max_pooling2d_4[0][0]
conv2d_10 (Conv2D)               (None, 32, 32, 512)   2359808     conv2d_9[0][0]
conv2d_transpose_1 (Conv2DTrans) (None, 64, 64, 256)   524544      conv2d_10[0][0]
concatenate_1 (Concatenate)      (None, 64, 64, 512)   0           conv2d_transpose_1[0][0]
                                                                   conv2d_8[0][0]
conv2d_11 (Conv2D)               (None, 64, 64, 256)   1179904     concatenate_1[0][0]
conv2d_12 (Conv2D)               (None, 64, 64, 256)   590080      conv2d_11[0][0]
conv2d_transpose_2 (Conv2DTrans) (None, 128, 128, 128) 131200      conv2d_12[0][0]
concatenate_2 (Concatenate)      (None, 128, 128, 256) 0           conv2d_transpose_2[0][0]
                                                                   conv2d_6[0][0]
conv2d_13 (Conv2D)               (None, 128, 128, 128) 295040      concatenate_2[0][0]
conv2d_14 (Conv2D)               (None, 128, 128, 128) 147584      conv2d_13[0][0]
conv2d_transpose_3 (Conv2DTrans) (None, 256, 256, 64)  32832       conv2d_14[0][0]
concatenate_3 (Concatenate)      (None, 256, 256, 128) 0           conv2d_transpose_3[0][0]
                                                                   conv2d_4[0][0]
conv2d_15 (Conv2D)               (None, 256, 256, 64)  73792       concatenate_3[0][0]
conv2d_16 (Conv2D)               (None, 256, 256, 64)  36928       conv2d_15[0][0]
conv2d_transpose_4 (Conv2DTrans) (None, 512, 512, 32)  8224        conv2d_16[0][0]
concatenate_4 (Concatenate)      (None, 512, 512, 64)  0           conv2d_transpose_4[0][0]
                                                                   conv2d_2[0][0]
conv2d_17 (Conv2D)               (None, 512, 512, 32)  18464       concatenate_4[0][0]
conv2d_18 (Conv2D)               (None, 512, 512, 32)  9248        conv2d_17[0][0]
conv2d_19 (Conv2D)               (None, 512, 512, 1)   33          conv2d_18[0][0]
==================================================================================================
Total params: 7,759,521
Trainable params: 7,759,521
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/48
Found 604 images belonging to 1 classes.
Found 604 images belonging to 1 classes.
512/512 [==============================] - 196s 384ms/step - loss: -0.4672 - dice_coef: 0.4672 - binary_accuracy: 0.4820 - val_loss: -0.4817 - val_dice_coef: 0.4817 - val_binary_accuracy: 0.6168

Epoch 00001: loss improved from inf to -0.46716, saving model to unet_lung_seg.hdf5
Epoch 2/48
512/512 [==============================] - 186s 364ms/step - loss: -0.6585 - dice_coef: 0.6585 - binary_accuracy: 0.7137 - val_loss: -0.5781 - val_dice_coef: 0.5781 - val_binary_accuracy: 0.7190
In [15]:
fig, axs = plt.subplots(1, 2, figsize = (15, 4))
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
training_accuracy = history.history['binary_accuracy']
validation_accuracy = history.history['val_binary_accuracy']
Out[15]:
<matplotlib.legend.Legend at 0x7f7a143ede48>
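The plotting calls of cell In [15] were not captured in the export. A minimal sketch of how the collected histories could be drawn on the two axes (line styles and legend labels are assumptions, not the original code) is:

# loss curves on the left axis, accuracy curves on the right axis
epoch_count = range(1, len(training_loss) + 1)

axs[0].plot(epoch_count, training_loss, 'r--')
axs[0].plot(epoch_count, validation_loss, 'b-')
axs[0].legend(['Training loss', 'Validation loss'])

axs[1].plot(epoch_count, training_accuracy, 'r--')
axs[1].plot(epoch_count, validation_accuracy, 'b-')
axs[1].legend(['Training accuracy', 'Validation accuracy'])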
In [16]:
test_gen = test_generator(test_files, target_size=(512,512))
results = model.predict_generator(test_gen, len(test_files), verbose=1)
save_result(SEGMENTATION_TEST_DIR, results, test_files)
4. Results
Below we show some results of our work, presented as the predicted mask, the gold standard (manually
segmented) mask, and the difference between the two segmentations.
The next step will be to select the lung area in the RSNA image dataset and generate a
lungs-only image dataset.
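That step is not part of this notebook, but as a rough sketch of how the saved weights could be used for it: load the checkpoint with the custom Dice functions, predict a mask for a (hypothetical) RSNA image resized to 512x512, and crop the x-ray to the bounding box of the predicted lung pixels.

from keras.models import load_model

rsna_image = cv2.imread("rsna_example.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
rsna_image = cv2.resize(rsna_image, (512, 512)) / 255

seg_model = load_model('unet_lung_seg.hdf5',
                       custom_objects={'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef})

pred = seg_model.predict(rsna_image.reshape(1, 512, 512, 1))[0, :, :, 0]
lung_mask = (pred > 0.5).astype(np.uint8)

ys, xs = np.where(lung_mask > 0)                                   # bounding box of the lungs
lungs_only = rsna_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]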
In [17]:
# The fig/axs line was truncated in the export and is reconstructed here (figsize assumed);
# four test cases, each shown as Predicted / Gold Standard / Difference.
fig, axs = plt.subplots(4, 3, figsize=(15, 20))

image = cv2.imread("../input/segmentation/test/CHNCXR_0003_0.png")
predict_image = cv2.imread("../input/segmentation/test/CHNCXR_0003_0_predict.png")
mask_image = cv2.imread("../input/segmentation/test/CHNCXR_0003_0_dilate.png")

axs[0, 0].set_title("Predicted")
axs[0, 0].imshow(add_colored_mask(image, predict_image))
axs[0, 1].set_title("Gold Std.")
axs[0, 1].imshow(add_colored_mask(image, mask_image))
axs[0, 2].set_title("Diff.")
axs[0, 2].imshow(diff_mask(mask_image, predict_image))

image = cv2.imread("../input/segmentation/test/MCUCXR_0003_0.png")
predict_image = cv2.imread("../input/segmentation/test/MCUCXR_0003_0_predict.png")
mask_image = cv2.imread("../input/segmentation/test/MCUCXR_0003_0_dilate.png")

axs[1, 0].set_title("Predicted")
axs[1, 0].imshow(add_colored_mask(image, predict_image))
axs[1, 1].set_title("Gold Std.")
axs[1, 1].imshow(add_colored_mask(image, mask_image))
axs[1, 2].set_title("Diff.")
axs[1, 2].imshow(diff_mask(mask_image, predict_image))

image = cv2.imread("../input/segmentation/test/CHNCXR_0020_0.png")
predict_image = cv2.imread("../input/segmentation/test/CHNCXR_0020_0_predict.png")
mask_image = cv2.imread("../input/segmentation/test/CHNCXR_0020_0_dilate.png")

axs[2, 0].set_title("Predicted")
axs[2, 0].imshow(add_colored_mask(image, predict_image))
axs[2, 1].set_title("Gold Std.")
axs[2, 1].imshow(add_colored_mask(image, mask_image))
axs[2, 2].set_title("Diff.")
axs[2, 2].imshow(diff_mask(mask_image, predict_image))

image = cv2.imread("../input/segmentation/test/MCUCXR_0016_0.png")
predict_image = cv2.imread("../input/segmentation/test/MCUCXR_0016_0_predict.png")
mask_image = cv2.imread("../input/segmentation/test/MCUCXR_0016_0_dilate.png")

axs[3, 0].set_title("Predicted")
axs[3, 0].imshow(add_colored_mask(image, predict_image))
axs[3, 1].set_title("Gold Std.")
axs[3, 1].imshow(add_colored_mask(image, mask_image))
axs[3, 2].set_title("Diff.")
axs[3, 2].imshow(diff_mask(mask_image, predict_image))
Out[17]:
<matplotlib.image.AxesImage at 0x7f7a1410acc0>
In [18]:
!tar zcvf results.tgz --directory=../input/segmentation/test .
./
./CHNCXR_0657_1_dilate.png
./MCUCXR_0071_0_dilate.png
./CHNCXR_0283_0_predict.png
./MCUCXR_0006_0_dilate.png
./MCUCXR_0367_1.png
./CHNCXR_0020_0.png
./CHNCXR_0030_0_mask.png
./MCUCXR_0006_0_mask.png
./CHNCXR_0462_1_mask.png
./MCUCXR_0113_1_mask.png
./MCUCXR_0101_0_dilate.png
./CHNCXR_0091_0_dilate.png
./MCUCXR_0030_0_dilate.png
./MCUCXR_0399_1_dilate.png
./MCUCXR_0367_1_dilate.png
./CHNCXR_0572_1_mask.png
./CHNCXR_0608_1_mask.png
./CHNCXR_0649_1.png
./MCUCXR_0046_0.png
./MCUCXR_0188_1.png
./CHNCXR_0070_0_mask.png
./CHNCXR_0385_1_mask.png
./MCUCXR_0350_1.png
./MCUCXR_0255_1_mask.png
./MCUCXR_0017_0.png
./MCUCXR_0188_1_predict.png
./MCUCXR_0390_1_predict.png
./MCUCXR_0030_0_mask.png
./CHNCXR_0658_1_dilate.png
./CHNCXR_0460_1.png
./CHNCXR_0091_0_mask.png
./CHNCXR_0157_0_mask.png
./MCUCXR_0059_0_mask.png
./CHNCXR_0460_1_dilate.png
./MCUCXR_0350_1_predict.png
./CHNCXR_0572_1.png
./CHNCXR_0320_0_mask.png
./CHNCXR_0462_1.png
./CHNCXR_0152_0_mask.png
./CHNCXR_0230_0_predict.png
./MCUCXR_0011_0.png
./MCUCXR_0258_1_mask.png
./CHNCXR_0538_1.png
./CHNCXR_0152_0_dilate.png
./CHNCXR_0608_1_dilate.png
./MCUCXR_0367_1_mask.png
./MCUCXR_0313_1_predict.png
./MCUCXR_0051_0_mask.png
./MCUCXR_0150_1_mask.png
./CHNCXR_0506_1.png
./MCUCXR_0289_1_predict.png
./CHNCXR_0658_1.png
./MCUCXR_0101_0.png
./MCUCXR_0058_0_dilate.png
./MCUCXR_0095_0_dilate.png
./MCUCXR_0275_1_dilate.png
./CHNCXR_0620_1.png
./CHNCXR_0375_1_mask.png
./MCUCXR_0275_1.png
./CHNCXR_0446_1_mask.png
./MCUCXR_0017_0_mask.png
./CHNCXR_0238_0.png
./MCUCXR_0141_1_predict.png
./CHNCXR_0005_0_predict.png
./MCUCXR_0311_1_predict.png
./CHNCXR_0620_1_mask.png
./MCUCXR_0195_1_mask.png
./MCUCXR_0350_1_mask.png
./MCUCXR_0091_0_mask.png
./CHNCXR_0628_1_mask.png
./MCUCXR_0046_0_predict.png
./CHNCXR_0520_1_mask.png
./MCUCXR_0046_0_mask.png
./MCUCXR_0141_1_dilate.png
./MCUCXR_0080_0_mask.png
./MCUCXR_0075_0_predict.png
./MCUCXR_0080_0_dilate.png
./CHNCXR_0651_1.png
./MCUCXR_0026_0_mask.png
./CHNCXR_0030_0_predict.png
./CHNCXR_0334_1_dilate.png
./MCUCXR_0049_0_dilate.png
./MCUCXR_0003_0.png
./CHNCXR_0030_0.png
./CHNCXR_0510_1_mask.png
./MCUCXR_0113_1_dilate.png
./MCUCXR_0150_1_predict.png
./MCUCXR_0049_0.png
./MCUCXR_0313_1.png
./MCUCXR_0016_0_mask.png
./CHNCXR_0085_0_mask.png
./CHNCXR_0657_1.png
./CHNCXR_0538_1_predict.png
./CHNCXR_0567_1_dilate.png
./CHNCXR_0334_1_predict.png
./CHNCXR_0320_0.png
./CHNCXR_0032_0.png
./CHNCXR_0238_0_mask.png
./CHNCXR_0423_1_dilate.png
./MCUCXR_0035_0.png
./MCUCXR_0049_0_mask.png
./MCUCXR_0375_1_mask.png
./CHNCXR_0157_0_predict.png
./CHNCXR_0259_0.png
./CHNCXR_0003_0_predict.png
./CHNCXR_0658_1_predict.png
./CHNCXR_0329_1.png
./CHNCXR_0375_1_dilate.png
./MCUCXR_0182_1.png
./CHNCXR_0409_1_mask.png
./CHNCXR_0572_1_predict.png
./MCUCXR_0057_0.png
./MCUCXR_0051_0.png
./MCUCXR_0017_0_predict.png
./MCUCXR_0095_0_predict.png
./CHNCXR_0003_0_dilate.png
./MCUCXR_0057_0_mask.png
./MCUCXR_0003_0_predict.png
./CHNCXR_0387_1_mask.png
./MCUCXR_0258_1_dilate.png
./MCUCXR_0170_1_mask.png
./MCUCXR_0188_1_mask.png
./CHNCXR_0329_1_predict.png
./CHNCXR_0157_0_dilate.png
./MCUCXR_0059_0_dilate.png
./CHNCXR_0329_1_dilate.png
./MCUCXR_0289_1.png
./CHNCXR_0611_1_dilate.png
./CHNCXR_0152_0_predict.png
./CHNCXR_0423_1_predict.png
./MCUCXR_0375_1.png
./CHNCXR_0329_1_mask.png
./CHNCXR_0608_1_predict.png
./MCUCXR_0011_0_predict.png
./CHNCXR_0020_0_mask.png
./CHNCXR_0259_0_predict.png
./MCUCXR_0258_1.png
./CHNCXR_0004_0.png
./MCUCXR_0017_0_dilate.png
./MCUCXR_0099_0_mask.png
./MCUCXR_0026_0_dilate.png
./CHNCXR_0122_0_dilate.png
./CHNCXR_0085_0_predict.png
./MCUCXR_0350_1_dilate.png
./CHNCXR_0005_0.png
./CHNCXR_0567_1.png
./CHNCXR_0068_0_mask.png
./CHNCXR_0070_0.png
./MCUCXR_0059_0_predict.png
./CHNCXR_0283_0_mask.png
./MCUCXR_0058_0.png
./MCUCXR_0046_0_dilate.png
./MCUCXR_0030_0.png
./CHNCXR_0032_0_mask.png
./CHNCXR_0155_0.png
./CHNCXR_0651_1_predict.png
./MCUCXR_0058_0_mask.png
./CHNCXR_0538_1_dilate.png
./CHNCXR_0584_1_predict.png
./MCUCXR_0101_0_predict.png
./CHNCXR_0409_1_predict.png
./MCUCXR_0182_1_mask.png
./MCUCXR_0311_1_mask.png
./MCUCXR_0030_0_predict.png
./MCUCXR_0141_1.png
./MCUCXR_0099_0_dilate.png
./MCUCXR_0071_0.png
./MCUCXR_0051_0_dilate.png
./MCUCXR_0266_1_dilate.png
./MCUCXR_0064_0_dilate.png
./MCUCXR_0375_1_predict.png
./MCUCXR_0399_1.png
./CHNCXR_0575_1_mask.png
./CHNCXR_0384_1_mask.png
./MCUCXR_0048_0.png
./MCUCXR_0051_0_predict.png
./MCUCXR_0390_1.png
./CHNCXR_0032_0_predict.png
./MCUCXR_0352_1.png
./CHNCXR_0408_1_dilate.png
./MCUCXR_0058_0_predict.png
./CHNCXR_0060_0.png
./CHNCXR_0238_0_predict.png
./CHNCXR_0446_1.png
./CHNCXR_0506_1_dilate.png
./MCUCXR_0141_1_mask.png
./CHNCXR_0004_0_predict.png
./MCUCXR_0144_1_predict.png
./MCUCXR_0255_1_predict.png
./CHNCXR_0608_1.png
./MCUCXR_0313_1_mask.png
./CHNCXR_0230_0_dilate.png
./MCUCXR_0006_0.png
./CHNCXR_0091_0.png
./MCUCXR_0077_0.png
./MCUCXR_0144_1.png
./MCUCXR_0016_0_predict.png
./CHNCXR_0155_0_mask.png
./CHNCXR_0259_0_dilate.png
./CHNCXR_0409_1.png
./CHNCXR_0567_1_mask.png
./CHNCXR_0649_1_mask.png
./CHNCXR_0575_1.png
./MCUCXR_0102_0_mask.png
./CHNCXR_0575_1_dilate.png
./CHNCXR_0020_0_dilate.png
./MCUCXR_0077_0_mask.png
./CHNCXR_0005_0_dilate.png
./MCUCXR_0275_1_predict.png
./CHNCXR_0658_1_mask.png
./CHNCXR_0275_0_predict.png
./MCUCXR_0091_0.png
./MCUCXR_0057_0_dilate.png
./MCUCXR_0113_1.png
./MCUCXR_0188_1_dilate.png
./CHNCXR_0408_1_mask.png
./MCUCXR_0082_0_mask.png
./CHNCXR_0628_1_predict.png
./MCUCXR_0077_0_dilate.png
./MCUCXR_0352_1_dilate.png
./MCUCXR_0311_1.png
./CHNCXR_0375_1.png
./CHNCXR_0387_1_predict.png
./MCUCXR_0080_0.png
./MCUCXR_0049_0_predict.png
./MCUCXR_0170_1_dilate.png
./CHNCXR_0584_1_dilate.png
./CHNCXR_0030_0_dilate.png
./MCUCXR_0311_1_dilate.png
./CHNCXR_0122_0.png
./CHNCXR_0320_0_predict.png
./MCUCXR_0352_1_mask.png
./CHNCXR_0387_1.png
./CHNCXR_0385_1_dilate.png
./MCUCXR_0003_0_mask.png
./CHNCXR_0275_0.png
./MCUCXR_0195_1_predict.png
./MCUCXR_0289_1_dilate.png
./CHNCXR_0060_0_mask.png
./CHNCXR_0423_1.png
./MCUCXR_0170_1.png
./CHNCXR_0460_1_mask.png
./CHNCXR_0375_1_predict.png
./MCUCXR_0367_1_predict.png
./MCUCXR_0096_0_mask.png
./MCUCXR_0399_1_predict.png
./MCUCXR_0016_0_dilate.png
./CHNCXR_0628_1_dilate.png
./CHNCXR_0122_0_mask.png
./MCUCXR_0077_0_predict.png
./CHNCXR_0275_0_dilate.png
./CHNCXR_0122_0_predict.png
./MCUCXR_0082_0_predict.png
./MCUCXR_0075_0_dilate.png
./MCUCXR_0075_0_mask.png
./CHNCXR_0155_0_predict.png
./CHNCXR_0384_1_predict.png
./MCUCXR_0399_1_mask.png
./CHNCXR_0334_1_mask.png
./MCUCXR_0301_1_predict.png
./MCUCXR_0099_0.png
./MCUCXR_0006_0_predict.png
./CHNCXR_0446_1_predict.png
./CHNCXR_0060_0_dilate.png
./CHNCXR_0567_1_predict.png
./CHNCXR_0070_0_predict.png
./CHNCXR_0572_1_dilate.png
./CHNCXR_0423_1_mask.png
./CHNCXR_0238_0_dilate.png
./CHNCXR_0112_0.png
./CHNCXR_0384_1.png
./CHNCXR_0409_1_dilate.png
./CHNCXR_0510_1_dilate.png
./MCUCXR_0150_1_dilate.png
./MCUCXR_0390_1_mask.png
./MCUCXR_0011_0_mask.png
./MCUCXR_0195_1_dilate.png
./MCUCXR_0375_1_dilate.png
./CHNCXR_0003_0.png
./MCUCXR_0266_1_mask.png
./CHNCXR_0628_1.png
./CHNCXR_0283_0.png
./MCUCXR_0096_0_dilate.png
./MCUCXR_0266_1_predict.png
./MCUCXR_0150_1.png
./MCUCXR_0071_0_mask.png
./CHNCXR_0005_0_mask.png
./CHNCXR_0032_0_dilate.png
./CHNCXR_0520_1_predict.png
./MCUCXR_0071_0_predict.png
./MCUCXR_0144_1_mask.png
./CHNCXR_0649_1_predict.png
./MCUCXR_0255_1_dilate.png
./CHNCXR_0651_1_dilate.png
./CHNCXR_0584_1_mask.png
./CHNCXR_0510_1_predict.png
./MCUCXR_0255_1.png
./MCUCXR_0042_0_dilate.png
./MCUCXR_0080_0_predict.png
./MCUCXR_0301_1_dilate.png
./MCUCXR_0258_1_predict.png
./CHNCXR_0506_1_predict.png
./MCUCXR_0101_0_mask.png
./MCUCXR_0082_0_dilate.png
./CHNCXR_0657_1_mask.png
./MCUCXR_0035_0_predict.png
./MCUCXR_0102_0_dilate.png
./MCUCXR_0266_1.png
./MCUCXR_0102_0.png
./MCUCXR_0035_0_mask.png
./CHNCXR_0620_1_predict.png
./MCUCXR_0096_0.png
./MCUCXR_0102_0_predict.png
./MCUCXR_0059_0.png
./CHNCXR_0259_0_mask.png
./CHNCXR_0462_1_predict.png
./CHNCXR_0020_0_predict.png
./CHNCXR_0651_1_mask.png
./MCUCXR_0042_0.png
./CHNCXR_0152_0.png
./CHNCXR_0068_0_dilate.png
./MCUCXR_0003_0_dilate.png
./MCUCXR_0016_0.png
./CHNCXR_0091_0_predict.png
./MCUCXR_0099_0_predict.png
./MCUCXR_0289_1_mask.png
./CHNCXR_0462_1_dilate.png
./CHNCXR_0387_1_dilate.png
./MCUCXR_0182_1_dilate.png
./MCUCXR_0011_0_dilate.png
./MCUCXR_0113_1_predict.png
./CHNCXR_0408_1.png
./MCUCXR_0275_1_mask.png
./CHNCXR_0003_0_mask.png
./MCUCXR_0048_0_dilate.png
./CHNCXR_0657_1_predict.png
./MCUCXR_0042_0_predict.png
./CHNCXR_0510_1.png
./MCUCXR_0301_1_mask.png
./CHNCXR_0538_1_mask.png
./MCUCXR_0095_0_mask.png
./MCUCXR_0026_0_predict.png
./MCUCXR_0170_1_predict.png
./CHNCXR_0575_1_predict.png
./CHNCXR_0320_0_dilate.png
./MCUCXR_0301_1.png
./CHNCXR_0004_0_mask.png