Purpose: Spectral-domain optical coherence tomography (SD-OCT) images are a series of B-scans that capture the volume of the retina and reveal its structural information. Diseases of the outer retina cause changes to the retinal layers that are evident on SD-OCT images, revealing disease etiology and risk factors for disease progression. Quantitative thickness measurements of the retinal layers provide disease-relevant data that reveal important aspects of disease pathogenesis. Manually labeling these layers is extremely laborious, time consuming, and costly. Recently, deep learning algorithms have been used to automate segmentation. Although retinal volumes are inherently three-dimensional, state-of-the-art segmentation approaches have made limited use of this three-dimensional structural information.

Methods: In this work, we train a 3D-UNet on 150 retinal volumes and test on 191 retinal volumes from a hold-out test set (with AMD severity grades ranging from no disease through the intermediate stages to advanced disease, including the presence of geographic atrophy). The 3D deep features learned by the model capture spatial information simultaneously from all three volumetric dimensions. Because, unlike the ground truth, the output of the 3D-UNet is not a single pixel wide, we perform a column-wise probabilistic maximum operation to obtain single-pixel-wide layers for quantitative evaluation.

Results: We compare our results to the publicly available OCT Explorer and to a deep-learning-based 2D-UNet algorithm, and observe errors within 3.11 pixels of the ground-truth locations (for some of the most challenging, advanced-stage AMD eyes, with AMD severity scores of 9 and 10).

Conclusion: Our results show, both qualitatively and quantitatively, that extracting and utilizing 3D features offers a significant advantage over the traditionally used OCT Explorer or 2D-UNet.
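The column-wise probabilistic maximum step can be sketched as follows. This is a minimal NumPy illustration of one plausible reading of that operation, not the authors' implementation: the array shapes, the random probability map, and the function name are assumptions made for the example.

```python
import numpy as np

# Hypothetical input: per-boundary probability maps for one B-scan,
# shape (num_boundaries, height, width), e.g. from a softmax over depth.
rng = np.random.default_rng(0)
prob = rng.random((3, 64, 128))

def columnwise_max_rows(prob):
    """For each boundary and each A-scan (image column), select the row
    with the highest predicted probability, collapsing the network's
    thick response into a single-pixel-wide layer boundary."""
    # argmax over the height (row) axis -> shape (num_boundaries, width)
    return prob.argmax(axis=1)

rows = columnwise_max_rows(prob)
# One row index per boundary per column
assert rows.shape == (3, 128)
```

The resulting `rows` array gives, for each boundary and each column, a single depth location that can be compared directly against single-pixel-wide ground-truth annotations.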