Article

Real-Time Resolution Enhancement of Confocal Laser Scanning Microscopy via Deep Learning

1 State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
2 Research Institute, Ningbo YongXin Optics Co., Ltd., Ningbo 315000, China
* Authors to whom correspondence should be addressed.
Photonics 2024, 11(10), 983; https://doi.org/10.3390/photonics11100983
Submission received: 23 September 2024 / Revised: 11 October 2024 / Accepted: 17 October 2024 / Published: 19 October 2024
(This article belongs to the Special Issue Advanced Optical Microscopy and Imaging Technology)

Abstract

Confocal laser scanning microscopy is one of the most widely used tools for high-resolution imaging of biological cells. However, the imaging resolution of conventional confocal technology is limited by diffraction, and improving it usually requires more complex optical principles and expensive optomechanical structures. This study proposes a deep residual neural network algorithm that can effectively improve the imaging resolution of confocal microscopy in real time. The reliability and real-time performance of the algorithm were verified through imaging experiments on different biological structures, and an imaging resolution below 120 nm was achieved in a cost-effective manner. This study contributes to the real-time improvement of the imaging resolution of confocal microscopy and expands its application scenarios in biological imaging.

1. Introduction

Confocal microscopy plays an important role in numerous scientific research areas. The resolution and other performance characteristics of traditional confocal microscopy are slightly better than those of wide-field fluorescence microscopy. The pinhole, the core component of confocal microscopy, provides the system with a high signal-to-noise ratio and optical sectioning capability by rejecting most of the out-of-focus light. However, the imaging resolution of conventional confocal microscopy, approximately 180 nm, remains diffraction-limited and cannot satisfy the requirements for observing finer biological structures.
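For reference, a worked number (our illustration, using the 1.49 NA objective and 520 nm emission wavelength reported later in this paper): the lateral Rayleigh resolution limit of a wide-field system is

$$d_{xy} = \frac{0.61\,\lambda}{\mathrm{NA}} = \frac{0.61 \times 520\ \mathrm{nm}}{1.49} \approx 213\ \mathrm{nm},$$

and a confocal pinhole of practical, finite size tightens this only modestly, which is consistent with the approximately 180 nm figure quoted above.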
Recently, a series of methods have been proposed to improve the imaging performance of confocal microscopy. Heintzmann et al. proposed confocal subtraction imaging, which improves the resolution of confocal microscopes by capturing two pinhole images of different sizes and performing a weighted subtraction [1]. Di Franco et al. used a lifetime tuning algorithm to process a series of pinhole images of different sizes and reduce the confocal defocus background [2]. However, subtraction imaging cannot significantly improve the imaging resolution of confocal microscopy. Kuang et al. proposed fluorescence emission difference (FED) microscopy and achieved a spatial resolution smaller than λ/4 by subtracting the intensities of solid and hollow spots with weighting coefficients [3,4]. Spinning disk confocal microscopy uses a pinhole array to scan samples, increasing the imaging speed by hundreds of times; it can achieve a resolution of approximately 120 nm when combined with a back-end deconvolution algorithm [5,6,7]. Airyscan uses a 32-channel GaAsP detector array to simultaneously improve the signal-to-noise ratio and push the resolution to approximately 120 nm [8,9]. Most of these methods surpass the diffraction limit and reach roughly 120 nm, but only by adding complex and expensive optical components.
In contrast, improving the imaging performance of confocal microscopy from an algorithmic perspective is both efficient and more cost-effective. Dey et al. proposed a more robust Richardson–Lucy algorithm with total variation regularization to improve the image quality of 3D confocal microscopy [10]. Dupé et al. used a fast proximal forward–backward splitting iterative deconvolution algorithm to improve image quality at low signal-to-noise ratios [11]. He et al. employed constrained total variation regularization to enhance the sparsity of confocal images and reduce artifacts [12]. In addition, combining adaptive optics with microscope systems can improve the quality of focused beams, reduce aberrations, and effectively increase imaging depth [13,14,15,16,17].
Presently, deep learning plays an increasingly important role in improving the imaging performance of confocal microscopy. Weigert et al. combined confocal microscopy with a content-aware restoration network to achieve higher resolution at higher speeds and lower light intensities [18]. Li et al. proposed a back-projection generative adversarial network that achieves a 64-fold increase in the imaging speed of confocal microscopy [19]. Wang et al. realized a resolution comparable to that of a pixel-reassignment reconstruction algorithm using deep learning methods [20]. Fang et al. obtained training pairs by downsampling and adding noise to high-resolution confocal images; they used information from adjacent frames to reduce flicker artifacts, thus improving the resolution and signal-to-noise ratio of confocal microscopy [21]. Huang et al. proposed a dual-channel attention network that builds a training set of confocal and stimulated emission depletion (STED) images to learn the mapping from low to high resolution, thereby improving the resolution of confocal microscopy [22]. Although STED offers a higher imaging resolution, both structured illumination microscopy (SIM) and confocal microscopy are low-phototoxicity fluorescence imaging techniques with similar application scenarios. In addition, the real-time performance of the image output in the above works still needs to be improved.
In this paper, we propose a deep learning algorithm based on an end-to-end deep residual neural network that improves both the resolution of confocal microscopy and the real-time performance of the algorithm. Confocal microscopy and SIM were used on the same microscope host as the low- and high-resolution systems, respectively. Training pairs with pixel-level alignment in the same tissue area could thus be obtained easily, and a real-shot training set could be constructed quickly by adjusting the imaging system parameters. The trained neural network was experimentally verified to improve the resolution of confocal microscopy to approximately 120 nm in real time. We demonstrated the improved resolution on fluorescent beads and different biological samples, and verified the real-time performance of the algorithm in fluorescence imaging of mitochondria in living cells.

2. Materials and Methods

We developed a deep learning algorithm that can realize instant super-resolution imaging with confocal microscopy. As shown in Figure 1, a deep learning dataset was constructed through experiments and simulations. We used confocal microscopy (NCF1000, Ningbo YongXin Optics Co., Ltd., Ningbo, China) and SIM [23] (NSR1000, Ningbo YongXin Optics Co., Ltd., Ningbo, China) on the same microscope host. The voltage of the scanning galvanometer of the confocal microscope was adjusted so that the fields of view of the confocal images and the SIM-reconstructed images were consistent. Low-resolution (confocal) and high-resolution (SIM) images of the same sample were thus obtained directly to constitute a dataset; this 1:1 pixel-aligned correspondence also reduces the processing time of the prediction algorithm.

The samples used in the experiments included 100 nm beads (Nanoparticles 4C fluor 100 nm slide, Abberior Instruments, Göttingen, Germany), mitochondria (labeled with 250 nM PK Mito Green in U2OS cells, prepared by Zhang Yuhui's group at Huazhong University of Science and Technology, Wuhan, China) and microtubules (GATTA-Cells 4C, labeled with Alexa Fluor 555, GATTAQUANT, Brunswick, Germany). The excitation and central emission wavelengths of the 100 nm fluorescent beads were 488 nm and 520 nm, respectively. The illumination wavelengths for the mitochondria and microtubules were 488 nm and 561 nm, respectively. The experiments were conducted under a 100× objective lens (numerical aperture 1.49, Ningbo YongXin Optics Co., Ltd., Ningbo, China). In addition, high-resolution images of clathrin-coated pits (CCPs) were obtained from the SIM-reconstructed CCP images in the open-source BioSR dataset [24], and the corresponding low-resolution CCP images were generated by algorithms simulating the confocal imaging process.

Confocal images and SIM super-resolution images were used as low-resolution (LR) and high-resolution (HR) image pairs, which were divided into training and validation datasets in a ratio of 9:1. The Res U-Net [25] network down-sampled the training data multiple times to learn the mapping features between the image pairs. A first prediction was generated from the LR data of the validation set using the current network; the difference between the prediction and the HR validation data was then computed with the loss function to update the weights. The training dataset was iterated over until the loss function converged. The weights were saved and the network was exported when the loss function was minimal, completing the training process.
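To make the pairing concrete, the following is a minimal sketch (our illustration; the function and variable names are ours, and equal-size, pixel-aligned LR/HR images are assumed) of cutting registered confocal/SIM image pairs into 128 × 128 patches and splitting them 9:1 into training and validation sets:

```python
import random

def make_dataset(lr_images, hr_images, patch=128, val_fraction=0.1, seed=0):
    """Cut pixel-aligned confocal (LR) / SIM (HR) image pairs into
    patch x patch tiles and split them 9:1 into training/validation."""
    assert len(lr_images) == len(hr_images)
    pairs = []
    for lr, hr in zip(lr_images, hr_images):
        assert lr.shape == hr.shape  # assumes 1:1 field-of-view alignment
        h, w = lr.shape
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                pairs.append((lr[y:y + patch, x:x + patch],
                              hr[y:y + patch, x:x + patch]))
    random.Random(seed).shuffle(pairs)
    n_val = int(len(pairs) * val_fraction)
    return pairs[n_val:], pairs[:n_val]  # (training, validation)
```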
The network structure of the proposed Res U-Net, which includes convolution, downsampling, and upsampling modules, is shown in Figure 2. First, the input LR image is convolved to obtain an LR feature map, which is then down-sampled. Four downsampling modules are responsible for extracting image features from the LR image. Each down-sampling module includes 3 × 3 convolution layers, a rectified linear unit (ReLU) activation function, and a max pooling layer. The ReLU activation improves the nonlinearity of the neural network and alleviates the vanishing-gradient problem. The spatial dimensions are reduced by a max pooling layer with a 2 × 2 pool size. Next, the HR feature map is generated through four up-sampling modules, each of which includes 3 × 3 convolution layers, a ReLU activation, and an up-sampling layer. The HR image is output by convolving the HR feature map. The down- and up-sampling stages at each level of the network add a feature map directly from the input to the output through skip connections, retaining the original input details while increasing the effective network depth. We reduced the number of network channels from 64 to 32 to improve the imaging speed. The loss function L combines the mean square error (MSE) loss [26] and the structural similarity (SSIM) loss [27], as shown in Equation (1), where $I_{predict}$ and $I_{HR}$ denote the algorithm-predicted and high-resolution images, respectively. The MSE term measures the mean square error between the HR and predicted images to control the accuracy of the network prediction, whereas the SSIM term controls the perceptual quality of the prediction. A more detailed network structure can be found in Supplement S1.
$$L\left(I_{predict}, I_{HR}\right) = \mathrm{MSE}\left(I_{predict}, I_{HR}\right) + 1 - \mathrm{SSIM}\left(I_{predict}, I_{HR}\right) \qquad (1)$$
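As an illustration, a minimal Keras sketch of Equation (1) and of a Res U-Net of this shape (four down-sampling and four up-sampling stages, 32 base channels, residual and skip connections) might look as follows; this is our reconstruction from the description above, not the authors' released code, and all names are ours:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def combined_loss(y_true, y_pred):
    # Equation (1): the MSE term controls pixel-wise accuracy and the
    # (1 - SSIM) term controls perceptual/structural quality.
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    ssim = tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))
    return mse + (1.0 - ssim)

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU plus a residual connection from the
    # block input to its output (the "Res" in Res U-Net).
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    return layers.Add()([shortcut, y])

def build_res_unet(input_shape=(128, 128, 1), base_filters=32):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    for i in range(4):                      # four down-sampling stages
        x = conv_block(x, base_filters * 2**i)
        skips.append(x)                     # kept for U-Net skip connections
        x = layers.MaxPooling2D(2)(x)       # 2x2 max pooling
    x = conv_block(x, base_filters * 16)    # bottleneck
    for i in reversed(range(4)):            # four up-sampling stages
        x = layers.Conv2DTranspose(base_filters * 2**i, 2,
                                   strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[i]])
        x = conv_block(x, base_filters * 2**i)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)
```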
The network was implemented using Python 3.9.18, CUDA 11.6, and Keras 2.10.0 [28,29,30]. The entire training and prediction processes were performed on an NVIDIA Quadro RTX 4000 (8 GB). The experimental dataset included 58,319 training pairs and 6247 validation pairs (128 × 128 pixels). The CCP dataset from BioSR contained 5598 training pairs and 516 validation pairs (128 × 128 pixels). We used the Adam optimizer [31] with a learning rate of 0.0001, and the number of iterations was initially capped at 40,000. The hyperparameters β1 and β2 were 0.9 and 0.999, respectively. Training on the entire dataset took approximately 9 h.
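A corresponding training call, again as an illustrative sketch built on the model and loss above (the batch size, epoch count, and early-stopping settings are our assumptions; the paper does not state them):

```python
model = build_res_unet()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4,
                                       beta_1=0.9, beta_2=0.999),
    loss=combined_loss)

# train_lr/train_hr and val_lr/val_hr stand for the 128x128 LR/HR patch
# stacks produced by the 9:1 split; training stops when the validation
# loss stops improving, and the best weights are kept.
model.fit(train_lr, train_hr,
          validation_data=(val_lr, val_hr),
          batch_size=16, epochs=200,
          callbacks=[tf.keras.callbacks.EarlyStopping(
              monitor="val_loss", patience=10, restore_best_weights=True)])
model.save("res_unet_confocal_to_sim.h5")
```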

3. Results and Discussion

To verify the accuracy of the deep learning network, we imaged 100 nm fluorescently labeled beads and biological samples (microtubules and mitochondria) using confocal microscopy (NCF1000). The experimental results for the beads are shown in Figure 3, where Figure 3a–c are the images from confocal microscopy, deep learning algorithm prediction, and SIM reconstruction, respectively. Figure 3d–f show enlarged images of the solid-line boxes on the right sides of Figure 3a–c, and Figure 3g–i show enlarged images of the dotted-line boxes on the left sides of Figure 3a–c. The illumination wavelength was 488 nm. A comparison of the three imaging modes shows that the two closely spaced 100 nm beads in Figure 3d,g could not be clearly distinguished by the confocal system, but could be clearly distinguished in the algorithm predictions shown in Figure 3e,h and the SIM-reconstructed images shown in Figure 3f,i. To further compare the resolutions of the three imaging modes, we took cross-sections (dashed lines in the figure, crossing three 100 nm beads with different spacings) in the same area of Figure 3d–f. The normalized intensity distributions of the three curves are shown in Figure 3j. Confocal imaging (yellow curve) reveals only two peaks, and the two 100 nm beads on the left cannot be clearly distinguished. The deep learning algorithm prediction (blue curve) and SIM reconstruction (green curve) reveal three peaks, and the two 100 nm beads on the left can be clearly distinguished according to the Rayleigh criterion. The full width at half maximum (FWHM) of the rightmost bead was 240.75 nm in the confocal curve, 125.20 nm in the prediction curve, and 122.20 nm in the SIM reconstruction. Compared to confocal imaging, the resolution gains of the algorithm prediction and SIM reconstruction were therefore 1.92 and 1.97 times, respectively. Figure 3k shows the normalized intensity distributions of the three cross-sections in Figure 3g–i. The three beads that could not be distinguished by confocal imaging were clearly distinguished in both the algorithm-predicted and SIM-reconstructed images. The FWHM of the rightmost bead was 247.25 nm in confocal imaging and 126.50 nm in both the algorithm prediction and SIM, a resolution gain of 1.95 times. We measured the FWHM of 20 dispersed fluorescent beads in Figure 3a–c and plotted the results in Figure 3l. The average FWHMs for confocal microscopy, algorithm prediction, and SIM were 243.82 ± 13.03 nm, 125.24 ± 8.92 nm, and 121.29 ± 7.15 nm, respectively. Moreover, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) of the algorithm-predicted image were 29.56 and 0.89, respectively. The image pixel resolution in Figure 3a–c was 512 × 512, and the algorithm calculation time was approximately 0.09 s.
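The paper does not specify how the FWHM values were extracted; a common approach, shown here as an illustrative sketch (function names are ours), is to fit each bead's intensity cross-section with a 1D Gaussian and convert the fitted width:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma, offset):
    return a * np.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) + offset

def fwhm_nm(profile, pixel_size_nm):
    """Fit a 1D Gaussian to a bead's intensity cross-section and return
    its full width at half maximum: FWHM = 2*sqrt(2*ln 2)*sigma."""
    x = np.arange(len(profile), dtype=float)
    p0 = [profile.max() - profile.min(), float(np.argmax(profile)),
          2.0, float(profile.min())]
    (a, x0, sigma, offset), _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma) * pixel_size_nm
```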
To further verify the reliability of the algorithm, we trained and experimentally tested it on biological samples. Our training dataset involved microtubules, CCPs, and mitochondria; the structural diversity of these test samples constituted a rigorous test of the adaptability and versatility of the proposed algorithm. Figure 4 compares the experimentally measured microtubules obtained using the three imaging methods. The deep learning algorithm significantly improved the resolution of the confocal image, particularly at structure boundaries, and the difference in detail between the predicted and confocal images is evident. The background between the filamentous structures in the four deep-learning-predicted images was significantly suppressed. The pixel resolution in Figure 4a–d was 1024 × 1024, and the algorithm calculation time was approximately 0.26 s. We compared the speed of our algorithm with that of an existing point-scanning deep learning algorithm [32], as shown in Table 1. Our algorithm was 4.57 times faster than the reference at 512 × 512 pixels and 2.66 times faster at 1024 × 1024 pixels.
To further evaluate the image quality in Figure 4a–d, the following image quality evaluation metrics were introduced: PSNR, RMSE, and SSIM [33]. The metric values for the four images are shown in Figure 5. The average PSNR was approximately 23.36, the average SSIM was approximately 0.86, and the average RMSE was approximately 0.146.
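These three metrics can be computed against the SIM reconstruction as the reference image; a minimal sketch (our illustration, using scikit-image, with images assumed normalized to [0, 1]):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, ref):
    """PSNR, SSIM, and RMSE of a predicted image against the SIM
    reference, both normalized to [0, 1]."""
    psnr = peak_signal_noise_ratio(ref, pred, data_range=1.0)
    ssim = structural_similarity(ref, pred, data_range=1.0)
    rmse = float(np.sqrt(np.mean((ref - pred) ** 2)))
    return psnr, ssim, rmse
```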
To test the generalization ability of the algorithm, we added CCP data from the open-source BioSR dataset for simulation testing. Confocal images were generated from the wide-field CCP images in the open-source dataset, rather than from the SIM-reconstructed images, to increase the prediction ability of the network for low-SNR confocal imaging. The confocal imaging process was numerically simulated with a pinhole size of 1 Airy unit, and the training and validation sets were formed together with the SIM-reconstructed images. Figure 6a shows confocal simulation, algorithm prediction, and SIM reconstruction images of the same sample. As the enlarged views in Figure 6b–g show, the algorithm-predicted and SIM-reconstructed images clearly resolve the hollow structure of the CCPs. Figure 6h shows the normalized intensity along the dotted lines in Figure 6e–g. Confocal imaging cannot resolve the hollow structure of the CCP, whereas the deep learning algorithm resolved a hollow spacing of approximately 113.75 nm; the hollow spacing measured in the SIM-reconstructed image was approximately 145.00 nm. Although the algorithm could resolve the hollow structures, the small number of samples led to a slight gap between the predicted hollow structures and the SIM reconstruction results.
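The paper does not detail its confocal simulation algorithm; as a hedged stand-in, a simple forward model blurs the wide-field ground truth with a Gaussian approximation of the confocal PSF (pinhole of about 1 Airy unit) and adds Poisson shot noise to mimic a low-SNR acquisition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_confocal(ground_truth, psf_fwhm_px, photons=200.0, seed=0):
    """Crude confocal forward model: Gaussian-PSF blur plus Poisson
    shot noise to mimic a low-SNR confocal acquisition."""
    sigma = psf_fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    blurred = gaussian_filter(ground_truth.astype(float), sigma)
    blurred /= blurred.max()
    rng = np.random.default_rng(seed)
    noisy = rng.poisson(blurred * photons) / photons
    return np.clip(noisy, 0.0, 1.0)
```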
To further verify the real-time performance of the algorithm, we applied the algorithm prediction to fluorescently labeled mitochondrial structures in living cells. The confocal microscopy and algorithm prediction results are shown in Figure 7a. Comparing Figure 7b,c, a mitochondrial cristae structure that could not be observed in the confocal image was visible in the algorithm-predicted image, and the resolution was significantly improved. We acquired a series of time-lapse deep-learning-predicted images of the mitochondria in living cells at a pixel resolution of 512 × 512 and enlarged one of the areas, as shown in Figure 7d. The calculation time for a single algorithm-predicted image was approximately 0.09 s, and the confocal imaging speed was 4 fps, which fully satisfies the speed requirements of real-time deep-learning super-resolution prediction for confocal microscopy at 512 × 512 pixels; a timing sketch follows below. Our deep learning algorithm can thus provide real-time resolution enhancement for the confocal system. The figure shows that the morphologies of the two mitochondria changed continuously over time; the two arrows qualitatively depict the movement trends of the two mitochondrial structures in Figure 7d. From the changes in arrow direction in Figure 7d and Supplementary Video S1, it can be seen that the two structures underwent contraction and relaxation over time. In the future, the algorithm could be further accelerated simply by using a better graphics card or a lighter network. The current network is suitable for predicting the four types of structures described above; when different types of biological structures need to be predicted, the model requires additional training.
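As an illustration of the real-time budget (grab_frame and show are hypothetical acquisition and display callbacks; this is not the authors' acquisition code), each 512 × 512 confocal frame is pushed through the trained network between acquisitions:

```python
import time

def realtime_loop(model, grab_frame, show, n_frames=100):
    """At ~0.09 s per prediction, inference keeps up with the 4 fps
    (0.25 s/frame) confocal acquisition at 512 x 512 pixels."""
    for _ in range(n_frames):
        frame = grab_frame()                 # (512, 512) array in [0, 1]
        t0 = time.perf_counter()
        pred = model.predict(frame[None, ..., None], verbose=0)[0, ..., 0]
        print(f"inference time: {time.perf_counter() - t0:.3f} s")
        show(pred)                           # display the enhanced frame
```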

4. Conclusions

In this study, we proposed a deep learning algorithm based on Res U-Net that effectively improves the imaging resolution of a confocal imaging system to approximately 120 nm in real time. We used confocal microscopy (NCF1000) and SIM (NSR1000) on the same platform to quickly construct the dataset: the confocal and SIM images constituted low-resolution/high-resolution data pairs, and a supervised dataset could be built rapidly by adjusting the field of view of the confocal image to match the SIM-reconstructed image pixel by pixel. The network was trained using the constructed experimental dataset and open-source data. Through simulation and experimental verification, the reliability of the deep learning algorithm was demonstrated on multiple samples: 100 nm beads, microtubules, and CCPs. In the dynamic imaging of mitochondrial structures in living cells, the resolution of confocal microscopy was improved in real time at 512 × 512 pixels. In conclusion, our deep learning algorithm can help confocal systems achieve SIM-level imaging resolution in real time, avoid the construction of complex hardware, and facilitate organelle-level dynamic research.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/photonics11100983/s1, Figure S1: Model structure details of the network; Video S1: Time-lapse imaging of mitochondria using the deep learning algorithm.

Author Contributions

Conceptualization, Z.C., Y.X., C.K. and Y.C. (Youhua Chen); methodology, Z.C. and Y.X.; software, Y.X., Y.C. (Yunbo Chen), X.Z. and W.L.; validation, Z.C. and Y.X.; formal analysis, Z.C. and Y.X.; investigation, Z.C. and Y.X.; resources, Z.C.; data curation, Z.C.; writing—original draft preparation, Z.C.; writing—review and editing, Y.X., C.K. and Y.C. (Youhua Chen); visualization, Z.C. and Y.X.; supervision, C.K. and Y.C. (Youhua Chen); project administration, C.K. and Y.C. (Youhua Chen); funding acquisition, C.K. and Y.C. (Youhua Chen). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61975188), Natural Science Foundation of Zhejiang Province (LY23F050010), National Key Research and Development Program of China (2021YFF0700302), Ningbo Key Scientific and Technological Project (2022Z123) and National Science Fund for Distinguished Young Scholars (62125504).

Data Availability Statement

Data underlying the results presented in this paper could be obtained from the authors upon reasonable request.

Conflicts of Interest

Authors Yi Xing and Youhua Chen were employed by the company Ningbo YongXin Optics Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Heintzmann, R.; Sarafis, V.; Munroe, P.; Nailon, J.; Hanley, Q.S.; Jovin, T.M. Resolution enhancement by subtraction of confocal signals taken at different pinhole sizes. Micron 2003, 34, 293–300. [Google Scholar] [CrossRef] [PubMed]
  2. Di Franco, E.; Costantino, A.; Cerutti, E.; D’Amico, M.; Privitera, A.P.; Bianchini, P.; Vicidomini, G.; Gulisano, M.; Diaspro, A.; Lanzanò, L. SPLIT-PIN software enabling confocal and super-resolution imaging with a virtually closed pinhole. Sci. Rep. 2023, 13, 2741. [Google Scholar] [CrossRef] [PubMed]
  3. Kuang, C.; Li, S.; Liu, W.; Hao, X.; Gu, Z.; Wang, Y.; Ge, J.; Li, H.; Liu, X. Breaking the diffraction barrier using fluorescence emission difference microscopy. Sci. Rep. 2013, 3, 1441. [Google Scholar] [CrossRef] [PubMed]
  4. Dong, W.; Huang, Y.; Zhang, Z.; Xu, L.; Kuang, C.; Hao, X.; Cao, L.; Liu, X. Fluorescence emission difference microscopy based on polarization modulation. J. Innov. Opt. Health Sci. 2022, 15, 2250034. [Google Scholar] [CrossRef]
  5. Tanaami, T.; Sugiyama, Y.; Kosugi, Y.; Mikuriya, K.; Abe, M. High-speed confocal fluorescence microscopy using a Nipkow scanner with microlenses for 3-D imaging of single fluorescent molecule in real time. Bioimages 1996, 4, 57–62. [Google Scholar]
  6. Hayashi, S.; Okada, Y. Ultrafast superresolution fluorescence imaging with spinning disk confocal microscope optics. Mol. Biol. Cell 2015, 26, 1743–1751. [Google Scholar] [CrossRef]
  7. Hayashi, S. Resolution doubling using confocal microscopy via analogy with structured illumination microscopy. Jpn. J. Appl. Phys. 2016, 55, 082501. [Google Scholar] [CrossRef]
  8. Huff, J. The Airyscan detector from ZEISS: Confocal imaging with improved signal-to-noise ratio and super-resolution. Nat. Methods 2015, 12, i–ii. [Google Scholar] [CrossRef]
  9. Huff, J.; Bergter, A.; Birkenbeil, J.; Kleppe, I.; Engelmann, R.; Krzic, U. The new 2D Superresolution mode for ZEISS Airyscan. Nat. Methods 2017, 14, 1223. [Google Scholar] [CrossRef]
  10. Dey, N.; Blanc-Feraud, L.; Zimmer, C.; Roux, P.; Kam, Z.; Olivo-Marin, J.-C.; Zerubia, J. Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution. Microsc. Res. Tech. 2006, 69, 260–266. [Google Scholar] [CrossRef]
  11. Dupé, F.X.; Fadili, M.J.; Starck, J.L. Deconvolution of confocal microscopy images using proximal iteration and sparse representations. In Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France, 14–17 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 736–739. [Google Scholar]
  12. He, T.; Sun, Y.; Qi, J.; Hu, J.; Huang, H. Image deconvolution for confocal laser scanning microscopy using constrained total variation with a gradient field. Appl. Opt. 2019, 58, 3754–3766. [Google Scholar] [CrossRef] [PubMed]
  13. Stockbridge, C.; Lu, Y.; Moore, J.; Hoffman, S.; Paxman, R.; Toussaint, K.; Bifano, T. Focusing through dynamic scattering media. Opt. Express 2012, 20, 15086–15092. [Google Scholar] [CrossRef] [PubMed]
  14. Galaktionov, I.; Nikitin, A.; Sheldakova, J.; Toporovsky, V.; Kudryashov, A. Focusing of a laser beam passed through a moderately scattering medium using phase-only spatial light modulator. Photonics 2022, 9, 296. [Google Scholar] [CrossRef]
  15. Katz, O.; Small, E.; Guan, Y.; Silberberg, Y. Noninvasive nonlinear focusing and imaging through strongly scattering turbid layers. Optica 2014, 1, 170–174. [Google Scholar] [CrossRef]
  16. Hillman, T.R.; Yamauchi, T.; Choi, W.; Dasari, R.R.; Feld, M.S.; Park, Y.; Yaqoob, Z. Digital optical phase conjugation for delivering two-dimensional images through turbid media. Sci. Rep. 2013, 3, 1909. [Google Scholar] [CrossRef] [PubMed]
  17. Tao, X.; Fernandez, B.; Azucena, O.; Fu, M.; Garcia, D.; Zuo, Y.; Chen, D.C.; Kubby, J. Adaptive optics confocal microscopy using direct wavefront sensing. Opt. Lett. 2011, 36, 1062–1064. [Google Scholar] [CrossRef]
  18. Weigert, M.; Schmidt, U.; Boothe, T.; Müller, A.; Dibrov, A.; Jain, A.; Wilhelm, B.; Schmidt, D.; Broaddus, C.; Culley, S.; et al. Content-aware image restoration: Pushing the limits of fluorescence microscopy. Nat. Methods 2018, 15, 1090–1097. [Google Scholar] [CrossRef]
  19. Li, X.; Dong, J.; Li, B.; Zhang, Y.; Zhang, Y.; Veeraraghavan, A.; Ji, X. Fast confocal microscopy imaging based on deep learning. In Proceedings of the 2020 IEEE International Conference on Computational Photography (ICCP), St. Louis, MO, USA, 24–26 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–12. [Google Scholar]
  20. Wang, W.; Wu, B.; Zhang, B.; Ma, J.; Tan, J. Deep learning enables confocal laser-scanning microscopy with enhanced resolution. Opt. Lett. 2021, 46, 4932–4935. [Google Scholar] [CrossRef]
  21. Fang, L.; Monroe, F.; Novak, S.W.; Kirk, L.; Schiavon, C.R.; Yu, S.B.; Zhang, T.; Wu, M.; Kastner, K.; Latif, A.A.; et al. Deep learning-based point-scanning super-resolution imaging. Nat. Methods 2021, 18, 406–416. [Google Scholar] [CrossRef]
  22. Huang, B.; Li, J.; Yao, B.; Yang, Z.; Lam, E.Y.; Zhang, J.; Yan, W.; Qu, J. Enhancing image resolution of confocal fluorescence microscopy with deep learning. PhotoniX 2023, 4, 2. [Google Scholar] [CrossRef]
  23. Ji, C.; Zhu, Y.; He, E.; Liu, Q.; Zhou, D.; Xie, S.; Wu, H.; Zhang, J.; Du, K.; Chen, Y.; et al. Full field-of-view hexagonal lattice structured illumination microscopy based on the phase shift of electro–optic modulators. Opt. Express 2024, 32, 1635–1649. [Google Scholar] [CrossRef] [PubMed]
  24. Qiao, C.; Li, D.; Guo, Y.; Liu, C.; Jiang, T.; Dai, Q.; Li, D. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat. Methods 2021, 18, 194–202. [Google Scholar] [CrossRef] [PubMed]
  25. Xiao, X.; Lian, S.; Luo, Z.; Li, S. Weighted res-unet for high-quality retina vessel segmentation. In Proceedings of the 2018 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 327–331. [Google Scholar]
  26. Qi, J.; Du, J.; Siniscalchi, S.M.; Ma, X.; Lee, C.-H. On mean absolute error for deep neural network based vector-to-vector regression. IEEE Signal Process. Lett. 2020, 27, 1485–1489. [Google Scholar] [CrossRef]
  27. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss functions for image restoration with neural networks. IEEE Trans. Comput. Imaging 2016, 3, 47–57. [Google Scholar] [CrossRef]
  28. Luebke, D. CUDA: Scalable parallel programming for high-performance scientific computing. In Proceedings of the 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Paris, France, 14–17 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 836–838. [Google Scholar]
  29. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-scale machine learning on heterogeneous systems. arXiv 2015, arXiv:1603.04467. [Google Scholar]
  30. Ketkar, N. Introduction to keras. In Deep Learning with Python: A Hands-On Introduction; Apress: Berkeley, CA, USA, 2017; pp. 97–111. [Google Scholar]
  31. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  32. Qiao, C.; Zeng, Y.; Meng, Q.; Chen, X.; Chen, H.; Jiang, T.; Wei, R.; Guo, J.; Fu, W.; Lu, H.; et al. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat. Commun. 2024, 15, 4180. [Google Scholar] [CrossRef]
  33. Ndajah, P.; Kikuchi, H.; Yukawa, M.; Watanabe, H.; Muramatsu, S. SSIM image quality metric for denoised images. In Proceedings of the 3rd WSEAS Conference International on Visualization, Imaging and Simulation, Faro, Portugal, 3–5 November 2010; pp. 53–58. [Google Scholar]
Figure 1. Working principle of the deep learning algorithm.
Figure 2. Network framework of the deep learning algorithm.
Figure 3. Comparison of 100 nm beads imaged by confocal microscopy, algorithm prediction, and SIM. (a) diffraction-limited 100 nm beads acquired via NCF1000 confocal microscopy; (b) 100 nm bead image predicted by the deep learning algorithm; (c) 100 nm bead image acquired by the NSR1000 SIM; (d–f) enlarged images of the solid-line boxes on the right sides of (a–c), respectively; (g–i) enlarged images of the dashed-line boxes on the left sides of (a–c), respectively; (j) normalized intensity distribution along the dotted-line cross-sections in (d–f); (k) normalized intensity distribution along the dotted-line cross-sections in (g–i); (l) average FWHM of 100 nm fluorescent beads for the three imaging methods. Scale bars: 1 μm (a–c), 0.2 μm (d–i).
Figure 4. (a–d) Comparison of microtubules by confocal microscopy, algorithm prediction, and SIM. (a) top left: deep learning algorithm prediction, right: confocal, bottom: SIM; (b) top right: deep learning algorithm prediction, left: confocal, bottom: SIM; (c) bottom left: deep learning algorithm prediction, right: confocal, top: SIM; (d) bottom right: deep learning algorithm prediction, left: confocal, top: SIM. Scale bar: 1 μm.
Figure 5. Image quality evaluation functions for the images in Figure 4a–d.
Figure 6. (a) Comparison of confocal simulation, deep learning algorithm prediction, and SIM reconstruction; (b–d) enlarged images of the dashed-line boxes on the left side of (a); (e–g) enlarged images of the solid-line boxes on the right side of (a); (h) normalized intensity along the dashed lines in (e–g). Scale bars: 0.5 μm (a), 0.2 μm (b–g).
Figure 7. Deep-learning prediction results for mitochondrial structures in living cells. (a) Comparison of confocal imaging and algorithm prediction results; (b,c) enlarged images of the dotted-line box in (a); (d) time-lapse imaging of the enlarged area in (a); arrows represent the movement trends of the mitochondrial structures. Scale bar: 0.5 μm.
Table 1. Speed comparison of different algorithms.

Pixels          Ours (frame/s)    Reference [32] (frame/s)
512 × 512       ≈11.11            ≈2.43
1024 × 1024     ≈3.85             ≈1.45
