
Self-Supervised Domain Adaptation for Patient-Specific, Real-Time Tissue Tracking

  • Conference paper: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (MICCAI 2020)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 12263)

Abstract

Estimating tissue motion is crucial to provide automatic motion stabilization and guidance during surgery. However, endoscopic images often lack distinctive features, and fine tissue deformation can only be captured with dense tracking methods such as optical flow. To achieve high accuracy at high processing rates, we propose fine-tuning a fast optical flow model on an unlabeled, patient-specific image domain. We adopt multiple strategies to achieve unsupervised fine-tuning. First, we use a teacher-student approach to transfer knowledge from a slow but accurate teacher model to a fast student model. Second, we develop self-supervised tasks in which the model is encouraged to learn from different but related examples. Comparisons with out-of-the-box models show that our method achieves significantly better results. Our experiments uncover the effects of different task combinations. We demonstrate that unsupervised fine-tuning can improve the performance of CNN-based tissue tracking, opening up a promising future direction.
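The teacher-student transfer described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the `teacher` callable and the frame pairs below are hypothetical stand-ins for the slow, accurate flow model and the unlabeled patient-specific recordings, and the average endpoint error is used as a generic distillation loss.

```python
import numpy as np

def endpoint_error(flow_pred, flow_target):
    """Average endpoint error (AEPE) between dense flow fields of shape (H, W, 2).

    In teacher-student distillation this serves as the student's training loss,
    with the teacher's prediction acting as a pseudo ground truth.
    """
    return float(np.mean(np.linalg.norm(flow_pred - flow_target, axis=-1)))

def make_pseudo_labels(frame_pairs, teacher):
    """Run the slow teacher once over unlabeled patient-specific frame pairs
    to create dense pseudo labels for fine-tuning the fast student."""
    return [teacher(f1, f2) for f1, f2 in frame_pairs]

# Tiny smoke test: a "teacher" that predicts a constant (1, 1) px flow.
teacher = lambda f1, f2: np.ones(f1.shape[:2] + (2,))
pairs = [(np.zeros((4, 4)), np.zeros((4, 4)))]
labels = make_pseudo_labels(pairs, teacher)
student_pred = np.zeros((4, 4, 2))  # an untrained student predicting zero flow
print(round(endpoint_error(student_pred, labels[0]), 4))  # AEPE of sqrt(2) ≈ 1.4142
```

In a real pipeline the student's weights would be updated by gradient descent on this loss over the intra-operative frames, which is what makes the fine-tuning patient-specific.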


Notes

  1. Training samples should ideally be identical to application samples. We therefore also propose to obtain training samples directly prior to the surgical intervention in the operating room. Intra-operative training time was 15 min on average.


Acknowledgements

This work has received funding from the European Union as part of the EFRE OPhonLas project.

Author information

Corresponding author: Sontje Ihler.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Ihler, S., Kuhnke, F., Laves, M.H., Ortmaier, T. (2020). Self-Supervised Domain Adaptation for Patient-Specific, Real-Time Tissue Tracking. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol 12263. Springer, Cham. https://doi.org/10.1007/978-3-030-59716-0_6

  • DOI: https://doi.org/10.1007/978-3-030-59716-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59715-3

  • Online ISBN: 978-3-030-59716-0

  • eBook Packages: Computer Science, Computer Science (R0)
