DOI: 10.1145/3458709.3458991

CV-Based Analysis for Microscopic Gauze Suturing Training

Published: 11 July 2021

Abstract

This paper proposes the basis of a microscopic suture practice support system aimed at reducing the time neurosurgeons need to practice microscopic gauze suturing. The system detects instruments in real time from the microscope camera video and provides immediate analysis, so practitioners can review their results as soon as they finish practicing. We introduce a sequential image dataset in which the surgery phases, as well as the bounding boxes of surgical instruments, are annotated. A YOLOv4 network fine-tuned on the proposed dataset achieves an accuracy of approximately 94%. We also propose a tool that estimates the phase of each suturing attempt from the tracking data using a dynamic-programming-based algorithm, recovering the phases of the previous practice session with about 83% accuracy. This estimate drives an application that detects points of concern and provides feedback on them. The proposal augments practitioners' skill acquisition by letting them reflect immediately on their most recent practice session rather than repeating practice aimlessly.
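The paper does not spell out its dynamic-programming phase-estimation algorithm beyond the abstract. As a hedged illustration only, a minimal sketch of one plausible formulation might look like the following: each video frame carries a hypothetical per-frame score for each phase (e.g., derived from the instrument detections), and DP finds the highest-scoring labeling under the assumption that the suturing phases occur in a fixed order, so labels are non-decreasing over time. The function name and score format are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: monotonic phase segmentation by dynamic programming.
# score[t][p] = assumed likelihood that frame t belongs to phase p;
# phases are assumed to occur in the fixed order 0, 1, ..., P-1.

def segment_phases(score):
    """Return per-frame phase labels maximizing the total score,
    subject to labels being non-decreasing over time."""
    T, P = len(score), len(score[0])
    NEG = float("-inf")
    # dp[t][p] = best total score for frames 0..t with frame t in phase p
    dp = [[NEG] * P for _ in range(T)]
    back = [[0] * P for _ in range(T)]
    dp[0][0] = score[0][0]  # the session must start in the first phase
    for t in range(1, T):
        for p in range(P):
            # either stay in phase p, or advance from phase p-1
            stay = dp[t - 1][p]
            advance = dp[t - 1][p - 1] if p > 0 else NEG
            if stay >= advance:
                dp[t][p], back[t][p] = stay + score[t][p], p
            else:
                dp[t][p], back[t][p] = advance + score[t][p], p - 1
    # the session must end in the final phase; backtrack the labels
    labels = [P - 1]
    for t in range(T - 1, 0, -1):
        labels.append(back[t][labels[-1]])
    return labels[::-1]
```

For example, four frames scored over two phases, `[[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]]`, would be labeled `[0, 0, 1, 1]`: the DP places the single phase transition where the scores cross.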

Supplementary Material

Supplementary video (3458709.3458991.mp4)


Cited By

  • (2024) MR Microsurgical Suture Training System with Level-Appropriate Support. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3613904.3642324
  • (2023) GAuze-MIcrosuture-FICATION: Gamification in Microsuture Training with Real-Time Feedback. Proceedings of the Augmented Humans International Conference 2023, 15–26. https://doi.org/10.1145/3582700.3582704


Published In

AHs '21: Proceedings of the Augmented Humans International Conference 2021
February 2021
321 pages

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Dataset
  2. Neurosurgical training
  3. Object detection
  4. Visual feedback

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • JST CREST

Conference

AHs '21
AHs '21: Augmented Humans International Conference 2021
February 22 - 24, 2021
Rovaniemi, Finland

