Abstract
Augmented reality (AR) offers a new approach to medical treatment. We aimed to evaluate frameless (mask) fixation navigation using a 3D-printed patient model with fixed-AR technology for gamma knife radiosurgery (GKRS). Fixed-AR navigation was developed using the inside-out method with visual inertial odometry algorithms, and a flexible Quick Response (QR) marker was created for object-feature recognition. Virtual 3D patient models for AR rendering were created via 3D-scanning utilizing TrueDepth and via cone-beam computed tomography (CBCT) from the new GammaKnife Icon™ model. A 3D-printed patient model included fiducial markers, and the virtual 3D patient models were used to validate registration accuracy. Registration accuracy between initial frameless fixation and fixed-AR-navigated re-fixation was validated through visualization and a quantitative method. The quantitative method was validated through set-up errors, fiducial marker coordinates, and high-definition motion management (HDMM) values. The 3D-printed model and virtual models overlapped correctly under frameless fixation. Virtual models from both 3D-scanning and CBCT were sufficient to support navigated frameless re-fixation. Although the CBCT virtual model consistently delivered more accurate results, 3D-scanning was sufficient. Frameless re-fixation navigated with the virtual models had mean set-up errors within 1 mm and 1.5° in all axes. Mean fiducial marker coordinate differences in the virtual models were within 2.5 mm in all axes, and mean 3D errors were within 3 mm. Mean HDMM difference values in the virtual models were within 1.5 mm of the initial HDMM values. The variability of fixed-AR navigation is small enough to permit repositioning of frameless fixation without CBCT scanning when treating fractionated patients with large multiple metastatic lesions (>3 cm) who have difficulty enduring long beam-on times.
This system could be applied to novel GKRS navigation for frameless fixation with reduced preparation time.
Introduction
Augmented reality (AR) is an advanced technology that mixes the virtual world with the real world in varying proportions1. In recent years it has found promising applications in many fields, such as military training, entertainment, manufacturing, and medicine. In neurosurgery, AR is used as a volumetric image guide2,3 and in phone-based neurosurgical navigation systems4,5,6. AR navigation uses various devices such as smartphones, desktop PCs, head-mounted displays, and AR glasses. AR systems commonly use an outside-in method, with sensors attached to a computer or head-mounted display or pre-installed, which can be operated intuitively in conjunction with hand movements7; however, sensor recognition can go out of range due to misalignment after initial registration. The inside-out tracking method continuously traces the target from the user's location through cameras and sensors mounted on the same device used for visualization8. Although inside-out tracking is less accurate than outside-in tracking, installing the equipment for visualization and object detection for registration is low cost. We previously reported an inside-out tracking-based AR neuro-navigation system using ARKit® (Apple Inc.)-based software9. We applied inside-out tracking to radiosurgery by mounting it on a 4th-generation iPad Pro (Apple Inc., San Francisco, USA) to test the feasibility of developing clinically usable inside-out tracking AR in gamma knife radiosurgery (GKRS); because the device operates in a fixed state, we call this fixed-AR.
The high dose of radiation delivered through GKRS requires a high degree of accuracy and immobilization of the head10,11. Accuracy with head-frame fixation is essential to limit irradiation of the surrounding anatomical structures12. Head-frame placement is invasive, involving screw fixation at four specific points in the patient's skull, and is difficult for fractionated treatment of large lesions11,13. The latest version of the GammaKnife (GK), the Icon™, is capable of non-invasive fixation and fractionated treatment by utilizing cone-beam computed tomography (CBCT) and detecting motion with high-definition motion management (HDMM). CBCT can be acquired with either a higher-dose (CTDI 6.3) or lower-dose (CTDI 2.5) preset and registered with the stereotactically defined image set for comparison with the patient coordinates at the time of treatment imaging; the HDMM system allows head immobilization with a thermoplastic mask instead of a head-frame14. This is the frameless fixation mode of the GK Icon™. During delivery of the adapted treatment plan, the HDMM system tracks, in real time, the displacement of the patient's nose marker relative to four immobile reflectors fixed to the Icon™ head support system15. Irradiation proceeds only while the magnitude of displacement remains under the threshold; if the threshold is exceeded, a new CBCT scan is performed to acquire new patient coordinates. Wright et al. suggested that the relationship between target and nose marker displacement varies throughout a clinically relevant extent of stereotactic space, and that an average HDMM threshold of 1.4 mm may be appropriate for 41 volumes15. However, patients with multiple metastases or of older age tend not to tolerate lower thresholds because of the increased beam-on time. Kim et al. reported that the elapsed beam-on time, including beam-paused time due to patient motion, reaches the limit of tolerance at around 30 minutes (min) in older patients (>65 years)16. Thus, maintaining an appropriate HDMM threshold without repositioning for a new CBCT scan is important for intolerant patients undergoing GKRS with frameless fixation. Navigation of patient positioning under frameless fixation is useful to reduce preparation time and unnecessary CBCT scanning. If CBCT images are used to build a 3-dimensional (3D) virtual model for fixed-AR, the frameless fixation could be repositioned, guided by the 3D virtual model based on the initial planning CBCT, without unnecessary CBCT scanning.
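The gating behaviour described above, where the beam stays on only while the tracked displacement remains under the HDMM threshold, can be sketched as follows. This is a hypothetical illustration for clarity; the function name and the example threshold are ours, not Elekta's implementation.

```python
# Toy sketch of HDMM-style beam gating: delivery pauses whenever the tracked
# nose-marker displacement exceeds the threshold, and resumes once it returns
# below it. Names and the 1.5 mm threshold are illustrative only.

def gate_beam(displacements_mm, threshold_mm=1.5):
    """Return a per-sample list of beam states ('on' or 'paused')."""
    states = []
    for d in displacements_mm:
        states.append("on" if d < threshold_mm else "paused")
    return states

# A short displacement trace: the beam pauses while motion exceeds threshold.
trace = [0.4, 0.9, 1.7, 2.1, 1.2, 0.6]
print(gate_beam(trace))  # ['on', 'on', 'paused', 'paused', 'on', 'on']
```

In the real system a sustained threshold violation triggers a new CBCT scan rather than simply waiting, which is exactly the step fixed-AR navigation aims to avoid.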
3D-scanning increases accuracy and makes it easy to obtain a virtual 3D model. TrueDepth technology in recent Apple devices can be used for measurement tasks with accuracy in the millimeter range. TrueDepth uses vertical-cavity surface-emitting laser (VCSEL) technology and consists of an infrared camera, a conventional camera, a proximity sensor, a spot projector, and a flood illuminator17. The front-facing camera provides depth data in real time along with visual information, and the system uses a light-emitting diode to project an irregular grid of over 30,000 infrared dots to record depth within milliseconds. Scanning objects requires an additional application18. Although the Heges application has been evaluated at the finest 3D resolutions, under 0.5 mm17, accuracy is affected by the scanning strategy and post-processing19. We evaluated the potential of TrueDepth in the recent iPad Pro for 3D-scanning with the Heges application, using fixed-AR in GKRS.
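The core computation behind a depth sensor like TrueDepth is back-projecting each measured depth pixel into a 3D point with a pinhole camera model. The sketch below illustrates this; the intrinsic parameters (fx, fy, cx, cy) are made-up illustrative values, not the actual TrueDepth calibration.

```python
# Back-project a depth map into a 3D point cloud using pinhole intrinsics.
# Illustrative only: real TrueDepth scanning also handles lens distortion,
# sensor fusion, and meshing, which are omitted here.

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: 2D list of depths in metres; returns a list of (x, y, z)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # no valid depth measured at this pixel
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# 2x2 depth map; the zero-depth pixel is dropped, leaving 3 valid points.
pts = depth_to_points([[0.5, 0.5], [0.5, 0.0]], fx=100.0, fy=100.0, cx=0.5, cy=0.5)
print(len(pts))  # 3
```

Applications such as Heges accumulate these per-frame point clouds into a single mesh, which is why scanning strategy and post-processing affect the final accuracy.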
To apply this new AR technology to GKRS, virtual models were established using existing planning CBCT images and a novel TrueDepth 3D-scanning method. In this study, we investigated navigation of frameless fixation using fixed-AR with virtual models from CBCT scans and 3D-scans of a 3D-printed patient model for GKRS.
Results
Execution of fixed-AR navigation in frameless fixation
When the navigation of frameless fixation was executed through the new application, the virtual model overlapped the 3D-printed model. To complement the fixed device state, the quick response (QR) marker was attached to the mask indicator and the iPad Pro was installed on a cradle beside the couch. The QR marker was designed for various directions by adjusting the registration target. Fixed-AR navigation correctly overlapped the 3D-printed patient model with the virtual models based on 3D-scanning and CBCT, as shown in Fig. 1.
Validation of frameless fixation based on virtual models using Fixed-AR
Rotational and translational set-up errors for frameless fixation based on virtual models using fixed-AR are summarized in Table 1. The mean rotational errors for the virtual models were small, less than 1.0° in all axes except the Y-axis of the 3D-scanning virtual model (1.36 ± 1.064°). The mean translational errors for the virtual models were less than 1 mm in all axes.
Seven of the eight fiducial markers in the 3D-printed model were defined, because one marker lay beyond the CBCT imaging range. The mean errors of fiducial marker coordinates for frameless fixation based on virtual models in fixed-AR are summarized in Table 2. Comparison of the planning CBCT with the pretreatment CBCT revealed that all mean errors were under 1.0 mm. Comparison of the planning CBCT with the virtual models, however, yielded mean errors of 0.28 to 2.72 mm, including 3D errors.
To simulate patients undergoing GKRS treatment with frameless fixation over several days, we measured the HDMM differences between initial fixation and re-fixation navigated by the virtual models for three days. The mean differences for the 3D-scanning and CBCT virtual models were 1.19 ± 0.32 mm and 1.21 ± 0.02 mm, respectively. Re-fixation navigated by the virtual models without CBCT kept HDMM values under 1.5 mm.
The AR-guided initial preparation took approximately 8 min, whereas a CBCT rescan took approximately 5 min. Once fixed-AR was initially set up, repeating the GKRS procedure took only about 1 min.
Optimization of frameless fixation with correction function
In cases of inaccurate registration after fixed-AR navigation, the user can apply the correction function to adjust the surface position of the fixed-AR rendering so that it accurately overlaps the nose marker in GKRS. Renderings from both 3D-scanning and CBCT that were distorted during registration were adjusted to the correct position using the correction function.
Discussion
GKRS requires accurate and precise delivery of a high radiation dose to a specific target while minimizing potential radiation toxicity to the surrounding tissue. The recent GK Icon™ is capable of frameless (mask) fixation using CBCT, which verifies the patient position during set-up prior to irradiation20. Frameless fixation is widely used for large lesions with hypofractionated treatment; the patient must tolerate the long beam-on time while the head remains immobilized. If AR navigation using virtual models replaces navigation of the patient's head position, maintaining the appropriate HDMM threshold without a repositioning CBCT scan becomes easier, reducing both preparation time and the irradiation due to CBCT scanning in GKRS. This study demonstrated mean set-up errors within 1.5° and 1 mm for both methods. The mean differences of fiducial marker coordinates were within 2.5 mm, and the 3D errors were within 3 mm for both methods. The fiducial marker errors were larger than the set-up errors; however, mean errors determined by manual measurement are generally acceptable within a 3 mm threshold. The mean differences of HDMM values were within 1.5 mm of the initial HDMM values.
We demonstrated that virtual models with fixed-AR could be applied to intolerant patients under frameless fixation, who would otherwise require repeated CBCT scanning whenever the HDMM threshold is exceeded.
Patient-specific planning based on individual characteristics and conditions is important for precise treatment in neurosurgery21,22. In radiosurgery, patients present with various lesions, including malignant or benign tumors, vascular disease, and functional disorders. In our country, demographic projections indicate a growing population of older adults, implying a consequent increase in cancer incidence and mortality in this group23. Radiosurgery delivered by highly focused radiation with sharp dose fall-off is theoretically expected to reduce delayed neurotoxicity24. Recently, hypofractionated radiosurgery was found to be as effective as single-session radiosurgery, with minimal toxicity, for large brain metastases (>10 cm³)25,26. However, patients with multiple metastases or of older age tend not to tolerate lower HDMM values because of increased beam-on time. Thus, maintaining the appropriate HDMM threshold without repositioning for a new CBCT scan is important for intolerant patients under frameless fixation in GKRS. Regarding radiation exposure, although CBCT (2.5 mGy or 6.4 mGy) is a low-dose CT, CBCT scans still involve radiation exposure. Furthermore, repeated CBCT scanning requires continual CBCT data storage: CBCT data is approximately 200 MB, whereas the virtual model 3D data is approximately 80 MB. The virtual model 3D data requires less storage capacity and can replace CBCT rescan data.
Scanning accuracy determines the potential use of 3D scanner applications. iOS on Apple's smartphones and tablets provides 3D data without requiring measurement experience from the operator, via the TrueDepth scanner, based on the structured-light principle27. TrueDepth-based 3D scanning shows its highest deviations in cylindricity (0.82 mm on average) and roundness (1.17 mm on average)17. In another 3D-scanning study, Camison et al. demonstrated that points calculated from a total of 136 distances had an average deviation of 0.84 mm28. Our results revealed translational errors of less than 1.00 mm in all axes for the virtual models. Only the rotational error in the Y-axis was higher in the 3D-scanning virtual model than in the CBCT model. However, all fiducial marker errors were under 3 mm without CBCT scanning. These results could be applied to patients with metastases, long beam-on times, and no eloquent areas involved. Rava et al. demonstrated that, when using 3 mm as a cut-off, there was no effect on identified local recurrences29.
The novel inside-out fixed-AR method is performed by a physician or physicist experienced in GKRS to match the nose marker target under frameless fixation, based on user-determined registration with the 3D-printed patient model and virtual models. Fixed-AR navigation does not perform automatic registration for frameless fixation; however, the position of the fixed-AR images can be adjusted using the correction function into a fixated state in the frameless adaptor. We recommend that fixed-AR navigation be used together with frameless fixation before the pretreatment CBCT scan. If a patient cannot endure the long beam-on time, frameless re-fixation without CBCT can be performed after a short resting period using the fixed-AR system. Frameless fixation maintained for a long time may exert pressure on the face and make the patient uncomfortable. For this reason, it is important to perform frameless fixation close to the patient's initial position. AR navigation also has the potential for real-time monitoring of a patient's movement in GKRS. Detection of patient movement currently depends only on the motion marker in the GK Icon™; if real-time monitoring becomes possible in AR navigation, frameless GKRS might in the future be performed without a mask or with the mask loosely fixed.
This study has a few limitations. The fixed-AR system does not register automatically to the object; it takes a few attempts to overlap the object and the virtual models. Future development should aim to register AR automatically to the object using real-time tracking, and we are planning to build a storage server with patient-specific information, including virtual models, in the fixed-AR application in order to conduct a clinical trial. Although the 3D-scanning virtual model is part of the fixed-AR system, registering it to the entire real object is still inaccurate. 3D-scanning accuracy will determine the potential applications of 3D scanners. To guide the surface fixed state with 3D-scanning, the general 3D-scanning accuracy over the entire object should be evaluated.
Conclusions
We demonstrated that fixed-AR navigation is a useful tool for frameless fixation without CBCT in GKRS. This method, using conventional equipment and fixed-AR with inside-out tracking, could be directly adapted to GKRS. When planning for small lesions or eloquent areas, frameless fixation and repositioning with CBCT scanning should still be considered. However, patients with large metastases in non-eloquent areas who move continually under frameless fixation can be re-fixated and repositioned well using a fixed-AR system without CBCT scanning.
Methods
3D-printed patient model and initial frameless fixation
The 3D-printed patient model was produced in three stages: (1) creation of a stereolithography (STL) file for 3D printing; (2) printing the physical model using a 3D printer; and (3) post-processing via manual editing. The process was approved by the institutional review boards (Seoul National University Hospital, IRB No. 1811-040-986, and Chungbuk National University Hospital, IRB No. 2019-06-015-001) and used to validate set-up errors, fiducial marker coordinates, and HDMM values. The IRBs of the two institutions waived the requirement for informed consent for use of the simulated patient phantom, since there was no interaction with a patient. The 3D-printed patient model had a frameless adapter with a head cushion and was fixed with a thermoplastic Nanor® mask (Elekta Instrument AB, Stockholm, Sweden). In the fixed-AR setting, we measured the initial fiducial marker coordinates after planning CBCT, and HDMM values, which kept the motion value for 10 min, as presented in Fig. 2.
3D scanning and planning CBCT
The 3D-printed patient model with frameless fixation underwent TrueDepth scanning using the Heges application on the iPad Pro. The STL file of the scan was imported into Meshmixer version 3.5 (https://www.meshmixer.com) to trim the scanned background, and smoothing was performed to reduce pixelation and obtain more uniform triangles.
The 3D-printed patient model with frameless fixation was also scanned for planning CBCT using Leksell GammaPlan version 11.1 (Elekta Instrument AB, Stockholm, Sweden), and the CBCT Digital Imaging and Communications in Medicine (DICOM) files were exported. The authors developed and released the software used for rendering (MEDIP, https://medicalip.com/Download, MEDICALIP). The DICOM files were imported into MEDIP for 3D rendering, background trimming, and smoothing. MEDIP is easy to operate thanks to its machine learning-based semi-automated segmentation function30; thus, the 3D data can be prepared by medical staff without computer expertise. This procedure is presented in Fig. 3.
Inside-out AR navigation and running fixed-AR
The inside-out AR navigation (METAMEDIP navigation) was developed using ARKit® (Apple Inc.). It has not been released yet, but the technical methods were described in a previous study30. The inside-out AR navigation proceeds in three steps: (1) device recognition and visualization; (2) QR marker recognition; and (3) AR implementation and registration within the running environment. The workflow of the inside-out AR navigation algorithm and running fixed-AR is presented in Fig. 4. The QR marker is attached to the right mask fixation button adjacent to the matching target, where there is minimal light reflection and the marker is unlikely to be obscured by other objects. ARKit® is based on visual inertial odometry (VIO), which estimates the device location from rapidly collected inertial measurement unit (IMU) data and calibrates it using camera images. Moreover, ARKit® supports closed-loop processing that corrects the trajectory by matching it to the starting point when the device moves away and returns to the starting point during calculation with the VIO algorithms. This is collectively referred to as visual inertial simultaneous localization and mapping. The loop-closure process can be omitted here because the device operates in a fixed state in this AR system.
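The VIO idea described above can be illustrated with a toy one-dimensional fusion: fast IMU integration gives a drifting position estimate, and slower visual observations pull the estimate back. This is a minimal complementary-filter sketch of the concept, not ARKit's actual algorithm; all names and values are illustrative.

```python
# Toy 1D VIO fusion: integrate IMU velocity at every step, then blend in a
# camera-derived position fix whenever one is available. A real VIO pipeline
# fuses 6-DoF pose with an extended Kalman filter; this only shows the idea.

def vio_fuse(imu_velocities, camera_positions, dt=0.01, alpha=0.98):
    """imu_velocities: per-step velocity (m/s); camera_positions: per-step
    visual position fix (m) or None when no fix is available."""
    pos = 0.0
    history = []
    for v, cam in zip(imu_velocities, camera_positions):
        pos += v * dt                       # inertial dead-reckoning
        if cam is not None:                 # blend in the visual fix
            pos = alpha * pos + (1 - alpha) * cam
        history.append(pos)
    return history

est = vio_fuse([1.0] * 5, [None, None, 0.02, None, 0.04])
print(round(est[-1], 4))  # 0.0496
```

Because the fixed-AR device does not move, this drift-correction machinery (including loop closure) is largely idle, which is why the authors can omit the loop-closure step.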
The QR marker images are used to estimate the position recognized by the camera in the feature detection algorithm, which typically treats corners, intersections of lines, or regions with clear color contrast (black and white) as feature points. After the device position is determined, the scale of objects in virtual space is determined by calculating the distance between the device and the QR marker from the size of the marker recognized in the preceding step. Once all the requisite data are calculated, the 3D-scanning or CBCT-based virtual models are displayed at the designated positions relative to the marker for confirmation by the user. In case of errors in the automatically processed pre-registration, the user can correct the registration error by adjusting the position of the 3D virtual model with the correction function and then fix the position of the models to complete the registration.
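The distance-from-marker-size step above follows directly from the pinhole camera model: for a fronto-parallel marker of known physical size, distance scales as real size times focal length divided by apparent pixel size. The sketch below is a hypothetical illustration with made-up values, not the METAMEDIP implementation.

```python
# Estimate camera-to-marker distance from the apparent size of a QR marker.
# Assumes a pinhole camera and a marker facing the camera head-on.

def marker_distance(real_size_mm, focal_px, apparent_px):
    """Distance from the camera to the marker, in mm."""
    return real_size_mm * focal_px / apparent_px

# A 40 mm marker imaged at 80 px with an 800 px focal length is 400 mm away.
print(marker_distance(40.0, 800.0, 80.0))  # 400.0
```

This single scalar fixes the scale of the virtual scene, after which the virtual models can be placed at known offsets from the marker.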
Validating the fixed-AR navigation registration and preparation time
Registration accuracy was measured using three methods: intuitive validation through visualization, set-up errors, and quantitative validation. The 3D-printed patient model was created and matched with the 3D virtual models. Set-up errors were assessed by comparing the planning and pretreatment CBCTs11, where re-fixation was navigated using the fixed-AR system with the virtual models. Set-up error was defined as displacement of the skull in stereotactic space. Set-up errors were investigated as translational (mm) and rotational (°) components over three days. The registration accuracy of the fiducial markers was validated by comparing the coordinates of the pretreatment CBCT, scanned after fixed-AR-navigated re-fixation, with the planning CBCT. Fiducial markers were attached at 8 points across both hemispheres of the 3D-printed patient model.
We defined the coordinate errors as follows31:
Δx = x-axis coordinate difference between planning CBCT and pretreatment CBCT or virtual models.
Δy = y-axis coordinate difference between planning CBCT and pretreatment CBCT or virtual models.
Δz = z-axis coordinate difference between planning CBCT and pretreatment CBCT or virtual models.
Furthermore, the 3D error (Δr) was defined as a localization error by the following formula: Δr = √(Δx² + Δy² + Δz²).
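The per-axis errors and the 3D localization error Δr = √(Δx² + Δy² + Δz²) are straightforward to compute; the sketch below uses illustrative coordinates in mm, not measured study data.

```python
# Compute per-axis coordinate errors and the 3D localization error for one
# fiducial marker, following the definitions Δx, Δy, Δz, and Δr above.
import math

def localization_error(planning, comparison):
    """planning, comparison: (x, y, z) marker coordinates in mm.
    Returns (|Δx|, |Δy|, |Δz|, Δr)."""
    dx, dy, dz = (p - c for p, c in zip(planning, comparison))
    return abs(dx), abs(dy), abs(dz), math.sqrt(dx**2 + dy**2 + dz**2)

# Illustrative marker displaced by (1, -2, 2) mm -> Δr = 3 mm.
dx, dy, dz, dr = localization_error((100.0, 80.0, 60.0), (99.0, 82.0, 58.0))
print(dr)  # 3.0
```

Averaging Δr over all defined markers and repeated fixations gives the mean 3D errors reported in the Results.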
The differences in HDMM values were investigated between initial fixation and fixed-AR-navigated re-fixation. The mean accuracy was evaluated by measuring the X, Y, and Z coordinates for three days. We also measured the preparation time from set-up to the GKRS procedure for both AR-guided re-fixation and the CBCT rescan.
Correction function of misalignment with fixed-AR
In cases where automatic registration using the QR marker is misaligned, the correction function of fixed-AR, which we developed, can alter the location of the virtual 3D model in the AR navigation based on the following principles9: the virtual space is shown on the device screen through three coordinate systems (local, world, and camera). The local coordinate system belongs to the virtual 3D model itself; the world coordinate system refers to the virtual space in which the model is placed; and the camera coordinate system expresses the world coordinates from the reference frame of the camera. The correction function is based on the world coordinate system, and the movement is based on the camera coordinate system, which helps the user determine the desired direction and rotation of movement intuitively.
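The local/world/camera chain just described can be sketched as a pair of transforms: a model-space point is mapped into world space, then expressed relative to the camera. For clarity the transforms below are pure translations with made-up offsets; a real AR pipeline uses full 4×4 rigid transforms with rotation.

```python
# Sketch of the coordinate chain behind the correction function:
# local (model) -> world (scene) -> camera (viewer). Translations only.

def translate(point, offset):
    return tuple(p + o for p, o in zip(point, offset))

def local_to_camera(local_pt, model_world_offset, camera_world_pos):
    world_pt = translate(local_pt, model_world_offset)   # local -> world
    # Camera coordinates: the world point relative to the camera position.
    return tuple(w - c for w, c in zip(world_pt, camera_world_pos))

# A correction nudges the model +5 mm along world x; the camera sits 300 mm
# in front of the scene origin along z.
print(local_to_camera((0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (0.0, 0.0, 300.0)))
```

Applying the correction in world coordinates while interpreting the user's drag in camera coordinates is what makes the adjustment feel intuitive: "right on screen" maps to the camera's x-axis regardless of how the model is oriented.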
The opacity of the virtual models can also be adjusted using the opacity-adjustment function. These functions support AR implementation by further emphasizing the objects.
Data availability
The datasets used in this study are available from the corresponding author on reasonable request.
References
Zlatanova, S. Augmented Reality Technology. GISt Report No. 17, Delft, 72 p. (2002).
Watanabe, E., Satoh, M., Konno, T., Hirai, M. & Yamaguchi, T. The trans-visible navigator: A see-through neuronavigation system using augmented reality. World Neurosurg. 87, 399–405. https://doi.org/10.1016/j.wneu.2015.11.084 (2016).
Sun, G. C. et al. Impact of virtual and augmented reality based on intraoperative magnetic resonance imaging and functional neuronavigation in glioma surgery involving eloquent areas. World Neurosurg. 96, 375–382. https://doi.org/10.1016/j.wneu.2016.07.107 (2016).
Hou, Y., Ma, L., Zhu, R., Chen, X. & Zhang, J. A low-cost iPhone-assisted augmented reality solution for the localization of intracranial lesions. PLoS ONE 11, e0159185. https://doi.org/10.1371/journal.pone.0159185 (2016).
Soeiro, J., Cláudio, A. P., Carmo, M. B. & Ferreira, H. A. Mobile solution for brain visualization using augmented and virtual reality. 20th International Conference on Information Visualisation (IV). https://doi.org/10.1109/IV.2016.18 (2016).
Shan, Q., Doyle, T. E., Samavi, R. & Al-Rei, M. Augmented reality based brain tumor 3D visualization. Proc. Comput. Sci. 113, 400–407. https://doi.org/10.1016/j.procs.2017.08.356 (2017).
Hattori, K. & Hirai, T. Inside-out tracking controller for VR/AR HMD using image recognition with smartphones. SIGGRAPH '20: ACM SIGGRAPH 2020 Posters, 1–2 (2020).
Mohamed, S. A. S. et al. A survey on odometry for autonomous navigation systems. IEEE Access 7, 97466–97486 (2019).
Dho, Y. S. et al. Development of an inside-out augmented reality technique for neurosurgical navigation. Neurosurg. Focus 51, E21. https://doi.org/10.3171/2021.5.FOCUS21184 (2021).
Leksell, L. The stereotaxic method and radiosurgery of the brain. Acta Chir. Scand. 102, 316–319 (1951).
Carminucci, A., Nie, K., Weiner, J., Hargreaves, E. & Danish, S. F. Assessment of motion error for frame-based and noninvasive mask-based fixation using the Leksell Gamma Knife Icon radiosurgery system. J. Neurosurg. 129, 133–139. https://doi.org/10.3171/2018.7.GKS181516 (2018).
Maciunas, R. J., Galloway, R. L., Jr. & Latimer, J. W. The application accuracy of stereotactic frames. Neurosurgery 35, 682–694; discussion 694–695. https://doi.org/10.1227/00006123-199410000-00015 (1994).
Pavlica, M., Dawley, T., Goenka, A. & Schulder, M. Frame-based and mask-based stereotactic radiosurgery: The patient experience, compared. Stereotact. Funct. Neurosurg. 99, 241–249. https://doi.org/10.1159/000511587 (2021).
Duggar, W. N. et al. Gamma Knife® Icon CBCT offers improved localization workflow for frame-based treatment. J. Appl. Clin. Med. Phys. 20, 95–103. https://doi.org/10.1002/acm2.12745 (2019).
Wright, G., Schasfoort, J., Harrold, N., Hatfield, P. & Bownes, P. Intra-fraction motion gating during frameless Gamma Knife® Icon therapy: The relationship between cone beam CT assessed intracranial anatomy displacement and infrared-tracked nose marker displacement. J. Radiosurg. SBRT 6, 67–76 (2019).
Kim, J. O. et al. Patient motion analysis of first 50 frameless fixation cases with Leksell Gamma Knife ICON. Int. J. Radiat. Oncol. Biol. Phys. 102, e495–e496 (2018).
Vogt, M., Rips, A. & Emmelmann, C. Comparison of iPad Pro's LiDAR and TrueDepth capabilities with an industrial 3D scanning solution. Technologies 9, 25 (2021).
Verdaasdonk, R. & Liberton, N. The iPhone X as 3D scanner for quantitative photography of faces for diagnosis and treatment follow-up (Conference Presentation). Vol. 10869 PWB (SPIE, 2019).
Zaimovic-Uzunovic, N. & Lemes, S. Influences of surface parameters on laser 3D scanning. IMEKO Conference Proceedings: International Symposium on Measurement and Quality Control (2010).
Chung, H. T. et al. Assessment of image co-registration accuracy for frameless gamma knife surgery. PLoS ONE 13, e0193809. https://doi.org/10.1371/journal.pone.0193809 (2018).
Sugiyama, T. et al. Immersive 3-dimensional virtual reality modeling for case-specific presurgical discussions in cerebrovascular neurosurgery. Oper. Neurosurg. (Hagerstown) 20, 289–299. https://doi.org/10.1093/ons/opaa335 (2021).
Kockro, R. A. et al. Planning and simulation of neurosurgery in a virtual reality environment. Neurosurgery 46, 118–135; discussion 135–137 (2000).
Yoon, H., Kim, Y., Lim, Y. O. & Choi, K. Quality of life of older adults with cancer in Korea. Soc. Work Health Care 57, 526–547. https://doi.org/10.1080/00981389.2018.1467355 (2018).
Yomo, S. & Hayashi, M. Is upfront stereotactic radiosurgery a rational treatment option for very elderly patients with brain metastases? A retrospective analysis of 106 consecutive patients age 80 years and older. BMC Cancer 16, 948. https://doi.org/10.1186/s12885-016-2983-9 (2016).
Kim, J. W. et al. Fractionated stereotactic gamma knife radiosurgery for large brain metastases: A retrospective, single center study. PLoS ONE 11, e0163304. https://doi.org/10.1371/journal.pone.0163304 (2016).
Park, H. R. et al. Frameless fractionated gamma knife radiosurgery with ICON for large metastatic brain tumors. J. Korean Med. Sci. 34, e57. https://doi.org/10.3346/jkms.2019.34.e57 (2019).
Breitbarth, A. et al. Measurement accuracy and dependence on external influences of the iPhone X TrueDepth sensor. Vol. 11144 PEMS19 (SPIE, 2019).
Camison, L. et al. Validation of the Vectra H1 portable three-dimensional photogrammetry system for facial imaging. Int. J. Oral Maxillofac. Surg. 47, 403–410. https://doi.org/10.1016/j.ijom.2017.08.008 (2018).
Rava, P. et al. Feasibility and safety of cavity-directed stereotactic radiosurgery for brain metastases at a high-volume medical center. Adv. Radiat. Oncol. 1, 141–147. https://doi.org/10.1016/j.adro.2016.06.002 (2016).
Dho, Y. S. et al. Clinical application of patient-specific 3D printing brain tumor model production system for neurosurgery. Sci. Rep. 11, 7005. https://doi.org/10.1038/s41598-021-86546-y (2021).
Kim, H. Y. et al. Reliability of stereotactic coordinates of 1.5-Tesla and 3-Tesla MRI in radiosurgery and functional neurosurgery. J. Korean Neurosurg. Soc. 55, 136–141. https://doi.org/10.3340/jkns.2014.55.3.136 (2014).
Acknowledgements
This work was supported by the Korea Medical Device Development Fund granted by the Korean government (Ministry of Science and ICT, Ministry of Trade, Industry and Energy, Ministry of Health & Welfare, and Ministry of Food and Drug Safety) (Project Number: 202012E08).
Author information
Contributions
Conception and design: H.C.M., Y.S.D. Data acquisition: S.J.P., Y.D.K. 3D rendering: H.C.M., Y.D.K., Y.S.D. Data analysis and interpretation: K.M.K., H.K., E.J.L., M.S.K., J.W.K., Y.H.K. Manuscript preparation: H.C.M., C.K.P., Y.G.K., Y.S.D.
Ethics declarations
Competing interests
Sang Joon Park is the founder and CEO of MEDICALIP. Chul-Kee Park and Yun-Sik Dho own stock options in MEDICALIP. Other authors have no conflict of interest to declare.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Moon, H.C., Park, S.J., Kim, Y.D. et al. Navigation of frameless fixation for gamma knife radiosurgery using fixed augmented reality. Sci Rep 12, 4486 (2022). https://doi.org/10.1038/s41598-022-08390-y