Towards Intra-operative OCT Guidance for Automatic Head Surgery: First Experimental Results

Jesús Díaz Díaz¹, Dennis Kundrat¹, Kim-Fat Goh¹, Omid Majdani², and Tobias Ortmaier¹

¹ Institute of Mechatronic Systems, Leibniz Universität Hannover, Appelstr. 11a, D-30167 Hannover, Germany
{jesus.diazdiaz,dennis.kundrat,kimfat.goh,tobias.ortmaier}@imes.uni-hannover.de
http://www.imes.uni-hannover.de
² Clinic for Laryngology, Rhinology and Otology, Hannover Medical School, Carl-Neuberg-Str. 1, D-30625 Hannover, Germany
majdani.omid@mh-hannover.de

Abstract. In recent years, optical coherence tomography (OCT) has gained increasing attention not only as an imaging device, but also as a guidance system for surgical interventions. In this contribution, we propose OCT as an external high-accuracy guidance system and present an experimental setup combining an OCT with a cutting laser. This setup enables not only in situ monitoring, but also automatic, high-accuracy, three-dimensional navigation and processing. Its applicability is evaluated by simulating a robot-assisted surgical intervention, including planning, navigation, and processing. First results demonstrate that OCT is suitable as a guidance system, fulfilling the accuracy demands of interventions such as cochlear implant surgery.

Keywords: optical coherence tomography, laser, navigation system, guidance system, cochlear implant surgery.

1 Introduction

Cochlear implant (CI) surgery is a surgical procedure during which an electrode is inserted into the cochlea in order to electrically stimulate the auditory nerve. Current research investigates the realization of a single-channel approach using Robot Assisted Surgery (RAS) and the direct insertion from the outer lateral skull to the cochlea. This surgical intervention demands an accuracy of 0.5 mm. In this contribution, we focus on the use of OCT as an intra-operative monitoring and guidance system for this purpose.

OCT was established in 1991. Its working principle is based on the interference of back-reflected laser light from a sample with reference laser light in a Michelson interferometer. OCT typically has a resolution on the micron scale and is highly sensitive. It is contact-free and, therefore, a nondestructive imaging device, capable not only of scanning the surface, but also of obtaining three-dimensional tissue information. OCT is used in a wide range of applications, usually as a qualitative imaging system. With its increasing utilization in the field of medical engineering, quantitative applications are gaining importance.

The idea of using OCT as a guidance system may be as old as OCT itself. Visualization with OCT during a surgical procedure, and the feedback it provides, enables the user to control the tissue processing at the micron scale. Boppart et al. [1] used and proposed OCT for surgical guidance by manually imaging a region of interest. Recently, more sophisticated approaches combining OCT with other tools have been developed and are used in a guided manner. In the field of ophthalmology, the integration of OCT into a microsurgical instrument enables the surgeon to perform OCT-guided retinal microsurgery by visualization of internal structures of the eye [7]. Liang et al. [6] combined OCT with MRI for neurosurgery guidance: during the insertion, and while navigating with MRI, they use real-time 2D OCT to image adjacent structures and to guide the surgeon.
These examples have in common that the guidance is based on forward imaging. The target region adjacent to the instrument is imaged with OCT, which is used as an internal guidance system without direct feedback to the planning.

In this contribution, we propose a novel setup of combined OCT and cutting laser as a monitoring, navigation, and processing system for RAS in hard tissue. Moreover, we introduce OCT as an external navigation system for laser ablation. Since OCT is used as a stand-alone external guidance system, intra-operative OCT data has to be matched to (pre-operative) planning data. In order to demonstrate OCT's suitability with regard to the stated accuracies, experiments are performed by simulating a surgical intervention, including planning, navigation, and processing. The experimental setup, methods, and workflow are introduced in section 2. The results are presented in section 3 and discussed in section 4.

2 Setup and Methods

The following hardware components are part of the proposed system:

– tool for processing: cutting laser,
– tracking system: high-accuracy OCT,
– tracking landmarks: spherical artificial fiducial landmarks,
– positioning system: high-accuracy parallel robot,
– sample: imaging and navigation phantom.

Recent approaches for navigated material removal involve state-of-the-art optical tracking systems in an eye-to-hand configuration. When using OCT, an eye-in-hand configuration is more appropriate due to the limitation of the OCT's working distance. In clinical applications, a suitable robot iteratively positions the combined laser and OCT, and the ablation procedure starts after reaching the target pose with respect to the patient. In the present paper, however, and only for the experimental setup, an eye-to-hand configuration is used to position the sample instead of the tool. The methodology and the relative motion with respect to the phantom nevertheless remain the same.

Fig. 1. Eye-in-hand (left) and eye-to-hand (right) configuration of the experimental system with its components, coordinate frames, and transformation matrices

2.1 Experimental Setup

The experimental system is sketched in figure 1 and shown in figure 2 (left). The cutting laser is the erbium-doped yttrium aluminium garnet (Er:YAG) laser DPM-15 of Pantec Biosolutions AG. It is a pulsed solid-state laser with a wavelength of λ_laser = 2940 nm. The functionality of the laser is extended with scan components, so that the combination of laser and scanner forms a three-axis laser system, defining a coordinate frame (CF)_laser. This entity, in the following referred to simply as the laser, has a working space of 10 mm in each dimension. The OCT used in the optical setup is the GANYMEDE system of Thorlabs, Inc. The OCT has a center wavelength of λ_OCT = 930 nm. The maximum field of view has been enlarged to image approximately 20 mm × 20 mm × 2.7 mm, defining a coordinate frame (CF)_OCT. Furthermore, a geometric calibration [2] of this OCT has been performed in order to reduce the imaging error. The optical paths of OCT and laser are combined by a dichroic mirror for an approximately co-axial propagation of the beams and a spatial overlap of the working spaces, keeping the relative configuration constant. This important feature enables in situ imaging and control of the ablation process. We refer to [4] for further information and first results.
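All calibration and navigation steps described in the following sections operate on rigid transformations between the coordinate frames introduced above, represented as homogeneous 4 × 4 matrices. The following minimal Python/NumPy sketch is purely illustrative and not part of the authors' implementation (the numerical values are hypothetical); it only shows the composition and inversion used implicitly throughout sections 2.2 and 2.3.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and a 3-vector t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float)
    return T

def invert_transform(T):
    """Closed-form inverse of a rigid transform: [R t]^-1 = [R^T, -R^T t]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Illustrative chaining of frames, e.g. OCT -> laser -> target (cf. eq. (3) below),
# with hypothetical numbers standing in for calibration results.
T_OCT_laser = make_transform(np.eye(3), [1.0, 0.0, 0.5])   # assumed calibration result
T_laser_target = np.eye(4)                                 # laser frame coincides with target
T_OCT_target = T_OCT_laser @ T_laser_target
T_laser_OCT = invert_transform(T_OCT_laser)
```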
Using OCT as a tracking device requires adapting the tracking landmarks to this technology. In this contribution, we focus on artificial fiducial landmarks. Based on previous results, we choose titanium spheres with a diameter of 1 mm.

The sample used for the evaluation of the navigation accuracy, i.e., the phantom, is composed of two parts (see figure 2 (right)). The first part is the carrier for the fiducial landmarks; the second part, the target area, is a cuboid made of wood. The artificial fiducial landmarks are positioned not only on the front, but also on the back of the phantom for evaluation purposes, defining an upper and a lower fiducial landmark plane, respectively, and defining a sample coordinate frame (CF)_sample. The configuration of all fiducial landmarks has been measured with the coordinate measurement machine Zeiss ZMC 550. Both parts have a relevant depth of approximately 12 mm. The phantom is positioned using a high-accuracy parallel robot, the F-206.S HexAlign™ 6-axis hexapod of Physik Instrumente (PI) GmbH & Co. KG.

Fig. 2. Experimental setup (left) with OCT (upper left), laser (upper middle), phantom (middle), and robot (lower middle). Phantom (right) including fiducial landmarks and wooden target.

2.2 System Calibration

The aim of this subsection is to describe the methods used to determine the transformations between rigid components.

First, the coordinate frames (CF)_laser and (CF)_OCT have to be registered, i.e., the homogeneous transformation matrix ^{OCT}T_{laser} has to be determined. An arbitrary material is positioned in the common working space. We perform an ablation with a limited number of single pulses, removing a small amount of material. After appropriate filtering for noise reduction, the surface including the ablation spot is segmented using snakes [5], i.e., by choosing the curve in the image that minimizes an energy functional composed of internal and external energy. For the external part, an energy map based on the diffusion of the gradient vectors [9] has been used. Using the segmented curves, the volume centroid is calculated. Ablation and image processing are repeated while positioning the material at several different depths. The laser is described by a point and a direction. We use the data of the ablation spots to calculate the point of origin by evaluating the spot size as a function of depth, and the direction by calculating the line of best fit in the least-squares sense.

Second, the homogeneous transformation matrix ^{EE}T_{sample} between the robot's end effector (EE) and the sample has to be determined. The basic idea is to select different poses of the robot such that the sample's region of interest lies in the OCT imaging volume. After positioning the EE, OCT images of the sample, including the fiducial landmarks, are acquired. The OCT data is processed automatically. The centroids of the fiducial landmarks are calculated with a template matching algorithm using cross correlation. On the one hand, the localized center points of the fiducial landmarks define the sample coordinate frame (CF)_sample with respect to the OCT coordinate frame (CF)_OCT, i.e., ^{OCT}T_{sample}. On the other hand, the transformation ^{0}T_{EE} is given by the pose of the robot EE with respect to the robot base. The workflow is repeated for different poses of the EE, acquiring the pair of matrices ^{0}T_{EE}^{(m)} and ^{OCT}T_{sample}^{(m)} for the m-th repetition.

Due to the eye-to-hand configuration, and in order to calculate the unknown ^{EE}T_{sample}, the set of algebraic equations

    A · ^{EE}T_{sample} = ^{EE}T_{sample} · B,                                           (1)

    A = (^{0}T_{EE}^{(m)})^{-1} · ^{0}T_{EE}^{(n)},   B = (^{OCT}T_{sample}^{(m)})^{-1} · ^{OCT}T_{sample}^{(n)},        (2)

is solved for a pair or a set of such matrices using the method introduced by Tsai and Lenz [8].
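Equations (1) and (2) form a classical AX = XB hand-eye problem, which the authors solve with the method of Tsai and Lenz [8]. As an illustration of its structure only, the following NumPy sketch solves the same system with a simplified axis-alignment variant (not the authors' implementation); A_list and B_list are assumed to hold the relative motions A and B built from the acquired pose pairs according to eq. (2).

```python
import numpy as np

def rotation_axis(R):
    """Rotation axis of R, extracted from its skew-symmetric part.
    Assumes the rotation angle is neither 0 nor pi."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def solve_hand_eye(A_list, B_list):
    """Solve A_i X = X B_i for X = EE_T_sample (simplified sketch, not Tsai-Lenz).

    A_list, B_list: lists of 4x4 homogeneous relative motions as in eq. (2)."""
    # Rotation: the conjugation in eq. (1) implies axis(A_i) = R_X @ axis(B_i);
    # align the two axis sets with an SVD (orthogonal Procrustes / Kabsch).
    a = np.stack([rotation_axis(A[:3, :3]) for A in A_list])
    b = np.stack([rotation_axis(B[:3, :3]) for B in B_list])
    H = b.T @ a
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_X = Vt.T @ D @ U.T
    # Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked over all pairs
    # and solved in the least-squares sense.
    M = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    v = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(A_list, B_list)])
    t_X, *_ = np.linalg.lstsq(M, v, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_X, t_X
    return X
```

At least two relative motions with non-parallel rotation axes are needed for the rotation part to be determined; the ten poses used in section 3 provide ample redundancy, provided their rotation axes are not all parallel.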
2.3 Navigation

The conventional workflow starts with appropriate pre-operative imaging of the patient in order to acquire the data on which the planning is based. This imaging, generally using CT, is omitted here, since a ground truth of the sample, which replaces the patient, is well known. We realize the planning by localizing artificial tracking landmarks L_k (k = 1, 2, ...), relative to which we define an entry target point T0 and an exit target point T1, i.e., a target transformation matrix ^{sample}T_{target}. This enables us to define a target pose of the sample

    ^{OCT}T_{sample}^{(target)} = ^{OCT}T_{laser} · ^{laser}T_{target} · (^{sample}T_{target})^{-1},   with ^{laser}T_{target} = I.        (3)

Intra-operatively, the iterative process alternates between tracking the landmarks and comparing the actual to the target sample pose. The residual error between these two data sets is minimized by performing a singular value decomposition of the weighted mean fiducial covariance matrix. Tracking is realized by calculating the centroids of the fiducial landmarks L_k (k = 1, 2, ...) with a template matching algorithm using cross correlation. This yields the sample's pose in the i-th iteration, ^{OCT}T_{sample}^{(i)}. The difference transformation between the actual and the target sample pose,

    ΔT_{sample}^{(i)} = (^{OCT}T_{sample}^{(i)})^{-1} · ^{OCT}T_{sample}^{(target)},        (4)

enables us to calculate the positioning of the robot's EE for iteration step i + 1,

    ^{0}T_{EE}^{(i+1)} = ^{0}T_{EE}^{(i)} · ^{EE}T_{sample} · ΔT_{sample}^{(i)} · (^{EE}T_{sample})^{-1}.        (5)

The fiducial registration error (FRE) [3] of the two sets of fiducials, target and actual, as well as the estimated target registration error (TRE) [3] of the entry target point are used as quality criteria. If these values fall below a certain threshold, we stop the navigation and start the ablation. The evaluation is carried out by imaging the upper as well as the bottom part of the sample after ablation and comparing the actual and the planned target points.

Fig. 3. 2D OCT image (B-scan) of an ablation spot with segmented surface (left) and segmented 3D surface (right)

The errors err_in and err_out at the target entry and exit point, respectively, as well as the angle error err_α between the trajectories are used for evaluation. The error err_35mm is extrapolated to a depth of 35 mm from the surface, which is approximately the distance from the outer lateral skull to the cochlea, using the intercept theorem and the entry and exit points.

3 Results

The registration of OCT and laser has been carried out by positioning and cutting a sample of wood at 9 different axial positions. The laser parameters current, pulse duration, pulse frequency, and scanner coordinate have been chosen as I = 220 A, Δt = 180 µs, f = 200 Hz, and x_laser = (0, 0, 5)⊤ mm, respectively. Dense and calibrated volume OCT scans of the ablation spots with a spatial resolution of 8.2 µm × 8.2 µm × 2.6 µm for a scan region of 3.0 mm × 3.0 mm × 2.7 mm have been acquired. An example of the image processing for calculating the center of the ablation spot in the OCT image data is given in figure 3 (left), showing an original 2D OCT image of an ablation spot with the segmented surface, i.e., the snake, superimposed. The segmentation of the surface of the complete 3D volume is presented in figure 3 (right). The real configuration of both system components is unknown, so the results can only be evaluated in terms of precision.
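As described in section 2.2, the laser axis is recovered as the 3D line of best fit through the localized ablation-spot centroids, and the point-to-line distances serve as the precision measure reported below. A minimal NumPy sketch of this least-squares fit is given here for illustration only; the centroid values are hypothetical and not measured data.

```python
import numpy as np

def fit_line(points):
    """Least-squares 3D line through an Nx3 array of centroid coordinates.

    Returns a point on the line (the mean) and a unit direction
    (principal axis of the centered data, obtained via SVD)."""
    points = np.asarray(points, dtype=float)
    origin = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - origin)
    direction = Vt[0]          # right singular vector of the largest singular value
    return origin, direction

def point_line_distances(points, origin, direction):
    """Orthogonal distance of each point to the fitted line."""
    d = np.asarray(points, dtype=float) - origin
    proj = np.outer(d @ direction, direction)   # components along the line
    return np.linalg.norm(d - proj, axis=1)

# Illustrative use with hypothetical ablation-spot centroids (in mm):
centroids = np.array([[1.50, 1.49, 0.2], [1.51, 1.50, 0.5],
                      [1.50, 1.51, 0.8], [1.49, 1.50, 1.1]])
o, v = fit_line(centroids)
residuals_um = 1e3 * point_line_distances(centroids, o, v)   # precision in µm
```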
The mean distance between the localized ablation spots and the line of best fit is 3.5 µm, with a standard deviation of 1.5 µm (see figure 4 (left)).

The registration of the EE and the sample has been carried out by performing a hand-eye calibration, positioning the EE, and therefore the sample, at ten poses. The poses span a translational range of at most ±2 mm and a rotational range of at most ±5°. With the calibrated OCT measurement system, dense volume scans of the sample have been acquired. A spatial resolution of 15.0 µm × 15.0 µm × 2.6 µm for a scan region of 15.0 mm × 15.0 mm × 2.7 mm has been chosen. Figure 4 shows, for the i-th step, the translational (middle) and rotational (right) part of the matrix ^{EE}T_{sample}^{(i)} · (^{EE}T_{sample}^{(10)})^{-1}, where ^{EE}T_{sample}^{(i)} is the result of a hand-eye calibration with the first i poses. Both registrations show high convergence and small residual errors.

Fig. 4. Distance of the localized ablation spots to the line of best fit (left). Translational (middle) and rotational (right) error ^{EE}T_{sample}^{(i)} · (^{EE}T_{sample}^{(10)})^{-1} with respect to the converged result. The rotational error is the angle of the axis-angle representation of the difference matrix.

The key experiment, including navigation and cutting, has been carried out by pre-positioning the (evaluation) sample laterally in the center and axially approximately at the focal distance of the OCT and the cutting laser, respectively. With the calibrated OCT measurement system, dense volume scans have been acquired, choosing the same parameters as for the hand-eye calibration. The laser parameters for current, pulse duration, and pulse frequency remain unchanged, avoiding a possible "pointing" of the laser. The ablation is carried out with the scanner tracing a truncated cone geometry with a diameter of 3000 µm at the upper and 200 µm at the lower end of the cutting geometry; the height is 10000 µm. The entry point T0 is planned to be the center point of the three fiducial landmarks on the upper side of the evaluation sample. The target exit point T1 is defined through the intersection of the normal of the upper fiducial landmark plane with the lower fiducial landmark plane, at an approximate distance from T0 of 12 mm. The navigation and cutting are performed ten times. The iterations of the navigation, i.e., the repositioning of the robot, generally stop when the FRE and the TRE fall below a threshold of 10 µm; then we start the ablation. The navigation errors at the last iteration step and the ablation errors are as follows:

              FRE [µm]  TRE [µm]  err_in [µm]  err_out [µm]  err_35mm [µm]  err_α [°]
  exp. 1         4.0       1.2       32.2          98.0          226.1        0.3
  exp. 2         4.1       3.1       46.6         121.7          336.9        0.5
  exp. 3        15.1      10.3       51.2          83.1          341.3        0.64
  exp. 4         7.5       3.7       53.3          66.0          237.1        0.44
  exp. 5         9.3       1.9       49.1          33.8          133.0        0.28
  exp. 6         7.9       1.5       58.2          58.2          179.5        0.27
  exp. 7         9.7       2.0       39.9         105.0          379.2        0.68
  exp. 8         7.1       2.4       31.3         166.3          473.7        0.76
  exp. 9         4.3       1.3       21.3         156.36         462.1        0.76
  exp. 10       10.1       7.6       22.5         116.2          312.5        0.49

The trials have been carried out in three series of two (exp. 1-2), four (exp. 3-6), and four (exp. 7-10) experiments. Each of these series has been performed with slight variations, e.g., with a different laser-OCT registration. The three series show consistent results, with all experiments fulfilling the necessary accuracy for CI surgery.
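The FRE and TRE values listed above stem from the point-based rigid registration of the tracked and the planned fiducial configurations (section 2.3, [3]). The following NumPy sketch shows an unweighted variant of this SVD-based alignment together with a root-mean-square FRE; it is an illustrative simplification, not the authors' weighted formulation.

```python
import numpy as np

def register_rigid(planned, tracked):
    """Rigid transform (R, t) aligning planned fiducials onto tracked ones
    via SVD of the cross-covariance matrix (unweighted variant)."""
    P, Q = np.asarray(planned, float), np.asarray(tracked, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # maps centered P onto centered Q
    t = cq - R @ cp
    return R, t

def fre(planned, tracked, R, t):
    """Root-mean-square fiducial registration error after alignment [3]."""
    residual = (np.asarray(planned) @ R.T + t) - np.asarray(tracked)
    return np.sqrt(np.mean(np.sum(residual ** 2, axis=1)))
```

The weighted formulation described in section 2.3 additionally scales each fiducial's contribution before the SVD; the unweighted variant above only illustrates the structure of the computation.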
4 Conclusion

This contribution reports ten trials of OCT-guided laser ablation, all of which consistently resulted in an error of less than 0.5 mm. Although the number of repetitions is not sufficient to assume statistical significance, the results fulfill the accuracy demands of interventions such as CI surgery and thus lend preliminary support to the assumption that OCT may be used as an external high-accuracy guidance system. By simulating a robot-assisted surgical intervention, we demonstrated the feasibility and potential of the combined setup of laser and OCT for navigation and processing.

Acknowledgments. The research reported in this paper was supported by the DFG (Deutsche Forschungsgemeinschaft) grants HE 2445/23-1, RE 1488/15-1, and MA 4038/3-1. We thank Christian Seiffert and Dipl.-Ing. Moritz Krauß from the Institute of Measurement and Automatic Control of the Leibniz Universität Hannover for the measurements they carried out, facilitating this contribution.

References

1. Boppart, S., Herrmann, J., Pitris, C., Bouma, B., Tearney, G.: Interventional optical coherence tomography for surgical guidance. In: Conf. Lasers and Electro-Optics 1998, pp. 123–124 (1998)
2. Díaz Díaz, J., Rahlves, M., Majdani, O., Reithmeier, E., Ortmaier, T.: A one step vs. a multi step geometric calibration of an optical coherence tomography. In: Proc. SPIE Photonics WEST/BiOS 2013. SPIE (2013)
3. Fitzpatrick, J., West, J., Maurer Jr., C.R.: Predicting error in rigid-body point-based registration. IEEE Trans. Medical Imaging 17(5), 694–702 (1998)
4. Fuchs, A., Schultz, M., Krüger, A., Kundrat, D., Díaz Díaz, J., Ortmaier, T.: Online measurement and evaluation of the Er:YAG laser ablation process using an integrated OCT system. In: Proc. DGBMT Jahrestagung, pp. 434–437 (2012)
5. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. Int. J. Computer Vision, 321–331 (1988)
6. Liang, C.P., Kim, I.K., Makris, G., Desai, J., Gullapalli, R.L., Chen, Y.: Concurrent multi-scale imaging combining optical coherence tomography with MRI for neurosurgery guidance. In: Proc. SPIE Photonics WEST/BiOS. SPIE (2013)
7. Liu, X., Balicki, M., Taylor, R.H., Kang, J.U.: Automatic online spectral calibration of Fourier-domain OCT for robotic surgery. In: Mahadevan-Jansen, A., Vo-Dinh, T., Grundfest, W.S. (eds.) Proc. SPIE Photonics WEST/BiOS, vol. 7890, p. 78900X. SPIE (2011)
8. Tsai, R., Lenz, R.: A new technique for fully autonomous and efficient 3D robotics hand/eye calibration. IEEE Trans. Robotics and Automation 5(3), 345–358 (1989)
9. Xu, C., Prince, J.: Gradient vector flow: a new external force for snakes. In: IEEE Conf. Computer Vision and Pattern Recognition, pp. 66–71. IEEE (1997)