Abstract
Recent advances in artificial intelligence have enabled promising applications in neurosurgery that can enhance patient outcomes and minimize risks. This paper presents a novel system that utilizes AI to aid neurosurgeons in precisely identifying and localizing brain tumors. The system was trained on a dataset of brain MRI scans and utilized deep learning algorithms for segmentation and classification. Evaluation of the system on a separate set of brain MRI scans demonstrated an average Dice similarity coefficient of 0.87. The system was also evaluated through a user experience test involving the Department of Neurosurgery at the University Hospital Ulm, with results showing significant improvements in accuracy, efficiency, and reduced cognitive load and stress levels. Additionally, the system has demonstrated adaptability to various surgical scenarios and provides personalized guidance to users. These findings indicate the potential for AI to enhance the quality of neurosurgical interventions and improve patient outcomes. Future work will explore integrating this system with robotic surgical tools for minimally invasive surgeries.
Zusammenfassung
Recent advances in artificial intelligence (AI) have enabled promising applications in neurosurgery that can improve patient outcomes and minimize risks. This paper presents a novel AI system that supports neurosurgeons in identifying and localizing brain tumors. The system was trained on a dataset of brain MRI scans using deep learning approaches. Evaluation on a separate set of scans yielded an average Dice coefficient of 0.87. A user study with experts from a university hospital showed improved accuracy and efficiency as well as reduced cognitive load and stress compared with the standard of care. In addition, the system proved adaptable to various surgical scenarios and offers users personalized guidance. Our results show that AI has the potential to improve neurosurgical interventions. Future work will investigate integration with robotic tools for minimally invasive surgery.
1 Introduction
Brain tumors are abnormal growths in the brain or the surrounding tissues. They are complex and challenging medical conditions that can significantly impact a patient’s quality of life [1]. The incidence of brain tumors has been increasing worldwide, making it an urgent global health concern [2]. Brain tumors are classified into two main categories: primary and secondary tumors [3]. Primary tumors originate in the brain or surrounding tissues, while secondary tumors result from the spread of cancer from other organs to the brain. Depending on their location and size, these tumors can cause a range of symptoms, including headaches, seizures, cognitive impairment, and motor dysfunction.
The treatment of brain tumors typically involves a combination of surgery, radiation therapy, and chemotherapy [4]. Neurosurgery is the primary treatment approach for many types of brain tumors, and it involves extensive surgical removal, which can be a complex and delicate procedure due to the proximity of vital brain structures. In addition, traditional open surgery can be invasive and may result in significant postoperative complications, such as neurological deficits and infection, leading to longer recovery times [5].
While classical techniques have been widely utilized in surgery, including robot-guided procedures, they have certain drawbacks and limitations [6, 7]. One of the primary limitations is their dependence on manual intervention and subjective judgment, which can introduce variability and potential errors. Moreover, classical techniques often lack real-time feedback and adaptability, impeding their ability to address intraoperative variations and unexpected complexities. Additionally, the intricate anatomical structures and interpatient variability pose challenges for achieving precise and consistent outcomes using conventional approaches alone. To overcome these limitations, the integration of artificial intelligence (AI)-assisted techniques in robot-guided surgery becomes crucial. By harnessing advanced algorithms capable of analyzing extensive datasets, offering real-time guidance, and augmenting decision-making processes, the safety, accuracy, and efficacy of surgical interventions can be enhanced, thus paving the way for improved patient outcomes.
AI has emerged as a promising tool in neurosurgery, particularly in brain tumor surgery [8, 9]. AI algorithms can analyze patient data, such as imaging studies, clinical records, and genetic information, to assist surgeons in making more accurate and informed decisions. For example, AI can help identify the precise location and size of the brain tumor, determine the optimal surgical approach, and predict potential complications [10]. This information can enable the surgeon to plan the surgery more effectively and reduce the risk of complications, such as bleeding or damage to healthy brain tissue. Additionally, AI can provide real-time feedback during the surgical procedure, allowing for adjustments to be made to improve the outcome [11].
The use of minimally invasive robotics in brain tumor surgery has gained increasing attention in recent years due to its potential to reduce trauma and improve the outcomes of surgical procedures [12, 13]. Traditional open surgery can be associated with a significant risk of morbidity and mortality, as well as longer recovery times and higher healthcare costs. On the other hand, minimally invasive techniques allow for smaller incisions, reduced blood loss, and shorter hospital stays, resulting in better patient outcomes and improved quality of life. Additionally, using robotics can provide surgeons with better visualization and control during the surgical procedure, leading to more accurate and precise tumor removal.
The contribution of this work lies in developing a multimodality AI-driven system for brain tumor surgery, as depicted in Figure 1. By incorporating the benefits of AI, the proposed system improves the accuracy and safety of the surgical procedure while reducing postoperative complications. Additionally, we conducted a user test with an expert neurosurgeon to assess our system’s usability and functional scope and identify potential areas for optimization. The study highlights the potential for AI-driven systems to enhance the precision and safety of brain tumor surgery and improve patient outcomes. This research represents a significant step towards the integration of AI and minimally invasive robotics in the field of neurosurgery.

A high-level overview of the proposed AI for neurosurgery system. The top layer contains the proposed AI4Neurosurgery modules that integrate AI, specifically deep learning, into the medical imaging software.
2 Related work
In recent years, there has been significant development in the field of neurosurgical planning and navigation tools. In this section, we discuss the most relevant works.
Commercial software platforms such as StealthStation (Medtronic, USA) and the Curve system (Brainlab, Germany) are widely used for brain tumor surgery. These platforms provide advanced image-guided technologies for neurosurgery, including intraoperative imaging, navigation, and real-time feedback. However, these commercial systems can be expensive and may not be accessible to all hospitals and surgical centers.
Open-source medical research toolkits have also been proposed to assist general imaging applications, including MITK [14], ITK-SNAP [15], and 3D Slicer [16]. These toolkits provide a range of neurosurgical planning and visualization features, including segmentation, registration, and visualization of medical images. While these toolkits are free and accessible to the broader community, they require some technical expertise and may not be suitable for all users.
One study by Gerst et al. [17] presented a neurosurgical planning tool that uses a modular architecture to integrate various risk structures for optimized access planning. The tool generates risk maps for linear and nonlinear trajectories and visually maps the risk on the head surface. Their evaluation with clinical experts demonstrates the practical relevance of their tool.
Similarly, another study [18] presented a surgical planning toolkit, NeuroPlan, that renders and overlays the robot’s reachable workspace on the MRI image for an MRI-compatible stereotactic neurosurgery robot. The toolkit streamlines the surgical workflow and assists in identifying the optimal entry point by segmenting the cranial burr hole volume and locating its center.
In [19], Rezayat et al. proposed an open-source tool utilizing different imaging modalities for automating the steps to access the brain. The software provides means for easily calculating the coordinates of the area of interest relative to a specific entry point. They validated the software in different applications, including electrophysiological recording, drug infusion, and guided biopsy procedures.
Additionally, augmented reality (AR) has emerged as a promising tool to assist neurosurgeons during surgery [20, 21]. VentroAR [21], an AR pipeline, was developed to improve the accuracy and safety of ventriculostomy by helping surgeons locate ventricles more efficiently. VentroAR utilized an optical tracking device and HoloLens and was evaluated in a user study involving 15 subjects. Although the HoloLens demonstrates promising workflow and ease of use, its accuracy still needs improvement before it can gain clinical acceptance.
Furthermore, recent advances in AI have led to the development of deep learning algorithms that can analyze large datasets and identify patterns that are difficult for humans to detect [22, 23]. For instance, nnU-Net [22] is a convolutional neural network architecture that has shown promising results in segmenting medical images, particularly in the context of brain tumor segmentation. This tool is designed to learn from a large set of labeled training images to produce accurate segmentations of new, unseen images. The approach has been shown to outperform other state-of-the-art methods on several benchmark datasets. Despite its advantages, nnU-Net lacks a graphical user interface (GUI), which makes it less accessible for users without advanced technical skills.
While there have been significant advancements in neurosurgical planning and navigation tools, the use of AI technologies in neurosurgery remains limited due to various factors such as the need for validation and regulatory approval, challenges in data availability and computational resources, and the requirement for trust and acceptance from healthcare professionals [24, 25]. Therefore, there is a pressing need for more AI-based solutions to address the challenges of neurosurgery, including improving surgical accuracy and reducing the risk of complications.
The main focus of this study is set on developing and integrating AI-based tools into neurosurgical workflows to enhance the precision and safety of neurosurgery, thus improving patient quality of life and survival. By leveraging open-source medical research toolkits and developing new software applications, we hope to make these advanced tools more accessible to a broader range of healthcare professionals, regardless of their technical expertise.
3 Methods and materials
3.1 Dataset
This study utilized multimodal Magnetic Resonance Imaging (MRI) data from the Brain Tumor Segmentation Challenge 2022 (BraTS 2022) [26–28]. The BraTS dataset contains 1251 preoperative multimodal MRI scans from multiple institutions. For each subject, BraTS provides native T1-weighted (T1W), Gadolinium T1-weighted (T1Gd), T2-weighted (T2W), and fluid-attenuated inversion recovery (FLAIR) sequences with ground truth annotations created by expert raters, as illustrated in Figure 2.

Sample multimodal MRI sequences from the BraTS 2022 database showing brain tumor pathologies. Shown anti-clockwise from the top left: (a) T1W, (b) T1Gd, (c) T2W, (d) FLAIR, and (e) ground truth. Green, yellow, and blue indicate necrosis, edema, and non-enhancing tumor, respectively.
Due to the variability in MRI acquisition parameters across different centers, a pre-processing stage was performed, including min-max scaling of each MRI modality and z-score normalization, as well as image cropping to a spatial resolution of 192 × 224 × 160. During the training phase, data augmentation techniques were applied, such as random flipping, random rotations, intensity transformation, and dynamic patch augmentation with a cropping size of 128 × 128 × 128 to prevent overfitting issues.
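The following minimal sketch illustrates this pre-processing and patch-sampling pipeline for a single subject; the function names, the zero-background assumption, and the crop-or-pad handling of the input volume size are simplifying assumptions rather than the exact implementation used in this work.

```python
import numpy as np

def normalize_modality(volume: np.ndarray) -> np.ndarray:
    """Min-max scale to [0, 1], then z-score normalize the non-zero (brain) voxels."""
    scaled = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
    brain = scaled > 0  # assumes background voxels are exactly zero, as in BraTS
    scaled[brain] = (scaled[brain] - scaled[brain].mean()) / (scaled[brain].std() + 1e-8)
    return scaled

def crop_or_pad(volume: np.ndarray, target=(192, 224, 160)) -> np.ndarray:
    """Center-crop or zero-pad each axis to the target spatial resolution."""
    out = np.zeros(target, dtype=volume.dtype)
    src, dst = [], []
    for size, tgt in zip(volume.shape, target):
        if size >= tgt:  # crop this axis
            start = (size - tgt) // 2
            src.append(slice(start, start + tgt))
            dst.append(slice(0, tgt))
        else:            # pad this axis
            start = (tgt - size) // 2
            src.append(slice(0, size))
            dst.append(slice(start, start + size))
    out[tuple(dst)] = volume[tuple(src)]
    return out

def random_patch(image: np.ndarray, label: np.ndarray, size=(128, 128, 128), rng=None):
    """Sample a random training patch from stacked modalities (C, D, H, W) and labels (D, H, W)."""
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, s - p + 1) for s, p in zip(image.shape[1:], size)]
    sl = tuple(slice(st, st + p) for st, p in zip(starts, size))
    return image[(slice(None),) + sl], label[sl]
```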
3.2 System design
The proposed system is designed with four layers to facilitate the integration of various hardware devices and software modules. Figure 1 depicts the system design overview of the proposed multimodality AI system for brain tumor surgery.
The Application layer, the top layer, contains the main deep-learning models and software applications for brain tumor segmentation and neurosurgery planning. The deep-learning models for automatic tumor segmentation are trained on large datasets of medical images and can accurately segment brain tumors and critical structures [29]. Furthermore, the manual segmentation module provides an interface for neurosurgeons and radiologists to manually correct and refine the automatically segmented tumors. Another module, Segment Statistics, calculates metrics and statistics on the segmented structures to assist in neurosurgery planning.
The general imaging platform layer, the second layer, includes the software infrastructure for medical image processing and visualization. The main component is 3D Slicer, an open-source medical imaging visualization and analysis platform. 3D Slicer provides loading and saving of medical imaging data such as MRI, powerful 3D visualization of medical images, manual segmentation tools, interfaces with imaging devices and robotic systems via OpenIGTLink, as well as open-source shared libraries such as VTK, ITK, CTK, and Qt.
The Hardware interface layer, the third layer, contains the PLUS Toolkit, which provides an interface between the software layers and the physical hardware devices. This layer enables calibration and synchronization of devices, pre-processing of data from the hardware devices before it is sent to the higher layers, simulation of device inputs and outputs, recording and replay of device data, an OpenIGTLink interface for exchanging data with the general imaging platform layer in real time, and device interfaces to directly control the underlying hardware.
Finally, the Hardware layer, the bottom layer, includes a wide range of devices such as medical imaging scanners, surgical navigation and robotics systems, sensors and manipulators, microscopes, endoscopes, position trackers, and other devices. The PLUS Toolkit interfaces with these devices and sends data to and from the higher software layers. The integration of these layers enables the system to perform multimodal image-guided interventions during brain tumor surgery, enhancing surgical accuracy and efficiency while minimizing invasiveness.
3.2.1 3D Slicer software
The backbone of the proposed AI system for neurosurgery is 3D Slicer [16], which primarily provides fundamental capabilities for visualization and manual image processing. With multi-institutional support from the National Institutes of Health (NIH) and a worldwide developer community, 3D Slicer has undergone two decades of implementation and development. The software offers 2D, 3D, and 4D visualization capabilities for various imaging modalities, including MRI, CT, and ultrasound. It supports importing and exporting imaging data from various standard data formats, such as NIFTI, DICOM, and NRRD.
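As a small illustration of this data handling, the snippet below loads a NIfTI MRI volume and re-exports it as NRRD from the 3D Slicer Python console; the file paths are placeholders, and the exact return value of loadVolume can differ between Slicer versions.

```python
# Run inside the 3D Slicer Python console (the application provides the `slicer` module).
import slicer

# Load a NIfTI MRI volume into the scene (path is a placeholder).
volume_node = slicer.util.loadVolume("/data/case01/t1gd.nii.gz")

# Inspect the loaded image geometry.
print(volume_node.GetName(), volume_node.GetImageData().GetDimensions())

# Re-export the same volume in NRRD format.
slicer.util.saveNode(volume_node, "/data/case01/t1gd.nrrd")
```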
The 3D Slicer application follows a modular paradigm that allows for the development of additional modules for feature-specific functionalities. Standard 3D Slicer software includes numerous modules that provide a wide range of medical applications. Core modules are primarily categorized based on their function. One example is the Scene Views module, which allows users to create and save customized views of image data, providing a flexible and intuitive approach to organizing and analyzing medical images.
3.2.2 AI4Neurosurgery application
AI4Neurosurgery is a 3D Slicer-based application that integrates AI, specifically deep learning, into the software’s functionality. The AI4Neurosurgery platform is developed to extend the capabilities of 3D Slicer by automatically segmenting brain glioma, a common and often aggressive form of brain tumor. AI4Neurosurgery employs a deep convolutional neural network architecture trained on large medical image datasets to segment brain glioma accurately. The segmentation process is based on a voxel-wise classification of the brain tissue, allowing for the accurate differentiation of glioma from healthy brain tissue. The AI4Neurosurgery extension is designed to be user-friendly and efficient, providing neurosurgeons with a valuable tool for preoperative planning and intraoperative guidance.
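For readers unfamiliar with how such an extension hooks into 3D Slicer, the stripped-down skeleton below shows the typical structure of a scripted module; the class names and strings are illustrative placeholders and do not reproduce the actual AI4Neurosurgery source code.

```python
# Minimal 3D Slicer scripted-module skeleton (illustrative placeholder, not the real module).
from slicer.ScriptedLoadableModule import (
    ScriptedLoadableModule,
    ScriptedLoadableModuleWidget,
    ScriptedLoadableModuleLogic,
)

class TumorSegmentationExample(ScriptedLoadableModule):
    def __init__(self, parent):
        ScriptedLoadableModule.__init__(self, parent)
        parent.title = "Tumor Segmentation Example"   # name shown in the module selector
        parent.categories = ["Segmentation"]           # menu category in 3D Slicer
        parent.contributors = ["Example Author"]
        parent.helpText = "Illustrative skeleton for an automatic segmentation module."

class TumorSegmentationExampleWidget(ScriptedLoadableModuleWidget):
    def setup(self):
        ScriptedLoadableModuleWidget.setup(self)
        # Input selectors, an Apply button, and result views would be created here.

class TumorSegmentationExampleLogic(ScriptedLoadableModuleLogic):
    def run(self, input_volume_node, output_segmentation_node):
        # The trained deep-learning model would be applied to the input volume here,
        # writing its voxel-wise prediction into the output segmentation node.
        pass
```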
Figure 3 shows the main GUI of the AI4Neurosurgery application. The application GUI comprises several elements, including the module panel, views, toolbar, and status bar. The module panel is located by default on the left side of the main window and displays all the options and features that the current module offers to the user. AI4Neurosurgery displays data in various views, such as slice view, 3D models, and table views, and the user can choose between several pre-defined layouts. The toolbar at the top of the window provides quick access to commonly used functions. The status bar, located at the bottom of the window, may display application status, such as the current operation in progress, and clicking the small X icon displays the Error Log window.

Main user interface of the AI4Neurosurgery application.
3.2.2.1 Automatic segmentation
The automatic segmentation of brain tumors was performed using our previous neural network architecture, 3D DeepSeg [10], which is based on the U-Net model [30]. For more details on the employed deep learning network and the model implementation, refer to our previous work [29].
The proposed network structure is shown in Figure 4. The encoder component, which is responsible for feature extraction, is typically composed of a convolutional neural network (CNN) that contains a sequence of blocks, each consisting of 3 × 3 × 3 convolutional layers, batch normalization, a rectified linear unit (ReLU) activation function, and a 2 × 2 × 2 max pooling operation. In contrast, the decoder component, also known as the segmentation estimator, is designed to upscale the output feature maps using a series of blocks that include two deconvolutions, a 2 × 2 × 2 up-convolution, and a ReLU activation function. The two components of the neural network are connected by skip connections, which merge the high-resolution feature maps from the encoder with the corresponding semantic features from the decoder.

The architecture of the enhanced 3D brain segmentation network (3D DeepSeg) for brain tumor segmentation from mpMRI volumes. The input is a 3D multimodal MRI of T1, T1Gd, T2, and FLAIR with a patch spatial resolution of 192 × 224 × 160. The CNN network has 24 convolution neural blocks (blue boxes), four downsampling blocks (orange boxes), four upsampling blocks (grey boxes), and a final softmax output layer (green box).
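To make this block structure concrete, the following PyTorch sketch implements one encoder block and one decoder block of a 3D U-Net-style network; channel widths, layer counts, and naming are simplified assumptions and do not reproduce the full 3D DeepSeg implementation [29].

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """3x3x3 convolution -> batch norm -> ReLU, followed by 2x2x2 max pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool3d(kernel_size=2)

    def forward(self, x):
        features = self.conv(x)              # kept for the skip connection
        return features, self.pool(features)

class DecoderBlock(nn.Module):
    """2x2x2 up-convolution, concatenation with the skip features, then conv + ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv3d(out_ch * 2, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)      # skip connection merges encoder features
        return self.conv(x)

# Toy usage on a single 4-modality patch of size 128^3.
if __name__ == "__main__":
    enc = EncoderBlock(4, 24)
    dec = DecoderBlock(24, 24)
    x = torch.randn(1, 4, 128, 128, 128)
    skip, down = enc(x)                      # skip: 24 x 128^3, down: 24 x 64^3
    out = dec(down, skip)                    # back to 24 x 128^3
    print(out.shape)                         # torch.Size([1, 24, 128, 128, 128])
```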
3.2.2.2 Segment Editor
The proposed AI system for neurosurgery includes another module that extends the Segment Editor module in 3D Slicer. This new module is designed to facilitate the manual segmentation and refinement of the automatic brain tumor segmentation created by the AI-driven automatic segmentation module. The primary function of this module is to provide neurosurgeons with the ability to adjust and refine the automatic segmentation results to ensure the accuracy of the tumor location and boundaries.
This module includes various tools, such as paintbrush, eraser, and grow from seeds, to manually segment the tumor and refine the boundaries. The neurosurgeon can also use the 3D visualization tools available in 3D Slicer to ensure the accuracy of the segmentation.
3.2.2.3 Segment Statistics
The Segment Statistics module extends the Segment Statistics module in 3D Slicer. The primary function of this module is to provide properties of the brain tumor segmentation automatically created by the deep learning model. The module extracts quantitative features from the segmented tumor regions, including volume, surface area, shape descriptors, and intensity-based features. These features can then be used to provide a more detailed analysis of the tumor, such as growth rate and tissue classification. The module will also include a comparison tool to compare the automatic segmentation with the manual segmentation provided by the user. The user can interactively adjust the threshold or parameters to improve the accuracy of the automatic segmentation, and the statistics will be updated in real time. The module will provide a valuable tool for neurosurgeons to better understand the properties of the brain tumor and assist in the surgical planning process.
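A possible way to obtain such measurements through 3D Slicer's Segment Statistics logic from the Python console is sketched below; the segmentation and volume nodes are assumed to already exist in the scene, and the statistic key names follow the plugin naming convention, which may differ between Slicer versions.

```python
# Run inside the 3D Slicer Python console; `segmentation_node` and `volume_node`
# are assumed to already exist in the scene (e.g. from the automatic segmentation).
import SegmentStatistics

seg_stat_logic = SegmentStatistics.SegmentStatisticsLogic()
param_node = seg_stat_logic.getParameterNode()
param_node.SetParameter("Segmentation", segmentation_node.GetID())
param_node.SetParameter("ScalarVolume", volume_node.GetID())
seg_stat_logic.computeStatistics()

stats = seg_stat_logic.getStatistics()
for segment_id in stats["SegmentIDs"]:
    # Key names such as "LabelmapSegmentStatisticsPlugin.volume_cm3" may vary by version.
    volume_cm3 = stats[segment_id, "LabelmapSegmentStatisticsPlugin.volume_cm3"]
    mean_intensity = stats[segment_id, "ScalarVolumeSegmentStatisticsPlugin.mean"]
    print(segment_id, volume_cm3, mean_intensity)
```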
3.2.2.4 PLUS Toolkit
The PLUS (Public software Library for UltraSound imaging research) Toolkit is an open-source software platform that provides a wide range of tools for real-time image-guided interventions, including neurosurgery. PLUS is designed to interface with various medical imaging devices, such as ultrasound machines, and to integrate the resulting images with surgical navigation systems to guide surgical procedures. PLUS’s modules allow for real-time image acquisition, processing, and visualization, as well as the registration of multiple imaging modalities, such as MRI, CT, and ultrasound. These features enable multimodal image-guided interventions. The PLUS Toolkit has been used in several neurosurgical applications, including brain biopsy, brain tumor resection, and deep brain stimulation surgery, to improve surgical accuracy, efficiency, and patient outcomes.
3.2.3 Hardware layer
The hardware layer of the proposed AI-driven system for neurosurgery is an essential component of the system, as it incorporates the physical devices used during the surgical procedures. This layer can include hundreds of devices, such as imaging scanners, position trackers, sensors and manipulators, surgical microscopes, endoscopes, navigation systems, and robotic devices.
The hardware layer is responsible for providing the system with real-time data from the surgical field, such as the patient’s position, orientation, and location of surgical instruments. The system’s software modules then use this data to generate a real-time visualization of the surgical field, including the tumor and surrounding tissues. To achieve this, the hardware interface layer contains the PLUS Toolkit, which provides interfaces for the hardware devices to perform calibration, synchronization, pre-processing, simulation, recording, and replay.
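As a minimal illustration of this real-time data path, the snippet below opens an OpenIGTLink client connection from 3D Slicer to a PLUS server; the host, port, and transform name are placeholder assumptions for a typical tracking setup.

```python
# Run inside the 3D Slicer Python console with the OpenIGTLink extension available.
import slicer

# Create a connector node and connect to a PLUS server (host/port are placeholders;
# 18944 is the conventional default OpenIGTLink port).
connector = slicer.vtkMRMLIGTLConnectorNode()
slicer.mrmlScene.AddNode(connector)
connector.SetTypeClient("localhost", 18944)
connector.Start()

# Once connected, streamed data (e.g. a tracked tool transform named "StylusToTracker")
# appears in the scene as MRML nodes and can be queried like any other node.
transform_node = slicer.mrmlScene.GetFirstNodeByName("StylusToTracker")
if transform_node:
    print("Receiving:", transform_node.GetName(), transform_node.GetClassName())
```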
3.3 System use case
The use case scenario of the AI4Neurosurgery application is depicted in Figure 5, providing comprehensive coverage of all significant steps and configuration options involved. The diagram outlines the various interactions and relationships between the different actors and use cases involved in the application, providing a high-level overview of the system’s key functionalities and capabilities. The system allows for both automatic and interactive segmentation of tumors, which can then be edited, visualized, quantified, and exported by the neurosurgeon. This analysis is intended to inform the development and refinement of the AI4Neurosurgery application, ultimately leading to improved outcomes in the neurosurgical domain.

Use case diagram of the proposed AI4Neurosurgery application.
An initial user test was conducted to evaluate the system’s usability and functionality in a pre-clinical setting. The participant was first given a brief overview of brain tumor segmentation and the software’s user interface, which provided an explanation of the system’s features. Next, the participant was presented with a set of tasks based on a pre-defined use case scenario that included the system’s key modules: (1) DeepSeg (Automatic Segmentation) (Figure 6(a)), (2) Segment Editor (Figure 6(b)), and (3) Segment Statistics (Figure 6(c)).

An example use case scenario for evaluating the proposed AI4Neurosurgery system in high-grade brain glioma surgery. (a) The DeepSeg module, responsible for automatic brain tumor segmentation using deep learning. (b) The Segment Editor module is responsible for creating and editing segmentations using manual and semi-automatic tools, enabling neurosurgeons to modify and adjust the segmentation results as necessary. (c) The Segment Statistics module computes intensity and geometric properties for each segment.
The user experience test was carried out with a highly experienced neurosurgeon from the Department of Neurosurgery at the University Hospital Ulm/Günzburg to assess the usability of the proposed system. The neurosurgeon has over seven years of experience with image-guided software and routinely requires such assistance for intraoperative use and preoperative training.
Upon completion of the user test, a systematic evaluation of the usability was conducted using the System Usability Scale (SUS) questionnaire, which consists of ten scale questions to assess the participant’s subjective perception of the usability of the system. In addition to the SUS questionnaire, a study interview comprising 19 questions was also conducted to obtain specific feedback regarding the functionality and overall user experience of the system.
The questionnaire presents a comprehensive evaluation of the usability of the system in question, utilizing the widely accepted SUS as a metric. The SUS measures the perceived usability of a system based on a set of statements that are rated on a 5-point scale, with higher scores indicating greater usability. The statements have been summed to provide an overall assessment of the system’s usability, enabling a clear and concise presentation of the results.
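For reference, SUS scoring maps the ten 1–5 ratings to a 0–100 score: each odd-numbered (positively worded) item contributes its rating minus 1, each even-numbered (negatively worded) item contributes 5 minus its rating, and the sum is multiplied by 2.5. The short sketch below shows this arithmetic with invented example ratings, not the ratings collected in this study.

```python
def sus_score(ratings):
    """Compute the System Usability Scale score from ten 1-5 ratings (item 1 first)."""
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = 0
    for item, rating in enumerate(ratings, start=1):
        if item % 2 == 1:        # positively worded items (1, 3, 5, 7, 9)
            total += rating - 1
        else:                    # negatively worded items (2, 4, 6, 8, 10)
            total += 5 - rating
    return total * 2.5           # scale the 0-40 raw sum to 0-100

# Invented example ratings (not the study data); this particular pattern yields 75.0.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))
```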
4 Results and discussion
The proposed AI4Neurosurgery system represents a noteworthy contribution to the field of brain tumor surgery, as it enables neurosurgeons to achieve more accurate and efficient segmentation of brain tumors compared to traditional manual segmentation methods. The system’s multimodality approach, which combines automatic and interactive segmentation tools with advanced 2D and 3D visualization and analysis capabilities, offers a comprehensive solution for brain tumor surgery planning and execution. This integrated approach has the potential to improve the quality of patient care in this area and may pave the way for further advancements in computer- and robot-assisted neurosurgery.
The outcomes of the preliminary user test suggest that the system is characterized by high usability and provides a positive user experience. The participating neurosurgeon found the system to be straightforward, user-friendly, and effective in achieving efficient segmentation results. The Automatic Segmentation function was regarded as an especially valuable feature, while the Segment Editor was also deemed efficient in achieving accurate segmentation outcomes. These findings suggest that the system has the potential to enhance the effectiveness and efficiency of brain tumor surgery and may benefit both patients and medical professionals in this field.
The SUS questionnaire was used to evaluate the system’s usability, and the neurosurgeon’s ratings yielded a score of 75 out of 100, which is considered good and falls in the upper quartile of the scale. The SUS results are presented in Table 1. The study interview following the usability test provided further insights into the usability and functional scope. The software met the user’s expectations regarding segmentation, and all features were rated as useful. The system was perceived as very simple and largely easy to use. The Automatic Segmentation process was described as very helpful and important, and efficient results could be achieved with the Segment Editor. Nevertheless, manual segmentation is still needed in case of incomplete automatic segmentation, and additional functions would be needed for intraoperative use. The tumor volume reported by the Segment Statistics module was also noted to be considerable.
Assessment of the usability via SUS, statements are summed for presentation, rating 1 (=strongly disagree) to 5 (=strongly agree).
Statements\rating | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|
1. Use frequently | | | | | |
2. Unnecessarily complex | | | | | |
3. Easy to use | | | | | |
4. Support needed | | | | | |
5. Functions well integrated | | | | | |
6. Inconsistency | | | | | |
7. Quick to learn | | | | | |
8. Cumbersome to use | | | | | |
9. Confident using | | | | | |
10. Difficult to learn | | | | | |
The system’s configuration options would benefit from increased sophistication. While the 2D and 3D visualizations are compelling for demonstration purposes, they may benefit from improved orientation. The presentation of all three visualizations in a single view was well-received, and navigation between modules and switching layouts was found to be straightforward. However, certain features that would be useful for preoperative use were observed to be missing, such as manual editing, guidance icons, trajectories, or reference points. Additionally, a simpler way of changing segmentations was desired.
The interview conducted as part of the study yielded valuable feedback regarding the system’s functionality and usability. The participating neurosurgeon acknowledged the effectiveness of the system in achieving segmentation goals, but highlighted the need for additional functions during operations. They also suggested that the configuration options could benefit from increased sophistication and that the visualizations could be improved in terms of orientation. These findings provide important insights regarding potential areas for improvement, which could enhance the system’s utility and usability in clinical settings.
Nevertheless, there is room for improvement in terms of user-friendliness. The neurosurgeon sometimes found it challenging to locate the required functions within the use case scenario. In comparison to the image-guided software currently in use, Slicer-DeepSeg offers similar basic functionality but lacks some more advanced features and options, such as image fusion or a 3D view. Despite these limitations, the system was rated as promising, with the potential for further improvement.
Future efforts toward the development of the AI4Neurosurgery system will center on expanding its functionality to better support intraoperative use and refining its configuration options and visualizations. Furthermore, the system will be subjected to clinical testing with a larger cohort of neurosurgeons to evaluate its usability and functionality in real-world scenarios. The incorporation of additional datasets and machine learning algorithms will also be explored to enhance the system’s accuracy and efficiency further. Ultimately, the AI4Neurosurgery system holds great promise in improving the outcomes of brain tumor surgery, thereby contributing to improved patient outcomes and quality of life.
5 Conclusions
In summary, the proposed Multimodality AI-Driven System for Brain Tumor Surgery represents a significant advancement in the direction of minimally invasive robotics. By leveraging the capabilities of AI and advanced imaging technologies, this system paves the way for potential integration with robotic surgical platforms. The combination of AI algorithms for automatic segmentation, interactive editing tools, and comprehensive segment analysis can contribute to the development of robotic-assisted neurosurgery. Through the seamless integration of the AI-driven system with robotic platforms, surgeons can benefit from enhanced precision, improved surgical planning, and real-time guidance during the procedure. This convergence of AI and robotics has the potential to revolutionize brain tumor surgery by enabling more precise and minimally invasive interventions, ultimately leading to improved patient outcomes. Further research and development in this direction are crucial to fully harness the transformative power of robotics in neurosurgical practice.
About the authors

Ramy Ashraf Zeineldin studied Engineering Science at Menoufia University, Egypt. In February 2023, he received the Dr.-Ing. degree in Medical Engineering from the Karlsruhe Institute of Technology (KIT). His research focuses on AI in Medicine, Explainable AI, and Machine Learning in Medical Imaging.

Denise Junger, M.Sc., is a research associate at Reutlingen University. Her research focuses on situation awareness and recognition as well as intraoperative data provision in the field of intelligent operating rooms.

Franziska Mathis-Ullrich is Professor for Surgical Robotics at the Friedrich-Alexander-University Erlangen-Nürnberg (FAU). Her primary research focus is on minimally invasive and cognition-controlled robotic systems and embedded machine learning, with an emphasis on applications in surgery.

Oliver Burgert has been Professor for Medical Informatics at Reutlingen University since 2011. From 2005 to 2011, he was the scientific director of the Innovation Centre Computer Assisted Surgery. Currently, he is Dean of the Faculty for Informatics and head of the Computer Assisted Medicine Group.
Acknowledgments
The authors would like to acknowledge the valuable contributions of the neurosurgeons and medical staff from the Department of Neurosurgery at the University Hospital Ulm/Günzburg for their expertise and support throughout the development and evaluation of the proposed AI-driven system for neurosurgery.
Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.
Research funding: The corresponding author was funded by the German Academic Exchange Service (DAAD) (No. 91705803).
Conflict of interest statement: Authors state no conflict of interest.
Informed consent: Informed consent has been obtained from all individuals included in this study.
References
[1] K. Noll, A. L. King, L. Dirven, T. S. Armstrong, M. J. B. Taphoorn, and J. S. Wefel, “Neurocognition and health-related quality of life among patients with brain tumors,” Hematol./Oncol. Clin. North Am., vol. 36, pp. 269–282, 2022. https://doi.org/10.1016/j.hoc.2021.08.011.
[2] Y. Fan, X. Zhang, C. Gao, et al., “Burden and trends of brain and central nervous system cancer from 1990 to 2019 at the global, regional, and country levels,” Arch. Public Health, vol. 80, p. 80, 2022. https://doi.org/10.1186/s13690-022-00965-5.
[3] E. Karimi, M. W. Yu, S. M. Maritan, et al., “Single-cell spatial immune landscapes of primary and metastatic brain tumours,” Nature, vol. 614, pp. 555–563, 2023. https://doi.org/10.1038/s41586-022-05680-3.
[4] R. Haumann, J. C. Videira, G. J. L. Kaspers, D. G. van Vuurden, and E. Hulleman, “Overview of current drug delivery methods across the blood–brain barrier for the treatment of primary brain tumors,” CNS Drugs, vol. 34, pp. 1121–1131, 2020. https://doi.org/10.1007/s40263-020-00766-w.
[5] C. Chen, I. Lee, C. Tatsui, T. Elder, and A. E. Sloan, “Laser interstitial thermotherapy (LITT) for the treatment of tumors of the brain and spine: a brief review,” J. Neuro-Oncol., vol. 151, pp. 429–442, 2021. https://doi.org/10.1007/s11060-020-03652-z.
[6] Z. Lončarević, S. Reberšek, A. Ude, and A. Gams, “Randomized robotic visual quality inspection with in-hand camera,” Intell. Auton. Syst., vol. 17, pp. 483–494, 2023. https://doi.org/10.1007/978-3-031-22216-0_33.
[7] S. Lin, A. Liu, J. Wang, and X. Kong, “A review of path-planning approaches for multiple mobile robots,” Machines, vol. 10, p. 773, 2022. https://doi.org/10.3390/machines10090773.
[8] V. G. El-Hajj, M. Gharios, E. Edström, and A. Elmi-Terander, “Artificial intelligence in neurosurgery: a bibliometric analysis,” World Neurosurg., vol. 171, pp. 152–158.e4, 2023. https://doi.org/10.1016/j.wneu.2022.12.087.
[9] M. Heizmann, A. Braun, M. Glitzner, et al., “Implementing machine learning: chances and challenges,” At – Automatisierungstechnik, vol. 70, pp. 90–101, 2022. https://doi.org/10.1515/auto-2021-0149.
[10] R. A. Zeineldin, M. E. Karar, J. Coburger, C. R. Wirtz, and O. Burgert, “DeepSeg: deep neural network framework for automatic brain tumor segmentation using magnetic resonance FLAIR images,” Int. J. Comput. Assist. Radiol. Surg., vol. 15, pp. 909–920, 2020. https://doi.org/10.1007/s11548-020-02186-z.
[11] G. Watanabe, A. Conching, S. Nishioka, et al., “Themes in neuronavigation research: a machine learning topic analysis,” World Neurosurg.: X, vol. 18, p. 18, 2023. https://doi.org/10.1016/j.wnsx.2023.100182.
[12] C. P. Pacia, J. Yuan, Y. Yue, et al., “Sonobiopsy for minimally invasive, spatiotemporally-controlled, and sensitive detection of glioblastoma-derived circulating tumor DNA,” Theranostics, vol. 12, pp. 362–378, 2022. https://doi.org/10.7150/thno.65597.
[13] M. Eugster, “Robotic system for accurate minimally invasive laser osteotomy,” At – Automatisierungstechnik, vol. 70, pp. 676–678, 2022. https://doi.org/10.1515/auto-2022-0073.
[14] I. Wolf, M. Vetter, I. Wegner, et al., “The medical imaging interaction toolkit,” Med. Image Anal., vol. 9, pp. 594–604, 2005. https://doi.org/10.1016/j.media.2005.04.005.
[15] P. A. Yushkevich, J. Piven, H. C. Hazlett, et al., “User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability,” Neuroimage, vol. 31, pp. 1116–1128, 2006. https://doi.org/10.1016/j.neuroimage.2006.01.015.
[16] A. Fedorov, R. Beichel, J. Kalpathy-Cramer, et al., “3D Slicer as an image computing platform for the quantitative imaging network,” Magn. Reson. Imaging, vol. 30, pp. 1323–1341, 2012. https://doi.org/10.1016/j.mri.2012.05.001.
[17] M. Gerst, C. Kunz, P. Henrich, and F. Mathis-Ullrich, “Multimodal risk-map for navigation planning in neurosurgical interventions,” New Trends in Medical and Service Robotics, pp. 183–191, 2021. https://doi.org/10.1007/978-3-030-58104-6_21.
[18] F. Tavakkolmoghaddam, D. K. Rajamani, B. Szewczyk, et al., “NeuroPlan: a surgical planning toolkit for an MRI-compatible stereotactic neurosurgery robot,” in 2021 International Symposium on Medical Robotics (ISMR), 2021, pp. 1–7. https://doi.org/10.1109/ISMR48346.2021.9661581.
[19] E. Rezayat, H. Heidari-Gorji, P. Narimani, et al., “A multimodal imaging-guided software for access to primate brains,” Heliyon, vol. 9, p. e12675, 2023. https://doi.org/10.1016/j.heliyon.2022.e12675.
[20] C. Kunz, M. Hlavac, M. Schneider, et al., “Autonomous planning and intraoperative augmented reality navigation for neurosurgery,” IEEE Trans. Med. Robot. Bion., vol. 3, pp. 738–749, 2021. https://doi.org/10.1109/tmrb.2021.3091184.
[21] N. B. Z. Ansari, É. Léger, and M. Kersten-Oertel, “VentroAR: an augmented reality platform for ventriculostomy using the Microsoft HoloLens,” Comput. Methods Biomech. Biomed. Eng. Imaging Vis., vol. 11, no. 4, pp. 1225–1233, 2022. https://doi.org/10.1080/21681163.2022.2156394.
[22] F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, and K. H. Maier-Hein, “nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation,” Nat. Methods, vol. 18, pp. 203–211, 2021. https://doi.org/10.1038/s41592-020-01008-z.
[23] R. A. Zeineldin, M. E. Karar, F. Mathis-Ullrich, and O. Burgert, “Ensemble CNN networks for GBM tumors segmentation using multi-parametric MRI,” in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Cham, Springer International Publishing, 2022, pp. 473–483. https://doi.org/10.1007/978-3-031-08999-2_41.
[24] E. V. Bernstam, P. K. Shireman, F. Meric-Bernstam, et al., “Artificial intelligence in clinical and translational science: successes, challenges and opportunities,” Clin. Transl. Sci., vol. 15, pp. 309–321, 2021. https://doi.org/10.1111/cts.13175.
[25] R. A. Zeineldin, M. E. Karar, Z. Elshaer, et al., “Explainability of deep neural networks for MRI analysis of brain tumors,” Int. J. Comput. Assist. Radiol. Surg., vol. 17, pp. 1673–1683, 2022. https://doi.org/10.1007/s11548-022-02619-x.
[26] B. H. Menze, A. Jakab, S. Bauer, et al., “The multimodal brain tumor image segmentation benchmark (BRATS),” IEEE Trans. Med. Imag., vol. 34, pp. 1993–2024, 2015. https://doi.org/10.1109/tmi.2014.2377694.
[27] S. Bakas, H. Akbari, A. Sotiras, et al., “Advancing the Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features,” Sci. Data, vol. 4, p. 170117, 2017. https://doi.org/10.1038/sdata.2017.117.
[28] U. Baid, S. Ghodasara, M. Bilello, et al., “The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification,” 2021, arXiv:2107.02314.
[29] R. A. Zeineldin, M. E. Karar, O. Burgert, and F. Mathis-Ullrich, “Multimodal CNN networks for brain tumor segmentation in MRI: a BraTS 2022 challenge solution,” 2022, arXiv:2212.09310. https://doi.org/10.1007/978-3-031-33842-7_11.
[30] O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” Med. Image Comput. Comput. Assist. Interv., vol. 2015, pp. 234–241, 2015. https://doi.org/10.1007/978-3-319-24574-4_28.
© 2023 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.