Review

A Review of Medical Image Registration for Different Modalities

by Fatemehzahra Darzi 1 and Thomas Bocklitz 1,2,*
1 Institute of Physical Chemistry, Friedrich Schiller University Jena, Helmholtzweg 4, 07743 Jena, Germany
2 Department of Photonic Data Science, Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745 Jena, Germany
* Author to whom correspondence should be addressed.
Bioengineering 2024, 11(8), 786; https://doi.org/10.3390/bioengineering11080786
Submission received: 13 June 2024 / Revised: 26 July 2024 / Accepted: 30 July 2024 / Published: 2 August 2024
(This article belongs to the Special Issue Optical Imaging for Biomedical Applications)

Abstract:
Medical image registration has become pivotal in recent years with the integration of various imaging modalities like X-ray, ultrasound, MRI, and CT scans, enabling comprehensive analysis and diagnosis of biological structures. This paper provides a comprehensive review of registration techniques for medical images, with an in-depth focus on 2D-2D image registration methods. While 3D registration is briefly touched upon, the primary emphasis remains on 2D techniques and their applications. This review covers registration techniques for diverse modalities, including unimodal, multimodal, interpatient, and intra-patient. The paper explores the challenges encountered in medical image registration, including geometric distortion, differences in image properties, outliers, and optimization convergence, and discusses their impact on registration accuracy and reliability. Strategies for addressing these challenges are highlighted, emphasizing the need for continual innovation and refinement of techniques to enhance the accuracy and reliability of medical image registration systems. The paper concludes by emphasizing the importance of accurate medical image registration in improving diagnosis.

1. Introduction

In recent years, medical image registration has developed into a critical area of research due to the integration of various imaging modalities like X-ray, ultrasound, MRI, and CT scans. This integration has significantly advanced the field, enabling comprehensive analysis and diagnosis of biological structures. Image registration plays a crucial role in this advancement, allowing for the alignment of images to standardized reference frames. It involves the use of various transformation models, similarity measures, and optimization techniques [1]. Deep learning has been increasingly applied in this field, with learning-based algorithms such as VoxelMorph complementing classical methods like Demons and SyN for medical image registration [2]. The process involves feature detection, matching, transform model estimation, and image resampling and transformation. These methods have significant potential to improve the visualization of different human anatomy in medical diagnosis and treatment. This paper presents a comprehensive review of registration techniques for biological medical images obtained from diverse modalities, including unimodal, multimodal, interpatient, and intra-patient [3].
In this paper, we explore various types of image registration techniques in medical imaging. Unimodal registration, as depicted in Figure 1, refers to the process of aligning images obtained from the same type of imaging sensors or modalities, ensuring that they are spatially aligned. In unimodal registration, the goal is to align images captured using a single imaging modality to enable precise comparison and analysis. This technique is crucial in various applications, particularly in medical imaging, where it enables precise comparison and analysis of images acquired from the same imaging modality, such as multiple MRI scans or X-ray images. The importance of unimodal registration lies in its ability to correct spatial discrepancies and align anatomical structures or features within images, facilitating detailed analysis and informed decision-making. It finds extensive use in longitudinal studies for tracking disease progression, motion correction in dynamic imaging, atlas-based segmentation, image quality enhancement through averaging, intra-operative guidance, quality control processes, and change detection in fields like remote sensing. The primary advantage of unimodal registration is its relative simplicity compared to multimodal approaches, as images from the same modality typically share similar intensity characteristics and feature representations. This similarity often allows for more straightforward and robust registration processes, employing simpler similarity metrics and transformation models. However, challenges such as intensity inhomogeneities, noise, or anatomical variations between scans still necessitate the careful selection of registration algorithms and parameters to achieve optimal results in unimodal registration tasks [4].
In contrast, multimodal image registration involves aligning images of the same scene acquired from different sensors, at different times, or from varying perspectives, such as the registration of MRI and X-ray images. In some cases, registering images of the same modality but acquired with different measurement parameters also constitutes a multimodal registration task. For example, MRI scans acquired using different pulse sequences or scan protocols may be classified as either multimodal or unimodal: different pulse sequences produce markedly different images due to variations in tissue contrast, spatial resolution, and other parameters. If these differences are substantial and significantly impact the information content, we consider the scans as multimodal; when the differences are minimal, they can be categorized as unimodal. X-ray images can suffer from geometric distortions due to variations in the angle of X-ray beam projection and the position of the patient. Additionally, X-ray images are prone to artifacts such as scatter radiation, which can obscure details and reduce the image quality. MRI images are also subject to several types of distortions, including geometric distortions caused by magnetic field inhomogeneities and susceptibility artifacts [5]. These distortions can result in spatial inaccuracies, particularly near interfaces of different tissue types or near metallic implants. Furthermore, patient movement during the scan can introduce motion artifacts, which can blur the images and complicate an accurate interpretation. These distortions can lead to misalignment and inaccuracies in the representation of anatomical structures. Multimodal co-registration techniques are employed to mitigate these distortions and improve the accuracy of image alignment. By aligning images from different modalities, co-registration can correct spatial discrepancies and enhance the reliability of combined image analysis.
This process is essential for ensuring precise and reliable interpretation, which is particularly important in clinical decision-making and educational settings. This registration process aims to geometrically align these images to enable comparison, fusion, or analysis. Multimodal image registration methods can be categorized into feature-based, where distinct features are extracted and matched, and area-based, which directly processes image intensity values without identifying specific features [6].
Interpatient registration refers to the process of aligning images obtained from different patients. This type of registration can be challenging due to the differences in underlying anatomical structures across individuals. Accurate interpatient registration is essential for tasks that involve comparing images or data from different patients, such as in research studies or treatment planning where a standard frame of reference is needed. Intra-patient registration, on the other hand, involves aligning images acquired from the same patient, either on the same day (same-day intra-patient) or on different days (different-day intra-patient). This type of registration can be challenging due to changes in the image appearance caused by factors like metabolic processes, weight fluctuations, or bowel movements. Intra-patient registration is crucial for applications such as monitoring disease progression, assessing treatment outcomes, or performing pre- and post-procedural image comparisons within the same individual over time [7].
While this review primarily focuses on 2D image registration techniques, it is important to acknowledge the role of registration aids such as fiducial markers in image alignment across various modalities. Fiducials, which are external reference points or markers placed on or near the subject being imaged, significantly facilitate the registration process by providing consistent reference points across different images [7]. These markers prove particularly useful in scenarios where anatomical landmarks may be unclear or inconsistent. Fiducial-based registration, while extensively used in 3D datasets, also has significant applications in 2D image registration, especially in cases requiring high-precision alignment [8]. In 2D medical imaging, fiducial markers can be external (attached to the patient’s skin) or internal (implanted or anatomical landmarks). Common techniques in 2D fiducial-based registration include point-based methods, where corresponding fiducial points are matched between images, and contour-based methods, which align fiducial shapes or outlines. These methods often employ least squares fitting algorithms to minimize the distance between corresponding fiducials. For instance, in radiotherapy planning, radio-opaque markers are frequently used to align treatment images with reference images, ensuring the accurate targeting of radiation beams [9]. Additionally, in surgical navigation, skin-affixed fiducials help register preoperative images to intraoperative patient positioning [10]. While fiducial-based methods offer high accuracy, they must be balanced against considerations such as invasiveness, marker stability, and the potential need for additional imaging procedures.
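To make the point-based least-squares idea concrete, the following Python sketch recovers a pure 2D translation from matched fiducial pairs. The coordinates are invented for illustration, and the restriction to translation is a simplifying assumption; clinical systems typically also estimate rotation (e.g., via a Procrustes/SVD solution).

```python
def fiducial_translation(fixed_pts, moving_pts):
    """Least-squares translation aligning moving fiducials to fixed ones.

    For a pure translation, the optimum that minimizes the summed squared
    distances between corresponding fiducials is simply the difference of
    the two point-set centroids. A minimal point-based sketch only.
    """
    n = len(fixed_pts)
    cx_f = sum(p[0] for p in fixed_pts) / n
    cy_f = sum(p[1] for p in fixed_pts) / n
    cx_m = sum(p[0] for p in moving_pts) / n
    cy_m = sum(p[1] for p in moving_pts) / n
    return (cx_f - cx_m, cy_f - cy_m)

# Three hypothetical fiducials; the moving set is the fixed set shifted by (-2, -3)
fixed = [(10.0, 10.0), (20.0, 10.0), (15.0, 16.0)]
moving = [(8.0, 7.0), (18.0, 7.0), (13.0, 13.0)]
assert fiducial_translation(fixed, moving) == (2.0, 3.0)
```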
The challenges during image registration in the medical domain include obstacles such as the lack of standardized ground truth labels for evaluating the registration outcomes convincingly, subjective assessments by medical experts, instability in outcomes from various registration algorithms, and the influence of data quality variations on the registration results [4]. Additionally, issues such as geometric distortions and differences in image properties like intensity, color distributions, and resolution, as well as the impact of noise, can pose significant hurdles during the image registration process in the medical context. Addressing these challenges is crucial to enhancing the accuracy and reliability of medical image registration systems for improved clinical applications.
In the upcoming sections of this review, we will explore different image registration techniques, examining their properties and addressing the challenges inherent in their application across different contexts. Additionally, we will provide an overview of feature-based and area-based frameworks for image registration, highlighting the challenges of aligning images. We will delve into the general image registration framework, offering a comprehensive understanding of the registration process.

2. Materials and Methods

In this section, we explore the fundamental principles of general registration frameworks. Image registration, a critical component of image analysis, involves mapping two images on both the spatial and intensity levels.
Let us denote $(X_{\mathrm{Fix}}, Y_{\mathrm{Fix}})$ as the coordinates of points in the fixed image and $(X_{\mathrm{Mov}}, Y_{\mathrm{Mov}})$ as the (pixel, line) coordinates of the corresponding points in the moving image undergoing registration. The relationship between $(X_{\mathrm{Fix}}, Y_{\mathrm{Fix}})$ and $(X_{\mathrm{Mov}}, Y_{\mathrm{Mov}})$ can be expressed as follows:

$$X_{\mathrm{Fix}} = f(X_{\mathrm{Mov}}, Y_{\mathrm{Mov}}), \qquad Y_{\mathrm{Fix}} = g(X_{\mathrm{Mov}}, Y_{\mathrm{Mov}})$$
where f and g represent the functions that accurately represent the relationship between the coordinates of the two images [11].
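As an illustration (not taken from the paper), the mapping functions f and g can be instantiated as, for example, a similarity transform (rotation, isotropic scale, translation); the parameter values below are arbitrary:

```python
import math

def similarity_mapping(x_mov, y_mov, theta=0.1, s=1.05, tx=2.0, ty=-1.0):
    """One possible concrete choice of the functions f and g:
    a similarity transform with rotation theta, scale s, and
    translation (tx, ty). Parameter values are illustrative only."""
    x_fix = s * (math.cos(theta) * x_mov - math.sin(theta) * y_mov) + tx
    y_fix = s * (math.sin(theta) * x_mov + math.cos(theta) * y_mov) + ty
    return x_fix, y_fix

# Sanity check: with no rotation, unit scale, and zero translation,
# the mapping reduces to the identity
assert similarity_mapping(3.0, 4.0, theta=0.0, s=1.0, tx=0.0, ty=0.0) == (3.0, 4.0)
```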
Within the realm of image registration, researchers systematically categorize methodologies into two primary groups: area-based and feature-based methods. Area-based methods are preferable when there are no prominent details in an image and the identifying feature is gray level/color rather than local shape. An example of an area-based method is correlation-based registration, which is extensively used for registering monomodal images, particularly in medical applications [12]. Conversely, feature-based matching methods become imperative when image intensities contain richer local structural information. These methods leverage feature extraction algorithms to discern and extract essential image features [13]. An example of a feature-based method is the scale invariant feature transform (SIFT) algorithm [14], which is widely used for its robustness in detecting and describing local features in images.
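The correlation-based similarity idea behind area-based methods can be sketched as a normalized cross-correlation (NCC) between two intensity patches; this is a minimal pure-Python illustration, not a full registration routine:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equally sized intensity
    patches (flattened to lists). Returns a value in [-1, 1]; a value of
    1 indicates a perfect linear intensity relationship, which is why NCC
    is robust to uniform brightness changes in monomodal registration."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a) *
                    sum((y - mean_b) ** 2 for y in b))
    return num / den if den else 0.0

patch = [10, 20, 30, 40]
brighter = [15, 25, 35, 45]   # same pattern, uniformly brighter
assert abs(ncc(patch, brighter) - 1.0) < 1e-9
```

In practice the NCC is evaluated over a search window of candidate displacements, and the displacement maximizing it is taken as the local alignment.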
As we progress into the broader context of image registration frameworks, it becomes evident that three pivotal components play an indispensable role: alignment metrics, transformation models, and optimization methods. Alignment metrics furnish a quantitative measure of image similarity, guiding the entire registration process. Transformation models define the spatial mapping between images, enabling alignment to occur seamlessly. Optimization methods diligently refine the transformation parameters based on selected alignment metrics, striving to minimize registration error and enhance alignment accuracy. Each step contributes significantly to the overall success of image registration, highlighting the delicate balance between accuracy and computational efficiency in this critical field of study. Figure 2 provides a comprehensive overview of the image registration process, illustrating both classical and deep learning approaches. In the following sections, we will delve into each of these components.

2.1. Alignment Metrics

Alignment metrics are fundamental in image registration, quantifying the similarity or dissimilarity between images to establish meaningful correspondence. These metrics serve dual purposes: evaluating the registration process and optimizing the registration parameters. Traditionally, hand-crafted features have guided registration but face limitations in matching diverse image data. Recent advancements have introduced learning-based methods that select features from a large pool to characterize each pixel [15].
In the context of image registration, alignment metrics utilize feature extraction techniques to detect distinctive key points like corners or blobs. Key point selection can be manual or automatic. Manual selection is time-consuming and prone to inaccuracies; hence, automatic feature selection algorithms without human intervention have been proposed. It is crucial for these algorithms to be applicable across different applications without being overly specific [16].
Among automatic feature detection methods, several algorithms have gained prominence, including binary robust invariant scalable key points (BRISK) [17], the scale invariant feature transform (SIFT) algorithm [14], features from accelerated segment test (FAST) [18], binary robust independent elementary features (BRIEF) [19], accelerated-KAZE (AKAZE) [20], and oriented FAST and rotated BRIEF (ORB) [21]. Notably, BRISK efficiently detects features with robust description, making it well suited for image registration tasks. AKAZE, an accelerated version of KAZE, excels in capturing both scale and space information, ensuring enhanced repeatability and robustness. The next algorithm, ORB, demonstrates speed and robustness, proving suitable for real-time applications, including image registration tasks. These feature detection methods, especially BRISK, AKAZE, and ORB, are widely used in computer vision, showcasing effectiveness across diverse applications and positioning them as valuable tools for accurate alignment and correspondence estimations in image registration tasks.
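The core of the FAST detector mentioned above is a segment test on a ring of pixels around each candidate. The following simplified sketch assumes the 16-pixel Bresenham ring has already been sampled into a list; real FAST additionally uses a high-speed pretest and non-maximum suppression:

```python
def fast_like_corner(ring, center, t=10, n_required=9):
    """Simplified FAST-style segment test: the candidate pixel is a
    corner if at least n_required CONTIGUOUS ring pixels are all
    brighter than center + t or all darker than center - t.
    `ring` is assumed to be the 16 intensities on the Bresenham circle
    around the candidate; a sketch of the idea only."""
    for sign in (1, -1):            # check bright arcs, then dark arcs
        run = 0
        for p in ring + ring:       # walk twice so wrap-around runs count
            if sign * (p - center) > t:
                run += 1
                if run >= n_required:
                    return True
            else:
                run = 0
    return False

bright_arc = [200] * 10 + [100] * 6   # 10 contiguous bright ring pixels
assert fast_like_corner(bright_arc, center=100) is True
assert fast_like_corner([100] * 16, center=100) is False
```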

2.2. Transformation Models

In the realm of image registration, the primary goal is to align related images by establishing correspondences between their features and applying transformations. Transformations are essential tools in this process, as they enable the adjustment of one image to match the spatial configuration of another. The need for transformations arises from the inherent variability in images acquired at different times, angles, or using diverse devices. These variabilities may include differences in scale, orientation, and spatial position, making it imperative to apply transformations that can rectify such discrepancies.
Geometric transformations, such as rigid, affine, projective, and nonlinear transformations, serve as mechanisms to bring images into spatial congruence. Rigid transformation is a fundamental technique characterized by its ability to preserve distances and angles. This transformation is particularly suitable for aligning images with minimal deformations, making it a valuable tool in scenarios where maintaining the structural integrity of the objects or subjects within the images is crucial. Common applications include aligning medical images for tasks such as tracking changes over time or facilitating a comparative analysis [22]. Affine transformation is a versatile method incorporating translation, rotation, scaling, and shearing. This flexibility allows for a broad range of adjustments during the alignment process. Affine transformations are well suited for applications where a more comprehensive adjustment is needed, such as aligning images with varying sizes, orientations, or spatial relationships. Medical imaging, computer vision, and remote sensing are areas where affine transformations find extensive use [23]. Projective transformation is essential for dealing with perspective distortions and more complex spatial changes in images. This transformation accommodates more complex spatial changes, making it valuable for applications where the scenes captured in images exhibit significant depth variations or non-uniform perspectives. Fields such as computer graphics, virtual reality, and architectural imaging often benefit from projective transformations to rectify distortions introduced by the viewing perspective [24]. Nonlinear deformations cater to intricate transformations beyond the capabilities of the rigid and affine methods, making them indispensable for scenarios involving complex anatomical changes. 
Nonlinear deformation allows for a more nuanced adjustment of the spatial relationships within the images, making it indispensable in tasks where subtle and intricate transformations are critical for accurate registration [25,26,27]. Figure 3 illustrates various transformations applied to an original image.
The choice of a particular spatial transformation depends on the characteristics of the images being registered and the specific requirements of the application. In certain cases, a combination of different transformation models may be necessary to achieve optimal registration results [28]. By employing these transformations, image registration achieves the crucial task of bringing images into a common spatial framework, facilitating meaningful comparisons, analyses, and interpretations in various domains, such as medical imaging, computer vision, and remote sensing.
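The rigid, affine, and projective models above can all be expressed as 3x3 matrices acting on homogeneous coordinates, which is why registration toolkits handle them with one mechanism. A small sketch with illustrative matrices (the numeric values are arbitrary examples):

```python
import math

def apply_homography(H, x, y):
    """Apply a 3x3 transform in homogeneous coordinates to point (x, y).
    Rigid and affine matrices have last row (0, 0, 1), so w stays 1;
    projective matrices use a nonzero last row to model perspective."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

theta = math.pi / 2  # 90-degree rotation
rigid = [[math.cos(theta), -math.sin(theta), 1.0],
         [math.sin(theta), math.cos(theta), 2.0],
         [0.0, 0.0, 1.0]]                 # rotation + translation only
affine = [[1.5, 0.2, 0.0],
          [0.0, 0.8, 0.0],
          [0.0, 0.0, 1.0]]                # adds scaling and shear
projective = [[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.001, 0.0, 1.0]]          # nonzero last row -> perspective

x, y = apply_homography(rigid, 1.0, 0.0)  # rotate (1, 0) then shift by (1, 2)
assert abs(x - 1.0) < 1e-9 and abs(y - 3.0) < 1e-9
```

Nonlinear deformations, by contrast, cannot be captured by a single matrix and are usually parameterized as dense displacement fields or spline grids.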

2.3. Optimization Methods

Image registration involves solving an optimization problem to achieve accurate alignment between images. The objective is to minimize dissimilarity or maximize similarity measures by identifying the optimal set of transformation parameters [29].
Advanced optimization algorithms commonly employed in image registration include numerical Gauss–Newton minimization and gradient descent optimization. These methods iteratively update the transformation parameters by computing the gradient or the Hessian matrix of the dissimilarity measure. Navigating the parameter space by following the gradient direction or using a suitable step size allows these algorithms to converge towards the optimal solution. Another optimization method is the Levenberg–Marquardt algorithm, particularly useful for reducing pixel intensity variance in the registration process. By incorporating a regularization term and balancing between the Gauss–Newton and gradient descent steps, the Levenberg–Marquardt algorithm improves the stability and convergence of the optimization process [30,31,32].
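The gradient descent loop described above can be illustrated on a toy 1-D problem: recovering a translation between a fixed signal and a shifted moving signal by minimizing the sum of squared differences (SSD). The learning rate and the linear test signal are hand-picked for this sketch; real pipelines use analytic gradients, line searches, or Gauss-Newton steps.

```python
def ssd(shift, fixed, moving_fn):
    """Sum of squared differences between fixed-image samples and the
    moving signal resampled at the shifted positions."""
    return sum((fixed[i] - moving_fn(i - shift)) ** 2
               for i in range(len(fixed)))

def register_1d(fixed, moving_fn, lr=0.01, steps=200):
    """Recover a 1-D translation by gradient descent on the SSD, using
    a central-difference numeric gradient. A toy sketch of the
    optimization loop, not a production optimizer."""
    shift, eps = 0.0, 1e-5
    for _ in range(steps):
        grad = (ssd(shift + eps, fixed, moving_fn) -
                ssd(shift - eps, fixed, moving_fn)) / (2 * eps)
        shift -= lr * grad
    return shift

def moving(t):                        # linear intensity ramp
    return 2.0 * t

fixed = [moving(i - 3.0) for i in range(12)]  # same ramp, shifted by 3
assert abs(register_1d(fixed, moving) - 3.0) < 1e-6
```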
During optimization, various errors can occur, as shown in Figure 4, including [33]:
  • Localization error: Arises when a key point is not accurately detected, causing its coordinates to shift. It can be minimized by using a good detection algorithm.
  • Matching error: Measures the quality of key point detections by counting the number of false matches found between control point candidates.
  • Alignment error: Arises when there are geometric distortions between images that deviate from the expected mapping model used for registration. This deviation can be quantified at the key point by calculating the mean squared error, providing a measure of the discrepancy between the actual and expected positions.
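The alignment-error quantification described above reduces to a mean squared error between expected (model-predicted) and actual key-point positions; a minimal sketch with invented coordinates:

```python
def alignment_mse(expected, actual):
    """Mean squared error between expected (model-predicted) and actual
    key-point positions, as described for alignment error. Points are
    (x, y) tuples, and the two lists must correspond pairwise."""
    assert len(expected) == len(actual)
    total = sum((ex - ax) ** 2 + (ey - ay) ** 2
                for (ex, ey), (ax, ay) in zip(expected, actual))
    return total / len(expected)

expected = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
actual = [(0.0, 1.0), (10.0, 0.0), (1.0, 10.0)]
# Two of the three points are off by one pixel -> MSE = (1 + 0 + 1) / 3
assert abs(alignment_mse(expected, actual) - 2.0 / 3.0) < 1e-12
```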
In conclusion, general image registration frameworks offer a systematic approach to aligning images through the consideration of alignment metrics, transformation models, and optimization methods. Ongoing research aims to enhance the adaptability, accuracy, and efficiency in image registration, contributing to the advancement of versatile and robust computer vision systems for applications such as image restoration, target tracking, image stitching, image fusion, 3D reconstruction, and pattern recognition. Addressing the challenges associated with image registration is crucial for pushing the boundaries of computer vision development.

3. Classification of Image Registration Techniques

Image registration is both theoretically important and practically useful. There are a number of criteria that can be used to classify image registration techniques and methods. Figure 5 illustrates a classification of registration methods, presenting a comprehensive scheme that highlights five main subsets. These subsets offer a structured framework for categorizing and analyzing the diverse range of image registration techniques available. The modality-driven approaches and transformation methods, having been discussed in previous sections, will not be reiterated here. The remaining classification categories are as follows:
  • Interaction [34]:
    • Interactive: Involving active user participation, interactive registration allows users to guide the registration process actively. This approach is beneficial when human expertise and intuition are crucial in achieving accurate and meaningful alignments.
    • Semi-Automated: Combining user input and automated algorithms, semi-automated registration strikes a balance between user expertise and computational efficiency. This approach is suitable for tasks where human input enhances the registration process.
    • Feedback Integration: Incorporating user feedback during the registration process for real-time adjustments, this method ensures continuous refinement based on user input. Real-time adjustments contribute to the accuracy and efficiency of the registration outcome.
    • Fully Automatic: Requiring minimal to no user intervention, fully automatic registration relies entirely on automated algorithms. This approach is suitable for scenarios where the registration task is well defined and can be efficiently executed without human intervention.
  • Classical Methods:
    • Correlation-based: This method is extensively used for registering monomodal images, playing a crucial role in medical applications. By computing the correlation between images, similarities and discrepancies are identified, allowing for image alignment essential for further analysis and evaluation in medical contexts [12].
    • Entropy-based: Explores measures of entropy to quantify the information content and patterns within images. Particularly useful in multimodal image registration, this method enhances accuracy by incorporating spatial information into entropy estimation. This innovation aims to provide more reliable results, especially in scenarios where traditional methods may fall short [35].
    • Mutual information-based: Leveraging pyramid and window-based techniques, mutual information-based methods estimate the probability of comparable voxels in registered images. This facilitates the alignment of images by maximizing the mutual information, offering an effective approach for establishing correspondence between features and achieving accurate alignment [36].
    • Wavelet-based: This method utilizes wavelet techniques, offering both time and frequency selectivity. Wavelet-based techniques provide a valuable approach for accurate and efficient image registration across different scales and frequencies. The ability to capture information at various resolutions enhances the method’s adaptability to diverse scenarios [37].
    • Contour-based: Involves matching image feature points using statistical features that describe object contours. With applications in remote sensing, medical imaging, and object recognition, this method leverages the distinctive properties of object boundaries to establish accurate correspondences and align images effectively [38].
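The mutual information metric listed among the classical methods can be estimated from a joint intensity histogram. The following minimal sketch omits the pyramid and windowing refinements cited in the text, and uses small integer intensity lists for illustration:

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """Mutual information between two equal-length intensity lists,
    estimated from their joint histogram:
        MI = sum p(a, b) * log2( p(a, b) / (p(a) * p(b)) ).
    Higher MI indicates stronger statistical dependence between the
    images, which is why MI is the workhorse of multimodal alignment."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa = Counter(img_a)
    pb = Counter(img_b)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * math.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

# A deterministic one-to-one intensity mapping gives maximal MI, even
# though the gray values themselves differ -- mimicking two modalities
# that depict the same structure with different contrasts.
a = [0, 0, 1, 1, 2, 2]
b = [5, 5, 7, 7, 9, 9]
assert abs(mutual_information(a, b) - math.log2(3)) < 1e-12
```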
  • Learning-Based Methods:
Learning-based methods in image registration have gained significant traction in recent years. These approaches leverage machine learning and deep learning techniques to improve registration accuracy, speed, and adaptability across various imaging modalities and applications. The evolution of learning-based methods has been driven by advancements in computational power, the availability of large-scale datasets, and innovations in neural network architectures. Common training approaches include supervised learning, where models learn from paired pre-aligned images; unsupervised learning, which does not require pre-aligned data; and semi-supervised techniques, which combine both approaches [39,40].
These approaches rely heavily on the datasets used for training, which vary widely, depending on the specific application. In medical imaging, datasets may include multimodal brain MRI scans or chest CT images, while remote sensing applications often use satellite imagery. The choice of dataset significantly influences the model performance and generalizability, presenting key challenges such as ensuring robustness in different image qualities and modalities, handling large deformations, and maintaining computational efficiency. Recent research has focused on addressing these challenges through novel network architectures, loss functions, and training strategies, aiming to enhance the versatility and effectiveness of learning-based image registration methods.
In the following, specific approaches within the learning-based methods will be explored:
  • Soft Computing-Based: Integrates artificial neural networks [41], genetic algorithms [42], and fuzzy sets [43] to achieve precise image alignment. This sophisticated approach combines computational intelligence techniques with optimization strategies, resulting in improved accuracy and robustness in image registration outcomes.
  • Curvatures-Based: Another approach in image registration involves curvatures-based methods, which focus on registering a sequence of corresponding points and searching for dimensional projection radiographs. These methods aim to find an optimal fit of the local curvatures in two curves to achieve accurate alignment. This approach is particularly useful in scenarios where the shapes or contours of objects are critical for alignment, such as in medical imaging or object recognition tasks [44].
  • Learning-Based with CNNs: Leveraging convolutional neural networks (CNNs), this method autonomously learns registration patterns and deformable transformations directly from image data. CNNs enhance the accuracy, speed, and adaptability to various types of medical and multimodal images, making them a promising solution in modern image registration [45].
  • Generative Adversarial Networks (GANs): Utilizing GANs, this method generates registered images directly with the deformation field, achieving accurate registration in real time. GANs’ ability to learn and generate realistic images makes them a cutting-edge solution in learning-based image registration [46].
As we delve into learning-based methods for image registration, it is crucial to consider the trends observed in recent research. Figure 6 provides a comparative analysis of the number of publications in classical image registration versus deep learning image registration over the past decade (2013–2023). The plot shows how image registration techniques have evolved over time, particularly with an increased interest in deep learning.
Recent advancements in learning-based methods have explored the integration of attention mechanisms, transformer architectures, and multi-task learning approaches. These innovations aim to improve the model’s ability to focus on relevant image features, handle global and local deformations simultaneously, and leverage information from related tasks to enhance registration performance.
These diverse approaches and ongoing innovations in learning-based methods highlight the dynamic nature of image registration techniques and their integration into the broader landscape of image processing methodologies.
Considering these developments and the various methods discussed throughout this section, we can appreciate the importance of a comprehensive classification framework. In conclusion, the classification of image registration methods presented in this section offers a comprehensive framework for understanding and organizing the diverse approaches in the field. This classification framework not only aids in understanding the existing methodologies but also serves as a roadmap for future advancements, fostering innovation and progress in the dynamic field of image registration.

4. Challenges in the Image Registration Process

The process of image registration in the medical field presents a wide variety of challenges that researchers and clinicians must contend with. One of the greatest challenges is achieving a balance between efficiency, accuracy, and robustness in registration algorithms. The need for registration to be computationally efficient while still maintaining high levels of accuracy and robustness is crucial for successful image-guided surgeries and diagnostic decision-making.

4.1. Technical Challenges in Image Registration

Image registration techniques encounter various technical challenges that affect their application in medical imaging. These challenges include:
  • Complexity in Choosing and Implementing Similarity Metrics: Selecting appropriate similarity measures is a critical aspect of the registration process. Various measures, such as mutual information and the correlation coefficient, are utilized to evaluate voxel intensity differences between source and target images. However, accurately estimating these measures, especially in the presence of noise and spatial variations, poses significant challenges.
  • Outliers and Data Quality: Addressing outliers and rejecting erroneous data points is another ongoing challenge in medical image registration. Outliers can significantly impact the accuracy of registration results, particularly in image-guided surgery applications where an accurate alignment is crucial. Various methods, including robust statistical methods and outlier rejection algorithms, such as RANSAC (Random Sample Consensus), can minimize the impact of outliers. Consistency tests and intensity transformations can also be used to filter out erroneous data points, enhancing the reliability of the registration process. However, achieving robustness against outliers remains an area of active research.
  • Optimization Convergence: The convergence of optimization methods to local maxima presents a significant obstacle in the registration process. Optimization techniques play a crucial role in finding the optimal transformation between images, but the risk of converging to a local maximum can compromise registration accuracy. Developing advanced optimization strategies that avoid local maxima while improving registration performance is essential for overcoming this challenge and advancing the field of medical image registration [48].
  • Geometric Distortion: Geometric distortion, a common issue in remote sensing images, poses a significant challenge in medical imaging as well. It arises from attempts to represent three-dimensional structures in two dimensions, leading to relief displacement. Scale distortion and skew distortion are two forms of geometric distortion that can lead to misalignments, complicating the registration process.
  • Differences in Image Properties: Differences in images, such as variations in intensity, contrast, resolution, sensor characteristics, and environmental noise, present formidable challenges. Images captured at different wavelengths by diverse sensors and at various times may exhibit discrepancies in resolution, contrast, and illumination, making direct comparisons challenging. This disparity in image characteristics complicates the construction of descriptors that provide global information about feature points, hindering the matching process and contributing to misalignments during registration [49].
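To make the similarity-metric discussion above concrete, the following is a minimal NumPy sketch of mutual information estimated from a joint intensity histogram. The function name, bin count, and test images are illustrative choices for this review, not a reference implementation:

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Estimate mutual information of two equally shaped images from
    their joint intensity histogram (a plug-in estimate, always >= 0)."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability
    px = pxy.sum(axis=1)                 # marginal of the fixed image
    py = pxy.sum(axis=0)                 # marginal of the moving image
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] *
                        np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# An image shares far more information with itself than with unrelated
# noise; a registration optimizer exploits exactly this contrast.
print(mutual_information(img, img) > mutual_information(img, noise))
```

Because the measure depends only on the joint intensity distribution, not on intensities matching directly, it is the standard choice for the multimodal settings discussed in the next subsection; the histogram estimate is, however, exactly where the noise sensitivity noted above enters.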

4.2. Multimodal Registration Challenges

In the field of multimodal image registration, the types of images that can be effectively registered typically include those acquired from imaging modalities that share common anatomical structures and features, despite their inherent differences in imaging principles and characteristics. For instance, MRI and CT images are commonly registered due to their complementary strengths in soft tissue and bone visualization, respectively. Many literature reviews have highlighted successful registrations across various modalities, such as PET-CT for cancer diagnosis, where PET scans provide metabolic information and CT scans offer anatomical context, aiding in accurate tumor localization and treatment planning. Conversely, registering images from vastly different modalities, such as MRI and ultrasound, poses greater challenges due to significant differences in image resolution, artifact patterns, and acquisition principles [50]. Despite advancements, achieving robust registration in such cases remains an active area of research, with hybrid approaches integrating feature-based and intensity-based methods showing promise. Thus, while many combinations of modalities can be successfully registered, the effectiveness often depends on the specific characteristics and alignment challenges inherent to each pair of modalities.
In addressing these challenges, clinical practice often emphasizes focusing on specific regions of interest (ROIs) rather than aligning every pixel within an image. This targeted approach simplifies the registration process and enhances accuracy by directing computational efforts to clinically relevant areas. For example, aligning MRI and CT images may prioritize specific anatomical structures such as tumors or organs critical for image-guided interventions and treatment planning. By concentrating on ROIs, clinicians streamline the registration process and improve alignment precision where it matters most [51].
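As a toy illustration of this ROI-focused strategy, the sketch below restricts an exhaustive translation search to a single region of interest and scores candidates with normalized cross-correlation. The helper names, integer-only translations, and search radius are simplifying assumptions made for this example:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def register_roi(fixed, moving, roi, max_shift=5):
    """Find the integer translation that best aligns `moving` to `fixed`
    inside the region of interest (r0, r1, c0, c1) only."""
    r0, r1, c0, c1 = roi
    patch = fixed[r0:r1, c0:c1]
    best_score, best_shift = -np.inf, (0, 0)
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            cand = moving[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
            if cand.shape != patch.shape:
                continue                      # shift leaves the image
            score = ncc(patch, cand)
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift

rng = np.random.default_rng(1)
fixed = rng.random((80, 80))
moving = np.roll(fixed, shift=(3, -2), axis=(0, 1))  # simulated misalignment
print(register_roi(fixed, moving, roi=(20, 50, 20, 50)))  # recovers (3, -2)
```

Scoring only the 30 × 30 ROI rather than the full 80 × 80 frame cuts the cost of each similarity evaluation and, as noted above, concentrates the alignment accuracy where it is clinically needed.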

4.3. Interpatient and Intra-Patient Registration Challenges

Interpatient and intra-patient registration in medical imaging face distinct challenges and require specialized solutions. Interpatient registration aims to align images from different individuals, accommodating anatomical variations such as differences in organ shapes, sizes, and positions. Deformable registration techniques are crucial here, capable of modeling complex anatomical transformations influenced by factors like patient movement during scanning and variations in physiological states. Intra-patient registration, on the other hand, involves aligning images acquired from the same patient over time or under different conditions, like varying scan protocols or slice spacing. These differences in acquisition parameters can lead to discrepancies in image appearances, necessitating adaptive registration methodologies. Additionally, the variability introduced by different scan protocols, slice spacing, and scanner manufacturers poses challenges to both interpatient and intra-patient registration processes. Advanced deformable registration algorithms, leveraging anatomical priors or biomechanical models, play a critical role in addressing these challenges. These methods ensure precise alignment essential for longitudinal studies, treatment monitoring, and personalized medicine applications across diverse patient cohorts and imaging setups [52,53].
Techniques like these allow complex anatomical transformations to be modeled and adapted to variations in image acquisition parameters, leading to more accurate and robust registrations.
Nonetheless, careful consideration must be given when selecting higher-order registration techniques. While these methods have the potential to offer greater accuracy, they also risk introducing distortions if not properly constrained. It is essential to balance the desire for precise registration with the need to preserve the inherent properties of the original images. Therefore, higher-order methods should be evaluated judiciously to ensure they do not alter the fundamental image characteristics in the pursuit of optimal alignment.
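The trade-off described above can be illustrated with a toy one-dimensional, demons-style scheme: each gradient-descent update that reduces the intensity mismatch is followed by smoothing of the displacement field, which acts as the constraint that keeps a flexible model from distorting the image. All names, the step size, the update clipping, and the box-filter regularizer are illustrative assumptions, not a clinical algorithm:

```python
import numpy as np

def smooth(field, k=7):
    """Box-filter smoothing: the regularizer that keeps the displacement
    field from introducing implausible local distortions."""
    return np.convolve(field, np.ones(k) / k, mode="same")

def deformable_1d(fixed, moving, iters=100, step=1.0, eps=1e-2):
    """Toy 1D deformable registration: descend on the intensity mismatch,
    smoothing the displacement field after every update."""
    idx = np.arange(fixed.size, dtype=float)
    disp = np.zeros_like(fixed)
    for _ in range(iters):
        warped = np.interp(idx + disp, idx, moving)   # current warp
        grad = np.gradient(warped)                    # local image slope
        update = (fixed - warped) * grad / (grad ** 2 + eps)
        disp = smooth(disp + step * np.clip(update, -2.0, 2.0))
    return np.interp(idx + disp, idx, moving), disp

x = np.linspace(0.0, 1.0, 200)
fixed = np.exp(-(((x - 0.50) / 0.05) ** 2))    # reference structure
moving = np.exp(-(((x - 0.55) / 0.05) ** 2))   # same structure, displaced
warped, disp = deformable_1d(fixed, moving)
# The regularized field should reduce the mismatch substantially.
print(np.abs(fixed - warped).mean() < np.abs(fixed - moving).mean())
```

Removing the `smooth` call lets every sample move independently, which is exactly the unconstrained behavior the preceding paragraph cautions against: intensities can be matched perfectly while the underlying geometry becomes physically implausible.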
In conclusion, the challenges in medical image registration are varied and complex, requiring ongoing innovation and technique improvement. By developing and refining methodologies that account for diverse imaging modalities and patient-specific anatomical variations, researchers and clinicians can enhance the accuracy and reliability of image-based diagnostic and therapeutic interventions.

5. Summary

Image registration is a fundamental process with wide-ranging applications, crucial for change detection, target identification, and medical diagnosis. It addresses challenges in aligning images acquired at different times, angles, or devices, offering solutions through multiview, multitemporal, and multimodal analysis techniques.
Medical image registration plays a crucial role in modern healthcare by enabling the fusion of various imaging modalities for the comprehensive analysis and diagnosis of biological structures. With the increasing use of imaging techniques like X-ray, ultrasound, MRI, and CT scans, the demand for accurate and reliable registration methods has grown significantly. This paper provides an in-depth examination of the challenges and techniques involved in medical image registration, aiming to contribute to the advancement of this critical field.
The paper begins by highlighting the importance of medical image registration in integrating diverse imaging modalities for improved diagnosis and treatment planning. It outlines different types of image registration, including unimodal, multimodal, interpatient, and intra-patient registration, emphasizing their significance in various medical applications. The classification of image registration methods based on modalities, transformation types, and interaction levels is discussed, providing insights into the diverse approaches used in this field.
Moreover, the challenges encountered in medical image registration, such as geometric distortion, differences in image properties, outliers, and optimization convergence, are thoroughly examined. Strategies for addressing these challenges, including advanced transformation models, robust similarity measures, and optimization algorithms, are explored. The paper underscores the need for ongoing innovation and refinement of techniques to enhance the accuracy and reliability of medical image registration systems.
Moving forward, further research is necessary to investigate emerging trends in medical image registration, such as the integration of deep learning techniques, the development of novel similarity measures, and the enhancement of optimization algorithms.

Author Contributions

F.D.: Formal analysis, Methodology, Writing—Original Draft, and Writing—Review and Editing; T.B.: Conceptualization, Methodology, Resources, Writing—Review and Editing, Supervision, Project administration, and Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the BMBF funding program Photonics Research Germany (13N15706 (LPI-BT2-FSU), 13N15710 (LPI-BT3), and LPI-BT5-FSU (LPI-BT5)) and was integrated into the Leibniz Center for Photonics in Infection Research (LPI). The LPI initiated by Leibniz-IPHT, Leibniz-HKI, Friedrich Schiller University Jena, and Jena University Hospital was part of the BMBF national roadmap for research infrastructures. Co-funded by the European Union (ERC, STAIN-IT, 101088997). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. We acknowledge support by the German Research Foundation Projekt-Nr. 512648189 and the Open Access Publication Fund of the Thueringer Universitaets- und Landesbibliothek Jena.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, Q.; Song, R.; Ma, X.; Liu, X. A robust registration algorithm for image-guided surgical robot. IEEE Access 2018, 6, 42950–42960. [Google Scholar] [CrossRef]
  2. Hammoudeh, A.; Dupont, S. Deep learning in medical image registration: Introduction and survey. arXiv 2024, arXiv:2309.00727. [Google Scholar] [CrossRef]
  3. Baum, Z.M.; Hu, Y.; Barratt, D.C. Meta-Learning Initializations for Interactive Medical Image Registration. IEEE Trans. Med. Imaging 2022, 42, 823–833. [Google Scholar] [CrossRef] [PubMed]
  4. Liu, J.; Singh, G.; Al’Aref, S.; Lee, B.; Oleru, O.; Min, J.K.; Dunham, S.; Sabuncu, M.R.; Mosadegh, B. Image Registration in Medical Robotics and Intelligent Systems: Fundamentals and Applications. Adv. Intell. Syst. 2019, 1, 1900048. [Google Scholar] [CrossRef]
  5. Chaabane, M.; Koller, B. A Systematic Literature Review on Multi-modal Medical Image Registration. In Proceedings of the Service-Oriented Computing—ICSOC 2022 Workshops, Sevilla, Spain, 29 November–2 December 2023; Springer: Cham, Switzerland, 2023; Volume 13821, pp. 111–126. [Google Scholar] [CrossRef]
  6. Jiang, X.; Ma, J.; Xiao, G.; Shao, Z.; Guo, X. A review of multimodal image matching: Methods and applications. Inf. Fusion 2021, 73, 22–71. [Google Scholar] [CrossRef]
  7. Fu, Y.; Lei, Y.; Wang, T.; Curran, W.J.; Liu, T.; Yang, X. Deep learning in medical image registration: A review. Phys. Med. Biol. 2020, 65, 20TR01. [Google Scholar] [CrossRef] [PubMed]
  8. Boveiri, H.R.; Khayami, R.; Javidan, R.; Mehdizadeh, A. Medical Image Registration Using Deep Neural Networks: A Comprehensive Review. Comput. Electr. Eng. 2020, 87, 106767. [Google Scholar] [CrossRef]
  9. Tetar, S.U.; Bruynzeel, A.M.; Oei, S.S.; Senan, S.; Fraikin, T.; Slotman, B.J.; van Moorselaar, R.J.A.; Lagerwaard, F.J. Magnetic resonance-guided stereotactic radiotherapy for localized prostate cancer: Final results on patient-reported outcomes of a prospective phase 2 study. Eur. Urol. Oncol. 2021, 4, 628–634. [Google Scholar] [CrossRef] [PubMed]
  10. Teatini, A.; Pelanis, E.; Aghayan, D.L.; Kumar, R.P.; Palomar, R.; Fretland, Å.A.; Edwin, B.; Elle, O.J. The effect of intraoperative imaging on surgical navigation for laparoscopic liver resection surgery. Sci. Rep. 2019, 9, 18687. [Google Scholar] [CrossRef] [PubMed]
  11. Nagaraju, N.; Savitri, T.S.; Swamy, C.A. Image Registration Using Scale Invariant Feature Transform. Int. J. Sci. Eng. Technol. 2013, 2, 675–680. [Google Scholar]
  12. Dong, Y.; Jiao, W.; Long, T.; He, G.; Gong, C. An Extension of Phase Correlation-Based Image Registration to Estimate Similarity Transform Using Multiple Polar Fourier Transform. Remote Sens. 2018, 10, 1719. [Google Scholar] [CrossRef]
  13. Tondewad, M.P.S.; Dale, M.M.P. Remote sensing image registration methodology: Review and discussion. Procedia Comput. Sci. 2020, 171, 2390–2399. [Google Scholar] [CrossRef]
  14. Shaharom, M.F.M.; Tahar, K.N. Multispectral Image Matching Using SIFT and SURF Algorithm: A Review. Int. J. Geoinform. 2023, 19, 13–21. [Google Scholar] [CrossRef]
  15. Niethammer, M.; Kwitt, R.; Vialard, F.-X. Metric learning for image registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8463–8472. [Google Scholar] [CrossRef]
  16. Hu, J.; Luo, Z.; Wang, X.; Sun, S.; Yin, Y.; Cao, K.; Song, Q.; Lyu, S.; Wu, X. End-to-end multimodal image registration via reinforcement learning. Med. Image Anal. 2021, 68, 101878. [Google Scholar] [CrossRef] [PubMed]
  17. Soualmi, A.; Benhocine, A.; Midoun, I. Artificial Bee Colony-Based Blind Watermarking Scheme for Color Images Alter Detection Using BRISK Features and DCT. Arab J. Sci. Eng. 2023, 49, 3253–3266. [Google Scholar] [CrossRef]
  18. Semma, A.; Hannad, Y.; Siddiqi, I.; Djeddi, C.; El Kettani, M.E.Y. Writer identification using deep learning with fast keypoints and harris corner detector. Expert Syst. Appl. 2021, 184, 115473. [Google Scholar] [CrossRef]
  19. Liu, C.; Xu, J.; Wang, F. A review of keypoints’ detection and feature description in image registration. Sci. Program. 2021, 2021, 8509164. [Google Scholar] [CrossRef]
  20. Bedruz, R.A.R.; Fernando, A.; Bandala, A.; Sybingco, E.; Dadios, E. Vehicle classification using AKAZE and feature matching approach and artificial neural network. In Proceedings of the TENCON 2018—2018 IEEE Region 10 Conference, Jeju, Republic of Korea, 28–31 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1824–1827. [Google Scholar] [CrossRef]
  21. Bansal, M.; Kumar, M.; Kumar, M. 2D object recognition: A comparative analysis of SIFT, SURF and ORB feature descriptors. Multimed. Tools Appl. 2021, 80, 18839–18857. [Google Scholar] [CrossRef]
  22. Zhang, Z.; Sun, J.; Dai, Y.; Zhou, D.; Song, X.; He, M. Self-supervised Rigid Transformation Equivariance for Accurate 3D Point Cloud Registration. Pattern Recognit. 2022, 130, 108784. [Google Scholar] [CrossRef]
  23. Arora, P.; Mehta, R.; Ahuja, R. An Adaptive Medical Image Registration Using Hybridization of Teaching Learning-Based Optimization with Affine and Speeded Up Robust Features with Projective Transformation. Cluster Comput. 2023, 27, 607–627. [Google Scholar] [CrossRef]
  24. Wu, K. Creating Panoramic Images Using ORB Feature Detection and RANSAC-Based Image Alignment. Adv. Comput. Commun. 2023, 4, 220–224. [Google Scholar] [CrossRef]
  25. Rigaud, B.; Simon, A.; Castelli, J.; Lafond, C.; Acosta, O.; Haigron, P.; Cazoulat, G.; de Crevoisier, R. Deformable image registration for radiation therapy: Principle, methods, applications and evaluation. Acta Oncol. 2019, 58, 1225–1237. [Google Scholar] [CrossRef] [PubMed]
  26. Saha, S.K.; Xiao, D.; Bhuiyan, A.; Wong, T.Y.; Kanagasingam, Y. Color fundus image registration techniques and applications for automated analysis of diabetic retinopathy progression: A review. Biomed. Signal Process. Control 2019, 47, 288–302. [Google Scholar] [CrossRef]
  27. Zhao, P.; Chen, X.; Tang, S.; Xu, Y.; Yu, M.; Xu, P. Fast Recognition and Localization of Electric Vehicle Charging Socket Based on Deep Learning and Affine Correction. In Proceedings of the 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), Jinghong, China, 5–9 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 2140–2145. [Google Scholar] [CrossRef]
  28. Khan, S.U.; Khan, M.A.; Azhar, M.; Khan, F.; Javed, M. Multimodal Medical Image Fusion Towards Future Research: A Review. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 101733. [Google Scholar] [CrossRef]
  29. Song, G.; Han, J.; Zhao, Y.; Wang, Z.; Du, H. A Review on Medical Image Registration as an Optimization Problem. Curr. Med. Imaging 2017, 13, 274–283. [Google Scholar] [CrossRef] [PubMed]
  30. Chaudhury, A. Multilevel Optimization for Registration of Deformable Point Clouds. IEEE Trans. Image Process. 2020, 29, 8735–8746. [Google Scholar] [CrossRef] [PubMed]
  31. Tward, D.J. An Optical Flow Based Left-Invariant Metric for Natural Gradient Descent in Affine Image Registration. Front. Appl. Math. Stat. 2021, 7, 718607. [Google Scholar] [CrossRef]
  32. Dong, J.; Lu, K.; Xue, J.; Dai, S.; Zhai, R.; Pan, W. Accelerated Nonrigid Image Registration Using Improved Levenberg–Marquardt Method. Inf. Sci. 2018, 423, 66–79. [Google Scholar] [CrossRef]
  33. Zitova, B.; Flusser, J. Image Registration Methods: A Survey. Image Vis. Comput. 2003, 21, 977–1000. [Google Scholar] [CrossRef]
  34. El-Gamal, F.E.-Z.A.; Elmogy, M.; Atwan, A. Current Trends in Medical Image Registration and Fusion. Egypt. Inform. J. 2016, 17, 99–124. [Google Scholar] [CrossRef]
  35. Sabuncu, M.R. Entropy-Based Image Registration. Ph.D. Dissertation, Princeton University, Princeton, NJ, USA, 2006. [Google Scholar]
  36. Qiu, H.; Qin, C.; Schuh, A.; Hammernik, K.; Rueckert, D. Learning Diffeomorphic and Modality-Invariant Registration Using B-Splines. In Proceedings of the Medical Imaging with Deep Learning, Lübeck, Germany, 7–9 July 2021; Volume 143, pp. 645–664. [Google Scholar]
  37. Nanavati, M.; Shah, M. Performance Comparison of Different Wavelet-Based Image Fusion Techniques for Lumbar Spine Images. J. Integr. Sci. Technol. 2024, 12, 703. [Google Scholar]
  38. Loi, G.; Fusella, M.; Lanzi, E.; Cagni, E.; Garibaldi, C.; Iacoviello, G.; Lucio, F.; Menghi, E.; Miceli, R.; Orlandini, L.C.; et al. Performance of Commercially Available Deformable Image Registration Platforms for Contour Propagation Using Patient-Based Computational Phantoms: A Multi-Institutional Study. Med. Phys. 2018, 45, 748–757. [Google Scholar] [CrossRef] [PubMed]
  39. Haskins, G.; Kruger, U.; Yan, P. Deep learning in medical image registration: A survey. Mach. Vis. Appl. 2020, 31, 8. [Google Scholar] [CrossRef]
  40. Chen, X.; Diaz-Pinto, A.; Ravikumar, N.; Frangi, A.F. Deep learning in medical image registration. Prog. Biomed. Eng. 2021, 3, 012003. [Google Scholar] [CrossRef]
  41. Ho, T.T.; Kim, W.J.; Lee, C.H.; Jin, G.Y.; Chae, K.J.; Choi, S. An Unsupervised Image Registration Method Employing Chest Computed Tomography Images and Deep Neural Networks. Comput. Biol. Med. 2023, 154, 106612. [Google Scholar] [CrossRef] [PubMed]
  42. Arif, M.; Wang, G. Fast Curvelet Transform through Genetic Algorithm for Multimodal Medical Image Fusion. Soft Comput. 2020, 24, 1815–1836. [Google Scholar] [CrossRef]
  43. Orujov, F.; Maskeliūnas, R.; Damaševičius, R.; Wei, W. Fuzzy Based Image Edge Detection Algorithm for Blood Vessel Detection in Retinal Images. Appl. Soft Comput. 2020, 94, 106452. [Google Scholar] [CrossRef]
  44. Sun, J.; Zhang, J.; Zhang, G. An Automatic 3D Point Cloud Registration Method Based on Regional Curvature Maps. Image Vis. Comput. 2016, 56, 49–58. [Google Scholar] [CrossRef]
  45. Islam, K.T.; Wijewickrema, S.; O’Leary, S. A Deep Learning Based Framework for the Registration of Three-Dimensional Multi-Modal Medical Images of the Head. Sci. Rep. 2021, 11, 1860. [Google Scholar] [CrossRef] [PubMed]
  46. Mahapatra, D.; Antony, B.; Sedai, S.; Garnavi, R. Deformable Medical Image Registration Using Generative Adversarial Networks. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1449–1453. [Google Scholar] [CrossRef]
  47. Scopus. Available online: https://www.scopus.com (accessed on 12 June 2024).
  48. Alam, F.; Rahman, S.U.; Ullah, S.; Gulati, K. Medical Image Registration in Image Guided Surgery: Issues, Challenges and Research Opportunities. Biocybern. Biomed. Eng. 2018, 38, 71–89. [Google Scholar] [CrossRef]
  49. Faiza, B.; Yuhaniz, S.S.; Hashim, S.Z.M.; AbdulRahman, K.K. A Review and Analysis of Image Misalignment Problem in Remote Sensing. Int. J. Sci. Eng. Res. 2012, 3, 82–86. [Google Scholar]
  50. Noble, S.; Scheinost, D.; Finn, E.S.; Shen, X.; Papademetris, X.; McEwen, S.C.; Bearden, C.E.; Addington, J.; Goodyear, B.; Cadenhead, K.S.; et al. Multisite reliability of MR-based functional connectivity. Neuroimage. 2017, 146, 959–970. [Google Scholar] [CrossRef] [PubMed]
  51. Liu, F.-Y.; Chen, C.-C.; Chen, S.-C.; Liao, C.-H. A Practical Framework for ROI Detection in Medical Images—A case study for hip detection in anteroposterior pelvic radiographs. arXiv 2021, arXiv:2103.01584. [Google Scholar] [CrossRef]
  52. Monti, S.; Pacelli, R.; Cella, L.; Palma, G. Inter-patient image registration algorithms to disentangle regional dose bioeffects. Sci. Rep. 2018, 8, 4915. [Google Scholar] [CrossRef] [PubMed]
  53. Zou, J.; Liu, J.; Choi, K.-S.; Qin, J. Intra-Patient Lung CT Registration through Large Deformation Decomposition and Attention-Guided Refinement. Bioengineering 2023, 10, 562. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Illustration of different types of image registration in medical imaging. The figure displays four quadrants representing various registration scenarios. The top-left quadrant depicts multimodal image registration, aligning images acquired from different sensors or modalities. The top-right quadrant shows unimodal registration, aligning images obtained from the same modality or sensor. The bottom-left quadrant illustrates interpatient registration, aligning images from different individuals. Finally, the bottom-right quadrant represents intra-patient registration, aligning images obtained from the same individual over time, either on the same day or on different days.
Figure 2. General framework of image registration. This figure presents a comprehensive overview of the image registration process, showcasing both classical and deep learning approaches. The framework includes two input images, a fixed image ( I F ) and a moving image ( I M ). In the classical registration area, alignment metrics, transformation, and optimization are depicted as key components, emphasizing their iterative interaction for achieving accurate alignment. Additionally, the deep learning registration section showcases the inclusion of a loss function, implicit transformation, and optimization within the iterative process.
Figure 3. Different types of transformation. This figure illustrates various transformations applied to an original image. (1) Rigid transformation preserves distances and angles; (2) affine transformation incorporates translation, rotation, scaling, and shearing; (3) projective transformation accommodates perspective distortions; and (4) nonlinear deformation addresses intricate deformations. The original image is displayed alongside the transformed versions for comparison.
Figure 4. This figure illustrates three common types of errors encountered during image registration. In the left panel, a matching error is depicted where key points (denoted by colored markers) in the fixed image fail to align precisely with the corresponding key points in the moving image. This discrepancy underscores the challenge of achieving accurate correspondence between key points in image registration. In the center panel, an alignment error is observed: the vertebral column exhibits misalignment when comparing images, as indicated by the highlighted deviation (red oval) from the expected anatomical alignment. The right panel illustrates a localization error, in which a key point is present in both the reference (fixed) and moving images but its position is shifted relative to its intended anatomical location. Each type of error underscores a distinct challenge in achieving precise and reliable image registration, and all three can significantly affect diagnostic accuracy and subsequent treatment planning.
Figure 5. Classification of registration methods. This figure presents a comprehensive classification scheme for registration methods, highlighting five main subsets: modalities, type of transformation, interaction, classical techniques, and learning-based techniques. Each subset further encompasses various subcategories, providing a detailed taxonomy for organizing and understanding different registration approaches.
Figure 6. Number of publications in classical image registration versus deep learning image registration in recent years. This plot provides a comparative analysis of the number of publications in classical image registration and deep learning image registration over the past 10 years (2013–2023) [47]. The red line represents the publications in classical image registration, while the green line represents the publications in deep learning image registration. The figure highlights the evolving trends and growing interest in deep learning approaches within the field of image registration, indicating a shift towards leveraging neural networks and machine learning techniques for improved registration performance.

