Search Results (566)

Search Parameters:
Keywords = three-dimensional (3D) image reconstruction

20 pages, 1045 KiB  
Review
Emerging Applications of Machine Learning in 3D Printing
by Izabela Rojek, Dariusz Mikołajewski, Marcin Kempiński, Krzysztof Galas and Adrianna Piszcz
Appl. Sci. 2025, 15(4), 1781; https://doi.org/10.3390/app15041781 - 10 Feb 2025
Viewed by 512
Abstract
Three-dimensional (3D) printing techniques already enable the precise deposition of many materials, becoming a promising approach for materials engineering, mechanical engineering, and biomedical engineering. Recent advances in 3D printing enable scientists and engineers to create models with precisely controlled and complex microarchitecture, shapes, and surface finishes, including multi-material printing. The incorporation of artificial intelligence (AI) at various stages of 3D printing has made it possible to reconstruct objects from images (including, for example, medical images), select and optimize materials and the printing process, and monitor the lifecycle of products. New opportunities arise from the ability of machine learning (ML) to analyze complex data sets and learn from previous (historical) experience and predictions to dynamically optimize and individualize products and processes. This includes the synergistic capabilities of 3D printing and ML for the development of personalized products. Full article
(This article belongs to the Special Issue Feature Review Papers in Additive Manufacturing Technologies)

17 pages, 6959 KiB  
Article
A Skeleton-Based Method of Root System 3D Reconstruction and Phenotypic Parameter Measurement from Multi-View Image Sequence
by Chengjia Xu, Ting Huang, Ziang Niu, Xinyue Sun, Yong He and Zhengjun Qiu
Agriculture 2025, 15(3), 343; https://doi.org/10.3390/agriculture15030343 - 5 Feb 2025
Viewed by 426
Abstract
The phenotypic parameters of root systems are vital in reflecting the influence of genes and the environment on plants, and three-dimensional (3D) reconstruction is an important method for obtaining phenotypic parameters. Because root systems are featureless, thin structures, this study proposed a skeleton-based 3D reconstruction and phenotypic parameter measurement method for root systems using multi-view images. An image acquisition system was designed to collect multi-view images of root systems. The input images were binarized by the proposed OTSU-based adaptive threshold segmentation method. Vid2Curve was adopted to realize the 3D reconstruction of root systems and calibration objects, which was divided into four steps: skeleton curve extraction, initialization, skeleton curve estimation, and surface reconstruction. Then, to extract phenotypic parameters, a scale alignment method based on the skeleton was realized using DBSCAN and RANSAC. Furthermore, a small-sized root system point completion algorithm was proposed to achieve more complete root system 3D models. Using these methods, a total of 30 root samples from three species were tested. The results showed that the proposed method achieved a skeleton projection error of 0.570 pixels and a surface projection error of 0.468 pixels. Root number measurement achieved a precision of 0.97 and a recall of 0.96, and root length measurement achieved an MAE of 1.06 cm, an MAPE of 2.37%, an RMSE of 1.35 cm, and an R2 of 0.99. Reconstruction was fast, taking at most 4.07 min. With high accuracy and speed, the proposed methods enable rapid, accurate acquisition of root phenotypic parameters and advance the study of root phenotyping. Full article
(This article belongs to the Section Digital Agriculture)
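As an illustration of the binarization step mentioned in the abstract, here is a minimal Python sketch of plain Otsu thresholding with OpenCV; the paper's OTSU-based adaptive variant is more involved, and the function name and smoothing step are assumptions, not the authors' code.

```python
# Minimal sketch: Otsu binarization as a starting point for root image
# segmentation (the paper's adaptive variant is more involved).
import cv2

def binarize_root_image(path: str):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise first
    # Otsu picks the global threshold that minimizes intra-class variance.
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```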

15 pages, 2694 KiB  
Article
Dynamic 3D Measurement Based on Camera-Pixel Mismatch Correction and Hilbert Transform
by Xingfan Chen, Qican Zhang and Yajun Wang
Sensors 2025, 25(3), 924; https://doi.org/10.3390/s25030924 - 3 Feb 2025
Viewed by 492
Abstract
In three-dimensional (3D) measurement, the motion of objects inevitably introduces errors, posing significant challenges to high-precision 3D reconstruction. Most existing algorithms for compensating motion-induced phase errors are tailored for object motion along the camera’s principal axis (Z direction), limiting their applicability in real-world scenarios where objects often experience complex combined motions in the X/Y and Z directions. To address these challenges, we propose a universal motion error compensation algorithm that effectively corrects both pixel mismatch and phase-shift errors, ensuring accurate 3D measurements under dynamic conditions. The method involves two key steps: first, pixel mismatch errors in the camera subsystem are corrected using adjacent coarse 3D point cloud data, aligning the captured data with the actual spatial geometry. Subsequently, motion-induced phase errors, observed as sinusoidal waveforms with a frequency twice that of the projection fringe pattern, are eliminated by applying the Hilbert transform to shift the fringes by π/2. Unlike conventional approaches that address these errors separately, our method provides a systematic solution by simultaneously compensating for camera-pixel mismatch and phase-shift errors within the 3D coordinate space. This integrated approach enhances the reliability and precision of 3D reconstruction, particularly in scenarios with dynamic and multidirectional object motions. The algorithm has been experimentally validated, demonstrating its robustness and broad applicability in fields such as industrial inspection, biomedical imaging, and real-time robotics. By addressing longstanding challenges in dynamic 3D measurement, our method represents a significant advancement in achieving high-accuracy reconstructions under complex motion environments. Full article
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Cameras and Multi-sensors)
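The fringe-shifting step rests on a standard property of the Hilbert transform: it converts a cosine fringe into its quadrature (π/2-shifted) counterpart, which is what lets motion ripple at twice the fringe frequency cancel. A minimal SciPy sketch, with illustrative variable names rather than the paper's code:

```python
# The Hilbert transform of B*cos(phi) yields the analytic signal whose
# imaginary part is B*sin(phi): a fringe shifted by pi/2.
import numpy as np
from scipy.signal import hilbert

def quadrature_fringe(row: np.ndarray) -> np.ndarray:
    ac = row - row.mean()          # remove the DC offset A of the fringe
    analytic = hilbert(ac)         # analytic signal: ac + j*H(ac)
    return np.imag(analytic)       # B*sin(phi), i.e. the pi/2-shifted fringe

x = np.linspace(0, 8 * np.pi, 1024)
fringe = 100 + 50 * np.cos(x)      # synthetic fringe row: A + B*cos(phi)
shifted = quadrature_fringe(fringe)  # approximately 50*sin(x)
```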

9 pages, 5726 KiB  
Communication
Mixed Reality (Holography)-Guided Minimally Invasive Cardiac Surgery—A Novel Comparative Feasibility Study
by Winn Maung Maung Aye, Laszlo Kiraly, Senthil S. Kumar, Ayyadarshan Kasivishvanaath, Yujia Gao and Theodoros Kofidis
J. Cardiovasc. Dev. Dis. 2025, 12(2), 49; https://doi.org/10.3390/jcdd12020049 - 27 Jan 2025
Viewed by 479
Abstract
The operative field and exposure in minimally invasive cardiac surgery (MICS) are limited. Meticulous preoperative planning and intraoperative visualization are crucial. We present our initial experience with HoloLens® 2 as an intraoperative guide during MICS procedures: aortic valve replacement (AVR) via right anterior small thoracotomy, coronary artery bypass graft surgery (CABG) via left anterior small thoracotomy (LAST), and pulmonary valve replacement (PVR) via LAST. Three-dimensional (3D) segmentations were performed using the patient’s computed tomography (CT) data, subsequently rendered into a 3D hologram on the HoloLens® 2. The holographic image was then superimposed on the patient lying on the operating table, using the xiphoid and the clavicle as landmarks, and was used as a real-time anatomical image guide for the surgery. The incision site marked using HoloLens® 2 differed by one intercostal space from the marking based on the surgeon’s conventional mental reconstruction of the patient’s preoperative imaging, and proved a more appropriate site of entry into the chest for the structure of interest. The transparent visor of the HoloLens® 2 provided unobstructed views of the operating field. A mixed reality (MR) device could contribute to preoperative surgical planning and intraoperative real-time image guidance, which facilitates the understanding of anatomical relationships. MR has the potential to improve surgical precision, decrease risk, and enhance patient safety. Full article

15 pages, 5853 KiB  
Article
Multi-View Three-Dimensional Reconstruction Based on Feature Enhancement and Weight Optimization Network
by Guobiao Yao, Ziheng Wang, Guozhong Wei, Fengqi Zhu, Qingqing Fu, Qian Yu and Min Wei
ISPRS Int. J. Geo-Inf. 2025, 14(2), 43; https://doi.org/10.3390/ijgi14020043 - 24 Jan 2025
Viewed by 507
Abstract
To address the limited adaptability of existing multi-view stereo reconstruction methods to the repetitive and weak textures in multi-view images, this paper proposes a three-dimensional (3D) reconstruction algorithm based on a Feature Enhancement and Weight Optimization MVSNet (FEWO-MVSNet). To obtain accurate and detailed global and local features, we first develop an adaptive feature enhancement approach to extract multi-scale information from the images. Second, we introduce an attention mechanism and a spatial feature capture module to enable high-sensitivity detection of weak texture features. Third, based on the 3D convolutional neural network, the fine depth map for multi-view images can be predicted and the complete 3D model is subsequently reconstructed. Finally, we evaluated the proposed FEWO-MVSNet through training and testing on the DTU, BlendedMVS, and Tanks and Temples datasets. The results demonstrate the clear advantages of our method for 3D reconstruction from multi-view images: it ranks first in accuracy and second in completeness compared with existing representative methods. Full article
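As a rough illustration of the attention idea, the following is a generic CBAM-style spatial attention block in PyTorch, not the authors' FEWO-MVSNet implementation; all sizes and names are assumptions.

```python
# Generic spatial attention: reweight a feature map so that informative
# (e.g., weak-texture) regions are emphasized before cost-volume building.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)       # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)      # channel-wise max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                         # per-pixel reweighting

feats = torch.randn(1, 32, 128, 160)            # (B, C, H, W) feature volume
weighted = SpatialAttention()(feats)
```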

29 pages, 4808 KiB  
Article
Multi-Baseline Bistatic SAR Three-Dimensional Imaging Method Based on Phase Error Calibration Combining PGA and EB-ISOA
by Jinfeng He, Hongtu Xie, Haozong Liu, Zhitao Wu, Bin Xu, Nannan Zhu, Zheng Lu and Pengcheng Qin
Remote Sens. 2025, 17(3), 363; https://doi.org/10.3390/rs17030363 - 22 Jan 2025
Viewed by 369
Abstract
Tomographic synthetic aperture radar (TomoSAR) is an advanced three-dimensional (3D) synthetic aperture radar (SAR) imaging technology that can obtain multiple SAR images through multi-track observations, thereby reconstructing the 3D spatial structure of targets. However, due to system limitations, multi-baseline (MB) monostatic SAR (MonoSAR) encounters temporal decorrelation issues when observing scenes such as forests, affecting the accuracy of the 3D reconstruction. Additionally, during TomoSAR observations, platform jitter and inaccurate position measurements contaminate the MB SAR data, introducing multiplicative noise with phase errors and degrading imaging quality. To address these issues, this paper proposes an MB bistatic SAR (BiSAR) 3D imaging method based on phase error calibration that combines the phase gradient autofocus (PGA) and energy balance intensity-squared optimization autofocus (EB-ISOA). Firstly, the signal model of the MB one-stationary (OS) BiSAR is established and the 3D imaging principle is presented; then the phase error caused by platform jitter and inaccurate position measurement is analyzed. Combining the PGA and EB-ISOA methods, a 3D imaging method based on phase error calibration is proposed. This method improves the accuracy of phase error calibration, avoids vertical displacement, and is robust to noise, yielding high-precision 3D BiSAR imaging results. Experimental results verify the effectiveness and practicality of the proposed MB BiSAR 3D imaging method. Full article
(This article belongs to the Section Engineering Remote Sensing)
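For orientation, a bare-bones numpy sketch of one phase gradient autofocus (PGA) iteration, the PGA half of the proposed calibration; windowing is omitted for brevity, the array layout and names are assumptions, and EB-ISOA is not reproduced.

```python
# One PGA iteration: center bright scatterers, estimate the phase-error
# gradient in the aperture domain, integrate, and apply the correction.
import numpy as np

def pga_iteration(img: np.ndarray) -> np.ndarray:
    """img: complex SAR image, shape (range_bins, azimuth_samples)."""
    rows, n = img.shape
    g = np.empty_like(img)
    for r in range(rows):
        peak = np.argmax(np.abs(img[r]))
        g[r] = np.roll(img[r], n // 2 - peak)   # center brightest scatterer
    G = np.fft.ifft(g, axis=1)                  # aperture (phase-error) domain
    # Energy-weighted phase gradient estimate across all range bins.
    grad = np.angle(np.sum(G[:, 1:] * np.conj(G[:, :-1]), axis=0))
    phi = np.concatenate(([0.0], np.cumsum(grad)))  # integrate the gradient
    phi -= np.linspace(phi[0], phi[-1], n)          # strip the linear trend
    corrected = np.fft.ifft(img, axis=1) * np.exp(-1j * phi)
    return np.fft.fft(corrected, axis=1)            # back to the image domain
```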

26 pages, 5914 KiB  
Article
A Structurally Flexible Occupancy Network for 3-D Target Reconstruction Using 2-D SAR Images
by Lingjuan Yu, Jianlong Liu, Miaomiao Liang, Xiangchun Yu, Xiaochun Xie, Hui Bi and Wen Hong
Remote Sens. 2025, 17(2), 347; https://doi.org/10.3390/rs17020347 - 20 Jan 2025
Viewed by 772
Abstract
Driven by deep learning, three-dimensional (3-D) target reconstruction from two-dimensional (2-D) synthetic aperture radar (SAR) images has been developed. However, there is still room for improvement in the reconstruction quality. In this paper, we propose a structurally flexible occupancy network (SFONet) to achieve high-quality reconstruction of a 3-D target using one or more 2-D SAR images. The SFONet consists of a basic network and a pluggable module that allows it to switch between two input modes: one azimuthal image and multiple azimuthal images. Furthermore, the pluggable module is designed to include a complex-valued (CV) long short-term memory (LSTM) submodule and a CV attention submodule, where the former extracts structural features of the target from multiple azimuthal SAR images, and the latter fuses these features. When two input modes coexist, we also propose a two-stage training strategy. The basic network is trained in the first stage using one azimuthal SAR image as the input. In the second stage, the basic network trained in the first stage is fixed, and only the pluggable module is trained using multiple azimuthal SAR images as the input. Finally, we construct an experimental dataset containing 2-D SAR images and 3-D ground truth by utilizing the publicly available Gotcha echo dataset. Experimental results show that once the SFONet is trained, a 3-D target can be reconstructed using one or more azimuthal images, exhibiting higher quality than other deep learning-based 3-D reconstruction methods. Moreover, when the composition of a training sample is reasonable, the number of samples required for the SFONet training can be reduced. Full article
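The occupancy-network mechanism itself is compact: a decoder maps a 3-D query point plus an image feature code to an occupancy probability. A toy PyTorch sketch with illustrative sizes, not the SFONet architecture or its complex-valued submodules:

```python
# Toy occupancy decoder: (query point, image feature code) -> P(occupied).
import torch
import torch.nn as nn

class OccupancyDecoder(nn.Module):
    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) query points; code: (B, feat_dim) per-image feature
        code = code.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        return torch.sigmoid(self.net(torch.cat([xyz, code], dim=-1)))

probs = OccupancyDecoder()(torch.rand(2, 512, 3), torch.randn(2, 256))
```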

19 pages, 2560 KiB  
Article
Evaluation of Rapeseed Leave Segmentation Accuracy Using Binocular Stereo Vision 3D Point Clouds
by Lili Zhang, Shuangyue Shi, Muhammad Zain, Binqian Sun, Dongwei Han and Chengming Sun
Agronomy 2025, 15(1), 245; https://doi.org/10.3390/agronomy15010245 - 20 Jan 2025
Viewed by 656
Abstract
Point cloud segmentation is necessary for obtaining highly precise morphological traits in plant phenotyping. Although point cloud segmentation has advanced considerably, segmenting point clouds of complex plant leaves still remains challenging. Rapeseed leaves are critical in cultivation and breeding, yet traditional two-dimensional imaging is susceptible to reduced segmentation accuracy due to occlusions between plants. The current study proposes the use of binocular stereo-vision technology to obtain three-dimensional (3D) point clouds of rapeseed leaves at the seedling and bolting stages. The point clouds were colorized based on elevation values in order to better process the 3D point cloud data and extract rapeseed phenotypic parameters. Denoising methods were selected based on the source and type of point cloud noise: for ground point clouds, we combined plane fitting with pass-through filtering, while statistical filtering was used to remove outliers generated during scanning. We found that, during the seedling stage of rapeseed, a region-growing segmentation method was helpful in finding suitable parameter thresholds for leaf segmentation, and the Locally Convex Connected Patches (LCCP) clustering method was used for leaf segmentation at the bolting stage. Furthermore, the study results show that combining plane fitting with pass-through filtering effectively removes the ground point cloud noise, while statistical filtering successfully denoises outlier noise points generated during scanning. Finally, using the region-growing algorithm during the seedling stage with a normal angle threshold of 5° (5.0/180.0 · π rad) and a curvature threshold of 1.5 helps avoid under-segmentation and over-segmentation, achieving complete segmentation of rapeseed seedling leaves, while the LCCP clustering method fully segments rapeseed leaves at the bolting stage. The proposed method provides insights to improve the accuracy of subsequent point cloud phenotypic parameter extraction, such as rapeseed leaf area, and is beneficial for the 3D reconstruction of rapeseed. Full article
(This article belongs to the Special Issue Unmanned Farms in Smart Agriculture)
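A hedged Open3D sketch of the two denoising steps described above: RANSAC plane fitting to strip the ground, a pass-through-style height crop, and statistical outlier removal. File name and thresholds are placeholders, not the paper's tuned values.

```python
# Denoising sketch: ground removal via plane fit + height crop, then
# statistical filtering of scanning outliers.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("rapeseed_scan.ply")  # hypothetical input file

# Plane fitting: model the ground plane, keep everything off it.
plane, inliers = pcd.segment_plane(distance_threshold=0.005,
                                   ransac_n=3, num_iterations=1000)
plants = pcd.select_by_index(inliers, invert=True)

# Pass-through-style filter: keep only points above the ground level.
pts = np.asarray(plants.points)
plants = plants.select_by_index(np.where(pts[:, 2] > 0.01)[0].tolist())

# Statistical filter: drop points far from their neighborhood average.
plants, _ = plants.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
```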

27 pages, 20827 KiB  
Article
Three-Dimensional Reconstruction of Space Targets Utilizing Joint Optical-and-ISAR Co-Location Observation
by Wanting Zhou, Lei Liu, Rongzhen Du, Ze Wang, Ronghua Shang and Feng Zhou
Remote Sens. 2025, 17(2), 287; https://doi.org/10.3390/rs17020287 - 15 Jan 2025
Viewed by 496
Abstract
With traditional three-dimensional (3-D) reconstruction methods for space targets, it is difficult to achieve 3-D structure and attitude reconstruction simultaneously. To tackle this problem, a 3-D reconstruction method for space targets is proposed, and the alignment and fusion of optical and ISAR images are investigated. Firstly, multiple pairs of optical and ISAR images are acquired with the joint optical-and-ISAR co-location observation system (COS). Then, key points of space targets on the images are used to solve for the Doppler information and the 3-D attitude. Meanwhile, the image offsets of each pair are further aligned based on Doppler co-projection between optical and ISAR images. The 3-D rotational and translational offset relationships are then deduced to align the spatial offset between pairs of images based on attitude changes in neighboring frames. Finally, a voxel trimming mechanism based on growth learning (VTM-GL) is designed to obtain the reserved voxels, where mask features are used. Experimental results verify the effectiveness and robustness of the proposed OC-V3R-OI method. Full article
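The voxel trimming builds on silhouette consistency between views. Below is a generic numpy sketch of mask-based voxel carving under assumed inputs; the paper's growth-learning component (VTM-GL) is not reproduced.

```python
# Generic silhouette-based voxel carving: keep voxels whose projections
# fall inside every 2-D target mask.
import numpy as np

def carve(voxels_xyz: np.ndarray, masks, projections) -> np.ndarray:
    """voxels_xyz:  (N, 3) candidate voxel centers
    masks:       list of (H, W) boolean target silhouettes
    projections: list of callables mapping (N, 3) points -> (N, 2) pixels"""
    keep = np.ones(len(voxels_xyz), dtype=bool)
    for mask, project in zip(masks, projections):
        uv = np.round(project(voxels_xyz)).astype(int)
        inside = ((0 <= uv[:, 0]) & (uv[:, 0] < mask.shape[1]) &
                  (0 <= uv[:, 1]) & (uv[:, 1] < mask.shape[0]))
        hit = np.zeros(len(voxels_xyz), dtype=bool)
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
        keep &= hit                      # carve away inconsistent voxels
    return voxels_xyz[keep]
```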

18 pages, 2394 KiB  
Article
Unsupervised Anomaly Detection for Improving Adversarial Robustness of 3D Object Detection Models
by Mumuxin Cai, Xupeng Wang, Ferdous Sohel and Hang Lei
Electronics 2025, 14(2), 236; https://doi.org/10.3390/electronics14020236 - 8 Jan 2025
Viewed by 655
Abstract
Three-dimensional object detection based on deep neural networks (DNNs) is widely used in safety-related applications, such as autonomous driving. However, existing research has shown that 3D object detection models are vulnerable to adversarial attacks. Hence, this work investigates improving the robustness of deep 3D detection models under adversarial attacks. A deep autoencoder-based anomaly detection method is proposed, which can detect elaborate adversarial samples in an unsupervised way. The proposed anomaly detection method operates on a given Light Detection and Ranging (LiDAR) scene in its Bird’s Eye View (BEV) image and reconstructs the scene through an autoencoder. To improve the performance of the autoencoder, an augmented memory module that records typical normal patterns is introduced; it amplifies the reconstruction errors of malicious samples while leaving normal samples negligibly affected. Experiments on several public datasets show that the proposed anomaly detection method achieves an AUC of 0.8 under adversarial attacks and improves the robustness of 3D object detection. Full article
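The scoring principle is reconstruction error: an autoencoder trained on clean BEV scenes reconstructs adversarial inputs poorly, so the per-sample error serves as the anomaly score. A minimal PyTorch sketch with illustrative sizes, omitting the paper's memory module:

```python
# Reconstruction-error anomaly scoring on BEV images.
import torch
import torch.nn as nn

class BEVAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model: BEVAutoencoder, bev: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        recon = model(bev)
    return ((bev - recon) ** 2).mean(dim=(1, 2, 3))  # one score per sample

scores = anomaly_score(BEVAutoencoder(), torch.rand(4, 1, 128, 128))
```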

20 pages, 8697 KiB  
Article
An Autonomous Positioning Method for Drones in GNSS Denial Scenarios Driven by Real-Scene 3D Models
by Yongqiang Cui, Xue Gao, Rui Yu, Xi Chen, Dingwen Wang and Di Bai
Sensors 2025, 25(1), 209; https://doi.org/10.3390/s25010209 - 2 Jan 2025
Viewed by 625
Abstract
Drones are extensively utilized in both military and civilian domains. Eliminating the reliance of drone positioning systems on GNSS while enhancing positioning accuracy is of significant research value. This paper presents a novel approach that employs a real-scene 3D model and image point cloud reconstruction technology for the autonomous positioning of drones and attains high positioning accuracy. Firstly, the real-scene 3D model constructed in this paper is segmented according to a predetermined format to obtain the image dataset and the 3D point cloud dataset. Subsequently, images are captured in real time by the monocular camera mounted on the drone; a preliminary position estimate is obtained through image matching, and a 3D point cloud is reconstructed from the acquired images. Next, the corresponding real-scene 3D point cloud data within the point cloud dataset are extracted according to the image-matching results. Finally, the point cloud obtained through image reconstruction is matched with the 3D point cloud of the real scene, and the positioning coordinates of the drone are acquired by applying a pose estimation algorithm. The experimental results demonstrate that the proposed approach enables precise autonomous positioning of drones in complex urban environments, achieving a positioning accuracy of up to 0.4 m. Full article
(This article belongs to the Section Navigation and Positioning)
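The final matching step can be pictured as point cloud registration. A hedged Open3D ICP sketch under assumed file names, an assumed coarse initial alignment, and an assumed camera-centered reconstruction frame; the paper's full pipeline is not reproduced.

```python
# Register the image-derived point cloud against the reference real-scene
# cloud, then read the drone position from the resulting transform.
import numpy as np
import open3d as o3d

reconstructed = o3d.io.read_point_cloud("from_drone_images.ply")  # hypothetical
reference = o3d.io.read_point_cloud("real_scene_tile.ply")        # hypothetical

result = o3d.pipelines.registration.registration_icp(
    reconstructed, reference, max_correspondence_distance=1.0,
    init=np.eye(4),  # in practice: coarse pose from the image-matching stage
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

T = result.transformation   # 4x4 transform: reconstructed -> reference frame
# If the reconstruction is expressed in the drone/camera frame, the
# translation component gives the position estimate in scene coordinates.
drone_position = T[:3, 3]
```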

19 pages, 11243 KiB  
Article
A Simple Polarization-Based Fringe Projection Profilometry Method for Three-Dimensional Reconstruction of High-Dynamic-Range Surfaces
by Xiang Sun, Zhenjun Luo, Shizhao Wang, Jianhua Wang, Yunpeng Zhang and Dandan Zou
Photonics 2025, 12(1), 27; https://doi.org/10.3390/photonics12010027 - 31 Dec 2024
Viewed by 613
Abstract
Three-dimensional (3D) reconstruction of high-dynamic-range (HDR) surfaces plays an important role in the fields of computer vision and image processing. Traditional 3D measurement methods often face the risk of information loss when dealing with surfaces that have HDR characteristics. To address this issue, this paper proposes a simple 3D reconstruction method that combines the features of non-overexposed regions in polarized and unpolarized images to improve the reconstruction quality of HDR surface objects. Optimal fringe regions are extracted from images captured at different polarization angles, while the non-overexposed regions of normally captured unpolarized images typically contain complete fringe information and are less affected by specular highlights. The optimal fringe information from the different polarized image groups progressively replaces the incorrect fringe information in the unpolarized image, resulting in a complete set of fringe data. Experimental results show that the proposed method requires only 24 to 36 images and simple phase fusion to achieve successful 3D reconstruction. It can effectively mitigate the negative impact of overexposed regions on absolute phase calculation and 3D reconstruction when reconstructing objects with strongly reflective surfaces. Full article
(This article belongs to the Special Issue New Perspectives in Optical Design)
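A simplified numpy sketch of a per-pixel fusion rule of the kind described above; the saturation threshold and the selection rule are placeholders, not the paper's.

```python
# Wherever the unpolarized fringe image is saturated, substitute pixels
# from the brightest non-saturated polarized capture.
import numpy as np

def fuse_fringes(unpolarized: np.ndarray, polarized_stack: np.ndarray,
                 sat_level: int = 250) -> np.ndarray:
    fused = unpolarized.astype(np.float32)
    overexposed = unpolarized >= sat_level
    stack = polarized_stack.astype(np.float32)
    stack[polarized_stack >= sat_level] = -1.0   # exclude saturated pixels
    # Per pixel, pick the brightest valid value across polarization angles
    # to preserve fringe modulation depth.
    best = stack.max(axis=0)
    fused[overexposed] = best[overexposed]
    return fused

unpol = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
pol = np.random.randint(0, 256, (4, 480, 640)).astype(np.uint8)
result = fuse_fringes(unpol, pol)
```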

24 pages, 6819 KiB  
Article
Three-Dimensional Reconstruction of Road Structural Defects Using GPR Investigation and Back-Projection Algorithm
by Lutai Wang, Zhen Liu, Xingyu Gu and Danyu Wang
Sensors 2025, 25(1), 162; https://doi.org/10.3390/s25010162 - 30 Dec 2024
Viewed by 659
Abstract
Ground-Penetrating Radar (GPR) has demonstrated significant advantages in the non-destructive detection of road structural defects due to its speed, safety, and efficiency. This paper proposes a three-dimensional (3D) reconstruction method for GPR images, integrating the back-projection (BP) imaging algorithm to accurately determine the size, location, and other parameters of road structural defects. Initially, GPR detection images were preprocessed, including direct wave removal and wavelet denoising, followed by the application of the BP algorithm to effectively restore the defect’s location and size. Subsequently, a 3D data set was constructed through interpolation, and the effective reflection data were extracted using a clustering algorithm, which distinguished effective reflection data from background data by applying a distance threshold between data points. The 3D imaging of the defect was then performed in MATLAB. The proposed method was validated using both gprMax simulations and laboratory test models. The experimental results indicate that the correlation between the reconstructed and actual defects was approximately 0.67, demonstrating the method’s efficacy for the 3D reconstruction of road structural defects. Full article
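The BP imaging principle is simple to state: each subsurface pixel accumulates the trace amplitude at its two-way travel time for every antenna position. A compact numpy sketch with placeholder geometry and wave speed, not the paper's implementation:

```python
# Back-projection imaging of a GPR B-scan over a 2-D subsurface grid.
import numpy as np

def back_project(bscan: np.ndarray, ant_x: np.ndarray, dt: float,
                 xs: np.ndarray, zs: np.ndarray, v: float = 1.0e8) -> np.ndarray:
    """bscan: (n_traces, n_samples); ant_x: antenna x-positions (m);
    dt: sample interval (s); xs, zs: image grid coords (m); v: wave speed (m/s)."""
    image = np.zeros((len(zs), len(xs)))
    n_samples = bscan.shape[1]
    for i, x0 in enumerate(ant_x):
        # Two-way travel time from this antenna to every pixel and back.
        r = np.sqrt((xs[None, :] - x0) ** 2 + zs[:, None] ** 2)
        idx = np.round(2.0 * r / v / dt).astype(int)
        valid = idx < n_samples
        image[valid] += bscan[i, idx[valid]]   # sum amplitudes along hyperbolas
    return image
```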

19 pages, 14915 KiB  
Article
3D Object Detection System in Scattering Medium Environment
by Seiya Ono, Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Electronics 2025, 14(1), 93; https://doi.org/10.3390/electronics14010093 - 29 Dec 2024
Viewed by 582
Abstract
Peplography is a technology for removing scattering media such as fog and smoke. However, Peplography only removes the scattering media; interpretation of the images is left to humans, so there is still considerable room for system automation. In this paper, we combine Peplography with You Only Look Once (YOLO) to attempt object detection under scattering medium conditions. Because images reconstructed by Peplography differ in character from normal images, we also apply Peplography to the training images so that the detector learns these characteristics, improving detection accuracy. For applications such as autonomous driving in foggy conditions or rescue at fire scenes, three-dimensional (3D) information, such as the distance to the vehicle ahead or to a person in need of rescue, is also necessary. We therefore apply a stereo camera to this algorithm to achieve 3D object position and distance detection under scattering media conditions. In addition, when estimating the scattering medium in Peplography, it is important to specify the processing area, otherwise the medium will not be removed properly. We thus construct a system that continuously improves processing by estimating object size during detection and successively updating the processing area with the estimated value. As a result, the PSNR achieved by our proposed method is better than that of the conventional Peplography process. The distance estimation and the object detection are also verified to be accurate, recording values of 0.989 for precision and 0.573 for recall. The proposed system is expected to have a significant impact on the stability of autonomous driving technology and the safety of life rescue at fire scenes. Full article
(This article belongs to the Special Issue Machine Learning and Deep Learning Based Pattern Recognition)
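The stereo-ranging step reduces to depth from disparity, Z = f·B/d. A hedged OpenCV sketch with placeholder camera parameters; the Peplography descattering and YOLO detection stages are not reproduced, and the detection box is assumed given.

```python
# Estimate the distance to a detected object from a stereo pair:
# block-matching disparity inside the detection box, then Z = f*B/d.
import cv2
import numpy as np

def object_distance(left_gray, right_gray, box,
                    focal_px: float = 700.0, baseline_m: float = 0.12):
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    x, y, w, h = box                       # detection box (pixels), e.g. from YOLO
    region = disparity[y:y + h, x:x + w]
    d = np.median(region[region > 0])      # robust disparity inside the box
    return focal_px * baseline_m / d       # depth Z in meters

# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# print(object_distance(left, right, (320, 200, 80, 160)))
```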

12 pages, 4015 KiB  
Article
Advancing Pediatric Surgery: The Use of HoloLens 2 for 3D Anatomical Reconstructions in Preoperative Planning
by Marco Di Mitri, Annalisa Di Carmine, Simone D’Antonio, Benedetta Maria Capobianco, Cristian Bisanti, Edoardo Collautti, Sara Maria Cravano, Francesca Ruspi, Michele Libri, Tommaso Gargano and Mario Lima
Children 2025, 12(1), 32; https://doi.org/10.3390/children12010032 - 28 Dec 2024
Viewed by 578
Abstract
Background: In pediatric surgery, a comprehensive knowledge of the child’s anatomy is crucial to optimize surgical outcomes and minimize complications. Recent advancements in medical imaging and technology have introduced innovative tools that enhance surgical planning and decision-making. Methods: This study explores the integration of mixed reality technology, specifically the HoloLens 2 headset, for visualization of and interaction with three-dimensional (3D) anatomical reconstructions obtained from computed tomography (CT) scans. Our prospective observational study, conducted at IRCCS (Scientific Hospitalization and Care Institute) Sant’Orsola-Malpighi University Hospital in Bologna, engaged ten pediatric surgeons, who assessed three types of anatomical malformations (splenic cysts, pulmonary cystic adenomatoid malformations, and pyelo-ureteral junction stenosis) and planned surgeries using both traditional 2D CT scans and 3D visualizations via HoloLens 2, then completed a questionnaire evaluating the utility of each imaging technique in surgical planning. Results: The statistical analysis revealed that the 3D visualizations significantly outperformed the 2D CT scans in clarity and utility (p < 0.05). The results indicated significant improvements in anatomy understanding and surgical precision. The immersive experience provided by HoloLens 2 enabled surgeons to better identify critical landmarks, understand spatial relationships, and anticipate surgical challenges. Furthermore, this technology facilitated collaborative decision-making and streamlined surgical workflows. Conclusions: Despite some challenges in ease of use, HoloLens 2 showed promising results in reducing the learning curve for complex procedures. This study underscores the transformative potential of mixed reality technology in pediatric surgery, advocating for further research and development to integrate these advancements into routine clinical practice. Full article
(This article belongs to the Section Pediatric Surgery)
