
Search Results (49)

Search Parameters:
Keywords = camera overlapping fields of view

22 pages, 30026 KiB  
Article
Multi-Camera Multi-Vehicle Tracking Guided by Highway Overlapping FoVs
by Hongkai Zhang, Ruidi Fang, Suqiang Li, Qiqi Miao, Xinggang Fan, Jie Hu and Sixian Chan
Mathematics 2024, 12(10), 1467; https://doi.org/10.3390/math12101467 - 9 May 2024
Viewed by 775
Abstract
Multi-Camera Multi-Vehicle Tracking (MCMVT) is a critical task in Intelligent Transportation Systems (ITS). Unlike in urban environments, highway tunnel MCMVT must contend with changing target scales as vehicles traverse narrow tunnels, intense light exposure inside the tunnels, high similarity in vehicle appearance, and overlapping camera fields of view. This paper presents an MCMVT system tailored to highway tunnel roads that incorporates road topology structures and the overlapping camera fields of view. The system integrates a Cascade Multi-Level Multi-Target Tracking strategy (CMLM), a trajectory refinement method (HTCF) based on road topology structures, and a spatio-temporal constraint module (HSTC) that accounts for highway entry–exit flow in overlapping fields of view. The CMLM strategy exploits phased vehicle movements within the camera's fields of view, addressing challenges such as fast-moving vehicles and appearance variations in long tunnels. The HTCF method filters out static traffic signs in the tunnel, compensating for detector imperfections and mitigating the strong lighting effects caused by the tunnel lighting. The HSTC module applies spatio-temporal constraints designed for accurate inter-camera trajectory matching within overlapping fields of view. Experiments on the proposed Highway Surveillance Traffic (HST) dataset and the CityFlow dataset validate the system's effectiveness and robustness, achieving an IDF1 score of 81.20% on the HST dataset.
(This article belongs to the Special Issue Advances in Computer Vision and Machine Learning, 2nd Edition)
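
The core of such a spatio-temporal constraint is a plausibility gate on cross-camera matches: a vehicle leaving one camera's view can only reappear downstream within the travel-time window implied by the road distance and a speed envelope. A minimal sketch follows; the class, function names, and speed thresholds are hypothetical illustrations, not the paper's implementation.

```python
# Minimal sketch of a spatio-temporal gate for inter-camera trajectory
# matching. All names and thresholds are hypothetical, not the paper's.
from dataclasses import dataclass
import numpy as np

@dataclass
class Track:
    cam_id: int
    entry_time: float    # s, vehicle enters this camera's FoV
    exit_time: float     # s, vehicle leaves this camera's FoV
    feature: np.ndarray  # appearance embedding

def plausible_transition(a: Track, b: Track, distance_m: float,
                         v_min: float = 8.0, v_max: float = 40.0) -> bool:
    """True if track b could be the same vehicle seen downstream of a."""
    dt = b.entry_time - a.exit_time
    if dt <= 0:                      # cannot reappear before leaving
        return False
    return v_min <= distance_m / dt <= v_max

def match_cost(a: Track, b: Track, distance_m: float) -> float:
    """Appearance distance, gated by the spatio-temporal constraint."""
    if not plausible_transition(a, b, distance_m):
        return np.inf                # gate: never match implausible pairs
    cos = a.feature @ b.feature / (
        np.linalg.norm(a.feature) * np.linalg.norm(b.feature))
    return 1.0 - cos
```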

21 pages, 7718 KiB  
Article
Planar Reconstruction of Indoor Scenes from Sparse Views and Relative Camera Poses
by Fangli Guan, Jiakang Liu, Jianhui Zhang, Liqi Yan and Ling Jiang
Remote Sens. 2024, 16(9), 1616; https://doi.org/10.3390/rs16091616 - 30 Apr 2024
Viewed by 766
Abstract
Planar reconstruction detects planar segments and deduces their 3D planar parameters (normals and offsets) from an input image; it has significant potential in the digital preservation of cultural heritage, architectural design, robot navigation, intelligent transportation, and security monitoring. Existing methods mainly employ multiple-view images with limited overlap, but they do not exploit the relative position and rotation information between the images. To fill this gap, this paper uses two views and their relative camera pose to reconstruct indoor planar surfaces. First, we detect plane segments, together with their 3D planar parameters and appearance embedding features, using PlaneRCNN. Then, we transform the plane segments into a global coordinate frame using the relative camera transformation and find matched planes with an assignment algorithm. Finally, matched planes are merged by solving a nonlinear optimization problem with a trust-region reflective minimizer. An experiment on the Matterport3D dataset demonstrates that the proposed method achieves 40.67% average precision in plane reconstruction, an improvement of roughly 3% over Sparse Planes, and improves the IPAA-80 metric by 10% to 65.7%. This study can provide methodological support for 3D sensing and scene reconstruction in sparse-view contexts.
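
The cross-view matching step maps naturally onto a linear assignment problem once both sets of planes share a coordinate frame. The sketch below pairs planes with SciPy's Hungarian solver; the cost weights and the acceptance threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of plane matching across two views with the Hungarian algorithm.
# Cost weights and the acceptance threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def plane_match(normals_a, offsets_a, normals_b, offsets_b,
                w_angle=1.0, w_offset=0.5, max_cost=0.8):
    """normals_*: (N,3) unit normals already in a shared global frame;
    offsets_*: (N,) plane offsets d in the plane equation n.x = d."""
    # Angular term: 1 - |cos| is small for (anti)parallel normals.
    cos = np.abs(normals_a @ normals_b.T)                # (Na, Nb)
    cost = w_angle * (1.0 - cos) + w_offset * np.abs(
        offsets_a[:, None] - offsets_b[None, :])
    rows, cols = linear_sum_assignment(cost)
    # Keep only assignments cheap enough to be plausible matches.
    keep = cost[rows, cols] < max_cost
    return list(zip(rows[keep], cols[keep]))
```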

14 pages, 2137 KiB  
Article
Spatiotemporal Niche Separation among Passeriformes in the Halla Mountain Wetland of Jeju, Republic of Korea: Insights from Camera Trap Data
by Young-Hun Jeong, Sung-Hwan Choi, Maniram Banjade, Seon-Deok Jin, Seon-Mi Park, Binod Kunwar and Hong-Shik Oh
Animals 2024, 14(5), 724; https://doi.org/10.3390/ani14050724 - 26 Feb 2024
Cited by 2 | Viewed by 750
Abstract
This study analyzed 5322 camera trap photographs from the Halla Mountain Wetland, documenting 1427 independent bird sightings spanning 26 families and 49 species of Passeriformes. Key observations include morning activity in Cyanoptila cyanomelana and Horornis canturians and afternoon activity in Muscicapa dauurica and Phoenicurus auroreus. Wetlands were significantly preferred (P_i = 0.398) despite their smaller area, in contrast with underutilized grasslands (P_i = 0.181). Seasonal activity variations were notable, with overlap coefficients ranging from 0.08 to 0.81 across species, indicating diverse strategies in resource utilization and thermoregulation. Population density was found to be a critical factor in habitat usage, with high-density species showing more consistent activity patterns. The results demonstrate the ecological adaptability of Passeriformes in the Halla Mountain Wetland while highlighting the limitations of camera trapping, namely a fixed field of view and intermittent recording, which may not capture the full spectrum of complex avian behaviors. This research underscores the need for future studies integrating additional methodologies, such as direct observation and acoustic monitoring, to gain a more comprehensive understanding of avian ecology.
(This article belongs to the Section Wildlife)
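
The overlap coefficients quoted (0.08 to 0.81) measure the shared area under two species' diel activity densities. The abstract does not state the exact estimator used; below is a deliberately simplified histogram-based version (kernel density estimators are the usual choice in practice).

```python
# Simplified estimator of the activity-overlap coefficient between two
# species from camera-trap detection times (hours, 0-24). Histogram-based
# for brevity; kernel density estimators are standard in practice.
import numpy as np

def activity_overlap(times_a, times_b, n_bins=24):
    """Delta = integral of min(f_a, f_b); 0 = no overlap, 1 = identical."""
    bins = np.linspace(0.0, 24.0, n_bins + 1)
    p, _ = np.histogram(times_a, bins=bins, density=True)
    q, _ = np.histogram(times_b, bins=bins, density=True)
    bin_width = 24.0 / n_bins
    return float(np.sum(np.minimum(p, q)) * bin_width)

# e.g. a mostly-morning species vs. a mostly-afternoon species
morning = np.random.normal(8, 1.5, 300) % 24
afternoon = np.random.normal(15, 1.5, 300) % 24
print(round(activity_overlap(morning, afternoon), 2))  # small Delta
```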

24 pages, 10706 KiB  
Article
Adaptive Point-Line Fusion: A Targetless LiDAR–Camera Calibration Method with Scheme Selection for Autonomous Driving
by Yingtong Zhou, Tiansi Han, Qiong Nie, Yuxuan Zhu, Minghu Li, Ning Bian and Zhiheng Li
Sensors 2024, 24(4), 1127; https://doi.org/10.3390/s24041127 - 8 Feb 2024
Viewed by 1048
Abstract
Accurate calibration between LiDAR and camera sensors is crucial for autonomous driving systems to perceive and understand the environment effectively. Typically, LiDAR–camera extrinsic calibration requires feature alignment and overlapping fields of view, and aligning features from different modalities is challenging due to noise. This paper therefore proposes a targetless extrinsic calibration method for monocular cameras and LiDAR sensors with non-overlapping fields of view. The proposed solution uses pose transformation to establish data association across modalities, turning the calibration problem into an optimization problem within a visual SLAM system without requiring overlapping views. To improve performance, line features serve as constraints in the visual SLAM, and accurate positions of line segments are obtained with an extended photometric error optimization method. Moreover, a strategy is proposed for selecting the appropriate calibration method from among several alternative optimization schemes. This adaptive selection strategy ensures robust calibration in urban autonomous driving scenarios with varying lighting and environmental textures, avoiding the failures and excessive bias that can result from relying on a single approach.
(This article belongs to the Special Issue Radar Technology and Data Processing)

21 pages, 19137 KiB  
Article
Semi-Supervised Image Stitching from Unstructured Camera Arrays
by Erman Nghonda Tchinda, Maximillian Kealoha Panoff, Danielle Tchuinkou Kwadjo and Christophe Bobda
Sensors 2023, 23(23), 9481; https://doi.org/10.3390/s23239481 - 28 Nov 2023
Cited by 1 | Viewed by 1321
Abstract
Image stitching combines multiple images of the same scene, captured from different viewpoints, into a single image with an expanded field of view. While this technique has various applications in computer vision, traditional methods rely on successively stitching pairs of images taken from multiple cameras. This approach is effective for organized camera arrays but poses challenges for unstructured ones, especially when handling scene overlaps. This paper presents a deep learning-based approach for stitching images from large unstructured camera sets covering complex scenes. Our method processes images concurrently, using the SandFall algorithm to transform data from multiple cameras into a reduced fixed array and thereby minimize data loss. A customized convolutional neural network then processes these data to produce the final image. By stitching images simultaneously, our method avoids the cascading errors of sequential pairwise stitching while offering improved time efficiency. In addition, we detail an unsupervised training method for the network that uses metrics from Generative Adversarial Networks, supplemented with supervised learning. Our testing revealed that the proposed approach runs in roughly 1/7th the time of many traditional methods on both CPU and GPU platforms, achieving results consistent with established methods.
(This article belongs to the Section Sensing and Imaging)

13 pages, 4242 KiB  
Article
Neural Radiation Fields in a Tidal Flat Environment
by Huilin Ge, Zhiyu Zhu, Haiyang Qiu and Youwen Zhang
Appl. Sci. 2023, 13(19), 10848; https://doi.org/10.3390/app131910848 - 29 Sep 2023
Cited by 1 | Viewed by 1021
Abstract
Tidal flats are critical ecosystems that play a vital role in biodiversity conservation and ecological balance. Collecting tidal flat environmental information with unmanned aerial vehicles (UAVs) and applying 3D reconstruction techniques for their detection and protection provides comprehensive, detailed tidal flat information, including terrain, slope, and other parameters. It also enables scientific decision-making for the preservation of tidal flat ecosystems and the monitoring of factors such as rising sea levels. Moreover, the latest advances in neural radiance fields (Nerf) have provided valuable insights and novel perspectives for this work. We face the following challenges: (1) the performance of a single network is limited by the vast area to cover; (2) regions far from the camera center may exhibit suboptimal rendering results; and (3) changing lighting conditions make precise reconstruction difficult. To tackle these challenges, we partition the tidal flat scene into distinct submodules, carefully preserving overlapping regions between submodules for collaborative optimization. The luminance of each image is quantified by an appearance embedding vector produced for every captured image; this vector serves as an additional model input, improving performance across varying lighting conditions. We also introduce an ellipsoidal sphere transformation that brings distant image elements into the sphere's interior, enhancing the algorithm's capacity to represent remote image information. Our algorithm is validated on tidal flat images collected from UAVs and compared with traditional Nerf on two metrics: peak signal-to-noise ratio (PSNR) and learned perceptual image patch similarity (LPIPS). Our method raises the PSNR value by 2.28 and reduces the LPIPS value by 0.11. The results further demonstrate that our approach significantly enhances Nerf's performance in tidal flat environments. Using Nerf for the 3D reconstruction of tidal flats, we bypass the need for explicit representation and geometric priors; this yields superior novel view synthesis and enhanced geometric perception, resulting in high-quality reconstructions. Our method not only provides valuable data but also offers insights for environmental monitoring and management.
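
The ellipsoidal sphere transformation is in the same family as the scene contractions used for unbounded neural radiance fields: distant content is mapped into a bounded volume so network capacity is not wasted on far-away scenery. A sketch of the spherical special case follows; the actual ellipsoid axes used in the paper are not specified here and are left out as an assumption.

```python
# Sketch of a contraction that maps all of R^3 into a ball of radius 2,
# in the spirit of the paper's ellipsoidal transform (spherical special
# case; the actual ellipsoid axes are an assumption left out here).
import numpy as np

def contract(x):
    """Points with ||x|| <= 1 are unchanged; distant points are pulled
    onto the shell 1 < ||y|| < 2, so remote scenery stays representable."""
    x = np.asarray(x, dtype=float)
    n = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.where(n <= 1.0, x, (2.0 - 1.0 / n) * x / n)

print(contract([[0.5, 0.0, 0.0], [100.0, 0.0, 0.0]]))
# -> [[0.5 0. 0.], [~1.99 0. 0.]]
```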

11 pages, 5899 KiB  
Article
Mosaicing Technology for Airborne Wide Field-of-View Infrared Image
by Lei Dong, Fangjian Liu, Mingchao Han and Hongjian You
Appl. Sci. 2023, 13(15), 8977; https://doi.org/10.3390/app13158977 - 4 Aug 2023
Cited by 1 | Viewed by 850
Abstract
Multi-detector parallel scanning derives from the traditional airborne panorama camera and provides a wide lateral field of view. A wide field-of-view camera can acquire a regional remote sensing image in whisk-broom mode during flight. Adjacent images acquired along the flight path must share an overlap region, from which the regional image is generated by image processing. In-flight disturbances of the aircraft complicate this regional image processing, and the overlap of the acquired images varies constantly. Based on an analysis of the imaging geometry of a wide field-of-view scanning camera, this paper proposes a rigorous geometric geopositioning model. An infrared image mosaic technique is then built around the features of regional images: the SIFT (Scale Invariant Feature Transform) operator extracts the two best-matching point pairs in the adjacent overlap region. We achieve coarse registration of adjacent images with a translation, rotation, and scale model of image geometric transformation, and then perform local fine stitching using a normalized cross-correlation matching strategy. A regional mosaic experiment on aerial multi-detector parallel scanning infrared images verifies the feasibility and efficiency of the proposed algorithm.
(This article belongs to the Collection Space Applications)
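
The two-stage registration described, coarse alignment from SIFT matches followed by fine local stitching via normalized cross-correlation (NCC), can be sketched with OpenCV as below. The ratio-test and matcher parameters are illustrative; the paper's own implementation details may differ.

```python
# Sketch of two-stage registration: SIFT for coarse alignment of the
# overlap region, then normalized cross-correlation (NCC) for fine
# refinement. Parameter values are illustrative assumptions.
import cv2
import numpy as np

def coarse_align(img_a, img_b):
    """Estimate a similarity transform (translation/rotation/scale)
    from SIFT matches between adjacent grayscale images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp_a[m.queryIdx].pt for m in good])
    dst = np.float32([kp_b[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffinePartial2D(src, dst)  # 4-DoF similarity
    return M

def fine_offset(patch, search_region):
    """Refine the seam locally: NCC template matching of a small patch
    inside the coarsely estimated overlap region."""
    res = cv2.matchTemplate(search_region, patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    return max_loc  # (x, y) of the best NCC response
```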

22 pages, 17595 KiB  
Article
Environment Perception with Chameleon-Inspired Active Vision Based on Shifty Behavior for WMRs
by Yan Xu, Cuihong Liu, Hongguang Cui, Yuqiu Song, Xiang Yue, Longlong Feng and Liyan Wu
Appl. Sci. 2023, 13(10), 6069; https://doi.org/10.3390/app13106069 - 15 May 2023
Viewed by 1618
Abstract
To improve the environment perception ability of wheeled mobile robots (WMRs), the visual behavior mechanism of the negative-correlation motion of chameleons is introduced into the binocular vision system of WMRs, and a shifty-behavior-based environment perception model with chameleon-inspired active vision is established, achieving vision–motor coordination. First, a target search sub-model with chameleon-inspired binocular negative-correlation motion is built. The relationship between the rotation angles of the two cameras and the neck and the cameras' field of view (FOV), overlapping angle, region of interest, etc., is analyzed to contrast binocular negative-correlation motion with binocular synchronous motion: the search efficiency of negative-correlation motion is double that of synchronous motion, and the search range is also greatly improved. Second, the FOV model of chameleon-inspired vision perception based on the shifty-behavior mode is set up. According to the different functional requirements of the target searching and tracking stages, the shift in the robot's visual behavior is analyzed in terms of measuring range and accuracy. Finally, a chameleon-inspired active-vision-based environment perception strategy for mobile robots is constructed on the shifty-behavior mode, and experimental verification reproduces the visual behavior of chameleons in the vision system of mobile robots with satisfactory results.
(This article belongs to the Section Robotics and Automation)

18 pages, 2439 KiB  
Review
A Review on Methods for Measurement of Free Water Surface
by Gašper Rak, Marko Hočevar, Sabina Kolbl Repinc, Lovrenc Novak and Benjamin Bizjan
Sensors 2023, 23(4), 1842; https://doi.org/10.3390/s23041842 - 7 Feb 2023
Cited by 4 | Viewed by 3371
Abstract
Turbulent free-surface flows are encountered in several engineering applications and are typically characterized by the entrainment of air bubbles due to intense mixing and surface deformation. The resulting complex multiphase structure of the air–water interface makes precise and reliable measurement of the free-water-surface topography challenging. Conventional methods based on manometers, wave probes, point gauges, or electromagnetic/ultrasonic devices are proven and reliable, but they are also time-consuming, of limited accuracy, and mostly intrusive. Accurate spatial and temporal measurements of complex three-dimensional free-surface flows in natural and man-made hydraulic structures are only viable with high-resolution non-contact methods, namely LIDAR-based laser scanning, photogrammetric reconstruction from cameras with overlapping fields of view, or laser triangulation combining laser ranging with high-speed imaging data. In the absence of seeding particles and optical calibration targets, sufficient flow aeration is essential for the operation of both laser- and photogrammetry-based methods, with local aeration properties significantly affecting the measurement uncertainty of laser-based methods.
(This article belongs to the Section Sensing and Imaging)

16 pages, 6528 KiB  
Article
3D Point Cloud Stitching for Object Detection with Wide FoV Using Roadside LiDAR
by Xiaowei Lan, Chuan Wang, Bin Lv, Jian Li, Mei Zhang and Ziyi Zhang
Electronics 2023, 12(3), 703; https://doi.org/10.3390/electronics12030703 - 31 Jan 2023
Cited by 3 | Viewed by 3414
Abstract
Light Detection and Ranging (LiDAR) is widely used in perception of the physical environment for object detection and tracking tasks. Current methods and datasets are mainly developed for autonomous vehicles and cannot be directly applied to roadside perception. This paper presents a 3D point cloud stitching method for object detection with a wide horizontal field of view (FoV) using roadside LiDAR. First, the base detection model is trained on the KITTI dataset, achieving a detection accuracy of 88.94. Then, a new detection range of 180° is inferred to break the limitation of the camera's FoV. Finally, multiple sets of detection results from a single LiDAR are stitched to build a 360° detection range and solve the problem of overlapping objects. The effectiveness of the proposed approach has been evaluated on the KITTI dataset and collected point clouds. The experimental results show that the point cloud stitching method offers a cost-effective way to achieve a larger FoV, with the number of output objects increasing by 77.15% over the base model, improving the detection performance of roadside LiDAR.
(This article belongs to the Special Issue Deep Perception in Autonomous Driving)
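
The stitching idea, running the FoV-limited base detector on rotated copies of the same point cloud and then rotating detections back and merging, can be sketched as follows. The detector interface (`detect_fn`) and the de-duplication distance are hypothetical placeholders.

```python
# Sketch of stitching detection results from one roadside LiDAR into a
# wide detection range: rotate the cloud, run the FoV-limited base
# detector per view, rotate detections back, then merge and de-duplicate
# objects seen twice. Detector interface and thresholds are hypothetical.
import numpy as np

def rotate_yaw(points, yaw_deg):
    """Rotate an (N,3) point cloud about the vertical (z) axis."""
    t = np.deg2rad(yaw_deg)
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ R.T

def stitch_detections(points, detect_fn, yaws=(0, 180)):
    """Run the detector on rotated copies, rotate centers back, merge."""
    merged = []
    for yaw in yaws:
        for cx, cy, cz in detect_fn(rotate_yaw(points, yaw)):
            merged.append(rotate_yaw(np.array([[cx, cy, cz]]), -yaw)[0])
    return dedupe(np.array(merged))

def dedupe(centers, min_dist=1.0):
    """Drop detections whose centers fall within min_dist of a kept one
    (the same object seen in overlapping view sectors)."""
    kept = []
    for c in centers:
        if all(np.linalg.norm(c - k) >= min_dist for k in kept):
            kept.append(c)
    return np.array(kept)
```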

34 pages, 7060 KiB  
Article
Sensor Fusion with Asynchronous Decentralized Processing for 3D Target Tracking with a Wireless Camera Network
by Thiago Marchi Di Gennaro and Jacques Waldmann
Sensors 2023, 23(3), 1194; https://doi.org/10.3390/s23031194 - 20 Jan 2023
Cited by 2 | Viewed by 1448
Abstract
We present a method to acquire 3D position measurements for decentralized target tracking with an asynchronous camera network. Cameras with known poses have fields of view whose projections overlap on the ground and in the 3D volume above a reference ground plane. The purpose is to track targets in 3D space without constraining their motion to the reference ground plane. Cameras exchange line-of-sight vectors and the respective time tags asynchronously; from stereoscopy, we obtain the fused 3D measurement at the local frame capture instant. We use local decentralized Kalman information filtering and particle filtering for target state estimation to test our approach with only local estimation. Monte Carlo simulation includes communication losses due to frame processing delays, and performance is measured by the average root mean square error of 3D position estimates projected on the cameras' image planes. We then compare local-only estimation against exchanging additional asynchronous communications, using the Batch Asynchronous Filter and the Sequential Asynchronous Particle Filter for further fusion of information-pair estimates and fused 3D position measurements, respectively. Performance is similar in spite of the additional communication load relative to our local estimation approach, which exchanges just line-of-sight vectors.
(This article belongs to the Section Sensor Networks)
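
The stereoscopic fusion of two line-of-sight (LOS) vectors reduces to triangulating the point closest to both rays; the midpoint of the shortest segment between them is a standard closed-form choice, sketched below with illustrative inputs.

```python
# Sketch of the stereoscopic fusion step: triangulate a 3D point from two
# cameras' line-of-sight rays as the midpoint of the shortest segment
# between them. Inputs are assumed already time-aligned.
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """c1, c2: camera centers (3,); d1, d2: unit LOS directions (3,)."""
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # ~0 for near-parallel rays
    if abs(denom) < 1e-9:
        raise ValueError("rays nearly parallel; triangulation unreliable")
    t1 = (b * e - c * d) / denom     # closest-point parameter on ray 1
    t2 = (a * e - b * d) / denom     # closest-point parameter on ray 2
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))  # fused measurement

# Two cameras at x = -2 and x = +2 both sighting the point (0, 0, 5):
d = np.array([2.0, 0.0, 5.0]); d /= np.linalg.norm(d)
m = np.array([-2.0, 0.0, 5.0]); m /= np.linalg.norm(m)
print(triangulate_midpoint(np.array([-2.0, 0.0, 0.0]), d,
                           np.array([2.0, 0.0, 0.0]), m))  # ~[0 0 5]
```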

21 pages, 14208 KiB  
Article
A Multi-View Thermal–Visible Image Dataset for Cross-Spectral Matching
by Yuxiang Liu, Yu Liu, Shen Yan, Chen Chen, Jikun Zhong, Yang Peng and Maojun Zhang
Remote Sens. 2023, 15(1), 174; https://doi.org/10.3390/rs15010174 - 28 Dec 2022
Cited by 3 | Viewed by 3226
Abstract
Cross-spectral local feature matching between visible and thermal images benefits many vision tasks in low-light environments, including image-to-image fusion and camera re-localization. An essential prerequisite for unleashing the potential of supervised deep learning in visible–thermal matching is the availability of large-scale, high-quality annotated datasets. However, publicly available datasets are either relatively small in scale or have limited pose annotations, owing to the expensive cost of data acquisition and annotation, which severely hinders the development of this field. In this paper, we propose a multi-view thermal–visible image dataset for large-scale cross-spectral matching. We first recover a 3D reference model from a group of collected RGB images, in which a certain image (the bridge) shares almost the same pose as the thermal query. We then register the thermal image to the model by manually annotating a 2D-2D tie point between the bridge and the thermal image. In this way, by annotating a single same-viewpoint image pair, numerous overlapping image pairs between thermal and visible become available. We also propose a semi-automatic approach for generating accurate supervision for training multi-view cross-spectral matching. Our dataset consists of 40,644 well-supervised cross-modal pairs covering multiple complex scenes. In addition, we provide the camera metadata, the 3D reference model, depth maps of the visible images, and 6-DoF poses of all images. We extensively evaluate state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results. We will publish our dataset and pre-processing code.
(This article belongs to the Section Earth Observation Data)

31 pages, 7984 KiB  
Article
V2ReID: Vision-Outlooker-Based Vehicle Re-Identification
by Yan Qian, Johan Barthelemy, Umair Iqbal and Pascal Perez
Sensors 2022, 22(22), 8651; https://doi.org/10.3390/s22228651 - 9 Nov 2022
Cited by 2 | Viewed by 2305
Abstract
With the growth of large camera networks around us, it is becoming more difficult to identify vehicles manually. Computer vision enables us to automate this task; more specifically, vehicle re-identification (ReID) aims to identify cars in a camera network with non-overlapping views. Images captured of vehicles can undergo intense variations in appearance due to illumination, pose, or viewpoint. Furthermore, because of small inter-class similarities and large intra-class differences, feature learning is often enhanced with non-visual cues, such as the topology of the camera network and temporal information, but these are not always available and can be resource intensive for the model. Following the success of Transformer baselines in ReID, we propose for the first time an outlook-attention-based vehicle ReID framework using the Vision Outlooker as its backbone, which is able to encode finer-level features. We show that, without embedding any additional side information and using only visual cues, we can achieve 80.31% mAP and 97.13% R-1 on the VeRi-776 dataset. Besides documenting our research, this paper also provides a comprehensive walkthrough of vehicle ReID, intended as a starting point for individuals and organisations who must navigate the myriad of complex research in this field.
(This article belongs to the Section Vehicular Sensing)
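
The mAP and R-1 figures come from a retrieval protocol: embed query and gallery images, rank the gallery by similarity, and, with non-overlapping views, discard gallery images from the query's own camera. A sketch under those assumptions follows; the embedding backbone itself is out of scope.

```python
# Sketch of ReID retrieval: rank gallery embeddings by cosine similarity
# to the query, excluding gallery entries from the query's own camera
# (the standard protocol when camera views are non-overlapping).
# The backbone producing the embeddings is out of scope here.
import numpy as np

def rank_gallery(query_emb, query_cam, gallery_embs, gallery_cams):
    """Return gallery indices sorted best-first for one query."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                                   # cosine similarity
    sims[gallery_cams == query_cam] = -np.inf      # drop same-camera hits
    return np.argsort(-sims)

def rank1_accuracy(rankings, query_ids, gallery_ids):
    """R-1: fraction of queries whose top match has the right identity."""
    hits = [gallery_ids[r[0]] == qid for r, qid in zip(rankings, query_ids)]
    return float(np.mean(hits))
```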

18 pages, 8242 KiB  
Article
Multi-Camera Digital Image Correlation in Deformation Measurement of Civil Components with Large Slenderness Ratio and Large Curvature
by Yuntong Dai and Hongmin Li
Materials 2022, 15(18), 6281; https://doi.org/10.3390/ma15186281 - 9 Sep 2022
Cited by 4 | Viewed by 1867
Abstract
To address the limitations of conventional stereo-digital image correlation (DIC) in measuring complex objects, a continuous-view multi-camera DIC (MC-DIC) system and its two forms of camera arrangement are introduced. Multiple cameras with partially overlapping fields of view are calibrated simultaneously to form an overall system for measuring continuous full-surface deformation. A bending experiment on a coral aggregate concrete beam and an axial compression experiment on a timber column are conducted to verify the capability of continuous-view MC-DIC in measuring the deformation of civil components with large slenderness ratio and large curvature, respectively. The obtained deformation data show good consistency with displacement transducer and strain gauge measurements. The results indicate that continuous-view MC-DIC is a reliable 3D full-field measurement approach for civil engineering measurements.

15 pages, 24468 KiB  
Article
Digital Outcrop Model Generation from Hybrid UAV and Panoramic Imaging Systems
by Alysson Soares Aires, Ademir Marques Junior, Daniel Capella Zanotta, André Luiz Durante Spigolon, Mauricio Roberto Veronez and Luiz Gonzaga
Remote Sens. 2022, 14(16), 3994; https://doi.org/10.3390/rs14163994 - 17 Aug 2022
Cited by 3 | Viewed by 1762
Abstract
The study of outcrops in the geosciences is being significantly improved by advances in the technologies used to build digital outcrop models (DOMs). Usually, the virtual environment is built from a collection of partially overlapping photographs taken from diverse perspectives, frequently using unmanned aerial vehicles (UAVs). However, in situations involving very steep features or sub-vertical patterns, incomplete coverage of objects is expected. This work proposes an integration framework that uses terrestrial spherical panoramic images (SPI), acquired by an omnidirectional fusion camera, together with a UAV survey to close the gaps left by traditional mapping in complex natural structures such as outcrops. The omnidirectional fusion camera produces wider field-of-view images from different perspectives, which considerably improve the representation of the DOM, mainly where the UAV has geometric view restrictions. We designed controlled experiments to verify that SPI performs on par with UAV imagery. The adaptive integration is accomplished through an optimized selective strategy based on an octree framework. The quality of the 3D model generated with this approach was assessed by quantitative and qualitative indicators. The results show the potential of generating a more reliable 3D model using SPI together with UAV image data while reducing field survey time and complexity.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
