
Wilfried Karel

Mind your grey tones – examining the influence of decolourization methods on interest point extraction and matching for architectural image-based modelling
High resolution 3D models produced from photographs acquired with consumer-grade cameras are becoming increasingly common in the geosciences. However, the quality of an image-based 3D model depends on the planning of the photogrammetric survey: the geometric configuration of the multi-view camera network and the control data have to be designed in accordance with the required accuracy, resolution and completeness. In practice, proper planning of both photos and control data, especially for terrestrial acquisition, is not always possible due to limited accessibility of the target object and the presence of occlusions. To address these problems, we propose a different image acquisition strategy and test different geo-referencing scenarios that deal with the practical issues of a terrestrial photogrammetric survey. The proposed survey procedure is based on the acquisition of a sequence of images in panorama mode, rotating the camera on a standard tripod. The offset of the pivot point from the projection center prevents these images from being stitched into a panorama; we demonstrate how to still take advantage of this capturing mode. The geo-referencing investigation tests the use of directly observed coordinates of the camera positions, different ground control point (GCP) configurations, and GCPs of different accuracies, i.e. artificial targets vs. natural features. Images of the test field on a low-slope hill were acquired from the ground using an SLR camera. To validate the photogrammetric results, a terrestrial laser scanner survey is used as benchmark.
ABSTRACT Time of flight range cameras simultaneously gather object distances for all pixels of a focal plane array by evaluating the round-trip time of an emitted signal. While ranging precisions typically amount to some centimetres, accuracies may be worse by an order of magnitude. Scattering is one of the sources of systematic errors, caused by the spreading of portions of the incident light over the sensor due to multiple reflections between the sensor, lens and optical filter. The present contribution analyses this phenomenon with respect to various capture parameters, with the objective of achieving a better understanding and a validation of assumptions. Subsequently, the authors derive both image space invariant and variant models for scattering, apply them to test scenes, and compare the results to each other and to those of existing methods.
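The round-trip evaluation described above follows the standard amplitude-modulated continuous-wave (AMCW) principle: the measured phase shift of the modulated signal maps linearly to distance within an unambiguous range. A minimal sketch of that principle (illustrative only; the function name and the 20 MHz example are assumptions, not the authors' implementation, and the scattering effects analysed in the paper are not modelled):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_to_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Convert the measured phase shift of an amplitude-modulated signal
    into an object distance (half the round-trip path).

    The unambiguous range is c / (2 * f_mod); larger distances wrap
    around, which is why typical modulation frequencies trade range
    against precision.
    """
    unambiguous_range = C / (2.0 * mod_freq_hz)
    return (phase_rad % (2.0 * math.pi)) / (2.0 * math.pi) * unambiguous_range

# e.g. a 20 MHz modulation yields roughly 7.5 m of unambiguous range
```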
ABSTRACT A framework for Orientation and Processing of Airborne Laser Scanning point clouds, OPALS, is presented. It is designed to provide tools for all steps starting from full waveform decomposition, sensor calibration, quality control, and terrain model derivation, to vegetation and building modeling. The design rationales are discussed. The structure of the software framework enables the automatic and simultaneous building of command line executables, Python modules, and C++ classes from a single algorithm-centric repository. It makes extensive use of (industry-) standards as well as cross-platform libraries. The framework provides data handling, logging, and error handling. Random, high-performance run-time access to the originally acquired point cloud is provided by the OPALS data manager, allowing storage of billions of 3D-points and their additional attributes. As an example geo-referencing of laser scanning strips is presented.
ABSTRACT Airborne laser scanning and image matching can today form the basis for the generation of digital terrain models (DTMs). In addition to the DTM, quality parameters are needed that describe the accuracy at a high level of detail, at best for every interpolated DTM point. Furthermore, other parameters are of interest, for example the distance from each DTM point to its nearest data point. This paper presents a method to derive such accuracy measures from the original data and the DTM itself, and demonstrates its application with an example. The quality measures are suitable for informing users in detail about DTM quality and warning them of weakly determined areas. © 2006 The Authors. Journal Compilation 2006 The Remote Sensing and Photogrammetry Society and Blackwell Publishing Ltd.
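One of the per-point quality measures mentioned above, the distance from each interpolated DTM point to its nearest original data point, can be sketched as follows (a hypothetical brute-force illustration, not the paper's method; a real implementation over millions of points would use a spatial index such as a k-d tree):

```python
import math

def nearest_data_distance(dtm_points, data_points):
    """For every interpolated DTM grid point, return the 2D distance to
    the closest original data point -- a simple per-point quality
    indicator: large distances flag weakly determined areas."""
    return [
        min(math.hypot(px - qx, py - qy) for qx, qy in data_points)
        for px, py in dtm_points
    ]
```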
This article concentrates on the integrated self-calibration of both the interior orientation and the distance measurement system of a time-of-flight range camera that employs amplitude-modulated, continuous-wave, near-infrared light (photonic mixer device - PMD). In contrast to other approaches that conduct specialized experiments for the investigation of individual, potential distortion factors, in the presented approach all calculations are based on the
This article concentrates on the integrated self-calibration of both the interior orientation and the distance measurement system of a time-of-flight range camera (photonic mixer device). Unlike other approaches that investigate individual distortion factors separately, in the presented approach all calculations are based on the same data set that is captured without auxiliary devices serving as high-order reference, but with the
Three-dimensional (3D) imaging systems are now widely available, but standards, best practices and comparative data have started to appear only in the last 10 years or so. The need for standards is mainly driven by users and product developers who are concerned with 1) the applicability of a given system to the task at hand (fit-for-purpose), 2) the ability to fairly compare across instruments, 3) instrument warranty issues, and 4) cost savings through 3D imaging. The evaluation and characterization of 3D imaging sensors and algorithms ...
Taking a photograph is often considered an indispensable procedural step in many archaeological fields (e.g. excavating), whereas some sub-disciplines (e.g. aerial archaeology) often consider photographs the prime data source. Whether acquired on the ground or from the air, digital cameras save with each photograph the exact date and time of acquisition and can additionally store the camera's geographical location in specific metadata fields. This location is typically obtained from GNSS (Global Navigation Satellite System) receivers, either operating in continuous mode to record the path of the camera platform, or observing the position for each exposure individually. Although such positional information has huge advantages for archiving the imagery, this approach has several limits, as it does not record the complete exterior orientation of the camera. More specifically, the essential roll, pitch and yaw camera angles are missing, and thus the viewing direction and the camera rotation around it. Besides defining the exact portion of the scene that was photographed (essential for proper archiving), these parameters can also aid subsequent orthophoto production workflows and even guide photo acquisition. This paper proposes a cost-effective hard- and software solution (camera position: 2.5 m and orientation in static conditions: maximally 2°, both at 1σ) to record all indispensable exterior orientation parameters during image acquisition. After introducing the utilized hardware components, the software that records and estimates these parameters and embeds them into the image metadata is presented. Afterwards, the obtainable accuracies in both static (i.e. terrestrial) and dynamic (i.e. airborne) conditions are calculated and assessed.
Finally, the use of this solution for different archaeological purposes is detailed and commented upon where needed, while an outlook on future developments concludes the article.
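The roll, pitch and yaw angles mentioned above determine the camera's viewing direction via a rotation matrix. A hedged sketch of one common convention (Z-Y-X Euler angles; the paper's hardware may use a different axis convention, and taking the rotated body-frame x-axis as the viewing axis is an assumption for illustration):

```python
import math

def rotation_matrix(roll, pitch, yaw):
    """Compose R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians.
    This Z-Y-X Euler convention is common in aerial navigation, but
    other conventions exist."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,               cp * cr],
    ]

def viewing_direction(R):
    """Unit vector the camera looks along: here the rotated body-frame
    x-axis, i.e. the first column of R."""
    return [row[0] for row in R]
```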
This paper investigates the use of different greyscale conversion algorithms to decolourize colour images as input for two Structure-from-Motion (SfM) software packages. Although SfM software commonly works with a wide variety of frame imagery (old and new, colour and greyscale, airborne and terrestrial, large- and small-scale), most programs internally convert the source imagery to single-band, greyscale images. This conversion is often assumed to have little, if any, impact on the final outcome.
To verify this assumption, this article compares the output of an academic and a commercial SfM software package on seven different collections of architectural images. Besides the conventional 8-bit true-colour JPEG images with embedded sRGB colour profiles, 57 greyscale variants were computed for each of those datasets with different colour-to-greyscale algorithms. The success rate of specific colour conversion approaches can therefore be compared with the commonly implemented colour-to-greyscale algorithms (luma Y’601, luma Y’709, or luminance CIE Y), both in terms of the applied feature extractor and of the specific image content (as exemplified by the two different feature descriptors and the various image collections, respectively).
Although the differences can be small, the results clearly indicate that certain colour-to-greyscale conversion algorithms consistently perform better than others in an SfM workflow. Overall, one of the best performing decolourization algorithms turns out to be a newly developed one.
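The three commonly implemented conversions named above are weighted sums of the RGB channels; a minimal sketch for a single pixel (the luma weights are the published ITU-R BT.601/BT.709 coefficients; strict CIE Y luminance requires linearising the gamma-encoded sRGB values first, done here with the standard sRGB inverse transfer function — the function and method names are illustrative, not from the paper):

```python
def to_grey(r, g, b, method="luma709"):
    """Decolourize one sRGB pixel (channel values in 0..1) with one of
    the three conversions named in the text.  Luma operates directly on
    the gamma-encoded values; CIE Y luminance on linearised RGB."""
    if method == "luma601":          # ITU-R BT.601 luma Y'601
        return 0.299 * r + 0.587 * g + 0.114 * b
    if method == "luma709":          # ITU-R BT.709 luma Y'709
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    if method == "lumY":             # CIE Y on linearised sRGB
        def lin(c):
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)
    raise ValueError(f"unknown method: {method}")
```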