ABSTRACT HDAC is part of the ultraviolet imaging spectrometer (UVIS) onboard the Cassini spacecraft. The instrument scans the Lyman-α emission lines of hydrogen and deuterium atoms. In photometer mode only the CEM detector is used to register the signals within a 3-degree field of view. HDAC has been switched on in photometer mode most of the time, producing a unique continuous data set spanning more than a decade. An analysis of the Lyman-α background data serves two purposes: determination of the parameters of interstellar/interplanetary hydrogen and determination of the properties of the solar wind. The exhaustive pre-flight laboratory calibrations included evaluation of the absolute sensitivity of the instrument, its spectral sensitivity, and its off-axis response. During the mission these characteristics may change over time due to continuous degradation of the electronics and/or abrupt events; for example, three dramatic sensitivity breakdowns were observed in 2001. Thus the only way to determine the current sensitivity of HDAC is a comprehensive in-flight evaluation, e.g. measuring known fluxes from stars or other bodies. We systematically analyzed photometric observations of the star SPICA in order to perform in-flight calibrations, exploring all three aspects listed above. We found that the instrument is still in good condition. The current sensitivity of 12 counts/s/Rayleigh is still sufficient to provide data with a good signal-to-noise ratio. The off-axis response is non-uniform and visibly differs from the pre-flight determinations; at the same time, the shape of the spatial sensitivity response is constant and can be used for all observations. The calibrated data are compared with sophisticated theoretical models describing the spatial distribution of interstellar/interplanetary hydrogen. First results will be reported.
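The quoted sensitivity makes the count-rate-to-brightness conversion straightforward. The sketch below is a minimal illustration assuming a linear detector response and shot-noise-limited statistics; the dark rate and all numeric inputs are invented for the example, not actual HDAC values.

```python
SENSITIVITY = 12.0  # counts / s / Rayleigh, the in-flight value quoted above

def brightness_rayleigh(count_rate, dark_rate=0.0):
    """Convert a measured CEM count rate (counts/s) to Ly-alpha brightness (Rayleigh)."""
    return (count_rate - dark_rate) / SENSITIVITY

def poisson_snr(count_rate, t_int):
    """Shot-noise-limited SNR after integrating for t_int seconds: N / sqrt(N)."""
    counts = count_rate * t_int
    return counts ** 0.5

b = brightness_rayleigh(132.0, dark_rate=12.0)  # 120 counts/s above dark -> 10 Rayleigh
snr = poisson_snr(120.0, 30.0)                  # sqrt(3600) = 60
```

At this sensitivity even a brightness of a few Rayleigh yields tens of counts per second, which is what makes the background data usable for the hydrogen and solar-wind studies mentioned above.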
The Institute of Optical Sensor Systems (OS) at the Robotics and Mechatronics Center of the German Aerospace Center (DLR) has more than 35 years of experience with high-resolution imaging technology. This paper presents the institute's scientific results on the next generation of CMOS detector design in a TDI (Time Delay and Integration) architecture. The project covers the technological design of future high-resolution or multispectral space-borne instruments and the possibility of higher integration. First results were published by Eckardt et al. (1) 2013 and (2) 2014. DLR OS and the Fraunhofer Institute for Microelectronic Circuits and Systems in Duisburg have been driving the technology of new detectors for future high-resolution projects, including hybridization capability, in order to keep pace with ambitious scientific and user requirements. In combination with this engineering research, the current generation of space-borne sensor systems focuses on VIS/NIR high spectral resolution to meet the requirements of Earth and planetary observation systems. The combination of large swath and high spectral resolution with intelligent synchronization control, fast-readout ADC chains and new focal-plane concepts opens the door to new remote-sensing and smart deep-space instruments. The paper gives an overview of the DLR detector development and verification program at FPA level. New control possibilities for CMOS-TDI NG detectors in synchronization control mode, and key parameters such as linearity, PTC, crosstalk and control effort, are discussed in detail.
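The benefit of the TDI architecture can be sketched numerically: when charge transfer is synchronized with image motion, the signal adds coherently over the N stages while per-stage read noise adds only in quadrature. The toy model below illustrates this read-noise-limited regime; all parameter values are invented for the illustration and do not describe the DLR detector.

```python
import numpy as np

rng = np.random.default_rng(1)

def tdi_snr_gain(signal, read_noise, n_stages, n_trials=20_000):
    """Empirical SNR ratio of an N-stage TDI sum vs. a single line exposure.

    The charge packet is clocked in step with the image motion, so the signal
    adds coherently (x N) while read noise adds in quadrature (x sqrt(N))."""
    single = signal + rng.normal(0.0, read_noise, n_trials)
    summed = n_stages * signal + rng.normal(0.0, read_noise * n_stages ** 0.5, n_trials)
    return (summed.mean() / summed.std()) / (single.mean() / single.std())

gain = tdi_snr_gain(signal=10.0, read_noise=5.0, n_stages=16)
# read-noise-limited regime: the gain approaches sqrt(16) = 4
```

This sqrt(N) scaling is exactly why precise synchronization control matters: any mismatch between transfer clock and image motion smears the coherent sum and forfeits the gain.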
Hyperspectral instruments are used to characterise terrestrial and planetary surfaces, oceans and the atmosphere. At present there are a number of airborne systems and space missions; examples are DESIS on the ISS and MERTIS as part of the planetary mission Bepi Colombo. In this work a scanning system for hyperspectral panoramas is investigated. In classical systems with spectrographs, the input aperture is a long slit whose image is distributed over a 2-D detector array so that all points along a line in the scene are sampled simultaneously; the spectral dimension is then orthogonal to the slit. There are also low-cost hyperspectral scanners with 2-D variable spectral filters, each filter positioned perpendicular to the direction of movement or flight. The biggest challenge when using these low-cost scanners, for example in airborne applications, is mapping the images of the individual spectral channels onto each other (co-registration). Solving this problem is the prerequisite for using this type of hyperspectral camera. The investigation therefore focuses on the process of data collection, correction and registration. To test future applications, the camera was operated as a panorama scanner. To evaluate the quality, the derived results of a scene classification are described.
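One common way to estimate the channel-to-channel offset for such a co-registration is phase correlation, sketched below for integer pixel shifts. This is a plausible generic approach under the assumption of a pure translation between channels, not necessarily the method used in the work itself.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) such that np.roll(b, (dy, dx), axis=(0, 1))
    aligns image b with image a, via the normalized cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                      # keep only the phase
    corr = np.fft.ifft2(F).real                 # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap shifts larger than half the image size to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ch1 = rng.random((32, 32))                      # synthetic reference channel
ch2 = np.roll(ch1, (3, -2), axis=(0, 1))        # a shifted second channel
dy, dx = phase_correlation_shift(ch1, ch2)
```

For real filter-wheel or linear-variable-filter data, sub-pixel refinement and local warping would still be needed on top of this global estimate.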
Optical and radar images cover two distinct aspects of satellite image analysis: optical images give more semantic information, derived e.g. from multispectral data, while radar is more versatile because it is independent of cloud cover and nighttime conditions. Because of the different detection methods, one can expect the fusion of these different data types to improve the overall information from the observed scene. We define that the information content (IC) of a set of (multispectral) images can be optimally derived from the data if the spatial and spectral resolution is adequate to the task that has to be solved. Furthermore, the information is masked by typical sensor smear and noise; thus the information which can be derived from remote sensing imagery depends on the system performance or image quality (IQ). The additional use of radar data (e.g. for classification) often yields no significant improvement in the result. Thus, we expect a drastic improvement of I...
The rapid increase of remote sensing (RS) data in many applications ignites a spark of interest in the process of satellite image matching and registration. These data are collected through remote sensors, then processed and interpreted by means of image processing algorithms. They are taken from different sensors, viewpoints, or times for many industrial and governmental applications covering agriculture, forestry, urban and regional planning, geology, water resources, and others. In this chapter, a feature-based registration of optical and radar images from the same and different sensors using invariant local features is presented. The registration process starts with the feature extraction and matching stages, which are key issues when processing remote sensing data from single or multiple sensors. Then, the geometric transformation models are applied, followed by the interpolation method, in order to get a final registered version. As a pre-processing step, speckle noise removal is performed on radar images in order to reduce the number of false detections. In a similar fashion, optical images are processed by sharpening and enhancing edges in order to get more accurate detections. Different blob, corner and scale based feature detectors are tested on both optical and radar images. The list of tested detectors includes SIFT, SURF, FAST, MSER, Harris, GFTT, ORB, BRISK and Star. In this work, five of these detectors compute their own descriptors (SIFT, SURF, ORB, BRISK, and BRIEF), while the others use the steps involved in the SIFT descriptor to compute the feature vectors describing the detected keypoints. A filtering process is proposed in order to control the number of keypoints extracted from high-resolution satellite images for real-time processing. In this step, the keypoints, or ground control points (GCPs), are sorted according to their response strength measured based on their cornerness.
A threshold value is chosen to control the extracted keypoints and finalize the extraction phase. Then, the pairwise matches between the input images are calculated by matching the corresponding feature vectors. Once the list of tie points is calculated, a full registration process follows, applying different geometric transformations to perform the warping phase. Finally, once the transformation model estimation is done, the registered version is blended and composited. The results included in this chapter show good performance for invariant local feature detectors. For example, SIFT, SURF, Harris, FAST and GFTT achieve better performance on optical images, while SIFT also gives better results on radar images, which suffer from speckle noise. Furthermore, by measuring the inlier ratios, repeatability, and robustness against noise, a variety of comparisons have been made using different local feature detectors and descriptors, in addition to evaluating the whole registration process. The tested optical and radar images are from the RapidEye, Pleiades, TET-1, ASTER, IKONOS-2, and TerraSAR-X satellite sensors at different spatial resolutions, covering areas in Australia, Egypt, and Germany.
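The transformation-model-estimation step above can be sketched compactly: given the matched tie points, a 2-D affine model is fitted by least squares and then used to warp coordinates. The chapter's actual model family (affine, projective, or polynomial) is not specified here; affine is used purely for illustration, and the tie points below are synthetic.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform from matched tie points.
    src, dst: (N, 2) arrays; returns a 2x3 matrix M such that
    dst ~= src @ M[:, :2].T + M[:, 2]."""
    A = np.hstack([src, np.ones((len(src), 1))])   # design matrix rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solves A @ M = dst
    return M.T

def apply_affine(M, pts):
    """Map (N, 2) points through the affine model M."""
    return pts @ M[:, :2].T + M[:, 2]

rng = np.random.default_rng(0)
true = np.array([[1.01, 0.02, 5.0],
                 [-0.03, 0.99, -2.0]])             # mild rotation/scale + shift
src = rng.random((40, 2)) * 100                    # synthetic GCP coordinates
dst = apply_affine(true, src)                      # their matched counterparts
M = fit_affine(src, dst)
```

In practice the fit would be wrapped in a robust estimator such as RANSAC, since the matched feature vectors always contain some outliers.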
ABSTRACT We present here model results from our 3-D Monte Carlo model which simulates the scattering of solar radiation by hydrogen atoms at Lyman-α wavelengths in Titan's exosphere. We apply our model to data obtained by the Hydrogen Deuterium Absorption Cell (HDAC) onboard Cassini during the flyby T9 in December 2005, which measured the Lyman-α radiation from Titan's exosphere. We use our model to simulate the HDAC measurements and obtain best-fitting hydrogen profiles in the altitude range between the exobase, located at 1,500 km, and 30,000 km. In our model, hydrogen atoms act as a scattering medium, whereas methane acts as an absorber. The methane profile between 900 and 2,000 km is taken from INMS data and extrapolated using a 1-D exospheric Monte Carlo model. We applied different hydrogen distributions to our model and furthermore performed a sensitivity study on the hydrogen density at the exobase, within the values found in the literature, which vary by one order of magnitude, as well as on the exospheric temperature.
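The scatter-versus-absorb branching at the heart of such a model can be illustrated with a toy Monte Carlo draw: at each photon-gas interaction the photon is absorbed with probability proportional to the CH4 opacity share. The densities and cross sections below are placeholder values chosen for the illustration, not Titan exosphere numbers.

```python
import numpy as np

rng = np.random.default_rng(3)

def absorbed_fraction(n_photons, n_h, sig_h, n_ch4, sig_ch4):
    """Fraction of photon-gas interactions ending in CH4 absorption rather than
    H scattering, sampled by Monte Carlo from the opacity ratio."""
    p_abs = n_ch4 * sig_ch4 / (n_h * sig_h + n_ch4 * sig_ch4)
    return float((rng.random(n_photons) < p_abs).mean())

# toy inputs: equal scattering and absorption opacity -> fraction near 0.5
f = absorbed_fraction(200_000, n_h=1.0, sig_h=1.0, n_ch4=1.0, sig_ch4=1.0)
```

A full radiative-transfer model would additionally sample free path lengths, scattering angles, and frequency redistribution along each photon trajectory; only the branching decision is shown here.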
In this contribution a sensor fusion system for detecting and modelling the driving environment of a road vehicle is presented. The central design decisions for the system are introduced and discussed in depth using the example of object detection based on the data of a laser scanner and a video camera. The fusion system consists of three processing levels built on one another. At the sensor-near level (low level), the camera images are pre-processed (image enhancement) and the driving corridor is detected from the laser scanner data. At the object level (middle level), object hypotheses are generated from the single sensors and fused into model objects. For the camera, image features such as contours, road texture and distinctive corner points are used. Finally, at the abstract level (high level), hazard detection will be performed in the future. The sensor fusion system is modular and can therefore be flexibly adapted and extended. Contribution to the VDI conference "Optische Technologien in der Fahrzeugtechnik", Leonberg, 3 and 4 June 2008. See also the complete record of the conference, ITRD number D364540.
Airborne linear array sensors present new challenges for photogrammetric software. The push-broom nature of these sensor systems has the potential for very high quality images, but these are heavily influenced by the dynamics of the aircraft during acquisition. Fortunately, highly precise position and attitude measurements have become possible, using today's inertial measuring units (IMUs). This allows image restoration to the sub-pixel level. The sensor discussed in detail here is a "three-line camera" with additional multispectral lines. The three lines are one looking forward, one in the nadir position and one looking backward with respect to the flight path. Extensive software processes are necessary to produce traditional photogrammetric products from a push-broom airborne sensor. The first steps of the ground processing flow are off-loading imagery and supporting data from the mass memory system of the sensor, post-processing of GPS/IMU data and image rectificati...
Using a modified Gaussian approximation to the depth distribution of the energy dissipation function for electron bombardment, an analytical expression was derived for electron-beam-induced current (EBIC) at a Schottky barrier parallel to the bombarded surface. Comparison of theory and experiment for the voltage dependence of EBIC for 14 specimens (including p- and n-type GaAs and Si) provided values for the
We expect commercial high-resolution imaging systems able to provide data with 25 cm ground sample distance (GSD) or better in the near future. For selling the data, it is necessary to re-sample it to 30 cm. The situation is similar when the satellite is swung perpendicular to its flight direction: the GSD then varies with the angle to the nadir direction. In this paper a method is proposed in which the resolution is adjusted adaptively according to the requirements.
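The angle dependence can be sketched with the textbook flat-Earth approximation: the slant range and the ground projection each contribute a 1/cos(theta) factor to the cross-track GSD. This is a generic geometric estimate, not the paper's specific method, and the numbers below are only those quoted in the abstract.

```python
import math

def cross_track_gsd(gsd_nadir_cm, theta_deg):
    """Flat-Earth estimate: cross-track GSD grows as 1/cos^2(theta)
    for an off-nadir pointing angle theta (degrees)."""
    c = math.cos(math.radians(theta_deg))
    return gsd_nadir_cm / c ** 2

def resampling_factor(gsd_cm, target_cm=30.0):
    """Factor mapping the variable native GSD to the fixed product GSD."""
    return target_cm / gsd_cm

g = cross_track_gsd(25.0, 30.0)   # 25 cm at nadir grows to ~33.3 cm at 30 deg
r = resampling_factor(25.0)       # nadir data must be coarsened by 1.2x
```

An adaptive scheme would compute this factor per viewing geometry, so that near-nadir scenes are down-sampled to 30 cm while strongly off-nadir scenes, whose native GSD already exceeds 30 cm, are left at (or interpolated toward) their natural resolution.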
The Sentinel-4 instrument is an imaging spectrometer developed by Airbus under ESA contract in the frame of the joint European Union (EU)/ESA COPERNICUS program. Sentinel-4 will provide accurate measurements of trace gases from geostationary orbit, including key atmospheric constituents such as ozone, nitrogen dioxide, sulfur dioxide and formaldehyde, as well as aerosol and cloud properties. Key to achieving these atmospheric measurements are the two CCD detectors, covering the wavelength ranges 305 nm to 500 nm (UVVIS) and 750 nm to 775 nm (NIR), respectively. The paper describes the architecture and operation of these two CCD detectors, which have an unusually high full-well capacity and a very specific architecture and read-out sequence to match the requirements of the Sentinel-4 instrument. The key performance aspects and their verification through measurement are presented, with a focus on an unusual, bi-modal dark signal generation rate observed during test.
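A dark signal generation rate is conventionally extracted by fitting mean dark signal against integration time; a bi-modal rate then shows up as two distinct slope populations across the pixel map. The sketch below illustrates this standard ramp fit on synthetic values; the numbers are invented and do not describe the Sentinel-4 CCDs.

```python
import numpy as np

def dark_rate(t_int, dark_mean):
    """Dark signal generation rate as the least-squares slope of
    mean dark signal vs. integration time."""
    slope, _offset = np.polyfit(t_int, dark_mean, deg=1)
    return slope

t = np.array([1.0, 2.0, 4.0, 8.0])   # integration times, s
d = 50.0 * t + 120.0                 # synthetic ramp: 50 e-/s on a 120 e- offset
rate = dark_rate(t, d)
```

Applied pixel by pixel, a histogram of the fitted rates would directly reveal whether the dark signal population is uni- or bi-modal.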
Low-resolution thermal infrared array sensors can be used to detect human bodies and motion. Segmentation and deriving features from the segmented shape using such devices remains challenging. For improving and testing segmentation results, a sensor fusion approach using a Kinect sensor can be used to automatically receive ground-truth data. After performing a spatial calibration, experiments were performed to receive data for training and testing. A measure of difference to the ground-truth data is defined as error rate. Probability functions can be derived to determine whether a human is present, appearing or disappearing at a specific pixel. Optimization using Gaussian blur results in shapes ready for segmentation. A machine learning approach that uses conditional random fields on the ground-truth data generated by sensor fusion can be trained to reconstruct the ground-truth data. Testing different models showed that a spatial model that consists of a 4-connected neighborhood ach...
The German Aerospace Center DLR is involved in several hyperspectral missions, for Earth remote sensing (e.g. EnMAP) but also for deep space and planetary missions (e.g. the Mercury mission Bepi Colombo). Hyperspectral instruments are designed for the characterization of planetary surfaces, oceans and the atmosphere. These spectrometers operate in the visible (VIS), near infrared (NIR) and short wave infrared (SWIR) up to the thermal infrared (TIR) spectral range, with a spectral sampling below 10 nm up to 100 nm. In the spatial domain these instruments have more than 1000 pixels with a Ground Sampling Distance (GSD) of about 30 m up to 90 m. The paper describes the calibration and performance verification of a breadboard model for future spectrometers on space-borne platforms. These procedures include measurements of the dark signal (DS), the linearity and deviation from linearity, the noise behavior and signal-to-noise ratio (SNR) as well as the photon transfer curve (PTC), the absolute radiometric calibration and the spectral imaging performance, i.e. the spectral resolution.
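The photon transfer curve mentioned among these procedures exploits the fact that, for a shot-noise-limited detector, signal variance grows linearly with signal mean, and the slope gives the inverse conversion gain. The sketch below demonstrates the method on synthetic Poisson flat fields; the gain value and illumination levels are assumptions for the illustration, not breadboard results.

```python
import numpy as np

rng = np.random.default_rng(2)

def ptc_gain(mean_dn, var_dn):
    """Conversion gain K (e-/DN) from the PTC slope: var_dn = mean_dn / K."""
    slope, _offset = np.polyfit(mean_dn, var_dn, deg=1)
    return 1.0 / slope

true_gain = 2.0                                    # e-/DN (assumed)
levels = [1_000, 5_000, 20_000, 50_000]            # mean electrons per pixel
frames = [rng.poisson(lam, size=200_000) / true_gain for lam in levels]
gain = ptc_gain([f.mean() for f in frames], [f.var() for f in frames])
```

On real data the dark signal is subtracted first and the fit is restricted to the shot-noise-dominated part of the curve, below full-well roll-off.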
