Lecture Image Processing and Interpretation
Two prime approaches to the use of remote sensing
• 1) standard photo-interpretation of scene
content
• 2) use of digital image processing and
classification techniques that are generally
the mainstay of practical applications of
information extracted from sensor data
sets
To accomplish this, we will utilize just one Landsat TM subscene that covers
the Morro Bay area on the south-central coast of California
In the image below, brighter portions correspond to higher energy levels
Image interpretation
• relies on one or both of these approaches:
– Photointerpretation: the interpreter uses his/her knowledge and
experience of the real world to recognize scene objects
(features, classes, materials) in photolike renditions of the
images acquired by aerial or satellite surveys of the targets
(land; sea; atmospheric; planetary) that depict the targets as
visual scenes with variations of gray-scale tonal or color patterns
(more generally, spatial or spectral variability that mirrors the
differences from place to place on the ground)
– Machine processing: manipulations (usually computer-based)
that analyze and reprocess the raw data into new visual or
numerical products, which then are interpreted either by
approach 1 or are subjected to appropriate decision-making
algorithms that identify and classify the scene objects into sets of
information
Image Processing
• Computer-Assisted Scene Interpretation (CASI);
also called Image Processing
• The techniques fall into three broad categories:
– Image Restoration and Rectification
– Image Enhancement
– Image Classification
• There is a variety of CASI methods:
contrast stretching, band ratioing, band
transformation, Principal Component Analysis,
Edge Enhancement, Pattern Recognition, and
Unsupervised and Supervised Classification
Classification
• Classification is probably the most informative
means of interpreting remote sensing data
• The output from these methods can be
combined with other computer-based programs.
• The output can itself become input for organizing
and deriving information utilizing what is known
as Geographic Information Systems (GIS)
For the Morro Bay subscene, the various images shown in this Section
were created using the IDRISI image processing software
(it's worthwhile to check the Clark Labs website (http://www.clarklabs.org/)
[Clark University in Worcester, Mass.]).
The IDRISI program is especially user-friendly to students wishing to
gain experience in image processing
Image Classification
• In classifying features in an image we use
the elements of visual interpretation to
identify homogeneous groups of pixels
which represent various features or land
cover classes of interest. In digital
images it is possible to model this process,
to some extent, by using two methods:
Unsupervised Classifications and
Supervised Classifications.
• Unsupervised Classifications
This is a computerized method, requiring no direction from the
analyst, in which pixels with similar digital numbers are
grouped together into spectral classes using statistical
procedures such as nearest neighbour and cluster
analysis. The resulting image may then be interpreted
by comparing the clusters produced with maps,
airphotos, and other materials related to the image site.
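The clustering step can be sketched with a minimal k-means loop. This is an illustrative implementation, not the exact algorithm used by IDRISI; the toy pixel values and the simple evenly spaced centroid seeding are assumptions made for the demo (a production implementation would use something like k-means++ seeding).

```python
import numpy as np

def kmeans_classify(pixels, k=3, iters=20):
    """Group pixel spectra (n_pixels x n_bands) into k spectral classes."""
    # Simple deterministic init: evenly spaced pixels as starting centroids
    centroids = pixels[:: max(1, len(pixels) // k)][:k].astype(float)
    for _ in range(iters):
        # Assign every pixel to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = pixels[labels == c].mean(axis=0)
    return labels, centroids

# Toy two-band "scene" with three clearly separated spectral groups
pixels = np.array([[10, 12], [11, 13], [100, 105],
                   [98, 110], [200, 10], [205, 12]], dtype=float)
labels, centroids = kmeans_classify(pixels, k=3)
```

The resulting spectral classes carry no labels; as the text notes, the analyst still has to interpret each cluster against maps and airphotos.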
• Supervised Classification:
The analyst identifies representative training areas of known cover
types on the image; the computer derives a spectral signature for each
class from those areas and then assigns every remaining pixel to the
class whose signature it most closely matches.
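One simple supervised decision rule is minimum-distance-to-means, sketched below. The two-band training values and the class names ("water", "vegetation") are hypothetical; real work would use analyst-digitized training polygons over actual TM bands.

```python
import numpy as np

def train_signatures(training_pixels, training_labels):
    """Mean spectrum per class, built from analyst-chosen training pixels."""
    sigs = {}
    for name in set(training_labels):
        members = training_pixels[np.array(training_labels) == name]
        sigs[name] = members.mean(axis=0)
    return sigs

def classify_min_distance(pixels, signatures):
    """Assign each pixel to the class with the nearest mean spectrum."""
    names = list(signatures)
    means = np.array([signatures[n] for n in names])
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return [names[i] for i in dists.argmin(axis=1)]

# Hypothetical two-band training data for two cover types
train_px = np.array([[30, 20], [35, 25], [180, 200], [175, 190]], dtype=float)
train_lb = ["water", "water", "vegetation", "vegetation"]
sigs = train_signatures(train_px, train_lb)
result = classify_min_distance(np.array([[32, 22], [178, 195]], float), sigs)
```

Other common decision rules (parallelepiped, maximum likelihood) differ only in how "closeness" to a class signature is measured.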
• Limitations to Image Classification:
classification has to be approached with caution because it
is a complex process that rests on many assumptions.
8-bit image
(0 - 255 brightness levels)
Image Histogram
x-axis = 0 to 255
y-axis = number of pixels
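An image histogram of this kind is just a count of pixels at each digital number (DN). A minimal sketch, using a tiny made-up 8-bit array in place of a real TM band:

```python
import numpy as np

# A tiny 8-bit image; a real TM band is the same idea at larger size
img = np.array([[0, 64, 64],
                [128, 255, 64]], dtype=np.uint8)

# x-axis: digital number 0..255; y-axis: pixel count at each DN
hist = np.bincount(img.ravel(), minlength=256)
```

The shape of this histogram (how much of the 0-255 range is actually occupied) drives the contrast-stretching decisions discussed next.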
Image Enhancement
• Contrast Stretching: Quite often the useful data in a
digital image populates only a small portion of the available range of
digital values (commonly 8 bits or 256 levels). Contrast
enhancement involves changing the original values so that more of
the available range is used; this increases the contrast
between features and their backgrounds. There are several types of
contrast enhancements which can be subdivided into Linear and
Non-Linear procedures.
Image Enhancement
• Linear Contrast Stretch: This involves identifying lower and upper bounds
from the histogram (usually the minimum and maximum brightness values in
the image) and applying a transformation to stretch this range to fill the full
range.
The linear contrast stretch enhances the contrast in the image with light toned
areas appearing lighter and dark areas appearing darker, making
visual interpretation much easier.
This example illustrates the increase in contrast in an image before (left) and after (right)
a linear contrast stretch.
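The stretch itself is a single linear mapping from the image's occupied DN range onto the full 0-255 range. A minimal sketch, with a made-up low-contrast image:

```python
import numpy as np

def linear_stretch(img, out_min=0, out_max=255):
    """Map the image's min..max DN range linearly onto out_min..out_max."""
    lo, hi = float(img.min()), float(img.max())
    scaled = (img.astype(float) - lo) / (hi - lo) * (out_max - out_min) + out_min
    return np.round(scaled).astype(np.uint8)

# Low-contrast image occupying only DNs 60..120 of the 0..255 range
img = np.array([[60, 80], [100, 120]], dtype=np.uint8)
stretched = linear_stretch(img)  # now spans the full 0..255 range
```

In practice the lower and upper bounds are often taken from histogram percentiles (e.g. 2% and 98%) rather than the absolute min and max, so a few outlier pixels do not waste the output range.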
Spatial Filtering
• Spatial filters are designed to highlight or suppress
features in an image based on their spatial frequency.
The spatial frequency is related to the textural
characteristics of an image. Rapid variations in
brightness levels ('roughness') reflect a high spatial
frequency; 'smooth' areas with little variation in
brightness level or tone are characterized by a low
spatial frequency. Spatial filters are used to suppress
'noise' in an image, or to highlight specific image
characteristics.
• Low-pass Filters
• High-pass Filters
• Directional Filters
• etc
Spatial Filtering
• Low-pass Filters: These are used to emphasize large
homogeneous areas of similar tone and to suppress the smaller
detail. Low-frequency components are retained, giving the
image a smoother appearance.
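A common low-pass filter is the 3x3 moving average. The sketch below is a deliberately simple loop version (real packages use fast convolution); the toy "noisy" image is an assumption for the demo.

```python
import numpy as np

def mean_filter(img, size=3):
    """Moving-average (low-pass) filter; edges handled by replication."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# A single bright "noise" pixel is spread across its 3x3 neighborhood
noisy = np.zeros((5, 5))
noisy[2, 2] = 90
smoothed = mean_filter(noisy)
```

Flat areas pass through unchanged, while isolated speckle is averaged away, which is exactly the noise-suppression behavior described above.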
Edge Detection
Lakes & Streams
Edge Detection
Fractures & Shoreline
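Edge detection like the shoreline and fracture examples above is a high-pass operation; one classic formulation uses the 3x3 Sobel kernels. A minimal sketch, with a made-up vertical brightness step standing in for a shoreline:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (a high-pass operation)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # x gradient
    ky = kx.T                                                    # y gradient
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    mag = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            mag[i, j] = np.hypot((win * kx).sum(), (win * ky).sum())
    return mag

# A vertical brightness step (like a shoreline) lights up only at the boundary
img = np.zeros((5, 6))
img[:, 3:] = 100
edges = sobel_magnitude(img)
```

The flat interiors produce zero response; only the columns straddling the step register a gradient, which is why edge maps pick out lakes, streams, and fractures so cleanly.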
Image Ratios
• It is possible to divide the digital numbers of one
image band by those of another image band to create a
third image. Ratio images may be used to remove the
influence of light and shadow on a ridge due to the sun
angle. It is also possible to calculate certain indices
which can enhance vegetation or geology
Sensor       Image Ratio   EM Spectrum   Application
Landsat TM   Bands 3/2     red/green     Soils
Landsat TM   Bands 4/3     PhotoIR/red   Biomass
Landsat TM   Bands 7/5     SWIR/NIR      Clay Minerals/Rock Alteration
For example:
Vegetation is green
Surface water is blue
Playa is gray and white
(Playas are dry lakebeds)
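A band ratio is a pixel-by-pixel division of two co-registered bands. The sketch below uses made-up DN values for TM band 4 (photo-IR) over band 3 (red), the "biomass" ratio from the table above; only the division itself is the technique.

```python
import numpy as np

def band_ratio(numerator, denominator):
    """Pixel-by-pixel ratio of two co-registered bands.
    A small epsilon guards against division by zero in dark pixels."""
    return numerator.astype(float) / (denominator.astype(float) + 1e-9)

# Hypothetical DN values: vegetation is bright in the near-IR and dark
# in red, so it stands out with a high 4/3 ratio
tm4 = np.array([[200.0, 50.0]])   # left pixel: vegetation; right: water
tm3 = np.array([[50.0, 60.0]])
ratio = band_ratio(tm4, tm3)
```

Because illumination differences scale both bands by roughly the same factor, the ratio largely cancels them out, which is how ratioing suppresses the sun-angle shadowing mentioned above.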
Color display
Relies on display hardware to convert between DN values and gray levels