
Digital Image Processing

Dipl.-Ing. Sushmita Timilsina


MSc. Geodesy and Geoinformatics
Land Management Training Center

COMPILED FOR TRAINING PURPOSE NOT FOR DISTRIBUTION - ST
FUNDAMENTALS OF DIGITAL IMAGE PROCESSING

What is an Image?
❑ An image is a spatial representation of a two-dimensional or three-dimensional scene.
❑ An image is an array, or matrix, of pixels (picture elements) arranged in columns and rows.

Digital Image
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.
Pixel values typically represent gray levels, colours, heights, opacities, etc.
Remember: digitization implies that a digital image is an approximation of a real scene.

[Figure: a single pixel within an image grid]

Picture elements, Image elements, pels, and pixels
A digital image is composed of a finite number of elements, each of
which has a particular location and value.
These elements are referred to as picture elements, image elements, pels,
and pixels.
Pixel is the term most widely used to denote the elements of a digital
image.

What Is a Digital Image?
An image may be defined as a two-dimensional function, f(x, y),
where x and y are spatial (plane) coordinates, and the
amplitude of f at any pair of coordinates (x,y) is called the
intensity or gray level of the image at that point.

Digital Image:
When x, y and the intensity values of f are all finite, discrete
quantities, we call the image a digital image.

▪ Color Image:
f(x, y) = [r(x, y), g(x, y), b(x, y)]^T

The field of digital image processing refers to processing digital images by means of a digital computer.
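As a minimal illustration of this pixel-matrix view (a sketch of my own, assuming NumPy is available; not taken from the original slides), a grayscale image is simply an M x N array, and a colour image stacks the r, g, b components:

```python
# Sketch: how a digital image is held in memory.
import numpy as np

# Grayscale image: an M x N matrix of 8-bit intensity values (0 = black, 255 = white).
gray = np.zeros((4, 5), dtype=np.uint8)   # M = 4 rows, N = 5 columns
gray[1, 2] = 200                          # set one pixel's gray level

# Colour image: the three aligned component images r(x, y), g(x, y), b(x, y),
# stored together as an M x N x 3 array.
rgb = np.zeros((4, 5, 3), dtype=np.uint8)
rgb[1, 2] = (255, 0, 0)                   # one pure-red pixel

print(gray.shape, rgb.shape)              # (4, 5) (4, 5, 3)
```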
Image Representation

A digital image is composed of M rows and N columns of pixels, each storing a value.

Pixel values are most often gray levels in the range 0-255 (black to white).

We will see later on that images can easily be represented as matrices.

Image Representation

How are images represented in the computer?

Image Representation: Color Images

Digital Image
Common image formats include:
1 sample per point (B&W or Grayscale)
3 samples per point (Red, Green, and Blue)
4 samples per point (Red, Green, Blue, and “Alpha”,
a.k.a. Opacity)

Common image file formats
GIF (Graphic Interchange Format)

PNG (Portable Network Graphics)

JPEG (Joint Photographic Experts Group)

TIFF (Tagged Image File Format)

PGM (Portable Gray Map)

FITS (Flexible Image Transport System)

Spatial Resolution
The spatial resolution of an image is determined by how sampling was carried out.
Spatial resolution simply refers to the smallest discernible detail in an image.
Vision specialists will often talk about pixel size.
Graphic designers will talk about dots per inch (DPI).

Spatial Resolution

Vision specialists will often talk about pixel size


Spatial Resolution
[Figure: the same image at 1024 x 1024, 512 x 512, 256 x 256, 128 x 128, 64 x 64, and 32 x 32 pixels]

Graphic designers will talk about dots per inch
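To make the sampling idea concrete, here is a small sketch (my own illustration, assuming NumPy) that reduces spatial resolution by keeping every n-th pixel, which is what the image series above simulates:

```python
import numpy as np

def downsample(image, factor):
    """Crude subsampling sketch: keep every `factor`-th pixel in each direction."""
    return image[::factor, ::factor]

img = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
print(downsample(img, 2).shape)   # (512, 512)
print(downsample(img, 32).shape)  # (32, 32): fine detail is no longer discernible
```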


Intensity Level Resolution
Intensity level resolution refers to the number of intensity levels used to represent the image.
The more intensity levels used, the finer the level of detail discernible in an image.
Intensity level resolution is usually given in terms of the number of bits used to store each intensity level.
Number of Bits | Number of Intensity Levels | Examples
1              | 2                          | 0, 1
2              | 4                          | 00, 01, 10, 11
4              | 16                         | 0000, 0101, 1111
8              | 256                        | 00110011
16             | 65,536                     | 1010011001100110
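A small sketch (my own, assuming NumPy and 8-bit input) of how an image is requantized to fewer intensity levels:

```python
import numpy as np

def requantize(image, bits):
    """Reduce an 8-bit image to 2**bits intensity levels (illustrative sketch)."""
    levels = 2 ** bits
    step = 256 // levels
    return (image // step) * step     # map each DN onto the floor of its bin

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(np.unique(requantize(img, 1)))        # 2 levels (1 bpp)
print(np.unique(requantize(img, 4)).size)   # 16 levels (4 bpp)
```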
Intensity Level Resolution
[Figure: the same image displayed with 256 grey levels (8 bits per pixel), 128 (7 bpp), 64 (6 bpp), 32 (5 bpp), 16 (4 bpp), 8 (3 bpp), 4 (2 bpp), and 2 (1 bpp)]
Resolution: How Much Is Enough?
The big question with resolution is always: how much is enough?
This all depends on what is in the image and what you would like to do with it.
Key questions include:
Does the image look aesthetically pleasing?
Can you see what you need to see within the image?

The picture on the right is fine for counting the number of cars, but not for reading the number plate.
Image Processing??
WHY Digital Image Processing?
Interest in digital image processing methods stems
from two principal application areas:
1. Improvement of pictorial information for
human interpretation
2. Processing of image data for storage, transmission, and
representation for autonomous machine perception

Fields that Use Digital Image Processing
Today, there is almost no area of technical endeavor that is
not impacted in some way by digital image processing.

Gamma-Ray Imaging
X-Ray Imaging
Imaging in the Ultraviolet Band
Imaging in the Visible and Infrared Bands
Imaging in the Microwave Band
Imaging in the Radio Band

Digital Image Processing
DIGITAL IMAGE PROCESSING IS THE MANIPULATION OF DIGITAL DATA WITH THE HELP OF COMPUTER HARDWARE AND SOFTWARE TO PRODUCE DIGITAL MAPS IN WHICH SPECIFIC INFORMATION HAS BEEN EXTRACTED AND HIGHLIGHTED.

WHAT IS DIGITAL IMAGE PROCESSING?
DIP Definition:
A Discipline in Which Both the Input and Output of a Process are Images.

Image → Process → Image

Fundamental Steps in DIP

A. Image Acquisition
B. Image Pre-Processing
C. Image Registration
D. Image Enhancement and Filtering
E. Image Classification and Analysis
F. Application
A. Image Acquisition

Images are typically generated by illuminating a scene and absorbing the energy reflected by the objects in that scene.
Image Formation
Incoming energy lands on a sensor material responsive to that type of energy, and this generates a voltage. Collections of sensors are arranged to capture images.

[Figure: a single imaging sensor, a line of image sensors, and an array of image sensors]


Image Formation
There are two parts to the image formation process:

◦ The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.

◦ The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.

B. Image Pre-Processing
Remotely sensed raw data generally contain flaws and deficiencies introduced by the imaging sensor mounted on the satellite. Correcting these deficiencies and removing the flaws from the data is termed pre-processing. Pre-processing involves correcting geometric distortions, calibrating the data radiometrically, and eliminating noise present in the data.
Pre-processing: correcting for radiometric and geometric errors in data
◦ Radiometric and atmospheric correction

Radiometric Correction
Radiometric errors are caused by detector imbalance and atmospheric deficiencies. Radiometric corrections are transformations of the data that remove errors which are geometrically independent. Radiometric corrections are also called cosmetic corrections and are done to improve the visual appearance of the image.
Radiometric correction involves:
◦ Noise correction: electronic noise, both random and periodic
◦ Sun-angle correction: for comparing and mosaicking images acquired at different times of the year
◦ Correction for the atmosphere: subtract the haze DN values from the DNs of the different bands

Radiometric Correction?
Radiometric corrections may be necessary due to variations in:
scene illumination
viewing geometry
atmospheric conditions
sensor noise and response.
Each of these will vary depending on the specific sensor and platform used to acquire the data, and on the conditions during data acquisition.

Digital Number
Radiometric error affects the Digital Number (DN) stored in an image.
Sensor-induced errors: caused by mechanical, electronic, or communication failure.
Atmosphere-induced errors: caused by the interaction of EM energy with atmospheric constituents.

Digital Number
– An image can have DN values ranging from 0 to a maximum value depending on its radiometric resolution:
• e.g., an 8-bit image can have DNs ranging from 0 to 255
• a 12-bit image can have DNs ranging from 0 to 4095
• etc.

– When the image data are visualized on a computer screen, they are displayed as brightness values for each screen pixel:
• a data pixel with a larger value is brighter than one with a smaller value
• however, unlike the image data, screen pixels can only have 256 unique brightness values (i.e., 0 to 255)
• this limitation prevents the data from being displayed with brightness exactly equal to their real (DN) value

Radiometric Correction Process
DN (raw value from the sensor)
↓ convert DNs to radiance based on the rescaling factors provided in the metadata file
At-sensor radiance
↓ requires additional information: Earth-sun distance, solar zenith angle, exoatmospheric irradiance (often found in the metadata)
Top of the Atmosphere (TOA) reflectance
↓ requires knowledge of atmospheric conditions and aerosol properties at the time the image was acquired
Surface reflectance
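As a hedged sketch of the first two steps (my own illustration; the gain, offset and ESUN values below are hypothetical placeholders, and real rescaling factors must come from the sensor's metadata file):

```python
import math

def dn_to_radiance(dn, gain, offset):
    """At-sensor radiance from a raw DN, using linear rescaling factors
    of the kind provided in the image metadata."""
    return gain * dn + offset

def radiance_to_toa_reflectance(radiance, esun, sun_elevation_deg, earth_sun_dist_au):
    """TOA reflectance: pi * L * d^2 / (ESUN * cos(solar zenith))."""
    sun_zenith = math.radians(90.0 - sun_elevation_deg)
    return (math.pi * radiance * earth_sun_dist_au ** 2) / (esun * math.cos(sun_zenith))

L = dn_to_radiance(dn=120, gain=0.01, offset=-0.1)   # hypothetical rescaling factors
rho = radiance_to_toa_reflectance(L, esun=1536.0,    # hypothetical band irradiance
                                  sun_elevation_deg=45.0, earth_sun_dist_au=1.0)
print(round(rho, 4))
```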
Radiometric Correction Process
Most image processing software packages have radiometric and atmospheric correction tools.

[Figure: Landsat 8 image before (left) and after correction (right)]


Noise Removal
Image noise is any unwanted disturbance in image data that is due to limitations in the sensing, signal digitization, or data recording process. A potential source is electronic interference between sensor components.
Noise can either degrade or totally mask the true radiometric information content of a digital image.
Noise removal usually precedes any subsequent enhancement or classification of the image data.

Noise Removal
The objective is to restore an image to as close an approximation of the original scene as possible.
Line striping or banding (destriping): line striping occurs due to non-identical detector response. Although the detectors for all satellite sensors are carefully calibrated and matched before the launch of the satellite, with time the response of some detectors may drift to higher or lower levels, resulting in relatively higher or lower values.
Line striping is corrected using per-detector histograms.

Sensor Corrections: Striping
Local averaging
Normalization
Striping was common in early Landsat MSS data due to variations and drift over time in the response of the six MSS detectors. The 'drift' was different for each of the six detectors, causing the same brightness to be represented differently by each detector. The corrective process made a relative correction among the six sensors to bring their apparent values in line with each other (a sketch follows below).

Images: Lillesand-Kiefer; Campbell 10.4
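A minimal destriping sketch (my own, assuming NumPy and that detector k recorded every n-th line, as with the six MSS detectors); it matches each detector's line statistics to the whole scene's:

```python
import numpy as np

def destripe(image, n_detectors=6):
    """Relative destriping sketch: rescale the lines written by each detector
    so their mean and spread match the overall scene statistics."""
    out = image.astype(float)                     # float copy of the band
    target_mean, target_std = out.mean(), out.std()
    for k in range(n_detectors):
        lines = out[k::n_detectors]               # every n-th line: one detector
        m, s = lines.mean(), lines.std()
        out[k::n_detectors] = (lines - m) / (s + 1e-9) * target_std + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```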
Noise Removal
Line drop: occurs due to recording problems, when one of the detectors of the sensor in question either gives wrong data or stops functioning. The Landsat ETM, for example, has 16 detectors in all its bands except the thermal band. The loss of one of the detectors would result in every sixteenth scan line being a string of zeros that would plot as a black line on the image.
Dropped lines are normally 'corrected' by replacing the line with the pixel values in the line above or below, or with the average of the two.
Detector: the component of a remote sensing system that converts electromagnetic radiation into a recorded signal.

Sensor Corrections: Line Dropout

43 47 51 57
40 46 50 54
 0  0  0  0
38 40 42 50

Solution: replace the dropped line with the mean of the pixels above and below:

43 47 51 57
40 46 50 54
39 43 46 52
38 40 42 50

Images: Lillesand-Kiefer; Campbell 10.4
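A sketch of that correction (my own, assuming NumPy and that a dropped line is all zeros), reproducing the numbers above:

```python
import numpy as np

def fix_line_dropout(image):
    """Replace all-zero scan lines with the mean of the lines above and below."""
    out = image.astype(float)
    for i in range(1, out.shape[0] - 1):
        if not out[i].any():                  # a dropped line is a string of zeros
            out[i] = (out[i - 1] + out[i + 1]) / 2.0
    return out.astype(np.uint8)

img = np.array([[43, 47, 51, 57],
                [40, 46, 50, 54],
                [ 0,  0,  0,  0],
                [38, 40, 42, 50]], dtype=np.uint8)
print(fix_line_dropout(img)[2])               # [39 43 46 52]
```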
Sun angle correction

The position of the sun relative to the earth changes depending on the time of day and the day of the year.
Solar elevation angle: time- and location-dependent. In the northern hemisphere the solar elevation angle is smaller in winter than in summer.
Sun angle correction: an absolute correction involves dividing the DN value in the image data by the sine of the solar elevation angle. The size of the angle is given in the header of the image data.
Seasonal illumination differences will disturb the analysis if we want to compare sequences of images of the same area taken at different times. The trick is to normalize the images as if they were taken with the sun at zenith.
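A one-line sketch of that normalization (my own illustration, assuming NumPy and an elevation angle read from the image header):

```python
import math
import numpy as np

def sun_angle_correct(image, sun_elevation_deg):
    """Normalize DNs as if the sun were at zenith: DN / sin(solar elevation)."""
    return image.astype(float) / math.sin(math.radians(sun_elevation_deg))

band = np.full((2, 2), 100, dtype=np.uint8)
print(sun_angle_correct(band, 30.0))   # DNs doubled, since sin(30 deg) = 0.5
```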

Atmospheric correction method
The value recorded at any pixel location on a remotely sensed image is not a record of the true ground-leaving radiance at that point, for the signal is attenuated due to absorption and scattering.
The atmosphere affects the measured brightness value of a pixel. Other difficulties are caused by variation in the illumination geometry. Atmospheric path radiance introduces haze into the imagery, thereby decreasing the contrast of the data.

Atmospheric correction method
Pathway A: directly reflected sunlight
Pathway B: skylight
Pathway C: air light
The atmosphere exerts its influence in two directions (up and down).

Major Effects due to Atmosphere

Atmospheric correction method
Two main processes:
Scattering: redirection of EM energy
Absorption: reduction of EM energy
Two main approaches:
Simple, often statistical, methods
Complex radiative-transfer-based methods (including the use of meteorological data)

Scattering
Scattering: disturbance of EM waves by constituents of the atmosphere, resulting in a change of direction and spectral distribution of the EM energy.
Rayleigh scattering occurs when the dimensions of the scatterer are much smaller than the wavelength of the incident electromagnetic radiation. An example is when S-band radar waves are scattered by raindrops. Rayleigh scattering exhibits a strong wavelength dependence.
Mie scattering occurs when the dimensions of the scatterer are about the same as the wavelength of the incident electromagnetic radiation. An example is when light is scattered by small water droplets in clouds.
Nonselective scattering: the influence of large particles such as dust, smoke and rain.

Atmospheric Absorption
Absorption: EM energy is taken up by atmospheric components.
Absorption is wavelength specific.
Bands for EO sensors are therefore chosen within atmospheric windows.

Atmospheric correction

Haze Reduction
Aerial and satellite images often contain haze. Haze is caused by atmospheric scattering and absorption by haze constituents, which depend substantially on the wavelength of the solar radiation. The presence of haze reduces image contrast and makes visual examination of images difficult. Haze has an additive effect, resulting in higher DN values.
One means of haze compensation in multispectral data is to observe the radiance recorded over target areas of zero reflectance. For example, the reflectance of deep clear water is zero in the NIR region of the spectrum. Therefore, any signal observed over such an area represents the path radiance. This value can be subtracted from all the pixels in that band.
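A sketch of that dark-object subtraction (my own, assuming NumPy and a boolean mask marking deep clear water pixels):

```python
import numpy as np

def dark_object_subtraction(band, water_mask):
    """Haze compensation sketch: treat the smallest DN observed over a
    zero-reflectance target (e.g. deep clear water in the NIR) as path
    radiance and subtract it from the whole band."""
    haze_dn = int(band[water_mask].min())
    return np.clip(band.astype(int) - haze_dn, 0, 255).astype(np.uint8)
```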

C. Image Registration (Transformation)
Remotely sensed images are not maps. Frequently, information extracted from remotely sensed images is integrated with map data in a Geographical Information System (GIS). The transformation of a remotely sensed image into a map with scale and projection properties is called geometric correction. Geometric correction of a remotely sensed image is required when the image is to be used in one of the following circumstances:
I. To transform an image to match your map projection.
II. To locate points of interest on map and image.
III. To overlay a temporal sequence of images of the same area, perhaps acquired by different sensors.
IV. To integrate remotely sensed data with GIS.

What is Geometric correction?

Sources of Geometric Distortions of Images
▪ Curvature of the earth
▪ Earth rotation under the sensor while the image is acquired
▪ Panoramic distortion due to the field of view of the sensor
▪ Topography of the terrain

Systematic distortions:
Mostly corrected (automatically) before the image is delivered by the ground station.

Random distortions:
Corrected by using GCPs (ground control points) and a DEM.
[Figure: GCP resampling and image-to-image resampling]

Geometric distortions for airborne images: variations in aircraft/platform altitude, velocity and attitude (pitch, roll, crab).

Geometric correction

Image Registration
Image registration is the translation and alignment process by which two images/maps of like geometry and of the same set of objects are positioned coincident with respect to one another, so that corresponding elements of the same ground area appear in the same place on the registered images. This is often called image-to-image registration.

Image Registration
This process almost always involves relating Ground Control Point (GCP) pixel coordinates with precise geometric correction, since each pixel can be referenced not only by its row or column in the digital image, but also rigorously in degrees, feet or metres in a standard map projection. Whenever accurate area, direction and distance measurements are required, geometric rectification is needed. This is often called image-to-map rectification.

Georeferencing
When an image (raster map) is created, whether by a satellite, an airborne scanner or an office scanner, the image is stored in row-and-column geometry in raster format. There is no relationship between the rows/columns and real-world coordinates (UTM, geographic coordinates, or any other reference map projection). In a process called georeferencing, the relation between row and column numbers and real-world coordinates is established.

Georeferencing
The process of georeferencing involves two steps:

1. Selection of the appropriate type of transformation.
2. Determination of the transformation parameters.

Three techniques of georeferencing are (see the sketch below):
➢ Georeferencing a raster image using corner coordinates.
➢ Georeferencing a raster image using reference points from a georeferenced map.
➢ Georeferencing a raster image using a reference image.
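As a sketch of step 2 (my own illustration, assuming NumPy; the GCP coordinates below are made up), an affine transformation between pixel and map coordinates can be estimated from a handful of control points by least squares:

```python
import numpy as np

def fit_affine(gcp_pixel, gcp_world):
    """Least-squares affine transform from pixel (col, row) to map coordinates.

    gcp_pixel, gcp_world: (n, 2) arrays of matching points, n >= 3.
    Returns a 2 x 3 matrix A such that world ~ A @ [col, row, 1].
    """
    n = len(gcp_pixel)
    design = np.hstack([gcp_pixel, np.ones((n, 1))])         # rows [col, row, 1]
    params, *_ = np.linalg.lstsq(design, gcp_world, rcond=None)
    return params.T

pixel = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
world = np.array([[500000, 4000000], [503000, 4000000],     # hypothetical UTM GCPs
                  [500000, 3997000], [503000, 3997000]], dtype=float)
print(fit_affine(pixel, world))
```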

Geocoding
Geocoding is georeferencing with subsequent resampling of the image raster; its purpose is to produce a geometrically correct image. This process comprises two main steps:
1. Each new raster element is projected, using the transformation parameters, onto the original image.
2. A DN for the new pixel is determined and stored.

Geocoding

Interpolation methods (see the sketch below):
• Nearest neighbour: the value for a pixel in the output image is determined by the value of the nearest pixel in the input image.
• Bilinear: the bilinear interpolation technique is based on a distance-dependent weighted average of the values of the four nearest pixels in the input image.
• Bicubic: the cubic or bicubic convolution uses the sixteen surrounding pixels in the input image. This method is also called cubic spline interpolation.
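A sketch of the two geocoding steps with nearest-neighbour interpolation (my own; `inverse_transform` stands in for the fitted transformation parameters, mapping output pixels back onto the source image):

```python
import numpy as np

def resample_nearest(src, inverse_transform, out_shape):
    """Geocoding sketch: project each output pixel back onto the original image
    (step 1) and store the DN of the nearest input pixel (step 2)."""
    out = np.zeros(out_shape, dtype=src.dtype)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            sr, sc = inverse_transform(r, c)          # float source coordinates
            sr, sc = int(round(sr)), int(round(sc))   # nearest input pixel
            if 0 <= sr < src.shape[0] and 0 <= sc < src.shape[1]:
                out[r, c] = src[sr, sc]
    return out
```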

D. Image enhancement
• The goal is to improve the visual interpretability of an image by increasing the apparent distinction between the features in the scene.

• Why do we need a computer to do the enhancement?
– Our eyes are poor at discriminating the slight radiometric or spectral differences that may characterize such features.
– With computers, these slight differences can be visually amplified to make them readily observable by our eyes.

Image enhancement
Image enhancement techniques can be classified in many ways. Contrast enhancement, also called global enhancement, transforms the raw data using statistics computed over the whole data set. Examples are the linear contrast stretch, the histogram-equalized stretch and the piecewise contrast stretch.
Contrary to this, spatial or local enhancement takes only local conditions into consideration, and these can vary considerably over an image. Examples are image smoothing and sharpening.

Steps in DIP: Image Enhancement
Types of Image Enhancement Operations
• Point operations: the brightness value of each pixel in the image data is modified independently.
• Local operations: the brightness value of each pixel is modified based on neighbouring brightness values.

Note: either form of enhancement can be performed on single-band images or on the individual components of multi-image composites.

Enhancement by Point Processing
• These processing methods are based only on the intensity of single pixels.
Simple intensity transformations:
a) Image negatives:
• Negatives of digital images are useful in numerous applications, such as displaying medical images and photographing a screen with monochrome positive film, with the idea of using the resulting negatives as normal slides.
• Transform function T: g(x, y) = L - f(x, y), where L is the maximum intensity.
[Figure: original image and its negative]
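A sketch of this point operation (my own, assuming NumPy and 8-bit data, so the maximum intensity L is 255):

```python
import numpy as np

def negative(image, L=255):
    """Image negative: g(x, y) = L - f(x, y), applied to every pixel."""
    return (L - image.astype(int)).astype(np.uint8)

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
print(negative(img))   # [[255 155] [ 55   0]]
```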

Enhancement by Point Processing
b) Contrast stretching:
• Low-contrast images can result from poor illumination, lack of dynamic range in the image sensor, or even a wrong setting of the lens aperture during image acquisition.
• The idea behind contrast stretching is to increase the dynamic range of the gray levels in the image being processed.
[Figure: original and contrast-stretched image]
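A minimal linear-stretch sketch (my own, assuming NumPy): the observed [min, max] range of the data is mapped onto the full 0-255 display range:

```python
import numpy as np

def linear_stretch(image):
    """Linear contrast stretch onto the full 8-bit display range."""
    f = image.astype(float)
    lo, hi = f.min(), f.max()
    return ((f - lo) / (hi - lo) * 255.0).astype(np.uint8)

img = np.array([[60, 80], [100, 120]], dtype=np.uint8)   # low-contrast input
print(linear_stretch(img))   # [[  0  85] [170 255]]
```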

Gray-level Thresholding
• A segmentation procedure.
• An input image band is segmented into two classes:
– one class for those pixels having values below a defined gray level (DN)
– one class for those pixels above this value
• The result is a binary classification.
• This binary classification can then be applied to a particular image band to enable display of brightness variations in only a particular class.
[Figure: NIR band of Landsat 7 ETM+, showing only class 1 (water)]
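A sketch of the thresholding step (my own, assuming NumPy; the threshold value is a made-up example):

```python
import numpy as np

def threshold(band, t):
    """Binary classification of one band: 1 where DN <= t, else 0."""
    return (band <= t).astype(np.uint8)

nir = np.array([[12, 40], [90, 8]], dtype=np.uint8)
print(threshold(nir, 20))   # [[1 0] [0 1]]: low-NIR pixels (e.g. water) flagged
```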

Level Slicing
• An enhancement technique whereby the DNs distributed along the x-axis of an image histogram are divided into a series of intervals or "slices".
• All of the DNs falling within a slice are then displayed at a single DN in the output image.

[Figures: histogram of DN values of the NIR band; NIR band of Landsat 7 ETM+; "sliced" NIR band of Landsat 7 ETM+]

Histogram Processing
• The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function p(rk) = nk / n, where rk is the k-th gray level, nk is the number of pixels in the image with that gray level, n is the total number of pixels in the image, and k = 0, 1, ..., L-1.
• p(rk) gives an estimate of the probability of occurrence of gray level rk.
• The shape of the histogram of an image gives us useful information about the possibility for contrast enhancement.
• A narrow histogram indicates little dynamic range and thus corresponds to an image having low contrast.
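That definition translates directly into code (a sketch of my own, assuming NumPy and 8-bit data, so L = 256):

```python
import numpy as np

def histogram(image, L=256):
    """p(r_k) = n_k / n for k = 0, 1, ..., L-1."""
    counts = np.bincount(image.ravel(), minlength=L)   # n_k for each gray level
    return counts / image.size                         # normalize by n

img = np.array([[0, 0], [1, 255]], dtype=np.uint8)
p = histogram(img)
print(p[0], p[1], p[255])   # 0.5 0.25 0.25
```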

What is an Image Histogram?
• A type of histogram that acts as a graphical representation of the tonal ("DN") distribution in a digital image.
• It plots the number of pixels for each tonal/DN value.
• By looking at the histogram for a specific image, a viewer will be able to judge the entire tonal distribution at a glance.

Histogram Equalization
• Scales the original image DN values to equalize the number of DNs in each display histogram bin.
• In this approach, image DN values are assigned to the display levels on the basis of their frequency of occurrence.
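A compact sketch of this (my own, assuming NumPy and 8-bit data): display levels are assigned from the cumulative frequency of occurrence of each DN:

```python
import numpy as np

def equalize(image, L=256):
    """Histogram equalization: map each DN through the cumulative distribution."""
    hist = np.bincount(image.ravel(), minlength=L)
    cdf = hist.cumsum() / image.size                # cumulative p(r_k)
    lut = np.round((L - 1) * cdf).astype(np.uint8)  # lookup table of display levels
    return lut[image]
```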

Example: Histogram Equalization

[Figure: original band 1 (left) and the histogram-equalized ("stretched") result (right)]

Filtering
Filtering is usually carried out on a single band.

Spatial filtering is the process of dividing the image into its constituent spatial frequencies and selectively altering certain spatial features. This technique increases the analyst's ability to discriminate detail.

Filter operations are also used to extract features from images, e.g. edges and lines, for automatically recognizing patterns and detecting objects. This is also called local enhancement.

Filtering
A common filtering procedure involves moving a "window" of a few pixels in dimension (e.g. 3x3, 5x5, etc.) over each pixel in the image, applying a mathematical calculation using the pixel values under that window, and replacing the central pixel with the new value (see the sketch below).

The window is moved along in both the row and column dimensions one pixel at a time, and the calculation is repeated until the entire image has been filtered and a "new" image has been generated.
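A sketch of that moving-window procedure (my own, assuming NumPy), using a simple window average, i.e. a low-pass mean filter; edges are handled by replicating border pixels:

```python
import numpy as np

def mean_filter(image, size=3):
    """Slide a size x size window over the image and replace the central
    pixel with the average of the pixel values under the window."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = padded[r:r + size, c:c + size].mean()
    return out.astype(np.uint8)
```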

Filtering
There are basically two types of filters.
Low-pass filters
A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image. Thus, low-pass filters generally serve to smooth the appearance of an image. Average and median filters, often used for radar imagery, are examples; noise reduction is one application.

Filtering
High-pass filters
High-pass filters do the opposite, and serve to sharpen the appearance of fine detail in an image.

One implementation of a high-pass filter first applies a low-pass filter to an image and then subtracts the result from the original, leaving behind only the high spatial frequency information (a sketch follows below). Directional, or edge detection, filters are designed to highlight linear features, such as roads or field boundaries.

These filters can also be designed to enhance features which are oriented in specific directions. They are useful in applications such as geology, for the detection of linear geologic structures.
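A sketch of that subtraction approach (my own, assuming NumPy; the +128 offset is a common display convention I add so that both positive and negative detail remain visible, not part of the original slides):

```python
import numpy as np

def high_pass(image, size=3):
    """High-pass sketch: original minus a low-pass (window-average) version."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    low = np.empty(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            low[r, c] = padded[r:r + size, c:c + size].mean()
    detail = image.astype(float) - low             # high spatial frequency residue
    return np.clip(detail + 128, 0, 255).astype(np.uint8)
```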

E. Image Classification and Analysis
Classification / segmentation procedures partition an image into its constituent parts or objects.
A segmentation procedure brings the process a long way toward a successful solution of imaging problems that require objects to be identified individually. In general, the more accurate the segmentation, the more likely recognition is to succeed.
Segmentation algorithms for monochrome images are generally based on one of two basic properties of gray-scale values:
Discontinuity: the approach is to partition an image based on abrupt changes in gray-scale levels. The principal areas of interest within this category are the detection of isolated points, lines, and edges in an image.
Similarity: the principal approaches in this category are based on thresholding, region growing, and region splitting/merging.

F. Application

THANK YOU!
