Remote Sensing - Digital Image Processing
3/17/2023
Contents to be covered:
- Unit 1. Introduction
• If some challenges or problem arise, which software, which method and which
can view them on Google Maps, buy them from websites, you can
• The first ever picture from outer space was taken more than 70 years ago.
• Prior to 1946, people had never seen the Earth from outer space.
The Soviets may have been the first to launch a satellite into orbit, but the first photographs of Earth from space were taken much earlier.
On Oct. 24, 1946, soldiers and scientists launched a V-2 missile carrying a camera that took the first images of Earth from space. These images were taken at an altitude of 65 miles, just above the accepted beginning of outer space. The film survived even though the missile exploded on impact. Digital images are now used for all kinds of tasks in all kinds of areas:
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
• A digital image is a representation of a real scene. Examples include:
o television images,
o photographs,
oThis value is normally the average value for the whole ground area
covered by the pixel. The intensity of a pixel is recorded as a digital
number.
Pixel values typically represent gray levels, colours, heights,
opacities etc.
Properties of Digital Image:
oE.g. what are the Landsat and Sentinel-2 pixel sizes and their ground resolutions?
• Satellite images are images of Earth captured by satellites at regular revisit intervals.
[Figure: a Landsat pixel at 30 m resolution compared with a Sentinel-2 pixel at 10 m resolution.]
• Pixel: is the smallest item/unit of information in an image
• In an 8-bit gray scale image, the value of each pixel lies between 0 and 255;
• the higher the number, the brighter the shade of gray assigned to the pixel.
1. Binary Images
• It is the simplest type of image. It takes only two values i.e, Black and White or 0
and 1. The binary image consists of a 1-bit image and it takes only 1 binary digit to
represent a pixel. Binary images are mostly used for general shape or outline.
• Binary images are generated using a threshold operation: when a pixel is above the threshold value it is turned white ('1'), and pixels below the threshold value are turned black ('0').
2. Grayscale Images
• Grayscale images are monochrome images, meaning they have only one color channel. Grayscale images do not contain any information about color; each pixel records a single intensity value.
• A normal grayscale image contains 8 bits/pixel of data, which means 256 different grey levels can be used.
3. Color images
• Colour images are three band monochrome images in which, each band
contains a different color and the actual information is stored in the digital
image. The color images contain gray level information in each spectral
band.
• The images are represented as red, green and blue (RGB images). And each
color image has 24 bits/pixel means 8 bits for each of the three color
band(RGB).
8-bit color format
• The 8-bit color format is a common way of storing image data in a file. In this format, each pixel is represented by one 8-bit byte. It has a 0-255 range of values, in which 0 is used for black, 255 for white and 127 for gray. The 8-bit color format is also known as a grayscale format.
• The 16-bit color format is also known as high color format. It has 65,536 different colors. The 16-bit color format is further divided into three channels, which are Red, Green and Blue.
• In RGB format, there are 5 bits for R, 6 bits for G, and 5 bits for B. The additional bit is given to green because, of the three colors, the human eye is most sensitive to green.
24-bit color format
• The 24-bit color format is also known as the true color format. It uses 24 bits per pixel, divided equally among the three colors: 8 bits for R, 8 bits for G and 8 bits for B.
Digital Imaging
• Digital imaging is the art of making digital images – photographs,
color.
1.4. Digital image display
• The visible part of the spectrum, about 0.38–0.75 μm, is a very small part of the solar spectral range.
• A single band is usually shown on a monochromatic display: 8-bit data corresponds to 256 grey levels, displayed as DNs from 0 (black) to 255 (white).
(a) An image in grey-scale black and white(B/W) display
2. Tristimulus color theory and RGB color display
oIf you understand the structure and principle of a color TV, you must
know that the tube is composed of three color guns of red, green and
blue.
oThese three colors are known as primary colors. The mixture of light from the three primaries can produce any color; this is the basis of the tristimulus color theory.
oEach gun displays one image band at an intensity (i.e. dark red, light red, etc.) depending on the DNs of that band.
• Thus, if the red, green and blue bands of a multi-spectral image are displayed through the red, green and blue guns respectively, a color composite image is generated.
• The RGB color model is an additive color model in which the red, green,
and blue primary colors of light are added together in various ways to reproduce a broad array of colors.
• The name of the model comes from the initials of the three additive primary colors: red, green, and blue.
• The main purpose of the RGB color model is the sensing, representation, and display of images in electronic systems, such as:
color TV,
video cameras,
digital cameras.
• In this model, any secondary color when passed through white light
will not reflect the color from which a combination of colors is made.
• For example- when cyan is illuminated with white light, no red light
will be reflected from the surface which means that the cyan
subtracts the red light from the reflected white light (which itself is composed of red, green and blue light).
• White light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow.
• The HSI (hue, saturation, intensity) model is a very important and attractive color model because it represents colors in a way that is close to human perception, rather than as a mixture of the primary colors.
• Hue is the color component that describes a pure color (yellow, orange, red, etc.).
• A false-color image is an image that shows an object in colors that differ from those a true-color photograph would show; a common example is a near-infrared, red and green composite, which is best to distinguish vegetated and non-vegetated areas.
• Ratio color composites are produced by combining three monochromatic ratio data sets. Such composites have the twofold advantage of combining data from more than two bands and of displaying information whose direct interpretation from the individual ratio images would sometimes be difficult.
• An image is called a "true-color" image when it offers a natural color version,
• This means that the colors of objects in an image appear to a human observer the same as they would when viewing the objects directly.
• A green tree appears green in the image, a red apple red, a blue sky blue, and
so on.
Digital image data format
• Band interleaved by pixel (BIP): the brightness values for all bands of each pixel are stored one after another before the next pixel is recorded.
• Band interleaved by line (BIL): just as the BIP format treats each pixel of data as a separate unit, the BIL format treats each line as a separate unit; each line is represented in all 4 bands before the next line is recorded.
• Band sequential (BSQ): the data are recorded to the computer tape band by band, i.e. all values of band 1 for the whole four-band image, then band 2, and so on.
• Run-length encoding keeps track of both the brightness value and the number of times the value is repeated along a line, which reduces the volume of the recorded data.
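A minimal sketch of these orderings, assuming a tiny 2 × 3-pixel, 4-band image and using Python/NumPy (the array names and sizes are only for illustration):

```python
# Sketch: how the same 2x3-pixel, 4-band image is ordered under BIP, BIL and BSQ.
import numpy as np

rows, cols, bands = 2, 3, 4
# cube[row, col, band] holds the brightness value (DN) of one pixel in one band
cube = np.arange(rows * cols * bands).reshape(rows, cols, bands)

# BIP (band interleaved by pixel): all bands of pixel 1, then all bands of pixel 2, ...
bip = cube.reshape(-1)                     # row, col, band order
# BIL (band interleaved by line): line 1 in every band, then line 2 in every band, ...
bil = cube.transpose(0, 2, 1).reshape(-1)  # row, band, col order
# BSQ (band sequential): the whole image in band 1, then band 2, ...
bsq = cube.transpose(2, 0, 1).reshape(-1)  # band, row, col order

print(bip[:8], bil[:8], bsq[:8])
```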
Advantages of digital images include:
• They can be copied, transmitted and processed repeatedly without loss of information.
satellites.
spectrum.
• These remote sensing data sources typically include:
transportation agencies,
university collections
6. Hyperspectral Image
7. UAV/Drone image
1. Digital aerial photos
each other.
• Concurrently, Ignazio Porro developed the “photogoniometer” and many other
although placing cameras on balloons had been attempted as early as the 1860s. By
information from photographs, instruments were required to overcome the need for
• An aerial photograph is any photograph taken from a point in the air for the purpose of making some type of study of the earth's surface.
orthophotography and line mapping based originally on manual plotting and later
computers in the semi-analytical and the analytical stereoplotters, but the process
was still time consuming. However, by the 1980s, spatial information systems,
referred to also as GIS, were being developed in many countries. There was a need
for production of geo-coded digital spatial data that could be input to a local GIS
with an appropriate structure, thus enabling overlaying of this data with other geo-referenced data layers.
• Camera lens
• cartography,
• land-use planning,
• archaeology,
• geology
• Military
sensed images.
• A multispectral image is therefore composed of several channels or bands, each recording the reflected energy within a very specific wavelength range for each pixel (for example, green, red and near-infrared).
• The Multispectral Scanner (MSS) was the main imaging system for Landsats 1-3 and 4-5. This sensor had four spectral bands.
- along-track scanning
1. Across-track scanners
• Across-track scanners scan the Earth in a series of lines. The lines are oriented perpendicular to the direction of motion of the sensor platform (across the swath).
• Scanner sweeps perpendicular to the path or swath, centered directly under the
platform, i.e. at 'nadir'. The forward movement of the aircraft or satellite allows the
next line of data to be obtained in the order 1, 2, 3, 4 etc. In this way, an image is built up line by line. An example is the Landsat TM (Thematic Mapper).
Scanning
• Each line is scanned from one side of the sensor to the other, using a rotating mirror. As the energy reaches the detectors it is converted to an electrical signal, then to digital data, and recorded for later processing.
• The IFOV of the sensor and the altitude of the platform determine the ground resolution cell viewed (D), and thus the spatial resolution.
• The angular field of view (E) is the sweep of the mirror, measured in degrees, used to record a scan line, and determines the width of the imaged swath.
• The Instantaneous Field of View (IFOV) (C) is the angular cone of visibility of a single detector, which determines the area on the ground that is seen at any one instant.
Across-track scanners
• What is a cross-track scanner?
• A cross-track scanner uses a "back and forth" motion of the fore-optics: it scans each line of the image with a rotating or oscillating mirror.
• Example
• The first five Landsats carried the MSS sensor, which responded to Earth-reflected sunlight in four spectral bands. Landsat 3 carried an MSS sensor with an additional fifth band in the thermal infrared.
• Along-track scanners do not use a scanning mirror; they use a linear array of detectors (A) located at the focal plane of the image (B) formed by lens systems (C), which are "pushed" along in the flight track direction (i.e. along track). These systems are also referred to as pushbroom scanners.
• The size and IFOV of the detectors determine the ground resolution cell (D), and a separate linear array is required to measure each spectral band or channel. For each scan line, the energy detected by each detector of each linear array is sampled electronically and digitally recorded.
• E.g. Landsat 8 is a multispectral sensor: it produces 11 images with the following bands
bands.
on what is imaged.
(MIR).
• A hyperspectral image could have hundreds or thousands of
bands.
considered to be hyperspectral.
Hyperspectral vs Multispectral Imaging
features and landscape patterns, hyperspectral imagery allows for identification and
assessment of individual pixels is often useful for detecting unique objects in the
scene.
spectral detail in hyperspectral images gives the better capability to see the unseen.
• The main difference between multispectral and hyperspectral imagery is the number of bands and how narrow the bands are. In a multispectral image each band has a descriptive title - for example, the channels may include red, green, blue and near-infrared - whereas a hyperspectral image consists of many narrow, contiguous bands, giving it much higher spectral resolution.
• finding objects,
Table 1. Some example of hyperspectral systems
Sensor Wavelength range (nm) Band width (nm) Number of bands
distribution.
6. Oceanography: Investigations of water quality, monitoring coastal
erosion.
7. Snow and Ice: Spatial distribution of snow cover, surface albedo and
• Oil Spills: when oil spills in an area affected by wind, waves, and tides, a rapid assessment of the damage can help to maximize the cleanup efforts.
4. Microwave Radar Image
• RADAR is an acronym for RAdio Detection And Ranging.
trajectory.
at night.
mapping.
• Applications of RADAR image includes:
• agricultural monitoring,
• ice monitoring
• environmental monitoring
• 3-D measurements
5. Light Detection And Ranging (LIDAR)
technology.
Earth.
Lidar (Light Detection and Ranging)
• When the laser light strikes an object, the light is reflected. A sensor
detects the reflected laser light and records the time from the laser
• Ultraviolet
• Visible and
• Non-metallic objects
• rocks
• Chemical compounds
LiDAR tools offer a number of advantages over visual cameras:
• high accuracy
for 3D mapping.
• LIDAR Operating Principle
• Hard-to-access zones
• Laser scanner
• Computing technology
1. Laser Scanner
and angles.
and the range in which you can operate the LiDAR system.
2. Navigation and positioning systems
determine the position and orientation of the sensor to make sure the data captured are usable.
terrestrial.
1. Airborne
bathymetric.
i. Topographic LiDAR
calculations.
ii. Bathymetric Lidar
penetrating.
land-water interface.
ocean floor.
2. Terrestrial lidar
static.
• Power Utilities: power line survey to detect line sagging issues or for planning
activity
image.
• Digital aerial cameras and many high resolution spaceborne cameras record
for the panchromatic channel a spatial resolution that is about a factor four higher
than the resolution of RGB and NIR channels. Color images are easier to
interpret than grey scale images. Higher resolution images are easier to interpret
• These images do not allow you to distinguish tiny details, yet they are still useful for mapping broad patterns over large areas.
• In the image with the lower resolution, many more different objects fall within each pixel: the bigger a pixel, the more objects on the surface of the earth are averaged together into a single digital number.
• But high-resolution images are expensive and hard to get.
7. UAV/Drone image
• The terms drone and UAV mean the same thing, and can be used interchangeably.
• An unmanned aerial vehicle system has two parts: the drone itself and the control system.
temporal resolutions.
• Drones are excellent for taking high-quality aerial photographs and
• Uses of drone/UAV
sensitive to.
1. Spatial Resolution
• For some remote sensing instruments, the distance between the target
being imaged and the platform, plays a large role in determining the
detail of information obtained and the total area imaged by the sensor.
• It refers to the size of the smallest possible object that can be detected.
• Sensors onboard platforms far away from their targets, typically view a
be detected.
the sensor.
• High spatial resolution means more detail and a smaller grid cell size.
Whereas, lower spatial resolution means less detail and larger pixel size.
[Figure: the sensor's IFOV and its distance to the ground determine the ground resolution cell and the pixel size of the image.]
• Most remote sensing images are composed of a matrix of picture elements (pixels).
• Images where only large features are visible are said to have coarse or low resolution; in fine- or high-resolution images small objects can be detected.
• Generally speaking, the finer the resolution, the less total ground area can be seen in a single image.
intervals/ranges.
• Spectral resolution is the amount of spectral detail in an image, based on the number and width of its spectral bands.
• High spectral resolution means the bands are narrower, whereas low spectral resolution means the bands are broader.
• The finer the spectral resolution, the narrower the wavelength range for a particular channel or band. Many remote sensing systems record energy over several separate wavelength ranges.
Hyperspectral resolution
3. Radiometric Resolution
• While the arrangement of pixels describes the spatial structure of an image, the radiometric characteristics describe the actual information content in an image.
• Radiometric resolution describes a sensor's ability to discriminate very slight differences in energy.
• The number of brightness levels the sensor is able to distinguish depends upon the number of bits used in representing the energy recorded. Thus, if a sensor used 8 bits to record the data, there would be 2^8 = 256 digital values available, ranging from 0 to 255.
• However, if only 4 bits were used, then only 2^4 = 16 values ranging from 0 to 15 would be available.
• By comparing a 2-bit image with an 8-bit image, we can see that there is a large difference in the level of detail visible at the two radiometric resolutions.
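As a small illustration of the arithmetic above (the array and bit depths are assumed for the example), the number of grey levels is 2 raised to the number of bits:

```python
# Sketch: grey levels per bit depth, and requantizing an 8-bit band for comparison.
import numpy as np

for bits in (2, 4, 8):
    print(bits, "bits ->", 2 ** bits, "grey levels")   # 4, 16, 256

band8 = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)  # stand-in 8-bit band
band4 = band8 >> 4      # keep the 4 most significant bits: 16 levels
band2 = band8 >> 6      # keep the 2 most significant bits: 4 levels
```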
4. Temporal Resolution
• The revisit period refers to the length of time it takes for a satellite to complete one orbit cycle and image the same area again. It depends on:
- satellite/sensor capabilities,
- latitude.
• Some specific uses of remotely sensed images include:
• Large forest fires can be mapped from space, allowing rangers to see a much
• Tracking clouds to help predict the weather or watch erupting volcanoes, dust
storms.
• Tracking the growth of a city and changes in farmland etc. over several
decades.
• Discovery and mapping of the rugged topography of the ocean floor (e.g., huge
mountain ranges, deep canyons, and the “magnetic striping” on the ocean
floor).
Unit 3.
Digital Image Restoration and
Registration
Scaling of digital images
• When an image is scaled down, each block of n × n pixels is merged into one. The brightness of the new pixel could be the simple average of the n × n original values.
• However, this averaging involves some loss of information.
• The purpose of image restoration is to "compensate for"
• motion blur,
• noise, and
• camera mis-focus.
Noise
•Image Restoration:
• geometric distortions,
• sensor irregularities
remove error.
• Any image in which individual detectors appear lighter or darker than their neighbours shows striping or banding and needs de-striping.
• acquisition truck,
• vehicles,
• Scan line drop-out: dropped lines occur when there are system errors that result in missing or defective data for an entire scan line.
•Changes in illumination
•Angle of view
– de-striping
2. Atmospheric correction method,
point.
pixel.
Atmospheric correction
• Haze (fog, and other atmospheric phenomena) is a main source of atmospheric distortion: it reduces the differences in brightness and contrast in the image.
Atmospheric correction
3. Geometric correction methods
geometric corrections.
processing.
system using:
• Image-to-Map Rectification ,
• Image-to-Image Registration ,
• Relief displacement,
to ground coordinates.
• It is used when establishing the relation between raster or vector data and
geographical features.
2. Image-to-image registration refers to the transformation of one image so that it aligns with another image of the same area.
sensed imagery:-
2. Digital planimetric
rectified
• This technique includes the process by which two images of a common area are registered, such that corresponding elements of the same ground area appear in the same place on the registered images.
4. Spatial Interpolation Using Coordinate Transformations
photograph.
6. Geometric correction with ground control points (GCP)
location .
• GCPs help to ensure that the latitude and longitude of any point on
– road intersections,
– airport runways,
coordinate system.
1. Nearest Neighbourhood,
2. Bilinear Interpolation, and
3. Cubic Convolution.
• Nearest Neighbourhood
• Nearest neighbour resampling uses the digital value from the pixel
• This is the simplest method and does not alter the original values,
but may result in some pixel values being duplicated while others
are lost.
Bi-linear interpolation
• Bilinear interpolation takes a distance-weighted average of the four nearest pixels to determine the value of the output pixel. The averaging process alters the original pixel values and creates entirely new digital values in the output image.
• Cubic Convolution
• Cubic convolution goes further and calculates a distance-weighted average over the sixteen nearest pixels (a 4 × 4 block); like bilinear interpolation, it alters the original DN values.
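A minimal sketch of nearest-neighbour resampling in Python/NumPy; the helper name, the scale convention and the toy array are assumptions, not part of the course material:

```python
# Sketch: nearest-neighbour resampling of a single band to a new pixel size.
import numpy as np

def resample_nearest(band, scale):
    """Return 'band' resampled by 'scale' (e.g. 0.5 doubles the pixel count)."""
    rows, cols = band.shape
    new_rows, new_cols = int(rows / scale), int(cols / scale)
    # For every output pixel, pick the nearest input pixel - original DNs are kept
    r = np.minimum((np.arange(new_rows) * scale).astype(int), rows - 1)
    c = np.minimum((np.arange(new_cols) * scale).astype(int), cols - 1)
    return band[np.ix_(r, c)]

band = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(resample_nearest(band, 0.5))   # 8 x 8 output, some values duplicated
```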
is that the data must be recorded and made available in a digital form,
interpretation
perception
• Digital processing and analysis carried out automatically by
equipment.
• Manual interpretation is often limited to analyzing only a
• Meteorology
Remotely sensed raw data generally contains errors and deficiencies, which must be corrected using preprocessing methods before information extraction.
This stage can include image enhancement or the image may be analyzed to
The result might be the image altered in some way or it might be a report based
data can be processed quickly and efficiently. A task that used to take
• Digital analysis of images offers high flexibility. The same processing can
with different algorithms or with updated inputs in a new trial. This process
can continue until the results are satisfactory. Such flexibility makes it
possible to produce results not only from satellite data that are recorded at
one time only, but also from data that are obtained at multiple times or even from different sensors.
• Digital analysis is also repeatable: results of manual interpretation may vary when the interpreter has been working for a long time, as the interpretation process is subjective, whereas a digital procedure will produce the same results with the same input no matter who is running it.
• Digital image analysis has four major disadvantages, the critical ones
being the initial high costs in setting up the system and limited
classification accuracy.
• Limited Accuracy
• Complexity
categories:
1. Preprocessing
o These are corrections needed for distortions in the raw data, i.e. radiometric and geometric corrections.
2. Image Enhancement
o This is used to improve the appearance of imagery and to assist visual interpretation and analysis. It involves techniques for increasing the visual distinction between the features in a scene.
3. Image Classification
4. Data Merging
o This is used to combine image data for a given geographic area with other geographically referenced data sets.
• Radiometric correction
• Geometric correction
• Image classification
• Pixel based
• Object-oriented based
• Change detection
Why Image Processing?
Image acquisition
Image enhancement
Image Restoration
Image Compression
Image Segmentation
Recognition/Acknowledgment
Chapter-4
Image Enhancement
4.1. Image Enhancement
• Image enhancement algorithms are commonly applied to remotely sensed data to improve the appearance of an image for visual interpretation; the enhanced image is often easier to interpret than the original image.
Examples: Image Enhancement
• Take a slice from an MRI scan of a canine heart and enhance it so that the boundaries can be detected easily.
• edges,
• boundaries, or
analysis.
• Image enhancement refers to the process of highlighting certain
• For example,
• eliminating noise,
transform of an image.
•Generally, Enhancement is employed to:
•emphasize,
•sharpen and
•smooth image features for display and analysis.
resolution result.
image.
• Digital image magnification is often referred to as zooming.
techniques.
characteristics may be done, but the image may still not be optimized
1. Contrast enhancement
2. Density slicing
specific features
1. Contrast Enhancement
• Stretching is performed by a linear transformation expanding the original range of grey levels to the full range of the display.
• In raw imagery, the useful data often populates only a small portion of the available range of digital values (commonly 8 bits or 256 levels).
removal of noise.
comprise an image.
•The brightness values (i.e. 0-255) are displayed along the x-axis of the graph; the frequency of occurrence of each value is shown on the y-axis. A histogram therefore summarises the distribution of brightness values in a dataset.
on computers.
•Histogram-equalized stretch
a. Linear contrast stretch
• It is the simplest type of enhancement technique: the original range of DNs is expanded uniformly so that the minimum value maps to 0 and the maximum to 255, filling the full range of the display.
[Figure: image before and after a linear contrast stretch - the stretched image is easier to interpret.]
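A minimal sketch of a min-max linear contrast stretch, assuming NumPy and an 8-bit output range; the function name and the toy data are illustrative only:

```python
# Sketch: simple min-max linear contrast stretch of an 8-bit band.
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    band = band.astype(np.float32)
    lo, hi = band.min(), band.max()          # occupied part of the 0-255 range
    stretched = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)

# e.g. raw DNs squeezed between 84 and 153 are expanded to fill 0-255
raw = np.random.randint(84, 154, size=(50, 50))
print(linear_stretch(raw).min(), linear_stretch(raw).max())   # 0 255
```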
b. Histogram-equalized stretch
• A histogram-equalized stretch assigns more display values to the frequently occurring portions of the histogram rather than stretching the whole range uniformly; it is a non-linear technique used to enhance contrast.
• Histogram Equalization
image.
• Histogram Equalization is an image processing technique that spreads the pixel values more evenly over the available range, based on the image histogram, to produce a higher-contrast image.
• This allows for areas of lower local contrast to gain a higher contrast.
[Figure: the original image and its histogram, and the equalized version and its histogram.]
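A small sketch of histogram equalization using the cumulative histogram as a look-up table (NumPy assumed; array sizes are arbitrary):

```python
# Sketch: histogram equalization of an 8-bit band via the cumulative histogram.
import numpy as np

def equalize(band):
    hist, _ = np.histogram(band.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())      # normalise to 0-1
    lut = (cdf * 255).astype(np.uint8)                     # mapping old DN -> new DN
    return lut[band]                                       # apply the look-up table

band = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
print(equalize(band).std() > band.std())                   # contrast has increased
```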
• Density slicing converts the continuous gray tone range into a series of density intervals, or slices, and assigns a different colour or symbol to each slice. It is useful for highlighting value ranges that correspond to specific features.
[Figure: density slicing example]
within image.
• The processed value for the current pixel depends on both itself and
surrounding pixels.
pixel.
• Image sharpening
components.
• The basic filters that can be used in the frequency domain are low pass and high pass filters.
A. Low pass filter
• A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and to reduce the smaller detail in an image; it passes only the low spatial frequencies below its cut-off and attenuates the higher ones.
• Low pass filters are very useful for reducing random noise.
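A minimal sketch of a 3 × 3 low-pass (mean) filter, assuming SciPy is available; the kernel size and the random test band are illustrative choices:

```python
# Sketch: 3x3 low-pass (mean) filter applied to a single band.
import numpy as np
from scipy.ndimage import convolve

band = np.random.randint(0, 256, size=(100, 100)).astype(np.float32)
kernel = np.ones((3, 3)) / 9.0            # equal weights -> local average
smoothed = convolve(band, kernel, mode="nearest")
# random noise is reduced, but edges are blurred as well
print(band.std(), smoothed.std())
```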
B. High pass filter
• These filters are basically used to make the image appear sharper.
• High pass filtering works in exactly the same way as low pass filtering but uses a different convolution kernel; it emphasizes the fine details of the image.
• High pass filters let the high frequency content of the image pass through the filter while blocking the low frequencies.
• A high-pass filter is a filter designed to pass all frequencies above its cut-off frequency.
• While a high pass filter can improve the image by sharpening it, overdoing the filtering can degrade the image and exaggerate noise where the local area changes.
• This replaces the pixel at the center of the filter with the median value of the pixels falling beneath the mask. The median filter does not create new, unrepresentative pixel values, so it removes noise while preserving edges better than a mean filter.
• Unlike the low pass filter, which only passes signals of a low frequency range, or the high pass filter, which passes signals of a higher frequency range, a band pass filter passes signals within a certain "band" or "spread" of frequencies without distorting the input signal or introducing extra noise.
• Band-pass filter, arrangement of electronic components that allows only those electric
waves lying within a certain range, or band, of frequencies to pass and blocks all
others.
• There are applications where a particular band, or spread, or frequencies need to be filtered
from a wider range of mixed signals. Filter circuits can be designed to accomplish this task
by combining the properties of low-pass and high-pass filters into a single filter. The result is called a band-pass filter.
• Other surface types, such as soil and water, show nearly equal reflectance in the near-infrared and red bands.
• Water bodies look dark if they are clear or deep because near-infrared radiation is strongly absorbed by water.
2. Fourier Transform
compression.
For example, in the processing of pixelated images, the high spatial frequency edges of
pixels can easily be removed with the aid of a two-dimensional Fourier transform.
• The (2D) Fourier transform is a very classical tool in image processing. It is the
extension of the well known Fourier transform for signals which decomposes a
signal into a sum of sinusoids. So, the Fourier transform gives information about the frequency content of the image.
• The main advantage of Fourier analysis is that very little information is lost from the signal during the transformation: it maintains information on amplitude, harmonics, and phase and uses all parts of the waveform to translate the signal into the frequency domain.
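A short sketch of frequency-domain low-pass filtering with a 2-D FFT (NumPy assumed; the cut-off radius of 20 is an arbitrary illustrative value):

```python
# Sketch: low-pass filtering in the frequency domain with a 2-D Fourier transform.
import numpy as np

band = np.random.rand(128, 128).astype(np.float32)
spectrum = np.fft.fftshift(np.fft.fft2(band))        # zero frequency at the centre

rows, cols = band.shape
r, c = np.ogrid[:rows, :cols]
radius = np.hypot(r - rows / 2, c - cols / 2)
mask = radius < 20                                    # keep only low spatial frequencies

filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
```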
Texture transformation
• PCA is a technique used to emphasize variation and bring out strong patterns in a
• PCA should be used mainly for variables which are strongly correlated.
• If the relationship is weak between variables, PCA does not work well to reduce
data.
• Organizing information in principal components this way allows the dimensionality of the data to be reduced without losing much information: most of the variance of the original variables is concentrated in the first few components.
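A minimal sketch of PCA applied to a band stack using only NumPy; the 6-band random stack is a stand-in for real imagery:

```python
# Sketch: principal component analysis of a 6-band image stack.
import numpy as np

bands = np.random.rand(6, 200, 200)                  # hypothetical 6-band stack
X = bands.reshape(6, -1).T                           # one row per pixel, one column per band
X = X - X.mean(axis=0)                               # centre each band

cov = np.cov(X, rowvar=False)                        # 6 x 6 band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvectors = principal components
order = np.argsort(eigvals)[::-1]                    # strongest component first

pcs = (X @ eigvecs[:, order]).T.reshape(6, 200, 200) # PC images; PC1 holds most variance
```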
• facial recognition,
• fine,
• coarse,
• grained,
structure of a texture.
Identifying objects based on texture
• Image stacking and compositing
analysis.
Image mosaicking and sub-setting
large areas.
• In ArcGIS, you can create a single raster dataset from multiple raster
• Since satellite data downloads usually cover more area than you are
landscapes, including:
Agriculture;
Forests;
Rangeland;
Wetland,
Urban vegetation
The basic assumption behind the use of vegetation indices is that remotely sensed spectral bands can be arithmetically combined - added, divided, subtracted, or multiplied - into a single value that relates to vegetation properties such as:
vegetation structure,
photosynthetic capacity,
• NDVI is one of the earliest and the most widely used in various
applications.
• It is calculated as: NDVI = (NIR − Red) / (NIR + Red)
• NDVI conveys the same kind of information as the simple ratio (SR/RVI) but is bounded, which gives it better statistical properties (-1<NDVI<1).
• the higher the index, the greater the chlorophyll content of the
target.
photosynthesis.
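A minimal sketch of the NDVI calculation on two reflectance bands (NumPy assumed; the random arrays stand in for real red and NIR bands):

```python
# Sketch: computing NDVI from red and near-infrared reflectance bands.
import numpy as np

red = np.random.rand(100, 100).astype(np.float32)    # hypothetical reflectance bands
nir = np.random.rand(100, 100).astype(np.float32)

ndvi = (nir - red) / (nir + red + 1e-10)             # small constant avoids division by zero
# values near +1 -> dense green vegetation; near 0 -> bare soil; negative -> water
print(ndvi.min(), ndvi.max())
```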
Some Application Areas of NDVI
of detected change.
• The soil-adjusted vegetation index (SAVI) is defined as SAVI = ((NIR − Red) / (NIR + Red + L)) × (1 + L),
where L is a coefficient that should vary with vegetation density, ranging from 0 for very high vegetation cover to 1 for very low vegetation cover.
• Image addition: given two images f and g, it produces a new image h of the same size whose pixels are the sum of the corresponding pixels of f and g; the requirement that both images have the same dimensions is obvious.
• Subtraction, multiplication and division are defined in the same way, by subtracting, multiplying or dividing each one of the pixels of one image with the corresponding pixel of the other.
Fig.The right image is the division of the left image by the right image.
Spectral enhancement using principal component analysis
emphasized.
• Principal component analysis (PCA) simplifies the complexity of multi-band data by transforming the original, correlated bands into a smaller set of uncorrelated components that retain most of the information contained in the original variables.
• An orthophoto is an aerial photograph that has been geometrically corrected so that the scale is uniform across the Earth's surface, having been adjusted for topographic relief, lens distortion and camera tilt, and corrected for perspective so that the objects appear to have been imaged from vertically above. Orthophoto maps show additional point, line or polygon layers (like a traditional map) on top of the orthophoto.
• Point clouds are datasets that represent objects or space. These points
number of single spatial measurements into a dataset that can then represent
a whole.
• point cloud dataset is the name given to point clouds that resemble an
organized image (or matrix) like structure, where the data is split into rows
and columns. Examples of such point clouds include data coming from
• For example, over a forest you will get return points from both the canopy and the ground underneath it. Alternatively, imagine the points representing the side of a building: you will have multiple points with the same (X, Y) location but different heights.
• Compared to this, a DEM raster is much simpler. It has only one elevation value for a given cell, and a cell does not exactly represent an (X, Y) point either, since it has a cell width and a cell height.
This data can be anything from text and images to sounds or other
studies the classification processes and has as its main streams of research
the statistical, the syntactic and finally hybrid methods of the previous two
approaches.
Unit 6.
Image classification
• Topics to be covered
recognition.
pixels.
• If a pixel satisfies a certain set of criteria, then the pixel is assigned to the
to pixels.
• water,
• urban,
• forest,
• agriculture, and
• grassland.
Different classes be assigned
• Image Classification Process
• Maximum likelihood
• Minimum-distance
• Principal components
• Parallel piped
• Decision tree
• This classifier considers not only the cluster centers but also the shape, size and orientation of the clusters, through the covariance matrix of each class.
• Each pixel is assigned to the class that has the highest probability of containing it (the maximum likelihood).
• The decision rule assumes the class data are normally distributed; if this is not the case, you may have better results with another classifier.
• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples have a normal distribution), because it takes the most variables into consideration.
• Takes the variability of classes into account by using the covariance matrix
• Cons:
Minimum-distance classifier
• It uses the mean vectors for each class and calculates the Euclidean distance from each unknown pixel to the mean vector for each class; each pixel is assigned to the closest class.
• Pros: since every pixel is spectrally closer to one class mean than to the others, there are no unclassified pixels.
• Cons:
• Pixels that should be unclassified become classified. However, this problem is improved
by thresholding out pixels that are farthest from the means of their classes.
• Does not consider class variability. For example, a class, like an urban land cover class is
made up of pixels with a high variance, which may tend to be farther from the mean of the
signature. Using this decision rule, outlying urban pixels may be improperly classified.
• Inversely, a class with less variance, like water, may tend to overclassify because the
pixels that belong to the class are usually spectrally closer to their mean than those of other classes are to their means.
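A minimal sketch of a minimum-distance-to-means classifier in NumPy; the class means and the random image are hypothetical values, not taken from the slides:

```python
# Sketch: minimum-distance-to-means classification of a multispectral image.
import numpy as np

bands, rows, cols = 4, 100, 100
image = np.random.rand(bands, rows, cols)
pixels = image.reshape(bands, -1).T                  # (n_pixels, n_bands)

# hypothetical class mean vectors taken from training samples
class_means = np.array([[0.2, 0.3, 0.4, 0.5],        # e.g. water
                        [0.5, 0.5, 0.4, 0.3],        # e.g. urban
                        [0.3, 0.6, 0.2, 0.7]])       # e.g. forest

# Euclidean distance from every pixel to every class mean
dist = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
labels = dist.argmin(axis=1).reshape(rows, cols)     # each pixel gets the closest class
```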
oThe parallelepiped classifier uses the class limits stored in each class signature to determine whether a pixel belongs to a class. However, if the pixel falls within more than one class, it is put in the overlap class.
oIf the pixel does not fall inside any class, it is assigned to the null class.
• In the parallelepiped decision rule, the data file values of the candidate pixel are compared to upper and lower limits for each band. These limits can be:
• The minimum & maximum data file values of each band in the
signature,
• The mean of each band, plus and minus a number of standard deviations
• Any limits that you specify, based on your knowledge of the data and
signatures.
•There are high and low limits for every signature in every band.
‾Does NOT assign every pixel to a class: only the pixels that fall within the ranges are classified.
decision tree.
• In a Decision tree, there are two types of nodes, which are the Decision Node and the Leaf Node.
• Each pixel is attached to a group with a membership grade indicating the extent to which the pixel belongs to that class.
patterns.
to a known spectra.
• It is a type of deep learning method that uses convolutional multiplication based on artificial
neural networks. CNN is a deep neural learning technique used in many computer
vision tasks such as image classification, segmentation, and object detection. They use
convolutional filters to extract useful features from images during classification. CNN is
used to predict the land cover type for a given patch from Landsat 7 image.
• CNN or the convolutional neural network (CNN) is a class of deep learning neural
networks. In short think of CNN as a machine learning algorithm that can take in an input
image, assign importance (learnable weights and biases) to various aspects/objects in the image, and be able to differentiate one from the other.
• Using algorithms, they can recognize hidden patterns and correlations in raw data,
cluster and classify it, and – over time – continuously learn and improve.
6.2. Image Classification methods:
• In supervised classification, the analyst's knowledge of the actual surface cover types present in the image is the basis for selecting training sites for each land cover class. The software then uses these "training sites" to build a spectral signature for each class (e.g. in the Signature Editor) and assigns every pixel in the image to the most similar class based on the training data.
• Steps of Supervised Classification
i. The first step is to locate the training samples for each potential class and
ii. The second step is to collect signature for each potential class.
iii. The third step is to evaluate the signatures which can help determine whether
signature data are a true representation of pixels to be classified for each class
• field data,
• Validation Data
• Should be random
2. Unsupervised classification:
on their properties.
a. Spatial
b. Spectral
1. Generate clusters
2. Assign classes
grouping pixels.
the image.
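One common way to generate the clusters is k-means; a minimal sketch with scikit-learn is shown below (the number of clusters and the random band stack are assumptions for illustration):

```python
# Sketch: unsupervised classification of a band stack with k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

bands, rows, cols = 4, 100, 100
image = np.random.rand(bands, rows, cols)
pixels = image.reshape(bands, -1).T                   # one feature vector per pixel

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
clusters = kmeans.labels_.reshape(rows, cols)         # spectral clusters, not yet named
# the analyst then assigns a land cover class (water, forest, ...) to each cluster
```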
i. SHAPE: If you want to classify buildings, you can use a shape metric such as rectangular fit.
ii. TEXTURE: for example, water is mostly homogeneous, but forests have shadows and are a mix of green and black.
iii. SPECTRAL: You can use the mean value of spectral properties such as near-infrared, short-wave infrared, red, green or blue.
quality products.
classification:
one)
3. Recode Classes
correctness.
The rows are usually used to display the map labels (classified data), while the columns display the reference data collected on the ground.
Producer's accuracy for a class is the number of correctly classified sample units divided by the column total.
5. Omission error: refers to those reference sample points that are omitted from the correct class in the classified map.
6. Overall accuracy: the total number of correctly classified sample units divided by the total number of sample units in the entire error matrix. However, just presenting the overall accuracy is not sufficient; per-class accuracies should also be reported.
Validating classification
[Table: error (confusion) matrix - classified classes 1-5 in the rows, reference classes 1-5 in the columns, with row and column totals.]
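A small sketch of how overall, producer's and user's accuracy are derived from an error matrix (the 3 × 3 matrix values are made up for illustration):

```python
# Sketch: accuracy measures from an error (confusion) matrix.
import numpy as np

# rows = classified (map) classes, columns = reference (ground truth) classes
matrix = np.array([[50,  3,  2],
                   [ 4, 45,  6],
                   [ 1,  2, 37]], dtype=float)

overall = np.trace(matrix) / matrix.sum()            # correct pixels / all sample pixels
producers = np.diag(matrix) / matrix.sum(axis=0)     # 1 - omission error, per class
users = np.diag(matrix) / matrix.sum(axis=1)         # 1 - commission error, per class
print(overall, producers, users)
```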
time.
particular application.
• The temporal aspects of natural phenomena are important for image
interpretation.
• Eg. metsats afford the advantages of global coverage at very high temporal
resolution.
• Again, however, temporal and spatial effects might be the
• daily
information:
environment and
• crop-coverage mapping,
• coastline monitoring,
• deforestation,
• urbanization,
because of:
• scene changes.
• Multi-temporal Analysis Techniques
•Post-classification comparison
•Change detection
•Image differencing
•Multi-date classification
possible.
• Remote sensing data are primary sources extensively used for change
• When one is interested in knowing the changes over large areas and
• identify,
• describe, and
• It can be seen which class has changed, into which class, and by how much.
• The new image can be interpreted easily and is ready for direct use in analysis of LULC change.
• R = A2 − A1
• %R = ((A2 − A1) / A1) × 100
Where,
• A1 and A2 = area of a LULC class in the initial and final year, R = change in area, and
• %R = Percentage of LULCC
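A tiny worked example of the change and percentage-change calculation (the two area values are hypothetical):

```python
# Sketch: area change and percentage change of one LULC class between two dates.
a1, a2 = 1200.0, 950.0        # hypothetical class areas (ha) in the initial and final year

change = a2 - a1              # R  : absolute change in area = -250.0 ha
percent = change / a1 * 100   # %R : percentage of LULC change = -20.83 %
print(change, round(percent, 2))
```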
[Table: percentage area coverage per class in 1991 and 2001, and rate of change for 1991-2001 and 2001-2011, for the classes Waterbody, Built-up, Forest, Grassland, Agriculture and Wetland.]
[Chart: net area change (ha) of Woodland, Riverine forest, Grass land, Water body, Degraded land and Open land for the periods 1988-1998, 1998-2008, 2008-2018 and 1988-2018.]
Eg: LULC Trend from 1988 to 2018
[Chart: area (ha) of Woodland, Riverine Forest, Grass land, Water body, Degraded land and Bare land in 1988, 1998, 2008 and 2018.]
• The charts above show how the change trend looks from 1988 to 2018 in a given area.
Change matrix, 1988 to 1998 (area in ha, with the percentage of the 1988 class area in parentheses):

1988 \ 1998     | Woodland         | Riverine forest  | Grass land        | Water body      | Degraded land   | Bare land       | 1988 total
Woodland        | 56131.9 (59.02)  | 3522.13 (3.7)    | 27240.06 (28.64)  | 184.59 (0.19)   | 7222.39 (7.6)   | 808.92 (8.5)    | 95109.99
Riverine forest | 5055.91 (35.84)  | 7769.32 (55.076) | 530.07 (3.76)     | 14.94 (0.11)    | 720.78 (5.11)   | 15.57 (0.11)    | 14106.59
Grass land      | 18502.87 (27.66) | 184.86 (0.28)    | 45768.97 (68.41)  | 4.95 (0.001)    | 971.28 (1.45)   | 1470.2 (2.17)   | 66903.13
Water body      | 50.6 (19.81)     | 1.89 (0.64)      | 31.14 (10.5)      | 194.01 (68.66)  | 0.99 (0.36)     | 0.18 (0.06)     | 278.81
Degraded land   | 1938.16 (74.504) | 24 (0.00923)     | 335.06 (13.02)    | 0.18 (0.01)     | 296.7 (11.3)    | 7.3 (0.28)      | 2601.4
Bare land       | 4207.664 (53.19) | 1.8 (0.92)       | 3641.22 (45.57)   | 8.1 (1.01)      | 26.82 (0.34)    | 104.85 (1.31)   | 7990.454
Unit 8.
Image segmentation
• Image Segmentation is the process by which a digital image is partitioned into multiple segments, each of which corresponds to an object or a part of an object.
• Image segmentation refers to the process of decomposing
area (e.g., water bodies in water quality analysis). The land covers
identified neighborhood.
What is segmentation?
• Segmentation divides an image into groups of pixels
• Pixels are grouped because they share some local property (gray level,
that have similar attributes. The parts into which you divide the image are called segments.
• Segmentation = partitioning
detection)
including:
techniques:
• Thresholding Segmentation.
• Edge-Based Segmentation.
• Region-Based Segmentation.
these labels
• It is useful when the required object has a higher intensity than the
your requirements.
• Thresholding is the simplest image segmentation method, dividing pixels
(threshold).
• It is suitable for segmenting objects with a higher intensity than the other objects or the background, and the output is a binary image.
1. Pixel-Based Segmentation
• If a pixel's value is smaller than the specified threshold, then it is given the value 0 (black); otherwise it is given the value 1 (white). (The threshold can also be computed separately for different image sections.)
• Simple Thresholding:
• Otsu’s Binarization
• Adaptive Thresholding
i. Simple Thresholding: you replace the image's pixels with either white or black. If a pixel's intensity is lower than the threshold, you'd replace it with black; if it's higher than the threshold, you'd replace it with white.
ii. Otsu's Binarization: the threshold value is calculated automatically from the image's histogram, which is useful for separating the foreground from the background or removing unnecessary colors from a file. You can't use it for images that are not bimodal.
iii. Adaptive Thresholding: Having one constant threshold value might not be a
suitable approach to take with every image. Different images have different lighting and background conditions, so adaptive thresholding computes a separate threshold for different regions of the image instead of keeping it the same everywhere.
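A minimal sketch of simple and Otsu thresholding, assuming OpenCV (cv2) is available; the random 8-bit array stands in for a real grayscale image:

```python
# Sketch: simple and Otsu thresholding of a grayscale image.
import cv2
import numpy as np

gray = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)  # stand-in image

# Simple thresholding: pixels above 127 become white (255), the rest black (0)
_, simple = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Otsu's binarization: the threshold is computed from the image histogram
t, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", t)
```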
• Edge detection is an image processing technique for finding the boundaries of objects within images by detecting discontinuities in:
• texture,
• contrast,
• grey level,
• You can improve the quality of your results by connecting all the
edges into edge chains that match the image borders more accurately.
• Segmentation is the finding of different regions, based normally on the pixel values, whereas edge detection finds the edges (outlines) of any shape or object in the image to separate it from the background or other objects.
intensity. Edges are often associated with the boundaries of objects in a scene.
• Edges are detected by looking at how grey levels change within a small neighborhood; e.g. in the pixel rows 81 82 26 24 and 82 33 25 25, the abrupt drop in values marks an edge.
• Edge-based segmentation is a popular image processing technique
the Canny method. The Canny method differs from the other edge-detection methods in that it uses two different thresholds (to detect strong and weak edges), and includes the weak edges in the output only if they are connected to strong edges.
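A minimal sketch of Canny edge detection with OpenCV; the two threshold values are arbitrary illustrative choices:

```python
# Sketch: edge detection with the Canny operator.
import cv2
import numpy as np

gray = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)  # stand-in image

# the two thresholds separate strong and weak edges; weak edges are kept
# only when they are connected to strong ones
edges = cv2.Canny(gray, 50, 150)
print(edges.max())   # edge pixels are 255, background is 0
```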
• Region Growing
• In this method, you start with a small set of seed pixels and then iteratively grow the regions: you take a seed pixel in the image, compare it with the neighbouring pixels and keep increasing the region by adding the neighbours that are similar.
• You should use region growing algorithms for images that have a lot of
noise as the noise would make it difficult to find edges or use thresholding
algorithms.
Region growing
• Start with (random) seed pixel as cluster
• When cluster stops growing, begin with new seed pixel and continue
• One danger: since multiple regions are not grown simultaneously, the result depends on the order in which seeds are processed and on the threshold used at each growth step.
• Region growing techniques start with one pixel of a potential
region and try to grow it by adding adjacent pixels until the pixels being compared become too dissimilar.
•The first pixel selected can be just the first unlabeled pixel in the image, or a set of seed pixels can be chosen. Each neighbouring pixel that is similar enough is added to a region.
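A minimal sketch of single-seed region growing with 4-connectivity (pure NumPy/Python; the seed position and tolerance are assumptions for illustration):

```python
# Sketch: single-seed region growing on a grayscale band.
import numpy as np

def region_grow(band, seed, tol=10):
    """Grow a region from 'seed' (row, col), adding 4-connected neighbours whose
    value differs from the seed value by at most 'tol'."""
    region = np.zeros(band.shape, dtype=bool)
    stack = [seed]
    seed_val = float(band[seed])
    while stack:
        r, c = stack.pop()
        if region[r, c]:
            continue
        region[r, c] = True
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < band.shape[0] and 0 <= nc < band.shape[1]
                    and not region[nr, nc]
                    and abs(float(band[nr, nc]) - seed_val) <= tol):
                stack.append((nr, nc))
    return region

band = np.random.randint(0, 256, size=(50, 50), dtype=np.uint8)
mask = region_grow(band, seed=(25, 25), tol=15)
```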
[Figure: region growing example - input image and the resulting segmentation.]
• Region Splitting and Merging
• As the name suggests, a region splitting and merging focused method would
perform two actions together – splitting and merging portions of the image.
• It would first split the image into regions that have similar attributes and
merge the adjacent portions which are similar to one another. In region
splitting, the algorithm considers the entire image while in region growth, the
• It divides the image into different portions and then matches them according
criterion
connected components)
• If this is not true, the image is split into four sub images
finding hidden data in the image that might not be visible to a normal
shadings, etc.
separate the data elements into clusters where the elements in a cluster are
detects lines forming ridges and basins, marking the areas between
color or shape.
Unit 9.
Remote sensing data Integration
• Many applications of digital image processing are enhanced through the
merger of multiple data sets covering the same geographical area. These
more than one date to create a product useful for visual interpretation.
information.
• In the early days of analog remote sensing when the only data source
• Today, most data are available in digital format from a wide array of
areas of change.
• For example, elevation data in digital form, called Digital Elevation
• Most data available in digital form which are obtained from a wide
more information.
• Remote sensing data are:
• Multi-platform
• Multi-stage
• Multi-scaled
• Multi-spectral
• Multi-temporal, Multi-resolution
• A multispectral image is a collection of a few image layers of the same scene, each
• In remote sensing, image fusion techniques are used to fuse high spatial resolution panchromatic data with lower resolution multispectral data that are simultaneously recorded by one sensor. This is done to create a high resolution color image (pan-sharpening). High spatial resolution means more detail and a smaller grid cell size, whereas lower spatial resolution means less detail and a larger pixel size; the fused product combines the advantages of both resolutions.
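A minimal sketch of one simple fusion approach (a Brovey-style pan-sharpening); the band names, sizes and the 4× resolution factor are assumptions, and real workflows usually rely on dedicated remote sensing software:

```python
# Sketch: Brovey-style pan-sharpening of RGB bands with a panchromatic band.
import numpy as np

pan = np.random.rand(400, 400).astype(np.float32)            # 0.5 m panchromatic band
rgb_lowres = np.random.rand(3, 100, 100).astype(np.float32)  # 2 m multispectral (R, G, B)

# 1. resample the multispectral bands to the panchromatic grid (nearest neighbour)
rgb = rgb_lowres.repeat(4, axis=1).repeat(4, axis=2)         # 4x finer grid

# 2. scale each band by pan / (sum of the bands) to inject the spatial detail
total = rgb.sum(axis=0) + 1e-10
sharpened = rgb * (pan / total)
```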
• Multi-temporal/multi-seasonal images
processes.
time.
• Multistage, multiplatform, multiscale & multiresolution images
geographic scale.
multispectral data.
• E.g. DEM and DTM models can be combined with remote sensing imagery through data fusion.
[Figure: pan-sharpening example (Capricorn Seamount) - panchromatic band at 0.5 m resolution combined with bands 4,3,2 at 2 m resolution gives a pan-sharpened 4,3,2 image at 0.5 m resolution.]
• Applications:
• Since the waves are created actively, the signal characteristics are
• Active sensors are divided into two groups: imaging and non-imaging
sensors.
• Figure: Principles of active microwave remote sensing
• Radar sensors belong to the group of most commonly used active
• 'Radio' refers to the microwave energy used, and 'ranging' is another term for measuring distance.
light.
RADAR remote sensing--------------
distance from the radar and thus their location can be determined. As
surface.
b. beam (B).
• As with all remote sensing systems, the viewing geometry of a radar results in
certain geometric distortions on the resultant imagery. As with the relief displacement encountered when using cameras and scanners, radar images are also subject to relief displacement; in this case, however, the displacement is reversed, with targets being displaced towards, instead of away from, the sensor.
Radar foreshortening and layover are two consequences which result from relief
displacement.
• When the radar beam reaches the base of a tall feature tilted towards the radar (e.g.
a mountain) before it reaches the top foreshortening will occur. Again, because
the radar measures distance in slant-range, the slope (A to B) will appear compressed and the length of the slope will be represented incorrectly (A' to B').
Maximum foreshortening occurs when the radar beam is perpendicular to the slope
such that the slope, the base, and the top are imaged simultaneously (C to D). The
length of the slope will be reduced to an effective length of zero in slant range
(C' to D'). Layover occurs when the radar beam reaches the top of a tall feature (B) before it reaches the base (A). The return signal from the top of the feature will be received
before the signal from the bottom. As a result, the top of the feature is displaced
towards the radar from its true position on the ground, and "lays over" the base of
the feature (B‘ to A'). Layover effects on a radar image look very similar to
those caused by foreshortening. As with foreshortening, layover is most severe for small incidence angles, at the near range of a swath, and in mountainous terrain.
oThe term radar is an abbreviation made up of the words
radio, detection, and ranging.
• Microwave wavelengths have the advantage that they can penetrate clouds and haze, and the radar supplies its own microwave energy.
•The recorder then stores the received signal.
• The power received back at the antenna is described by the radar equation: Pr = (Pt G^2 λ^2 σ) / ((4π)^3 R^4), where
Pr = received energy, Pt = transmitted energy,
G = antenna gain, λ = wavelength,
σ = radar cross-section of the target, and R = range to the target.
ii. radar imaging geometry, that defines the size of the illuminated area
range, and
iii. characteristics of interaction of the radar signal with objects, i.e. surface
the energy to the antenna from where it is emitted towards the Earth’s
surface.
o Agriculture: for crop type identification, crop condition monitoring, soil moisture
o Forestry: for clear-cuts and linear features mapping, biomass estimation, species
o Oceanography: for sea ice identification, coastal wind-field measurement, and wave slope
measurement;
o Coastal Zone: for shoreline detection, substrate mapping, slick detection and general
vegetation mapping.
• Microwave polarizations
different images.
abbreviations:
directions.
• Distortions in radar images
elevation).
• This means that objects in near range are compressed with respect
Satellite.
satellites.
be monitored.
• Platforms are classified into three categories.
1. Ground-Based Platforms
•Carried on vehicle
Photographic systems
Limitation:
oIt provides an opportunity to collect additional, correlative data for satellite-based
stratosphere.
• The disadvantages are low area coverage and high cost per unit area of
ground coverage.
system.
• Airborne remote sensing missions are often carried out as one-time
• Aircraft can fly at relatively low altitudes allowing for sub-meter spatial resolution.
• Aircraft can easily change their schedule to avoid weather problems such as cloud cover.
• Last minute timing changes can be made to adjust for illumination from the sun, the weather, and other conditions on the ground.
• Sensor maintenance, repair and configuration changes are easily made to aircraft
platforms.
The low altitude flown by aircraft narrows the field of view to a relatively small area below the flight line.
The turnaround time to get the data to the user is longer, due to the need for transferring the raw image data to the data provider's facility for preprocessing.
3. Space borne platforms
• In space borne remote sensing, sensors are mounted on-board a
spacecraft (space shuttle or satellite) orbiting the earth. e.g. Rockets,
Satellites and space shuttles.
• Fine resolution can be achieved from both airborne and space borne platforms.
• Space borne radars are able to avoid some of these imaging geometry problems, since they operate at much higher altitudes and with a narrower range of incidence angles.
• Airborne radar is able to collect data anywhere and at any time (as long as weather and flying conditions allow it).
• A space borne radar may have a revisit period as short as one day.
oAirborne radar will be vulnerable to variations in environmental/
weather conditions.
images in a specific season over successive years, or over a particular area over a series of
days.
This is an important factor for monitoring changes between images or for mosaicking
match the rotation of the Earth so they seem stationary, relative to the Earth's surface.
This allows the satellites to observe and collect information continuously over specific
areas. Weather and communications satellites commonly have these types of orbits.
Due to their high altitude, some geostationary weather satellites can monitor weather and cloud patterns covering an entire hemisphere of the Earth.
• There are many useful applications of radar images. Radar data provide
complementary information to visible and infrared remote sensing data. In the case
of forestry, radar images can be used to obtain information about the forest canopy and its biomass.
• Radar images also allow the differentiation of different land cover types, such as urban areas, agricultural fields, bare soil and water.
• In geology and geomorphology the fact that radar provides information about
surface texture and roughness plays an important role in lineament detection and
geological mapping.
Light Detection And Ranging (LIDAR)
technology.
Earth.
• LiDAR, is used for measuring the exact distance of an object
mapping.
• LIDAR uses:
• Ultraviolet
• Visible and
• Non-metallic objects
• rocks
• Chemical compounds
LIDAR Operating Principle
• Hard-to-access zones
• Laser scanner
• Computing technology
1. Laser Scanner
and angles.
and the range in which you can operate the LiDAR system.
2. Navigation and positioning systems
determine the position and orientation of the sensor to make sure the data captured are usable.
position.
1. Airborne
bathymetric.
i. Topographic LiDAR
penetrating.
ocean floor.
2. Terrestrial lidar
static.
• Power Utilities: power line survey to detect line sagging issues or for planning activity
• the term Drone and UAV mean the same thing, and can be
used interchangeably.
system (GPS).
• UAV remote sensing can be used to:
• Track erosion
• Track damage
meshes.
temporal resolutions.
• Drones are excellent for taking high-quality aerial photographs
• Uses of drone/UAV
• Wildlife monitoring
• Weather forecasting
• Military
End of Course