Unit 10
Structure
10.1 Introduction
Objectives
10.2 Digital Image Processing
What is an Image?
What is a Digital Image?
What is Digital Image Processing?
Advantages of Digital Image Processing
Components of an Image Processing System
Steps in Digital Image Processing
10.3 Types and Characteristics of Digital Images
Types of Digital Images
Characteristics of Digital Image
Related Terminologies
10.4 Concept of True and False Colour Composite
10.5 Image Histogram and its Significance
10.6 Activity
10.7 Summary
10.8 Unit End Questions
10.9 Further/Suggested Reading
10.10 Answers
10.1 INTRODUCTION
In Unit 7 of MGY-002, you studied the concept of visual image interpretation, with examples in Unit 8. You have learnt that information derived from the visual mode of image interpretation is mostly qualitative. In Unit 9, you studied that ground truth data acts as a link between the image, the image-derived information and the ground reality.
Computers handle data in digital form, hence remote sensing data must be supplied to them in digital form. In this unit, you will be introduced to the digital image, its characteristics and its processing. Computer processing of digital remote sensing data includes several steps such as pre-processing, enhancement, transformation and information extraction. You will also briefly learn about these steps prior to studying them in detail in Block 4 of MGY-002.
Objectives
After studying this unit, you should be able to:
• define a digital image and discuss its characteristics;
• list the components of an image processing system;
• discuss advantages of digital image processing;
• identify the steps in digital image processing;
• describe the concept of true and false colour composites; and
• define image histogram and discuss its importance.
Fig. 10.1: A digital image (left) and its corresponding values (centre). Note the variation
in the brightness and the change in the corresponding digital numbers.
Highlighted block in the centre figure shows one pixel. The figure at right
shows the range of values corresponding to the brightness
Fig. 10.2: Arrangement of rows and columns of an image of size 4 × 4 (4 rows and 4
columns). Left figure shows the numerical values in the image and the table at
right shows the representation of pixel location for an image of size 4 × 4. You
can observe that at location (1, 4), i.e. row 1 and column 4, the pixel value is 24.
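To make the row and column indexing concrete, here is a minimal Python/NumPy sketch. The 4 × 4 array below is hypothetical sample data chosen so that the pixel at row 1, column 4 holds the value 24, as in Fig. 10.2; note that NumPy counts from 0, so that pixel is index [0, 3].

```python
import numpy as np

# A hypothetical 4 x 4 digital image: each entry is a DN (digital number)
image = np.array([[18, 20, 22, 24],
                  [17, 19, 21, 23],
                  [16, 18, 20, 22],
                  [15, 17, 19, 21]])

# The figure uses 1-based (row, column) locations; NumPy indexes from 0,
# so location (1, 4) -- row 1, column 4 -- becomes image[0, 3]
print(image[0, 3])   # 24, matching Fig. 10.2
print(image.shape)   # (4, 4): 4 rows and 4 columns
```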
10.2.3 What is Digital Image Processing?
Interpretation of a digital image involves analysis of the image and extraction of information using computer software; these processing steps are collectively called digital image processing. Digital image processing can be defined as subjecting a numerical representation of objects (i.e. a digital image) to a series of operations in order to obtain a desired result. Digital image processing begins with one image and produces a modified version of that image. Digital image analysis, in contrast, is a process that takes a digital image into something other than a digital image, such as a set of measurements of the objects present in the image. However, the term digital image processing is loosely used to cover both processing and analysis.
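As a minimal illustration of "subjecting a digital image to a series of operations", the hedged sketch below brightens a tiny 8-bit image and then inverts it. The input array and the particular operations are illustrative assumptions, not a prescribed workflow:

```python
import numpy as np

# Hypothetical 8-bit image (DNs in the range 0-255)
image = np.array([[10, 50],
                  [120, 200]], dtype=np.uint8)

# Operation 1: brighten by 30 DNs, clipping to the valid 8-bit range
brightened = np.clip(image.astype(np.int16) + 30, 0, 255).astype(np.uint8)

# Operation 2: invert (photographic negative)
negative = 255 - brightened

print(negative)   # a new, modified image; the original array is unchanged
```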
Fig. 10.3: Block diagram showing different components of a digital image processing system: processing machine (computer), image processing software, storage device, display device and printing device
Storage Device
Storage devices are used for storing images for different purposes and uses.
Display Device
It is used for displaying data. A colour monitor is the typical example of a display device.
Printing Device
It is used for representing and storing image data in hard copy format. It could be a laser, inkjet or any other printer.
• Storage: Digital images are usually stored in matrix form, with various multispectral bands and in different formats. Software capable of storing and processing the image formats concerned should be considered.
• Image Enhancement
Image enhancement is carried out to improve the appearance of certain image features to assist in human interpretation and analysis. You should note that image enhancement is different from the image pre-processing step. The image enhancement step highlights image features for the interpreter, whereas the image pre-processing step reconstructs a relatively better image from an originally imperfect/degraded image.
• Image Transformations
Image transformations are operations similar in concept to those for image enhancement. However, unlike image enhancement operations, which are normally applied only to a single channel of data at a time, image transformations usually involve algebraic operations on multi-layer images. Algebraic operations such as subtraction, addition, multiplication, division, logarithms, exponentials and trigonometric functions are applied to transform the original images into new images which display better or highlight certain features in the image (a band-ratio sketch follows after this list).
• Thematic Information Extraction
It includes all the processes used for extracting thematic information from images. Image classification is one such process, which categorises pixels in an image into thematic classes, such as land cover classes, based on spectral signatures. Image classification procedures are further categorised into supervised, unsupervised and hybrid, depending upon the level of human intervention in the process of classification.
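As an example of the band algebra mentioned above, the sketch below computes the widely used normalised difference vegetation index, NDVI = (NIR − Red) / (NIR + Red), from two hypothetical bands; the array values are made up for illustration:

```python
import numpy as np

# Hypothetical red and near-infrared (NIR) bands of the same scene
red = np.array([[30.0, 40.0],
                [25.0, 90.0]])
nir = np.array([[90.0, 85.0],
                [80.0, 95.0]])

# Subtraction, addition and division of two bands produce a new image
# (here NDVI) that highlights vegetation better than either input band
ndvi = (nir - red) / (nir + red)
print(ndvi)   # values close to +1 indicate vigorous vegetation
```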
Check Your Progress I (Spend 5 mins)
1) List out the advantages of digital image processing.
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................
Fig. 10.4: Representation of (a) black and white and (b) gray scale images. Note the
range of values for the highlighted boxes in the two types of images
Fig. 10.5: Representation of a colour image. Note the range of values of its three
components, i.e. red, green and blue
RGB (red, green and blue) is the colour space commonly used to visualise colour images. R, G and B are the primary colours for mixing light and are therefore called additive primary colours. Any other colour can be created by mixing the correct amounts of red, green and blue light. If each of these three components has a range of 0 - 255, there can be a total of 256³ (about 16.7 million) different possible colours in a colour image. Storing a colour image therefore requires 24 bits for each pixel.
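The arithmetic behind "256³ possible colours" and "24 bits per pixel" can be checked directly; the colour triplets at the end are standard RGB conventions rather than values from the text:

```python
# Each of the three components (R, G, B) takes one of 256 values (0-255)
levels = 256
print(levels ** 3)        # 16777216: about 16.7 million possible colours

# 8 bits per component x 3 components = 24 bits per pixel
print(8 * 3)              # 24

# A colour pixel is simply an (R, G, B) triplet
red = (255, 0, 0)
yellow = (255, 255, 0)    # red and green light mixed at full intensity
```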
10.3.2 Characteristics of Digital Image
There are three basic measures for digital image characteristics:
• spatial resolution
• spectral resolution and
• radiometric resolution.
All three of these image measures have already been described in Unit 5 (Image Resolution) of MGY-002, so they are not repeated here. However, you should keep in mind that the higher the resolution of an image, the more information the image contains.
Digital Number
In a digital image, each point/unit area in the image is represented by an integer depending upon its brightness/intensity, often referred to as the digital number, DN or DN value. The lowest intensity is assigned a DN of zero and the highest intensity the highest DN; the various intermediate intensities are assigned appropriate intermediate DNs. Thus, intensities over a scene are converted into an array of numbers, where each number represents the relative value of the field over a unit area. The range of DNs used in a digital image depends upon the bit depth of the data (Table 10.1), the most common being the 8-bit type.
Pixel Depth
It refers to the number of bits used to represent each pixel in RGB space. For example, if each pixel component of an RGB image is represented by 8 bits, the pixel is said to have a depth of 24 bits.
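The link between bit depth and DN range is simple arithmetic: an n-bit pixel can hold 2ⁿ distinct DNs, from 0 to 2ⁿ − 1. A short check, with the bit depths chosen for illustration:

```python
# An n-bit pixel stores 2**n levels, with DNs running from 0 to 2**n - 1
for bits in (1, 8, 11, 16):
    dn_levels = 2 ** bits
    print(f"{bits}-bit data: {dn_levels} levels, DNs 0 to {dn_levels - 1}")

# 8-bit data (the most common case) therefore gives DNs 0-255, and an
# RGB pixel with 8 bits per component has a pixel depth of 24 bits
```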
Look Up Table
It gives an output value for each of a range of index values. A look up table is used to transform input data into a more desirable output format. For example, a gray scale picture of the planet Saturn can be transformed into a colour image to emphasise the differences in its rings. Contrast and colour values can be altered without modifying the original digitised image, and an adjustable curve may be utilised to interactively alter the values present in the look up table.
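In code, a look up table is just an array indexed by input DN; the hedged NumPy sketch below pushes an 8-bit image through an inverting LUT (the image values are made up, and the inversion stands in for any contrast or colour curve):

```python
import numpy as np

# Hypothetical 8-bit gray scale image
image = np.array([[0, 64],
                  [128, 255]], dtype=np.uint8)

# Build a LUT: one output value for each of the 256 possible input DNs.
# This particular curve inverts the image; any adjustable curve would do.
lut = np.arange(256, dtype=np.uint8)[::-1]

# Applying the LUT is a single indexing operation; the original digitised
# image is not modified, only the displayed values change
transformed = lut[image]
print(transformed)   # [[255 191] [127   0]]
```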
Band
In a multispectral sensor, such as those aboard the Landsat satellites, information from different wavelengths of light is collected as in a digital camera, but there are two major differences. The first is that, instead of being limited to the visible wavelengths (red, green and blue), a much broader range of wavelengths is detected. The second is that, instead of automatically combining information from the different wavelengths to form a picture, the information for each specific wavelength range is stored as a separate image. This image is commonly called a band. In other words, images obtained in different wavelengths together form a multispectral image, and each individual image is known as a band, layer or channel.
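In software, a multispectral image is typically held as a three-dimensional array with one two-dimensional layer per band; the sketch below (a hypothetical 2 × 2 scene with three bands) shows how an individual band is pulled out:

```python
import numpy as np

# Hypothetical bands of one 2 x 2 scene (e.g. green, red and NIR)
green = np.array([[50, 55], [52, 58]])
red   = np.array([[40, 45], [42, 48]])
nir   = np.array([[90, 88], [85, 92]])

# Stack the bands along a third axis to form the multispectral image
multispectral = np.dstack([green, red, nir])
print(multispectral.shape)        # (2, 2, 3): rows, columns, bands

# Each wavelength range remains a separate image: a band/layer/channel
nir_band = multispectral[:, :, 2]
print(nir_band)
```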
Digital image colour display is based entirely on this colour theory. This can be explained with the example of a colour TV, which is composed of three precisely registered colour guns: red, green and blue. In the blue gun, pixels of an image are displayed in blue of different intensities (e.g., dark blue, light blue) depending on their DNs. The same is true of the green and red guns. Thus, a colour image is generated when the red, green and blue bands of a multispectral image are displayed in the red, green and blue guns of a TV or computer monitor simultaneously. The illustration in Fig. 10.7a shows the typical demonstration of additive light mixtures, made by shining three overlapping squares of filtered light onto an achromatic (gray or white) surface. If the surface is illuminated by both red and green lights but not by the blue light, the eye responds with the colour sensation of yellow. Magenta results from the mixture of red and blue light, and cyan from the mixture of blue and green (Table 10.2). In additive colour mixing, yellow and blue do not make green but white.
Fig. 10.7: Illustration of (a) additive and (b) subtractive colour mixtures. Images from (c) to (f) show additive colour image display. (c), (d) and (e) are the three images displayed in the blue, green and red guns, respectively, of a computer monitor and (f) is the resultant colour image
Table 10.2: Mixing of primary colours and the resultant colour produced

    Red + Green          = Yellow
    Red + Blue           = Magenta
    Green + Blue         = Cyan
    Red + Green + Blue   = White
We see light colours by the process of emission from the source, but we see pigment colours by the process of reflection (i.e. light reflected off an object). Colours which are not reflected are absorbed (subtracted). When the source of colour is pigment, the result of combining colours is different from when the source of colour is light. Cyan, magenta and yellow (CMY) are called the subtractive primary colours (Fig. 10.7b). Subtractive colour mixing occurs when light is reflected off a surface or is filtered through a translucent object. Perhaps the easiest way to think about it is to realise that red pigment absorbs green and blue, blue pigment absorbs red and green, and green pigment absorbs red and blue; when all three pigments are combined, the result is completely black (Fig. 10.7b).
Thus, the RGB colour cube is defined by the maximum possible DN level in each component of the display. Any image pixel in this system may be represented by a vector from the origin to somewhere within the colour cube. Most standard RGB display systems can display 8 bits per pixel per channel, i.e. up to 24 bits or 256³ different colours. This display capacity is enough to generate a true colour image (Fig. 10.7f).
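Each displayable colour is thus a vector (R, G, B) inside the cube, and the additive mixtures of Table 10.2 fall out of simple vector addition; the triplets below follow standard RGB conventions:

```python
import numpy as np

# The three primary corners of the RGB colour cube (the origin is black)
red   = np.array([255, 0, 0])
green = np.array([0, 255, 0])
blue  = np.array([0, 0, 255])

# Additive mixtures of the primaries (compare Table 10.2)
print(red + green)          # [255 255   0] -> yellow
print(red + blue)           # [255   0 255] -> magenta
print(green + blue)         # [  0 255 255] -> cyan
print(red + green + blue)   # [255 255 255] -> white
```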
As we know, colours lie in the visible spectral range of 380 - 750 nm, and they are used as a tool for information visualisation in the colour display of all digital images. Thus, for the display of a digital image, the assignment of a primary colour to a spectral band can be made arbitrarily, depending on the requirements of the application, and it need not correspond to the actual colour of the spectral range of the band. Let us therefore discuss two terms which are commonly used in the context of remote sensing images, viz. true colour composite and false colour composite.
Fig. 10.8: True and false colour composites generated from blue, green, red and near-
infrared (NIR) bands of Landsat images
Many sensors, such as LISS III, were not designed to acquire images in the blue wavelength because of the associated noise problem; instead, images were acquired in many other wavelength regions including the infrared. In such cases, false colour composites are generated without the presence of a blue band. The standard false colour composite (SFCC) is a typical example of a false colour composite, in which the composite is generated by shifting bands in such a way that the NIR band is displayed in the red plane, the red band in the green and the green band in the blue plane of the monitor. In the SFCC, healthy vegetation appears in shades of red because vegetation absorbs most of the green and red energy but reflects approximately half of the incident infrared energy. The SFCC thus effectively highlights any vegetation distinctively in red (Fig. 10.8). Images displayed in any other band combination are broadly called false colour composites.
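Since a composite differs only in which band is routed to which display plane, building one in code is a matter of stacking bands in the right order. A hedged sketch with hypothetical, co-registered band arrays:

```python
import numpy as np

# Hypothetical co-registered bands of one scene, already scaled to 0-255
shape = (100, 100)
blue  = np.random.randint(0, 256, shape, dtype=np.uint8)
green = np.random.randint(0, 256, shape, dtype=np.uint8)
red   = np.random.randint(0, 256, shape, dtype=np.uint8)
nir   = np.random.randint(0, 256, shape, dtype=np.uint8)

# True colour composite: red, green, blue bands -> R, G, B display planes
true_colour = np.dstack([red, green, blue])

# Standard false colour composite (SFCC): NIR -> red plane, red -> green
# plane, green -> blue plane; healthy vegetation then shows up in red
sfcc = np.dstack([nir, red, green])
print(true_colour.shape, sfcc.shape)   # (100, 100, 3) for each composite
```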
Check Your Progress II (Spend 5 mins)
1) Digital images are sampled and mapped as a grid of dots called ...............
...............................................................................
Fig. 10.9: Table at left shows DNs of a hypothetical image. Central table shows the frequency of occurrence of each DN. Figure at right is the graphical representation, i.e. histogram, of the central table
The histogram shown in Fig. 10.9 looks like a mountain peak, and its highest bar represents the maximum concentration of a particular pixel value. The left-to-right direction of the histogram relates to the darkness (minimum value at the left) and lightness (maximum value at the right) of the image, while the heights of the bars (peaks and valleys) show how frequently each brightness (or colour, in a multispectral image) occurs. If an image is too dark, its histogram will show a higher concentration on the left side, and if the image is too bright, its histogram will show a higher concentration on the right side. This will become easier to understand if you look at histograms produced by a daytime and a night time image. Each image has its own unique histogram, and with a histogram it is easy to determine certain types of problems in an image. For example, it is easy to conclude whether an image contains too many bright or dark pixels by visual inspection of its histogram.
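A histogram is just the frequency of occurrence of each DN, so it can be computed with one counting call; the random "dark scene" below is illustrative, standing in for a night time image:

```python
import numpy as np

# Hypothetical 8-bit image of a dark scene: all DNs fall below 100
image = np.random.randint(0, 100, (50, 50), dtype=np.uint8)

# Count how many pixels hold each of the 256 possible DNs
histogram = np.bincount(image.ravel(), minlength=256)

# A dark image concentrates its counts on the left of the histogram
print("pixels with DN < 128:", histogram[:128].sum())    # 2500 (all)
print("pixels with DN >= 128:", histogram[128:].sum())   # 0
```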
Careful inspection of a histogram can also give us an idea about the dominant types of features in the image. For example, Fig. 10.10c shows the histogram of an image representing a coastal area, which has two distinct peaks in the NIR band: the large peak at the left, with lower DN values, corresponds to water pixels, and the other peak, with higher DN values, represents land pixels.
Fig. 10.10: (a) Jolly Boys island in the Andaman group of islands as seen in a false colour
composite, (b) gray scale image of NIR band clearly showing land (bright)
and water (dark) pixels and (c) histogram of NIR band. Note two distinct
peaks in histogram for water and land pixels
Now you can infer that for a low contrast image the histogram will not be spread evenly; it will be narrow and tall, covering a short range of pixel values. For a high contrast image, the histogram will have an even spread of pixel values, producing a short and wide histogram covering a wide range of pixel values.
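Contrast can be improved by stretching a narrow DN range over the full display range; below is a minimal linear stretch sketch (the low contrast input is simulated, and real image processing software usually offers several stretch options):

```python
import numpy as np

# Hypothetical low contrast image: DNs squeezed into the range 100-140,
# giving the narrow, tall histogram described above
image = np.random.randint(100, 141, (50, 50)).astype(np.float64)

# Linear stretch: map the image's [min, max] onto the full 0-255 range,
# spreading pixel values out into a wide histogram
lo, hi = image.min(), image.max()
stretched = ((image - lo) / (hi - lo) * 255).astype(np.uint8)

print(image.min(), image.max())          # about 100 140
print(stretched.min(), stretched.max())  # 0 255
```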
Histograms always depend on the visual characteristics of the scene captured in the image, so no single ideal histogram exists. While a given histogram may be optimal for a specific scene, it may be entirely unacceptable for another. For example, the ideal histogram for an astronomical image would likely be very different from that of a good landscape image. The significance of a histogram, then, lies in the fact that it provides an insight into the contrast and brightness of an image and also into its quality.
10.6 ACTIVITY
To acquire more knowledge about image histograms, you can carry out the activity given below:
Capture a picture with your digital camera in daylight and capture the same scene at night. Process these two images in digital image processing software and create histograms for them. Compare the histograms of these two images and observe the differences between them. You may also capture images of different objects at a particular time and see the differences in their histograms.
10.7 SUMMARY
In the present unit, you have studied the following:
• Images are a way of recording and representing information in a visual form. A digital image is composed of a finite number of elements called pixels.
• Digital images are of three types: binary or black and white, gray scale
and colour or RGB image.
• Primary colours are those that cannot be created by mixing other colours. Because of the way we perceive colours using different sets of wavelengths, there are three primary colours: red, green and blue. Any colour can be represented as some mixture of these three primary colours.
10.8 UNIT END QUESTIONS
10.10 ANSWERS
Check Your Progress I
1) Advantages of digital image processing are:
• images can be identically duplicated during reproduction and
distribution without any change or loss of information
• visualisation of greater details
• images can be processed to generate new images without altering the
original image
• faster extraction of quantitative information and
• repeatability of results.
2) Components of an image processing system are processing machine
(computer), image processing software, storage device and display
device.
GLOSSARY
Bhuvan: India's own version of Google Earth, launched by ISRO. Bhuvan is based on images taken by IRS satellites and can be downloaded from the Indian Earth observation visualisation portal.
Colour space: The parts of the spectrum used to describe an image. Colour
spaces vary in their scope according to the range of colours involved.
Gray scale: A calibrated sequence of gray tones ranging from black to white.
Ground truthing: The process of collection of ground truth data that helps to
link the image data to the ground reality in order to verify the image features.
82
Image analysis: Understanding the relationship between interpreted information and the actual status or phenomenon, and evaluating the situation.
Land use: Description of the way that humans are utilising any particular
piece of land for one or many purposes, e.g., for agriculture, industry, or
residence.
Noise: Random or repetitive events that obscure or interfere with the desired
information.
ABBREVIATIONS
DN : Digital Number
NH : National Highway
NIR : Near-infrared
PAN : Panchromatic
TM : Thematic Mapper