
UNIT 10 CHARACTERISTICS OF DIGITAL REMOTE SENSING IMAGES

Structure
10.1 Introduction
Objectives
10.2 Digital Image Processing
What is an Image?
What is a Digital Image?
What is Digital Image Processing?
Advantages of Digital Image Processing
Components of an Image Processing System
Steps in Digital Image Processing
10.3 Types and Characteristics of Digital Images
Types of Digital Images
Characteristics of Digital Image
Related Terminologies
10.4 Concept of True and False Colour Composite
10.5 Image Histogram and its Significance
10.6 Activity
10.7 Summary
10.8 Unit End Questions
10.9 Further/Suggested Reading
10.10 Answers

10.1 INTRODUCTION
In Unit 7 of MGY-002, you studied the concept of visual image interpretation, with examples in Unit 8. You have learnt that information derived from the visual mode of image interpretation is mostly qualitative. In Unit 9, you studied that ground truth data acts as a link between the image, the image-derived information and the ground reality.

Computers handle data in digital form; hence, remote sensing data need to be in digital form for computer processing. In this unit, you will be introduced to the digital image, its characteristics and its processing. Computer processing of digital remote sensing data includes several steps such as preprocessing, enhancement, transformation and information extraction. You will also briefly learn about these steps here, prior to studying them in detail in Block 4 of MGY-002.

Objectives
After studying this unit, you should be able to:
• define a digital image and discuss its characteristics;
• list the components of an image processing system;
• discuss advantages of digital image processing;
• identify the steps in digital image processing;
• describe the concept of true and false colour composites; and
• define image histogram and discuss its importance.

10.2 DIGITAL IMAGE PROCESSING


Digital image and digital image processing have now become a part of our
everyday life. Pictures taken on any digital camera or mobile device are the
most common examples of digital images. Medical diagnostics is one of the fields where digital image processing has seen many developments, such as bone scanning, digital X-ray, MRI (Magnetic Resonance Imaging) and digital mammograms. There are other fields where digital image processing
techniques have enhanced extraction of information significantly and in a
quantitative manner. Interest in digital image processing stems mainly from
two application areas:
• improvement of pictorial information for human interpretation, and
• processing of image data for storage, transmission and representation for
machine perception.
Let us first study about image, digital image and advantages of digital image
processing.

10.2.1 What is an Image?


In a broad sense, an image is a picture or photograph. Images are the most common and convenient means of storing, conveying and transmitting information. They concisely convey information about positions, sizes and inter-relationships between objects and portray spatial information that we can recognise as objects.
An image is usually a summary of the information in the object it represents.
The information of an image is presented in tones and colours. In a strict sense, photographs are images recorded on photographic film and converted into paper form by chemical processing of the film, whereas an image is any pictorial representation of information. So, it can be said that all photographs are images but not all images are photographs.
Mathematically, an image may be defined as a two-dimensional function
f(x,y), where x and y are spatial (plane) coordinates. The amplitude of f at any
pair of coordinates (x,y) (or in other words, any location in the image) is called
the intensity or gray level of the image and is proportional to the brightness of
the scene at that coordinate/location (x,y).

10.2.2 What is a Digital Image?


When a paper photograph is scanned through a scanner and stored in a
computer, it becomes a digital image as it has been converted into digital
mode. When you see a paper photograph and its digital version in a computer,
you do not see any difference. In digital mode, photographic information is
stored as an array of discrete numbers. Each number corresponds to a discrete
dot, i.e. one image element in an image. This image element is the smallest
part of an image and is generally known as picture element or pixel or pel.
These numbers vary from place to place within the image depending upon the tonal variation. The number of pixels in an image depends upon the image size (length and width of the image). In any image, bright areas are represented by higher values whereas dark areas are represented by lower values. These values are known as digital numbers.
We know now that a digital image is composed of a finite number of pixels,
each of which has a particular location and value. In other words, when (x,y)
and amplitude values of ‘f’ are all finite, discrete quantities both in spatial
coordinates and in brightness, the image is called a digital image.
An image must be converted to numerical form before processing; this conversion process is called digitisation. In Fig. 10.1, the image is divided into horizontal lines made up of adjacent pixels. At each pixel location, the image brightness is sampled and quantised. This step generates an integer at each pixel representing the brightness or darkness of the image at that point; the digitised brightness value is called the gray level, and the whole image is represented by a two-dimensional integer array. Thus, a digital image is a representation in the form of rows and columns, where each number in the array represents the relative value of the parameter at that point or over the unit area. Fig. 10.1 shows a digital image with its corresponding digital values. If you look closely, you will observe that where the colour is dark the pixel value is low and where the colour is light the value is high. Similarly, Fig. 10.2 shows an image of size 4 × 4; the value of f(x,y) at (1,4), i.e. first row and fourth column, is 24.

Fig. 10.1: A digital image (left) and its corresponding values (centre). Note the variation
in the brightness and the change in the corresponding digital numbers.
Highlighted block in the centre figure shows one pixel. The figure at right
shows the range of values corresponding to the brightness

Fig. 10.2: Arrangement of rows and columns of an image of size 4 × 4 (4 rows and 4
columns). Left figure shows the numerical values in the image and the table at
right shows the representation of pixel location for an image of size 4 × 4. You
can observe that at location (1, 4), i.e. row 1 and column 4, the pixel value is 24
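The idea of an image as a two-dimensional integer array is easy to reproduce in code. The following minimal sketch (not part of the original unit) uses Python with NumPy; all DN values except the one at row 1, column 4 are made up for illustration.

```python
import numpy as np

# A hypothetical 4 x 4 digital image: each entry is the DN (gray level)
# at that row and column, in the spirit of Fig. 10.2.
image = np.array([
    [10, 15, 18, 24],   # row 1
    [12, 20, 22, 30],   # row 2
    [ 8, 14, 25, 28],   # row 3
    [ 5, 11, 19, 26],   # row 4
], dtype=np.uint8)

# NumPy indices are zero-based, so f(1, 4) in the text is image[0, 3].
print(image[0, 3])    # -> 24, as in Fig. 10.2
print(image.shape)    # -> (4, 4): 4 rows and 4 columns
```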
10.2.3 What is Digital Image Processing?
Interpretation of a digital image involves analysis of the image and extraction of information through computer software. Digital image analysis requires processing of the image using computer software; those processing steps are called digital image processing. Digital image processing can be defined as subjecting a numerical representation of objects (i.e. a digital image) to a series of operations in order to obtain a desired result. Digital image processing begins with one image and produces a modified version of that image. Digital image analysis, in contrast, transforms a digital image into something other than a digital image, such as a set of measurements of the objects present in the image. However, the term digital image processing is loosely used to cover both processing and analysis.

10.2.4 Advantages of Digital Image Processing


There are certain thresholds beyond which the human interpreter cannot detect minor differences in image features. For example, if data are recorded with 256 gray shades, there may be more subtle information present in the image than we can extract visually. Similarly, it becomes difficult to keep track of a great amount of detailed quantitative information, such as the spectral characteristics for crop identification throughout a growing season. However, a computer is much more adept at storing and manipulating such information.
Advantages of handling remote sensing data in digital mode as compared to photographic mode are listed below (recall Table 7.1 of Unit 7 of MGY-002 for a comparison of visual and digital image interpretation):
• ease in data storage and distribution
• images can be identically duplicated during reproduction and distribution
without any change or loss of information
• visualisation of greater details
• images can be processed to generate new images without altering the
original image
• faster extraction of quantitative information and
• repeatability of results.
Computer assisted image interpretation mimics the visual image interpretation approach to a certain level. Manual image analysis uses most of the elements of image interpretation such as tone, colour, size, shape, texture, pattern, height, shadow, site and association, whereas computer assisted image interpretation involves only a few of these basic elements. In fact, the majority of digital image analysis depends primarily on just the tone or colour of image features. Nevertheless, both manual and digital analysis of remotely sensed data seek to detect and identify important phenomena in the scene; both interpretation approaches share the same general goals.

10.2.5 Components of an Image Processing System


The components of an image processing system differ depending on the type of image; for example, satellite images and X-ray images cannot be processed by the same type of image processing system. However, the following components, shown in Fig. 10.3, are the minimum requirement of an image processing system.

Fig. 10.3: Block diagram showing the components of a digital image processing system: processing machine, display device, storage device, image processing software and printing device

Processing Machine (Computer)


It may be a general purpose computer, chosen according to the task to be performed. Its basic role is to carry out all digital image processing tasks offline.

Storage Device
Storage devices are used for storing images for different purposes and uses.

Display Device
It is used for displaying data. A display device is generally a colour monitor.

Image Processing Software


Image processing software packages such as IGIS, ERDAS, ENVI and Geomatica consist of specially designed programming modules that perform specific tasks.

Printing Device
It is used for representing and storing image data in hard copy format. It could be a laser, inkjet or any other printer.

It is important to note that the following factors should be considered while selecting a digital image processing system:

• Number of Analysts and Mode of Operation: You should consider software which can be accessed and used interactively for data processing by at least the number of people involved in the study.

• Memory and Processing Specifications of Computer: Processing of different types of digital remote sensing data requires different processing capabilities and memory. You should choose software that is compatible with the specifications of the computer you have, or buy a computer with the minimum specifications required to process the data of your interest.

• Operating System: The operating system must be powerful and easy to use. DOS, UNIX and Windows are the most widely used operating systems. The chosen image processing software should be compatible with the operating system on your computer.

• Storage: Digital images are usually stored in matrix form with various multispectral bands and in different formats. Software which is capable of storing and processing the image formats concerned should be considered.

• Display Resolution: Different image types require different display resolutions; hence, you should consider software which is capable of displaying the highest resolution.

10.2.6 Steps in Digital Image Processing


It is now apparent to you that the subject of digital image processing is very broad. For our understanding, we can generalise image processing into the following four steps, all of which are discussed in detail in the next block, i.e. Block 4 of MGY-002:

• Image Preprocessing
It is usually necessary to preprocess remote sensing data prior to analysis because image data recorded by sensors contain errors which degrade the quality of the image and cause it to appear noisy, blurred and distorted. These errors creep in during the data acquisition process. The most common types of errors are geometric and radiometric errors. All such errors are corrected using suitable mathematical models during preprocessing.

• Image Enhancement
Image enhancement is carried out to improve the appearance of certain image features to assist in human interpretation and analysis. You should note that image enhancement is different from the preprocessing step: enhancement highlights image features for the interpreter, whereas preprocessing reconstructs a relatively better image from an originally imperfect/degraded image.

• Image Transformations
Image transformations are operations similar in concept to those for image enhancement. However, unlike image enhancement operations, which are normally applied only to a single channel of data at a time, image transformations usually involve algebraic operations on multi-layer images. Algebraic operations such as subtraction, addition, multiplication, division, logarithms, exponentials and trigonometric functions are applied to transform the original images into new images which display better or highlight certain features in the image (a small illustration follows this list).

• Thematic Information Extraction
It includes all the processes used for extracting thematic information from
images. Image classification is one such process which categorises pixels
in an image into some thematic classes such as land cover classes based
on spectral signatures. Image classification procedures are further
categorised into supervised, unsupervised and hybrid depending upon the
level of human intervention in the process of classification.
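As promised above, here is a small illustration of an image transformation. The sketch below computes the NDVI (Normalised Difference Vegetation Index), a standard band-ratio transformation; the 2 × 2 band arrays are hypothetical DN values, not data from this unit.

```python
import numpy as np

# Hypothetical red and near-infrared (NIR) bands of the same scene.
red = np.array([[30.0, 40.0],
                [60.0, 50.0]])
nir = np.array([[90.0, 80.0],
                [65.0, 55.0]])

# NDVI = (NIR - Red) / (NIR + Red), computed pixel by pixel.
# Values near +1 suggest dense, healthy vegetation; values near 0
# suggest bare soil; negative values typically indicate water.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```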
Check Your Progress I (Spend 5 mins)
1) List out the advantages of digital image processing.
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................

2) Name the components in an image processing system.


......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................

10.3 TYPES AND CHARACTERISTICS OF DIGITAL IMAGES
Before discussing the image processing steps in the next block, it is essential to understand a little more about the digital image. So, we shall now discuss the types of digital images and their characteristics.

10.3.1 Types of Digital Images


Digital images can be classified into several types based on their form or method of generation. The actual information stored in digital image data is the brightness information in each spectral band and, in general, digital images are of the following three types:

a) Black and White or Binary image


Pixels in this type of image show only two colours, either black or white (Fig. 10.4a), and hence each pixel is represented by one of only two possible values: 0 for black and 1 for white. Since a black and white image can be described in terms of binary values, such images are also known as binary images or bi-level or two-level images. This also means that binary images require only a single bit (0 or 1) to represent each pixel; hence storing these images requires only one bit per pixel. The inability to represent intermediate shades of gray limits the usefulness of binary images for remote sensing or photographic images.

Fig. 10.4: Representation of (a) black and white and (b) gray scale images. Note the
range of values for the highlighted boxes in the two types of images

b) Gray Scale or Monochrome Image


Pixels in this type of image show white and black colours, including the different shades between the two, as shown in Fig. 10.4b. Generally, black is represented by the value 0, white by 255, and the in-between gray shades by intermediate values. This range means that each pixel can be represented by eight bits, i.e. exactly one byte; in other words, storing a gray scale image requires 8 bits per pixel.

c) Colour or RGB Image


Each pixel in this type of image has a particular colour, described by the amounts of red, green and blue in it (Fig. 10.5). Colour images are constructed by stacking three gray scale images, where each image (i.e. band) corresponds to a different colour; hence there are three values (one each for the red, green and blue components) for each pixel.

Fig. 10.5: Representation of a colour image. Note the range of values of its three
components, i.e. red, green and blue

RGB (Red, Green and Blue) is the colour space commonly used to visualise colour images. Red, green and blue are primary colours for mixing light and are called additive primary colours. Any other colour can be created by mixing the correct amounts of red, green and blue light. If each of these three components has a range of 0 - 255, there can be a total of 256³ (about 16.7 million) different possible colours in a colour image. Storing a colour image requires 24 bits per pixel.
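The storage requirements of the three image types can be checked directly in code. A minimal sketch in Python with NumPy follows; the image dimensions are arbitrary.

```python
import numpy as np

rows, cols = 100, 100

# Binary image: conceptually one bit per pixel (0 = black, 1 = white).
# (NumPy stores each boolean in a full byte, but the information
# content is a single bit per pixel.)
binary = np.zeros((rows, cols), dtype=bool)

# Gray scale image: 8 bits (one byte) per pixel, DNs from 0 to 255.
gray = np.zeros((rows, cols), dtype=np.uint8)

# Colour (RGB) image: three 8-bit components per pixel = 24 bits.
rgb = np.zeros((rows, cols, 3), dtype=np.uint8)

print(gray.nbytes)   # 10000 bytes: one byte per pixel
print(rgb.nbytes)    # 30000 bytes: three bytes per pixel
```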
10.3.2 Characteristics of Digital Image
There are three basic measures for digital image characteristics:
• spatial resolution
• spectral resolution and
• radiometric resolution.
All three of these image measures have already been described in Unit 5 Image Resolution of MGY-002, so they will not be repeated here. However, you should keep in mind that the higher the resolution of an image, the more information the image contains.

10.3.3 Related Terminologies


You shall come across many terms while studying about digital image
processing in the following units. Some of the commonly used terminologies
related to digital images and their digital processing are introduced in this
section so that you would become familiar with them.

Digital Number
In a digital image, each point/unit area in the image is represented by an
integer digital number depending upon the brightness/intensity, which is often
referred to as digital number or DN or DN value. The lowest intensity is
assigned DN of zero and the highest intensity the highest DN number, the
various intermediate intensities are assigned appropriate intermediate DNs.
Thus, intensities over a scene are converted into an array of numbers, where
each number represents the relative value of the field over a unit area. The
range of DNs used in a digital image depends upon the number of bit data
(Table 10.1), the most common being 8-bit type.

Table 10.1: Range of digital numbers with corresponding bit number

Bit number       Scale    DN range
7-bit or 2⁷      128      0 - 127
8-bit or 2⁸      256      0 - 255
9-bit or 2⁹      512      0 - 511
10-bit or 2¹⁰    1024     0 - 1023
11-bit or 2¹¹    2048     0 - 2047
12-bit or 2¹²    4096     0 - 4095
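The pattern in Table 10.1 is simply that an n-bit image has 2ⁿ gray levels, from 0 to 2ⁿ − 1. A few lines of Python (a sketch, not part of the original unit) reproduce the table:

```python
# DN range for an n-bit image: 2**n levels, from 0 to 2**n - 1.
for bits in range(7, 13):
    levels = 2 ** bits
    print(f"{bits}-bit: scale {levels}, DN range 0 - {levels - 1}")
```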

Pixel Depth
It refers to the number of bits used to represent each pixel in RGB space. For example, if each pixel component of an RGB image is represented by 8 bits, the pixel is said to have a depth of 24 bits.

Look Up Table
It gives an output value for each of a range of index values. A look up table is used to transform input data into a more desirable output format. For example, a gray scale picture of the planet Saturn can be transformed into a colour image to emphasise the differences in its rings. Contrast and colour values can be altered without modifying the original digitised image, and an adjustable curve may be used to interactively alter the values in the look up table.
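A look up table is easy to sketch in code: it is just a 256-entry array indexed by the input DN. The example below is illustrative only (a real LUT could hold any transfer curve); here the LUT inverts an 8-bit image.

```python
import numpy as np

# LUT mapping every possible 8-bit input DN to an output DN.
# This particular LUT inverts the image: 0 -> 255, 255 -> 0.
lut = np.arange(256, dtype=np.uint8)[::-1]

gray = np.array([[0, 64],
                 [128, 255]], dtype=np.uint8)   # hypothetical image

# Fancy indexing applies the LUT to every pixel at once,
# leaving the original image unmodified.
output = lut[gray]
print(output)   # [[255 191], [127 0]]
```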
Band
In a multispectral sensor, such as those aboard the Landsat satellites, information from different wavelengths of light is collected as in a digital camera, but with two major differences. First, instead of being limited to the visible wavelengths (red, green and blue), a much broader range of wavelengths is detected. Second, instead of automatically combining information from the different wavelengths to form a picture, the information for each specific wavelength range is stored as a separate image. This image is commonly called a band. In other words, images obtained in different wavelengths together form a multispectral image, and each individual image is known as a band or layer or channel (see the sketch below).
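In code, a multispectral image is naturally held as a three-dimensional array of rows × columns × bands, from which each band can be pulled out as a 2-D image. A minimal sketch follows; the band order and sizes are assumptions for illustration.

```python
import numpy as np

# Hypothetical 4-band scene, e.g. blue, green, red and NIR bands.
rows, cols, bands = 512, 512, 4
scene = np.zeros((rows, cols, bands), dtype=np.uint8)

# Each 2-D slice along the last axis is one band (layer/channel).
nir_band = scene[:, :, 3]
print(nir_band.shape)   # -> (512, 512)
```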

10.4 CONCEPT OF TRUE AND FALSE COLOUR COMPOSITE
Now that you have read about the digital image, you can learn the concept of true and false colour composites. If you know the structure and working principle of a colour TV tube, you would know that the tube is composed of three colour guns: red, green and blue. Red, green and blue are known as primary colours, and any other colour can be matched by proper proportions of these three. The mixture of light from these three primary colours can produce any colour on a TV, so to the human eye the displayed image appears just like the original subject: a blue sky appears blue, a red apple appears red and a green tree appears green. Let us now try to comprehend how human eyes determine a colour. Fig. 10.6 shows a horizontal cross section of the human eye. A human eye consists of three membranes enclosing the eye, which are given below:
a) cornea and sclera, which are the outer covers of the eye;
b) choroid, which has two further parts called the ciliary body and the iris diaphragm; and
c) retina, which has two classes of receptors called cones and rods.

Fig. 10.6: Cross section view of the human eye


Besides these three membranes, the eye also has a lens. When the eye is properly focused, light from an object outside the eye is imaged on the retina. The human retina has three types of cones. The response of each type of cone is a function of the wavelength of the incident light; it peaks at 440 nm (blue, B), 545 nm (green, G) and 680 nm (red, R), i.e. each type of cone is primarily sensitive to one of the primary colours: blue, green or red. The colour perceived by a person depends on the proportion of each of these three types of cones being stimulated and can, thus, be expressed as a triplet of numbers (R, G, B). These three values define a 3-dimensional colour space called the RGB colour space (a colour space is a mathematical system for representing colours).

Digital image colour display is based entirely on this colour theory. This can be explained with the example of a colour TV, which is composed of three precisely registered colour guns: red, green and blue. In the blue gun, pixels of an image are displayed in blue of different intensities (e.g., dark blue, light blue) depending on their DNs. The same is true of the green and red guns. Thus, a colour image is generated when the red, green and blue bands of a multispectral image are displayed in the red, green and blue guns of a TV or computer monitor simultaneously. The illustration in Fig. 10.7a shows the typical demonstration of additive light mixtures, made by shining three overlapping squares of filtered light onto an achromatic (gray or white) surface. If the surface is illuminated by both red and green lights but not by the blue light, the eye responds with the colour sensation of yellow. Magenta results from the mixture of red and blue light, and cyan from the mixture of blue and green (Table 10.2). In additive colour mixing, yellow and blue do not make green but white.

Fig. 10.7: Illustration of (a) additive and (b) subtractive colour mixtures. Images from (c) to (f) show additive colour image display. (c), (d) and (e) are the three images displayed in the blue, green and red guns, respectively, of a computer monitor and (f) is the resultant colour image
Table 10.2: Mixing of primary colours and the resultant colour produced

Red     Green    Blue    Resultant colour
-       Green    Blue    Cyan
Red     -        Blue    Magenta
Red     Green    -       Yellow
Red     Green    Blue    White
-       -        -       Black

We see light colours by the process of emission from a source, but we see pigment colours by the process of reflection (i.e. light reflected off an object). Colours which are not reflected are absorbed (subtracted). When the source of colour is pigment, the result of combining colours is different from when the source of colour is light. Cyan, magenta and yellow (CMY) are called the subtractive primary colours (Fig. 10.7b). Subtractive colour mixing occurs when light is reflected off a surface or is filtered through a translucent object. Perhaps the easiest way to think about it is to realise that red pigment absorbs green and blue light, blue pigment absorbs red and green, and green pigment absorbs red and blue; mix all three pigments and everything is absorbed, so the result is completely black (Fig. 10.7b).
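Both mixing rules can be verified with plain RGB triplets. In the sketch below (illustrative, not from the unit), additive mixing adds light, while each subtractive primary is simply white minus one additive primary.

```python
import numpy as np

# Additive mixing of light (compare Table 10.2).
red   = np.array([255, 0, 0])
green = np.array([0, 255, 0])
blue  = np.array([0, 0, 255])
print(red + green)            # [255 255   0] -> yellow
print(red + blue)             # [255   0 255] -> magenta
print(red + green + blue)     # [255 255 255] -> white

# Subtractive primaries (CMY) are the complements of RGB:
# each pigment absorbs (subtracts) one additive primary from white.
white = np.array([255, 255, 255])
print(white - red)            # [  0 255 255] -> cyan
```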

Thus, the RGB colour cube is defined by the maximum possible DN level in each component of the display. Any image pixel in this system may be represented by a vector from the origin to somewhere within the colour cube. Most standard RGB display systems can display 8 bits per pixel per channel, i.e. up to 24 bits per pixel or 256³ different colours. This display capacity is enough to generate a true colour image (Fig. 10.7f).

As we know, colours lie in the visible spectral range of 380 - 750 nm; they are used as a tool for information visualisation in the colour display of all digital images. For the display of a digital image, the assignment of a primary colour to a spectral band can be arbitrary, depending on the requirements of the application, and may not necessarily correspond to the actual colour of the spectral range of the band. Let us therefore discuss two terms which are commonly used in the context of remote sensing images, viz. true colour composite and false colour composite.

True Colour Composite


If we display three image bands of remote sensing data acquired in the red, green and blue spectral ranges in the red, green and blue colour guns/planes of a monitor, respectively, then a true colour composite (TCC) image is generated (Fig. 10.8). In other words, a true colour composite is produced if we display the blue band in the blue plane, the green band in the green plane and the red band in the red plane of the colour monitor.


Fig. 10.8: True and false colour composites generated from blue, green, red and near-
infrared (NIR) bands of Landsat images

False Colour Composite


Sensors such as LISS III are designed to acquire images in the green, red, NIR and middle infrared wavelengths of the electromagnetic spectrum and not in the blue wavelength. In such cases, where there is no band in the visible region or there are images acquired beyond the visible region, images can still be displayed on a monitor by assigning them to colour planes to which they do not belong. Such colour image displays are known as false colour composites, since they do not represent the true colours as we would see them on the ground. In other words, false colour composites are artificially generated colour images in which different bands of multispectral data are displayed in image planes other than their own (Fig. 10.8). For example, in the false colour composite shown in Fig. 10.8, the green band is assigned to the blue plane, the red band to the green plane and the NIR band to the red plane of the computer monitor. The false colour composite is the general case of an RGB colour display in any band combination; the true colour composite is only a special case of it.

Many sensors, such as LISS III, were not designed to acquire images in the blue wavelength because of noise problems, and images were acquired in many other wavelength regions including the infrared. In such cases, false colour composites are generated without a blue band. The standard false colour composite (SFCC) is a typical example, in which the composite is generated by shifting bands such that the NIR band is displayed in the red plane, the red band in the green plane and the green band in the blue plane of the monitor. In the SFCC, healthy vegetation appears in shades of red because vegetation absorbs most of the green and red energy but reflects approximately half of the incident infrared energy; the SFCC thus highlights vegetation distinctively in red (Fig. 10.8). Images displayed in any other band combination are broadly called false colour composites.
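Generating the two composites amounts to stacking three 2-D bands into the red, green and blue display planes. The sketch below uses random arrays as stand-in bands; with real data you would read co-registered bands from an image file instead.

```python
import numpy as np

rows, cols = 256, 256
rng = np.random.default_rng(0)

# Hypothetical co-registered bands of equal size.
blue_band  = rng.integers(0, 256, (rows, cols), dtype=np.uint8)
green_band = rng.integers(0, 256, (rows, cols), dtype=np.uint8)
red_band   = rng.integers(0, 256, (rows, cols), dtype=np.uint8)
nir_band   = rng.integers(0, 256, (rows, cols), dtype=np.uint8)

# True colour composite: red, green, blue bands in the R, G, B planes.
tcc = np.dstack([red_band, green_band, blue_band])

# Standard false colour composite: NIR in red, red in green,
# green in blue; healthy vegetation would appear red.
sfcc = np.dstack([nir_band, red_band, green_band])
print(tcc.shape, sfcc.shape)   # (256, 256, 3) (256, 256, 3)
```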
Check Your Progress II (Spend 5 mins)
1) Digital images are sampled and mapped as a grid of dots called ...............
...............................................................................

2) In a digital image, each point in the image is represented by an integer


referred to as ...........................................................................................

10.5 IMAGE HISTOGRAM AND ITS SIGNIFICANCE


Let us now study the image histogram and its significance. Most commonly, remote sensing images are of 8 bits; hence, the range of DNs varies between 0 and 255. If you tabulate the frequency of occurrence of each DN in an image, it can be presented graphically as a histogram, wherein the range of DNs is plotted on the abscissa and the frequency of occurrence of each DN on the ordinate, as shown in Fig. 10.9. In the context of remote sensing, a histogram can be defined as a plot of the number of pixels at each pixel value within each spectral band.

Histograms in general are frequency distributions describing the frequency of DNs occurring in an image. A histogram is basically a graph that represents the full range of DNs that a remote sensor captures, in 256 steps (0 = pure black and 255 = pure white) for 8-bit data. The histogram provides a convenient summary of the brightness (pixel values) in an image and is used to depict image statistics in an easily interpreted visual format.

Fig. 10.9: The table at left shows DNs of a hypothetical image. The central table shows the frequency of occurrence of each DN. The figure at right is the graphical representation, i.e. the histogram, of the central table
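Tabulating the frequency of each DN, as in Fig. 10.9, is a one-line operation in NumPy. A minimal sketch with a hypothetical random image:

```python
import numpy as np

# Hypothetical 8-bit image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, (100, 100), dtype=np.uint8)

# Count the frequency of every DN from 0 to 255.
counts, _ = np.histogram(image, bins=256, range=(0, 256))
print(counts.sum())      # 10000: total number of pixels
print(counts.argmax())   # the DN at the histogram peak
```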

The histogram shown in Fig. 10.9 looks like a mountain peak, and its highest bar represents the maximum concentration of a particular pixel value. The left to right direction of the histogram relates to the darkness (minimum values at the left) and lightness (maximum values at the right) of the image, while the up and down directions (valleys and peaks) correspond to brightness (or colour, in a multispectral image) information. If an image is too dark, the histogram shows a higher concentration on the left side, and if the image is too bright, its histogram shows a higher concentration on the right side. This becomes easier to understand if you look at the histograms produced by a daytime and a night-time image. Each image has its own unique histogram, and with a histogram it is easy to determine certain types of problems in an image. For example, it is easy to conclude whether an image contains too many bright or dark pixels by visual inspection of its histogram.

Careful inspection of a histogram can also give us an idea about the dominant types of features in the image. For example, Fig. 10.10c shows the histogram of an image of a coastal area, which has two peaks. The large peak at the left represents water pixels and the other peak represents land pixels. Images of coastal areas show two distinct peaks in the NIR band histogram: the peak with lower DN values corresponds to water pixels and the peak with higher DN values to land pixels.

Fig. 10.10: (a) Jolly Boys island in the Andaman group of islands as seen in a false colour
composite, (b) gray scale image of NIR band clearly showing land (bright)
and water (dark) pixels and (c) histogram of NIR band. Note two distinct
peaks in histogram for water and land pixels

It should now be clear that for a low contrast image the histogram is not spread evenly: it is narrow and tall, covering a short range of pixel values. A high contrast image, on the other hand, has an even spread of pixel values and produces a short, flat (wide) histogram covering a wide range of pixel values.
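This link between histogram spread and contrast is the basis of contrast stretching (see the Glossary): a narrow histogram is expanded to occupy the full display range. A sketch of a simple linear stretch follows, with hypothetical DNs confined to 100 - 150.

```python
import numpy as np

# Low contrast image: DNs span only 100-150 (narrow histogram).
rng = np.random.default_rng(0)
image = rng.integers(100, 151, (50, 50)).astype(np.float64)

# Linear stretch: map [min, max] of the image onto [0, 255].
lo, hi = image.min(), image.max()
stretched = ((image - lo) / (hi - lo) * 255.0).astype(np.uint8)

print(int(lo), int(hi))                    # ~100 150
print(stretched.min(), stretched.max())    # 0 255
```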

Histograms always depend on the visual characteristics of the scene captured in the image, so there is no single ideal histogram. While a given histogram may be optimal for a specific scene, it may be entirely unacceptable for another. For example, the ideal histogram for an astronomical image would likely be very different from that of a good landscape image. So, the significance of a histogram lies in the fact that it provides an insight into the contrast and brightness of an image, and also about the quality of the image.

10.6 ACTIVITY
To acquire more knowledge about image histograms, you can practise the activity given below:

Capture a picture with your digital camera in daylight and capture the same scene at night. Process the two images in digital image processing software and create histograms for them. Compare the histograms of the two images and observe the differences between them. You may also capture images of different objects at a particular time and see the differences in their histograms.

10.7 SUMMARY
In the present unit, you have studied the following:
• Images are a way of recording and representing information in a visual form. A digital image is composed of a finite number of elements called pixels.
• A digital image corresponds to a two-dimensional array of pixels, and each pixel has a particular location and value.
• Digital images are of three types: binary or black and white, gray scale, and colour or RGB images.
• Digital image processing is a collection of techniques for the manipulation of digital images by computers.
• Components of a digital image processing system include the processing machine, software, and storage and printing devices. Basic steps in digital image processing include image preprocessing, image enhancement, image transformation and image classification.
• Primary colours are those that cannot be created by mixing other colours. Because of the way we perceive colours using different sets of wavelengths, there are three primary colours: red, green and blue. Any colour can be represented as some mixture of these three primary colours.
• Histograms are frequency distributions of pixel values in an image. The brightness of an image can be improved by modifying the histogram of the image.

10.8 UNIT END QUESTIONS

1) Define an image and a digital image.


2) Discuss major steps in digital image processing.
3) Write about the types of digital image.
4) Define true and false colour composites.
5) What do you mean by image histogram?

10.9 FURTHER/SUGGESTED READING


• Jensen, J.R. (1986), Introductory Digital Image Processing - A Remote Sensing Perspective, Prentice Hall, New Jersey.
• Reddy, M.A. (2006), Textbook of Remote Sensing and Geographical Information Systems, BS Publications, Hyderabad.

10.10 ANSWERS
Check Your Progress I
1) Advantages of digital image processing are:
• images can be identically duplicated during reproduction and
distribution without any change or loss of information
• visualisation of greater details
• images can be processed to generate new images without altering the
original image
• faster extraction of quantitative information and
• repeatability of results.
2) Components of an image processing system are processing machine
(computer), image processing software, storage device and display
device.

Check Your Progress II


1) Pixels.
2) Digital number (DN).

Unit End Questions

1) Refer to subsections 10.2.1 and 10.2.2
2) Refer to subsection 10.2.6
3) Refer to subsection 10.3.1
4) Refer to section 10.4
5) Refer to section 10.5

GLOSSARY

Ancillary data: It includes any type of data/information (spatial and non-spatial) that may be of value in the image classification process (i.e. before, during and after classification). It comprises any type of information such as slope, height, aspect, geology, soils, hydrology, transportation networks, political boundaries, vegetation maps and so on.

Bhuvan: It is ISRO's own version of a virtual globe, similar to Google Earth. Bhuvan is based on images taken by IRS satellites and can be accessed through the Indian Earth observation visualisation portal.

Bit: It is the lowest level of electronic value in a digital image. It defines a


pixel’s colour value in combination with other bits. Each bit can have one of
two values either 1 or 0.

Brightness: It is the amount of light received by the eye regardless of colour.


The brightness of a colour identifies how light or dark the colour is. Any
colour whose brightness is zero is black, regardless of its hue or saturation.

Colour composite: It is a colour image prepared by combining individual


band images in which each band (up to a maximum of 3) is assigned one of
the three additive primary colours such as blue, green and red.

Contrast stretching: Improving the contrast of images by digital processing.


The original range of digital values is expanded to utilise the full contrast
range of the recording film or display device.

Colour space: The parts of the spectrum used to describe an image. Colour
spaces vary in their scope according to the range of colours involved.

False colour composites: These are artificially generated colour images in


which blue, green and red colours are assigned to the wavelength regions to
which they do not belong.

GPS (Global Positioning System): It is a satellite based location system that


gives accurate position (latitude, longitude and height) and navigational
information. At present there are 24 GPS satellites.

Gray scale: A calibrated sequence of gray tones ranging from black to white.

Ground truthing: The process of collection of ground truth data that helps to
link the image data to the ground reality in order to verify the image features.

Image reading: It is an elemental form of image interpretation and


corresponds to simple identification of objects using such image interpretation
elements as shape, size, etc.

Image measurement: Represents the extraction of physical quantities such as length, location, height, density, temperature and so on by using reference data or calibration data deductively or inductively.

Image analysis: Understanding the relationship between interpreted information and the actual status or phenomenon, and evaluating the situation.

Interpretation key: Criteria for identification of an object with elements of


interpretation.

Land cover: Physical material present on the surface e.g., forest.

Landforms: Natural features of a land surface e.g., mountains, plateaus,


plains, etc.

Landsat: Comprises a series of unmanned Earth-observing satellites jointly


managed by NASA and U.S. Geological Survey (formerly called Earth
Resources Technology Satellite – ERTS).

Land use: Description of the way that humans are utilising any particular
piece of land for one or many purposes, e.g., for agriculture, industry, or
residence.

Light: EMR within 400-700 nm in wavelength that is detectable by the


human eye.

Location: A specific position in the physical space.

NIR (Near-infrared): A subdivision in the infrared band somewhere between


the 800 nm and 2,500 nm wavelengths.

Noise: Random or repetitive events that obscure or interfere with the desired
information.

Picture element: It is the smallest element in a digital image. In a digitised


image this is the area on the ground represented by each digital value. Because
the analogue signal from the detector of a scanner may be sampled at any
desired interval, the picture element may be smaller than the ground resolution
cell of the detector. It is commonly abbreviated as pixel.

Scale: Ratio of the distance on an image to the equivalent distance on the


ground.

Thematic map: The extracted information that will be finally represented in a


map form.

True colour composite: It looks like a natural colour composite image in


which spectral bands are combined in such a way that the appearance of the
displayed image resembles a visible colour photograph.

Stereoscopy: Science of viewing a pair of stereoscopic photographs or images


by looking at the left image with the left eye and the right image with the right
eye.

Aerial camera: A precision camera specifically designed for use in aircraft.

Aerial photograph: Photograph taken from an airborne platform using a


precision camera.
Densitometry: Science of making accurate measurements of film density.

Densitometer: An instrument that measures image density by directing a light of known brightness through a small portion of the image, then measuring its brightness as altered by the film.

Height finder: An instrument designed for use with a stereoscope. It permits estimation of topographic elevation or of the heights of features from stereo aerial photographs.

ABBREVIATIONS

AVHRR : Advanced Very High Resolution Radiometer

AWiFS : Advanced Wide Field Sensor

CMY : Cyan, Magenta and Yellow

DEMs : Digital Elevation Models

DN : Digital Number

GPS : Global Positioning System

HRV : High Resolution Visible

IRS : Indian Remote Sensing Satellites


ISRO : Indian Space Research Organisation
LISS : Linear Imaging Self-Scanning

LRF : Lake Reserved Forest

LULC : Land Use/Land Cover

MMU : Minimum Mapping Unit

MRI : Magnetic Resonance Imaging

NH : National Highway

NIR : Near-infrared

NOAA : National Oceanic and Atmospheric Administration

NRSC : National Remote Sensing Centre

PAN : Panchromatic

RGB : Red, Green and Blue

SFCC : Standard False Colour Composite

SPOT : Satellite Pour l'Observation de la Terre

SWIR : Shortwave Infrared

TCC : True Colour Composite

TM : Thematic Mapper

WiFS : Wide Field Sensor

