
UNITED INSTITUTE OF TECHNOLOGY

GIS & REMOTE SENSING


(KOE-066)

Ms. DIPTI TIWARI Mr. SWAROOP MALLICK Mr. MOHAMMAD SHABEEH Mr. SAURABH PANDEY
PREFACE
In today’s world, the flow of information, especially digital information, has become the critical
ingredient for success in any activity. That is why the period we live in is often referred to as
the information age.
The digital information revolution of the late twentieth century has allowed geographic
information to be more easily accessed, analysed and used than ever before. This led to the
development of GIS as a discipline and its emergence as a core digital technology.
The technology of GIS is spread over the domain of several disciplines such as Mathematics,
Statistics, Computer Sciences, Remote Sensing, Environmental Sciences and of course
Geography. The list of its applications is similarly diverse: Commerce, Governance, Planning
and Academic Research. These application areas are also growing and expanding every day
because of its power and vast possibilities.
Now, in the era of the multidisciplinary approach, students, researchers and professionals from
different disciplines find their way into the emerging discipline of GIS, making it popular. The
rapid expansion and popularization of GIS means that GIS is no longer just for specialists
but for everyone, although these GIS users have different requirements.
The present book is an attempt to provide the basic fundamentals of GIS for beginners. It is
tailor-made to meet the requirements of students who are in the third year of the B.Tech course
and have chosen GIS & RS as an Open Elective subject in the sixth semester.
We extend our gratitude to our Principal, Prof. Sanjay Srivastava Sir, for his continual support,
motivation and guidance, which acted as a driving force to complete this work within the
stipulated time. It was he who conceived the idea and explained to us the importance of writing
this book.

Ms. Dipti Tiwari


Mr. Swaroop Mallick
Mr. Mohammad Shabeeh
Mr. Saurabh Pandey
CONTENTS
UNIT-I: Basic component of remote sensing (RS), advantages and limitations of RS, possible
use of RS techniques in assessment and monitoring of land and water resources; electromagnetic
spectrum, energy interactions in the atmosphere and with the Earth’s surface; major atmospheric
windows; principal applications of different wavelength regions; typical spectral reflectance
curve for vegetation, soil and water, spectral signatures. (Pages 1-9)

UNIT-II: Different types of sensors and platforms; contrast ratio and possible causes of low
contrast; aerial photography; types of aerial photographs, scale of aerial photographs, planning
aerial photography - end lap and side lap; stereoscopic vision, requirements of stereoscopic
photographs; air-photo interpretation - interpretation elements. (Pages 10-38)

UNIT-III: Photogrammetry - measurements on a single vertical aerial photograph, measurements
on a stereo-pair - vertical measurements by the parallax method; ground control for aerial
photography; satellite remote sensing, multispectral scanner - whiskbroom and push-broom
scanner; different types of resolutions; analysis of digital data - image restoration; image
enhancement; information extraction, image classification, unsupervised classification,
supervised classification, important consideration in the identification of training areas,
vegetation indices. (Pages 39-63)

UNIT-IV: Microwave remote sensing. GIS and basic components, different sources of spatial
data, basic spatial entities, major components of spatial data, basic classes of map projections
and their properties. (Pages 64-81)

UNIT-V: Methods of data input into GIS, data editing, spatial data models and structures,
attribute data management, integrating data (map overlay) in GIS, application of remote sensing
and GIS for the management of land and water resources. (Pages 82-119)
UNIT-I
Syllabus:
Basic component of remote sensing (RS), advantages and limitations of RS, possible use of RS
techniques in assessment and monitoring of land and water resources; electromagnetic
spectrum, energy interactions in the atmosphere and with the Earth’s surface; major
atmospheric windows; principal applications of different wavelength regions; typical spectral
reflectance curve for vegetation, soil and water, spectral signatures.

REMOTE SENSING:
Remote sensing is the science (and to some extent, art) of acquiring information about the
Earth's surface without actually being in contact with it. This is done by sensing and
recording reflected or emitted energy and processing, analyzing, and applying that
information.

In much of remote sensing, the process involves an interaction between incident radiation and
the targets of interest. This is exemplified by the use of imaging systems where the following
seven elements are involved. Note, however that remote sensing also involves the sensing of
emitted energy and the use of non-imaging sensors.
 Humans apply remote sensing in their day-to-day lives through vision, hearing and the
sense of smell; the data collected can be of many forms:
 Variations in acoustic wave distributions
 Variations in force distributions
 Variations in electromagnetic energy distributions
 The data collected through various sensors may be analysed to obtain information about
the object.
1.1 BASIC COMPONENTS OF REMOTE SENSING (RS):
1.1.1 Energy Source or Illumination:
(A) The first requirement for remote sensing is to have an energy source which illuminates or
provides electromagnetic energy to the target of interest.
1.1.2 Radiation and the Atmosphere:
(B) As the energy travels from its source to the target, it will come in contact with and interact
with the atmosphere it passes through. This interaction may take place a second time as the
energy travels from the target to the sensor.
1.1.3 Interaction with the Target:
(C) Once the energy makes its way to the target through the atmosphere, it interacts with the
target depending on the properties of both the target and the radiation.

1.1.4 Recording of Energy by the Sensor:
(D) After the energy has been scattered by, or emitted from the target, we require a sensor
(remote - not in contact with the target) to collect and record the electromagnetic radiation.
1.1.5 Transmission, Reception, and Processing:
(E) The energy recorded by the sensor has to be transmitted, often in electronic form, to a
receiving and processing station where the data are processed into an image (hardcopy and/or
digital).
1.1.6 Interpretation and Analysis:
(F) The processed image is interpreted, visually and/or digitally or electronically, to extract
information about the target which was illuminated.
1.1.7 Application:
(G) The final element of the remote sensing process is achieved when we apply the information
we have been able to extract from the imagery about the target in order to better understand it,
reveal some new information, or assist in solving a particular problem.

Fig. 1.1 REMOTE SENSING PROCESS

1.2.1 ADVANTAGES OF REMOTE SENSING:


 Provide data for large areas.
 Provide data for very remote and inaccessible areas.
 Able to obtain imagery of any area over a continuous period of time.
 Possible to monitor any anthropogenic or natural changes in the landscape.

 Relatively inexpensive when compared to employing a team of surveyors.
 Easy and rapid collection of data.
 Rapid production of maps for interpretation.

1.2.2 LIMITATIONS OF REMOTE SENSING:


 The interpretation of imagery requires a certain skill level.
 Needs cross-verification with ground (field) survey data.
 Data from multiple sources may create confusion.
 Objects can be misclassified or confused.
 Distortions may occur in an image due to relative motion of sensor and source.

1.3 POSSIBLE USE OF RS TECHNIQUES IN ASSESSMENT AND MONITORING OF LAND AND WATER RESOURCES:
Satellites play a huge role in the development of many technologies like world mapping, GPS,
city planning, etc. Remote Sensing is one of the many innovations made possible by the
satellites orbiting the Earth.
Following are some of the major fields in which remote sensing is used:
 Weather
 Forestry
 Agriculture
 Surface changes
 Biodiversity
And many more; the list is too long to give in full, and these are only the main fields in which
it is most commonly used.
 Analyzing the condition of rural roads:
Rural road conditions can now be analyzed using various GIS and Remote Sensing techniques
with inch-level accuracy. This saves transporters a lot of time and money.
 Creating a base map for visual reference:
Nowadays many modern mapping technologies are based on Remote Sensing, including Google
Maps, OpenStreetMap, Bing Maps, NASA’s Globe view, etc.
 Computing snow pack:
The snow-melt ratio can be easily understood using Remote Sensing technology; NASA uses
LIDAR along with a spectrometer in order to measure the absorption of sunlight.
 Collecting earth’s pictures from space:
Many space organizations have collections containing images of the Earth. Interesting patterns
of the Earth’s geometry, including the atmosphere, oceans, land, etc., can be seen in them.
Satellites such as EO-1, Terra and Landsat are used to collect these data.

 Controlling forest fires:
Information acquired by satellites using Remote Sensing enables firefighters to be dispatched
on time and to the correct locations, so the damage from such fires can be kept to a minimum.
 Detecting land use and land cover:
Remote Sensing technologies are used to determine various physical properties of land and also
what it is being used for (land use).
 Estimating forest supplies:
MODIS, AVHRR, and SPOT are regularly used to measure the increase or decrease in global
forest cover, since forests are the source of valuable materials such as paper, packaging, construction
materials, etc.
 Locating construction and building alteration:
Tax revenue agencies in several countries, including Greece, use satellite data. They
locate signs of undeclared wealth using this technology. Early in 2013, about 15,000
undeclared swimming pools (concealed to evade taxes) had been detected in this way.
 Figuring out fraud insurance claims:
Many insurance companies use Landsat’s red and infrared channels to determine vegetation
growth on a particular parcel of land. This information can be used to verify seeded crops and fight against
crop insurance fraud.
 Observing climate changes:
Instruments and satellites such as CERES, MODIS, AMSR-E, TRMM, and MOPITT have made it possible to
observe climate change from space. It is also possible to compare past climate
conditions with the current ones.
 Predicting potential landslides:
Landslides cause considerable loss of life and property around the globe. InSAR uses interferometry
(a time-honored technique for surface topography measurement) as a remote sensing technique
to provide early warning of potential landslides.

1.4 ELECTROMAGNETIC RADIATION (EMR):


Electromagnetic radiation or EMR is the term used to describe all of the different types of
energies released by electromagnetic processes. Visible light is just one of many forms of
electromagnetic energy. Radio waves, infrared light and X-rays are all forms of electromagnetic
radiation. The electromagnetic spectrum is the term used to describe the entire range of all
possible frequencies of electromagnetic radiation. Remote sensing technologies rely on a
variety of electromagnetic energy. Sensors detect and measure electromagnetic energy in
different portions of the spectrum. Therefore, it is important to understand the fundamentals of
electromagnetic radiation.

1.4.1 Wave model:

• James Clerk Maxwell conceptualized electromagnetic energy as a wave that travels
through space at the speed of light, about 299,792.46 km/s.

• Electromagnetic radiation is energy that is propagated through free space or through a
material medium in the form of electromagnetic waves, such as radio waves, visible light and
gamma rays; the term also refers to the emission and transmission of such radiant energy.

• EMR consists of two fluctuating fields, one electric and one magnetic, oriented at 90° to
each other, and both perpendicular to the direction of propagation.

• Both have the same amplitude and can propagate through a vacuum (such as space).

1.4.2 Particle Model:


 Electromagnetic energy may also be described in terms of joules and electron volts.
 The rate of energy transfer from one place to another (e.g. Sun to Earth) is termed the
flux (flow) of energy.
 The quantum theory of EMR states that energy is transferred as discrete packets called
quanta or photons.
 The energy of EMR can be expressed in terms of frequency and wavelength as E = hν = hc/λ,
where h is Planck’s constant, ν is the frequency, c is the speed of light and λ is the wavelength.
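
As a small numerical illustration of the relation above (a minimal sketch with assumed constants
and an assumed wavelength, not part of the original text), the energy of a single photon can be
computed directly from E = hc/λ:

# Photon energy from wavelength using E = h*c/lambda (illustrative sketch).
PLANCK_H = 6.626e-34      # Planck's constant, J*s
LIGHT_C = 2.998e8         # speed of light, m/s

def photon_energy_joules(wavelength_m: float) -> float:
    """Return the energy (in joules) of one photon of the given wavelength (in metres)."""
    return PLANCK_H * LIGHT_C / wavelength_m

# Example: green light at 0.55 micrometres, a band widely used in remote sensing.
print(photon_energy_joules(0.55e-6))   # ~3.6e-19 J

Shorter wavelengths therefore carry more energy per photon than longer ones.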

Fig. 1.2 ELECTRO MAGNETIC SPECTRUM


Fig. 1.3 ROLE OF EMR IN REMOTE SENSING

1.5 ENERGY INTERACTIONS IN THE ATMOSPHERE:


The following interactions with the atmosphere are observed:
1.5.1 Absorption:
 Absorption is the process by which radiant energy is absorbed and converted into other forms
of energy.
 Ozone, carbon dioxide and water vapour are the three main atmospheric constituents
that absorb radiation. Ozone absorbs the harmful ultraviolet radiation of the Sun; carbon
dioxide absorbs infrared radiation of the Sun (the greenhouse effect); and water vapour
absorbs long-wave infrared and short-wave microwave radiation.
1.5.2 Scattering:
Atmospheric scattering is the unpredictable diffusion of radiation by particles in the
atmosphere. It occurs when particles or large gas molecules present in the atmosphere interact
with the EMR and cause it to be redirected from its original path; it depends upon the wavelength
of the EMR and the diameter of the particles.
Types:
(A) Rayleigh scattering: Takes place in the upper 4.5 km of the atmosphere, where the effective
diameter of the matter is smaller than the wavelength of the EMR.
(B) Mie scattering: Takes place in the lower 4.5 km of the atmosphere, where the effective
diameter of the matter is equal to the wavelength of the EMR.
(C) Raman scattering: The effective diameter of the matter may be larger than, smaller than, or
equal to the wavelength of the EMR.


Fig. 1.4 EMR INTERACTING WITH ATMOSPHERE

1.6 ENERGY INTERACTIONS WITH THE EARTH’S SURFACE:


Radiation that is not absorbed or scattered in the atmosphere can reach and interact with the
Earth’s surface. There are three forms of interaction:
 Absorptance
 Transmittance
 Reflectance

Fig. 1.5 INTERACTION WITH TARGET

Atmospheric window at 3-5 µm and 8-10 µm.

Fig. 1.6 ATMOSPHERIC WINDOW

One important practical consequence of the interaction of electromagnetic radiation with matter
and of the detailed composition of our atmosphere is that only light in certain wavelength
regions can penetrate the atmosphere well. These regions are called atmospheric windows.

Fig. 1.7 OPACITY OF EARTH’S ATMOSPHERE


The dominant windows in the atmosphere are seen to be in the visible and radio frequency
regions, while X-Rays and UV are seen to be very strongly absorbed and Gamma Rays and IR
are somewhat less strongly absorbed. We see clearly the argument for getting above the
atmosphere with detectors on space-borne platforms in order to observe at wavelengths other
than the visible and RF regions.

1.7 SPECTRAL REFLECTANCE CURVE (SIGNATURE):

Spectral signature is the variation of reflectance or emittance of a material with respect
to wavelength (i.e., reflectance/emittance as a function of wavelength). The spectral signature
of stars, for example, indicates the composition of the stellar atmosphere. The spectral signature
of an object is a function of the incident EM wavelength and the material’s interaction with that
section of the electromagnetic spectrum.
Features on the Earth reflect, absorb, transmit, and emit electromagnetic energy from the sun.
Special digital sensors have been developed to measure all types of electromagnetic energy as
it interacts with objects in all of the ways listed above. The ability of sensors to measure these
interactions allows us to use remote sensing to measure features and changes on the Earth and
in our atmosphere. A measurement of energy commonly used in remote sensing of the Earth is
reflected energy (e.g., visible light, near-infrared, etc.) coming from land and water surfaces.
The amount of energy reflected from these surfaces is usually expressed as a percentage of the
amount of energy striking the objects. Reflectance is 100% if all of the light striking an object
bounces off and is detected by the sensor. If none of the light returns from the surface,
reflectance is said to be 0%. In most cases, the reflectance value of each object for each area of
the electromagnetic spectrum is somewhere between these two extremes. Across any range of
wavelengths, the percent reflectance values for landscape features such as water, sand, roads,
forests, etc. can be plotted and compared. Such plots are called “spectral response curves” or
“spectral signatures.” Differences among spectral signatures are used to help classify remotely
sensed images into classes of landscape features since the spectral signatures of like features
have similar shapes. The more detailed the spectral information recorded by a sensor, the more
information that can be extracted from the spectral signatures. Hyperspectral sensors have much
more detailed signatures than multispectral sensors and thus provide the ability to detect more
subtle differences in aquatic and terrestrial features.

Fig. 1.8 SPECTRAL SIGNATURE
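
To make the idea of matching spectral signatures concrete, the following minimal sketch (in
Python, with rough illustrative reflectance values that are assumptions, not measured data)
assigns an unknown pixel to whichever reference signature it resembles most closely:

# Illustrative sketch: comparing simple spectral signatures (reflectance %) at a few band centres.
BAND_CENTRES_UM = [0.48, 0.56, 0.66, 0.83]          # blue, green, red, near-infrared

SIGNATURES = {
    "vegetation":  [5, 12, 6, 50],    # strong NIR peak, chlorophyll absorption in red
    "dry soil":    [15, 20, 25, 30],  # reflectance rises gently with wavelength
    "clear water": [8, 6, 3, 1],      # reflectance drops to near zero in the NIR
}

def closest_class(observed, signatures):
    """Assign the class whose signature is nearest (minimum squared difference)."""
    def distance(sig):
        return sum((o - s) ** 2 for o, s in zip(observed, sig))
    return min(signatures, key=lambda name: distance(signatures[name]))

# An unknown pixel with high NIR and low red reflectance is classified as vegetation.
print(closest_class([7, 14, 8, 45], SIGNATURES))    # -> vegetation

Real classifiers work on many more bands and use statistical distance measures, but the principle
is the same: like features have similarly shaped signatures.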

UNIT II
Syllabus:
Different types of sensors and platforms; contrast ratio and possible causes of low contrast;
aerial photography; types of aerial photographs, scale of aerial photographs, planning aerial
photography- end lap and side lap; stereoscopic vision, requirements of stereoscopic
photographs; air-photo interpretation- interpretation elements.

2.1 REMOTE SENSING PLATFORMS (DIFFERENT TYPES OF PLATFORMS):


Platforms are commonly called the vehicles or carriers for remote sensing devices. A platform
is a synonym for any orbiting spacecraft, be it a satellite or a manned station, from which
observations are made. In most instances, the platforms are in motion; as they move, they
automatically proceed to new positions from where they target new objects. A satellite orbiting
the Earth is a typical platform; however, platforms range from balloons and kites (low altitude
remote sensing) to aircraft and satellites (aerial and space remote sensing). As we go higher in
the sky, a larger area is viewed by the sensor. Thus, the altitude determines the ground coverage,
which is a key factor for selection of a platform. Besides the altitude of the platforms, there are
other parameters which are responsible for determining ground coverage and resolution of
image data and hence one platform may have different sensors acquiring data at different ground
coverage with different image resolutions. Platforms equipped with remote sensors may be
situated on the ground, on a balloon, on an aircraft (or some other platform within the Earth’s
atmosphere), or on a spacecraft or satellite outside the Earth’s atmosphere. The three most
common types of platforms as shown in Fig. 2.1 are:
 Terrestrial platform
 Airborne platform
 Space borne platform
2.1.1 Terrestrial Platforms:
These platforms range from simple tripods to booms, cranes and towers. Booms raise the sensor
from 1 to 2 m above the object to be sensed and towers can reach tens of meters above the
Earth’s surface. Terrestrial platforms are also known as ground-based platforms because they
remain in contact with the ground during the imaging of the Earth’s surface. The ground-based
sensors and platforms are either static (on a stationary platform such as a tripod or mast) or
dynamic (on a moving vehicle), and are used for close-range, high-accuracy applications. These
platforms work at short, medium and long ranges of 50-100 m, up to 250 m and up to 1000 m,
respectively. The purpose of short-range applications is the mapping of buildings and small objects.
The data collected by terrestrial platforms are used for bridge and dam monitoring, landslide
and soil erosion mapping, architectural restoration, facilities inventory, crime and accident
scene analysis, manufacturing, etc.

Fig. 2.1 Remote sensing platforms

2.1.2 Airborne Platforms:


Airborne platforms such as airplanes, helicopters, balloons and even rockets are commonly used
to collect very detailed images. Because they are capable of operating over a wide range of
altitudes, from sea level to the stratosphere at an altitude of about 50 km, they facilitate
collection of data over virtually any portion of the Earth’s surface at any time. Aerial platforms
are primarily stable-wing aircraft, although helicopters are occasionally used.
Aerial photographs have been a main source of information about the Earth’s surface almost
since the beginning of aviation more than a century ago. Aerial photographs are obtained using
mapping cameras that are usually mounted in the nose or underbelly of an aircraft that then flies
in discrete patterns or swathes across the area to be surveyed.
Aircrafts have following advantages as platforms for remote sensing systems:
 Aircraft can fly at relatively low altitudes thus allowing for sub-meter sensor spatial
resolution.
 Aircraft can easily change their schedule to avoid weather problems such as clouds,
which may block a passive sensor’s view of the ground.
 Last minute timing changes can be made to adjust for illumination from the Sun, the
location of the area to be visited and additional revisits to that location.
 Sensor maintenance, repair and configuration changes can be easily made to aircraft
platforms.
 Aircraft flight paths know no boundaries except political boundaries; however, getting
permission to intrude into foreign airspace could be a lengthy and frustrating process.

2.1.3 Spaceborne Platforms:
Man-made satellites are examples of spaceborne platforms. Satellite-based remote sensing
is also referred to as orbital remote sensing. The space transport system, commonly known as the
space shuttle, is also sometimes used as a platform. In spaceborne remote sensing, sensors are
mounted on board a spacecraft (space shuttle or satellite) orbiting the Earth. Because of their
orbits, satellites permit repetitive coverage of the Earth’s surface on a continuing basis. These
orbits are fixed; a single satellite orbit can be adjusted slightly to maintain consistency over time,
but it cannot be changed from one orbit type to another. In spaceborne platforms, satellites are
placed in three types of orbits around the Earth: geostationary, polar and sun-synchronous
orbits. Spaceborne platforms are either of short duration, such as the space shuttle that remains
aloft for 1-2 weeks, or of long duration, such as the Earth resource monitoring and
meteorological satellites (e.g., Landsat, SPOT, AVHRR). At present, there are several remote
sensing satellites providing imagery for a variety of applications. Satellite remote sensing can
significantly enhance the information available from traditional data sources because it can
provide synoptic view of large portions of the Earth. However, resolution is limited due to the
satellite’s fixed altitude and orbital path flown. Spaceborne sensors are currently used to assist
in scientific and socio-economic activities like weather prediction, crop monitoring, mineral
exploration, waste land mapping, cyclone warning, water resources management and pollution
detection. Space borne remote sensing has the following advantages:
 Large area coverage.
 Frequent and repetitive coverage of areas of interest.
 Quantitative measurement of ground features possible using radio metrically calibrated
sensors.
 Semi automated computerized processing and analysis.
 Relatively lower cost per unit area coverage.
 One obvious advantage satellites have over aircraft is global accessibility.
There are numerous governmental restrictions that deny access to airspace over sensitive areas
or over foreign countries. Satellite orbits are not subject to these restrictions, although there may
well be legal agreements to limit distribution of data collected over particular areas of the globe.

2.2 SENSOR SYSTEM (DIFFERENT TYPES OF SENSORS):


The sensor systems are simply the eyes of the satellites that view and record the scene. Sensors
are the special instruments mounted on the platforms (aeroplane or satellite) usually having
sophisticated lenses with filter coatings, to focus the area to be observed at a specific region of
EMS. Solar radiation is the main source of EMR and is a combination of several wavelengths
such as gamma ray, x-ray, visible, infrared, thermal and microwaves. Sensor systems mainly
operate in the visible, infrared, thermal and microwave regions of EMR. Sensors are
characterized by spatial, spectral and radiometric performance. The first issue to be considered

in selecting the sensor is the scale. Terrestrial and airborne sensors have high spatial resolution
but sensors mounted on satellites, meant for acquiring synoptic coverage, have relatively coarse
spatial resolution. Selection of appropriate sensor system for the remote sensing satellite will
also depend on the criteria to be addressed for using highly localized and refined or more
regional and coarse sensor systems.
Sensor systems are classified as imaging and non-imaging sensors based on the type of output
they provide. Imaging sensors measure the emitted/reflected intensity of EMR and provide an
image of the ground as output (e.g. a photographic camera), whereas non-imaging sensors
measure the intensity of radiation but do not provide any image; their observations are in
the form of numerical data (e.g. a gravimeter). Sensor systems can be broadly classified as
passive or active systems based on the source of EMR:
 Passive Sensors:
They detect the reflected or emitted EMR from natural sources. The useful wave bands are
mostly in the visible and infrared region for passive remote sensing detectors.
 Active Sensors:
They detect the reflected or emitted radiation from the objects which are irradiated from
artificially generated energy sources, such as RADAR and LIDAR. The active sensor detectors
are used in the radar and microwave regions.

Fig. 2.2 Schematics showing the functional mechanism of (a) passive and (b) active
sensors
Broadly, all imaging sensor systems are classified based on the technical components of the
system and the detection capability by which the energy reflected by the terrain features
is recorded (Fig. 2.2).

Fig. 2.3 Atmospheric transmission and the wavebands of common remote sensing
systems; (a) Remote sensing scanners, and (b) atmospheric transmittance

The active and passive sensors can be further classified as shown in Table 2.1.
Table 2.1: Types of remote sensing sensors

In this unit we will further discuss about the sensor systems under the following headings:
 Multispectral imaging sensor systems
 Thermal remote sensing systems
 Microwave radar sensing system

2.2.1 Multispectral Imaging Sensor Systems:


The multispectral imaging sensors include photographic and scanning systems. The
photographic system is an imaging system in which cameras are used. In the scanning system,
scanners along with filters for various wavelength regions are used. In some cases both
photographic and scanning systems are used in combination. In the photographic system,
different parts of the spectrum are sensed with different film-filter combinations and images are
formed directly on to a film. In the opto-mechanical (scanning system) sensors, the optical
image is first converted into an electrical signal (video data) and later processed to record or
transmit the data. But the photographic system suffers from one major defect of considerable
distortion at the edges. This is due to large lens opening.
2.2.1.1 Analog (Photographic) Systems:
Photographic cameras are the oldest and most widely used remote sensing sensor especially in
aerial photography. Cameras, belonging to the framing system, have been successfully used as
remote sensors from aircraft, balloons and manned and unmanned spacecraft. Camera systems
are passive optical sensors that use a lens or system of lenses to form an image at the focal plane,
where the image is sharply defined. The images are recorded on photographic films which
are sensitive to light from 0.3 μm to 0.9 μm in wavelength, covering the ultraviolet, visible and
near-infrared regions.
2.2.1.2 Scanning Systems:
The scanning systems use an electrical sensor called detector that records the brightness of the
small scene of the terrain within its instantaneous field-of-view (IFOV) to produce an image.
The brightness (electrical) signals which are recorded by the detector vary continuously in
proportion as the optics of the sensors and mirrors sweep the IFOV over the landscape. This
signal is amplified, recorded in the magnetic tape and then converted to digital form to produce
an image. In this way an image is produced from a series of adjacent cells or picture elements
(pixels). The scanning system is able to record the brightness of the entire terrain by sweeping
the detector rapidly across the terrain. Multispectral scanner systems can sense from 0.3 μm to
14 μm, and furthermore they can sense in very narrow bands.
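
The IFOV mentioned above determines the size of the ground resolution cell. As an illustrative
sketch (this relation is not stated in the text itself but follows from simple scanner geometry,
and the numbers below are assumptions), the ground cell diameter is approximately the flying
height multiplied by the IFOV expressed in radians:

# Ground resolution cell of a scanner: D = H * beta (illustrative sketch, assumed values).
flying_height_m = 705_000.0          # platform altitude above terrain (assumed)
ifov_mrad = 0.043                    # instantaneous field of view in milliradians (assumed)

ground_cell_m = flying_height_m * (ifov_mrad / 1000.0)   # D = H * beta
print(f"Ground resolution cell ≈ {ground_cell_m:.0f} m")  # about 30 m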
There are four common types of scanning modes for the scanning system:
a) Across-track scanning system
b) Along-track scanning system
c) Side looking or oblique scanning system (Radar)
d) Spin scanning system

a) Across-Track Scanners:
This type of scanning system employs a faceted mirror that is rotated by an electric motor, with
the horizontal axis of rotation aligned parallel to the direction of flight. The mirror scans
the terrain in a pattern of parallel scan lines that are at right angles to the direction of the airborne
platform, as shown in Fig. 2.4 (a). Energy reflected or radiated from the ground is directed onto
the detector by the mirror. Examples of across-track scanners are the Multispectral Scanner (MSS)
and Thematic Mapper (TM) of Landsat series of satellites.

Fig. 2.4 Schematics showing sensor’s scanning modes. (a) across-track mode; and (b)
along-track mode

• Landsat Multispectral Scanner (MSS):


It was the primary sensor system for the early Landsat satellites. This sensor had four spectral bands ranging
from 0.5 to 1.1μm in the electromagnetic spectrum that record reflected radiation from the
Earth’s surface. These bands are green (0.5 to 0.6μm), red (0.6 to 0.7μm), and near-infrared (0.7
to 0.8 and 0.8 to 1.1 μm).
Landsat data are widely used for detecting and monitoring Earth’s resources. Among the four
bands of MSS, band 1 is used to detect green reflectance from healthy vegetation, and band 2
for detecting chlorophyll absorption in vegetation. MSS bands 3 and 4 are ideal for recording
near infrared reflectance peaks in healthy green vegetation and for detecting water-land
interfaces.
• Thematic Mapper:
Thematic Mapper (TM) is an advanced second generation of MSS first deployed on Landsat-4
and 5. Its design offered spatial, radiometric, and geometric improvements over the MSS
systems. The design of the TM was more complicated than the MSS. TM is an across-track
scanner similar to MSS with an oscillating scan mirror and arrays of detectors. TM provides
data which are scanned simultaneously in seven narrow spectral bands covering visible (blue:
0.45-0.52 μm, green: 0.52-0.60 μm, and red: 0.63-0.69 μm), near infrared (0.76-0.90 μm),
middle infrared (1.55-1.75 μm and 2.08-2.35 μm) and thermal infrared (10.4-12.5 μm).
b) Along-Track Scanning System:
Along-track scanners record multiband image data along a swath beneath the aircraft (Fig. 2.4
b). As the aircraft advances in the forward direction, the scanner scans the Earth with respect to
the designed swath to build a two-dimensional image by recording successive scan lines that
are oriented at right angles to the direction of the aircraft. In this system, detectors are placed in a
linear array in the focal plane of the image formed by a lens system. This system is also called
a push-broom system because the array of detectors that records terrain brightness along a
line of pixels is effectively pushed like a broom along the orbital path of the aircraft. The
along-track scanners have a long dwell time, which allows the detectors to provide finer spatial
and higher spectral resolution.
Examples of along-track scanners are the SPOT High Resolution Visible (HRV) camera and the
Linear Imaging Self Scanning (LISS) sensors of the Indian Remote Sensing (IRS) satellites.
 Linear Imaging Self Scanning Sensor:
Linear imaging self scanning (LISS) is a sensor system designed by ISRO for Indian remote
sensing satellites. In fact, LISS is a multispectral camera system in which each camera contains
four imaging lens assemblies, one for each band, each followed by a linear charge coupled
device (CCD) array. The optics focuses a strip of landscape in the across-track direction on to
the sensor array. The images obtained from each detector are stored and later shifted out to get
video signals. It operates in four bands which are B1 (0.45-0.52 μm), B2 (0.52-0.59 μm), B3
(0.62-0.68 μm) and B4 (0.77-0.86 μm) in the visible and near infrared wavelength of
electromagnetic region.
c) Side Looking or Oblique Scanning Systems (Radar):
Side looking scanning system is an active scanning system, e.g. radar. This system itself
generates EMR, illuminates the terrain, detects the energy (radar pulses) returning
from the terrain, and records it as an image. Therefore, radar imagery is obtained by collecting
and measuring the reflections of pulses sent out from radar equipped in the aircraft and satellites.
Side Looking Airborne Radar (SLAR) is one of the common types of remote sensing techniques
used for obtaining radar images of the terrain. The main components of SLAR include antenna,
duplexer, transmitter, receiver, pulse-generating device and cathode ray tube (Fig.2.5).
In this system, the radar antenna sends the radar pulse to the ground and receives the radar return
from the ground. Duplexer is an electronic switch and its main function includes the prevention
of interference between radar return and transmitted beams. Receiver records the timing and
intensity of radar return and also amplifies the weak pulses received by the antenna.
This helps to identify features of the terrain that appear on an image. Finally, the radar return may be
displayed on a cathode ray tube and recorded on film or on magnetic tape.

Fig. 2.5 SLAR

Overall, an image created is a function of the time and strength of the radar pulse that is returned from the
Earth’s objects. The resolution of radar systems can be determined by the radar beam width of
the microwave pulse generated by the system. It is calculated by the equation given below:
Resolution (m) = range (km) × wavelength (cm) / antenna aperture (m).

Fig. 2.6 Side looking scanning system
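
The beam-width relation above can be evaluated numerically. The short sketch below assumes
the usual real-aperture (azimuth) form Ra = S × λ / D expressed in consistent SI units (metres
throughout); the mixed km/cm/m units of the formula above must be converted before
substituting. All values are assumptions chosen only for illustration:

# Real-aperture (beam-width limited) azimuth resolution Ra = S * wavelength / D,
# evaluated with every quantity in metres. Values are assumed for illustration.
slant_range_m = 20_000.0      # range S from antenna to target (assumed, 20 km)
wavelength_m = 0.05           # radar wavelength (assumed, 5 cm)
antenna_length_m = 5.0        # real antenna aperture D (assumed)

azimuth_resolution_m = slant_range_m * wavelength_m / antenna_length_m
print(f"Azimuth resolution ≈ {azimuth_resolution_m:.0f} m")   # 200 m at 20 km range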

d) Spin scanning system:
The spin scan camera was first flown on ATS-1 from 1966 to 1973 to continuously monitor the
global atmospheric circulation. Later, modified versions with improved sensor capabilities were
flown on ATS-3 and SMS. Today, the GOES series of spin scan cameras is in regular
operational use. The value of the spin scan camera as a dependable meteorological workhorse
requires proper functioning of the entire camera system, composed of four main elements: a) a
spinning spacecraft whose highly stable and predictable motions generate a time divisible
precision scan and therefore a metric image; b) a telescope having both on-axis image quality
and a wide field of view; c) a data chain which incorporates duty cycle improvement and uses
the spacecraft as a communication link to distribute the image data to users; and d) image
display and analysis techniques which permit organizing a large number of images in the time
domain and efficiently selecting and measuring data of greatest importance.

2.2.2 Thermal Remote Sensing System:


Thermal scanners belong to the electro-optical scanning systems. These scanners sense the
thermal infrared portion of the EMS. Thermal scanners do not record the true internal
temperature of objects but record the pattern of radiant temperature variation of the objects. As
a consequence, they sense energy emitted rather than reflected from objects, therefore, thermal
scanners can operate day or night. Thermal scanners use photo detectors to detect emitted
thermal radiation. The detectors are cooled to temperatures close to absolute zero in order to
limit their own thermal emissions.
Thermal infrared scanners scan the terrain in the across-track mode. A thermal airborne infrared
scanner consists of an electric motor and a rotating shaft which is oriented parallel to the aircraft
flight direction (Fig.2.7). Both these instruments are mounted in the aircraft. The scan mirror
which is inclined at 45º and mounted on one end of the shaft, sweeps the terrain at right angles
to the flight path. This scan mirror also collects the infrared energy emitted from the terrain.
Later, it sends the energy to focusing mirrors, where the energy is detected by the detector. The
detector converts the emitted energy into an electrical signal. The signal varies in proportion
to the intensity of the emitted infrared radiation. The detector is normally enclosed in a vacuum
bottle (Dewar) filled with liquid nitrogen. A second mirror, known as the recorder mirror, placed at the other
end of the shaft rotates synchronously with the scan mirror and sweeps the image of the
modulated light source across a strip of recording film.

2.2.3 Microwave Imaging System:


The microwave region of the EM spectrum includes wavelengths from 1mm to 1 m. The
advantage of microwave remote sensing is that microwaves are capable of penetrating the
atmosphere under conditions such as cloud cover, snow and smoke. They also have the
capability of sensing by day or night.

Fig. 2.7 Schematic representation of thermal infrared scanner
Microwave imaging systems can be classified into two categories namely, a) active and b)
passive microwave remote sensing.
 Active Microwave Remote Sensing:
Active microwave sensing systems are of two types and they are imaging sensors and non-
imaging sensors. Most imaging sensors used for remote sensing are imaging radars, e.g. SLAR.
These imaging radars are again divided into real aperture and synthetic aperture systems.
 Passive Microwave Remote Sensing:
Non-imaging remote sensing radars are either scatterometers or altimeters. Passive microwave
sensors, called radiometers, measure the naturally emitted energy from the Earth’s surface. A
suitable antenna collects the emitted energy and transforms it into a signal. It is represented as an
equivalent temperature, i.e. the temperature of a black-body source which produces the same
amount of signal in the bandwidth of the system. Passive imaging is possible if the
radiometer is used in a scanning mode, just like an optical scanner.

2.3 CONTRAST RATIO AND POSSIBLE CAUSES OF LOW CONTRAST:


Contrast generally refers to the difference in luminance or grey level values in an image and is
an important characteristic. It can be defined as the ratio of the maximum intensity to the
minimum intensity over an image. Contrast ratio has a strong bearing on the resolving power
and detectability of an image. The larger this ratio, the easier it is to interpret the image. Satellite
images often lack adequate contrast and require contrast improvement.
2.3.1 Contrast Enhancement:
Contrast enhancement techniques expand the range of brightness values in an image so that the
image can be efficiently displayed in a manner desired by the analyst. The density values in a
scene are literally pulled farther apart, that is, expanded over a greater range. The effect is to
increase the visual contrast between two areas of different uniform densities. This enables the
analyst to discriminate easily between areas initially having a small difference in density.
 Linear Contrast Stretch:
This is the simplest contrast stretch algorithm. The grey values in the original image and the
modified image follow a linear relation in this algorithm. A density number in the low range of
the original histogram is assigned to extremely black and a value at the high end is assigned to
extremely white. The remaining pixel values are distributed linearly between these extremes.
The features or details that were obscure on the original image will be clear in the contrast
stretched image. Linear contrast stretch operation can be represented graphically as shown in
Fig. 2.8. To provide optimal contrast and color variation in color composites the small range of
grey values in each band is stretched to the full brightness range of the output or display unit.

Fig. 2.8 Linear Contrast Stretch
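
A linear contrast stretch of the kind described above is straightforward to implement. The sketch
below (a minimal NumPy version, assuming an 8-bit output range; it is an illustration, not a
prescribed implementation) maps the minimum grey value of a band to 0 and the maximum to
255, distributing the remaining values linearly between them:

import numpy as np

def linear_stretch(band: np.ndarray, out_min: int = 0, out_max: int = 255) -> np.ndarray:
    """Rescale a band so its darkest pixel maps to out_min and its brightest to out_max."""
    in_min, in_max = float(band.min()), float(band.max())
    stretched = (band.astype(float) - in_min) / (in_max - in_min) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)

# A low-contrast band occupying only the 60-110 range is expanded to the full 0-255 range.
low_contrast = np.array([[60, 75, 90, 110],
                         [65, 80, 95, 105],
                         [70, 85, 100, 108],
                         [62, 77, 92, 103]])
print(linear_stretch(low_contrast))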

 Non-Linear Contrast Enhancement:
In these methods, the input and output data values follow a non-linear transformation. The
general form of the non-linear contrast enhancement is defined by y = f (x), where x is the input
data value and y is the output data value. The non-linear contrast enhancement techniques have
been found to be useful for enhancing the color contrast between nearby classes and
subclasses of a main class. A type of non-linear contrast stretch involves scaling the input data
logarithmically. This enhancement has greatest impact on the brightness values found in the
darker part of histogram. It could be reversed to enhance values in brighter part of histogram by
scaling the input data using an inverse log function.
Histogram equalization is another non-linear contrast enhancement technique. In this technique,
histogram of the original image is redistributed to produce a uniform population density. This
is obtained by grouping certain adjacent grey values. Thus the number of grey levels in the
enhanced image is less than the number of grey levels in the original image.
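
Histogram equalization can likewise be sketched in a few lines. The version below (illustrative
only, assuming 8-bit data) builds the cumulative histogram of the input band and uses it as a
lookup table, so heavily populated grey levels are spread apart while sparsely populated ones
are merged:

import numpy as np

def equalize_histogram(band: np.ndarray, levels: int = 256) -> np.ndarray:
    """Redistribute grey values using the cumulative histogram (histogram equalization)."""
    histogram, _ = np.histogram(band.flatten(), bins=levels, range=(0, levels))
    cdf = histogram.cumsum()                       # cumulative population of grey values
    cdf_min = cdf[cdf > 0].min()
    # Map each input grey level so that the output population is approximately uniform.
    lookup = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)).astype(np.uint8)
    return lookup[band]

band = np.array([[52, 55, 61, 59],
                 [79, 61, 76, 61],
                 [60, 70, 77, 55],
                 [55, 61, 64, 59]], dtype=np.uint8)
print(equalize_histogram(band))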
2.3.2 Possible causes of low contrast:
Low-contrast images can result from poor illumination, lack of dynamic range in the imaging
sensor, or even a wrong setting of the lens aperture during image acquisition.

2.4 AERIAL PHOTOGRAPHY:


2.4.1 What is Aerial Photograph?
Aerial photography refers to taking photographs of the Earth’s surface from an elevated position
in the air. Platforms for aerial photography include aircraft, helicopters, balloons, parachutes, etc.
Aerial photography was first practiced by the French photographer and balloonist Gaspard-Félix
Tournachon, known as "Nadar", in 1858 over Paris, France. It was the first means of remote
sensing and still has immense application potential even now, in the age of satellites with
sophisticated electronic devices.
The characteristics of aerial photography that make it widely popular are:
1. Synoptic view point: Aerial photograph gives bird’s eye view enabling to see surface
features of large area and their spatial relationships.
2. Time freezing ability: Aerial photographs provide a permanent and objective record
of the existing conditions of the earth’s surface at a point in time, and thus can be used
as historical records.
3. Capability to stop action: They provide a stop-action view of the dynamic conditions
of the earth’s surface features, and are thus useful in studying dynamic phenomena such
as floods, forest fires, agriculture, etc.
4. Spectral resolution and spatial resolution: Aerial photographs can be made
sensitive to electromagnetic (EM) waves outside the spectral sensitivity of the human eye,
with very high spatial resolution.

5. Three dimensional perspectives: Stereo-scopic view can be obtained from aerial
photographs enabling for both vertical and horizontal measurements.
6. Availability: Aerial photographs of different scales are available on websites approved
by agencies involved in aerial photography missions.
7. Economy: They are much cheaper than that of field survey and more accurate than
maps.
The aerial photographs that have been geometrically “corrected” using ground elevation data
to correct displacements caused by differences in terrain relief and camera properties are known
as Orthophotos.
2.4.2 Types of Aerial Photos:
Aerial photos can be distinguished depending on the position of camera axis with respect to the
vertical and motion of the aircraft. Aerial photographs are divided into two major groups,
vertical and oblique photos.
2.4.2.1 Vertical photos: The optical axis of the camera or camera axis is directed vertically as
straight down as possible (Fig.2.8). The nadir and central point of the photograph are coincident.
But in reality a truly vertical aerial photograph is rarely obtainable because of unavoidable angular
rotations or tilts of the aircraft. The allowable tolerance is usually ±3° between the plumb
(perpendicular) line and the camera axis. Vertical photographs are most commonly used in remote
sensing and mapping.

Fig. 2.8 Schematic diagram of taking a vertical photograph.


A vertical photograph has the following characteristics:
1) The camera axis is perpendicular to the surface of the earth.
2) It covers a relatively smaller area than oblique photographs.
3) The shape of the ground covered on a single vertical photo closely approximates a
square or rectangle.
4) Being a view from above, it gives an unfamiliar view of the ground.
5) Distance and directions may approach the accuracy of maps if taken over flat terrain.
6) Relief is not readily apparent.
2.4.2.2 Oblique photos: When the optical axis of the camera forms an angle of more than 5° with
the vertical, oblique photographs are obtained (Fig. 2.9). The nadir and central point of the
photograph are not coincident.

Fig. 2.9. Vertical and oblique photography


There are two types of oblique aerial photography – high angle and low angle. In high angle
oblique aerial photography the resulting image shows the apparent horizon, whereas a low angle
oblique photograph does not.
in a single image and for depicting terrain relief and scale.

Fig. 2.9 (a) High oblique and (b) low oblique photographs.
A square outline on the ground appears as a trapezium in oblique aerial photo. These
photographs can be distinguished as high oblique and low oblique (Fig.2.9). But these are not
widely used for mapping as distortions in scale from foreground to the background preclude
easy measurements of distance, area, and elevation.
An oblique photograph has the following characteristics:
1. A low oblique photograph covers a relatively smaller area than a high oblique photograph.
2. The ground area covered is a trapezoid, although the photograph itself is square or rectangular.
Hence a uniform scale is not applicable and direction (azimuth) cannot be measured reliably.
3. The relief is discernible but distorted.
2.4.3 Scale of aerial photographs:
The scale of a map or photograph is defined as the ratio of distance measured on the map to the
same distance on the ground. The amount of detail in an aerial photograph depends on the scale
of the photograph. Scales may be expressed as unit equivalents or dimensionless representative
fractions and ratios. For example, if 1 mm on a photograph represents 25 m on the ground, the
scale of the photograph can be expressed as 1mm = 25m (Unit equivalents), or 1/25,000
(representative fraction) or 1:25,000 (ratio).
A convenient way to avoid confusion between large scale and small scale photographs is to
remember that the same objects appear smaller on a small scale photograph than on a large scale
photograph. For example, consider two photographs with scales 1:50,000 and 1:10,000. The
1:10,000 photo shows ground features at a larger, more detailed size but with less ground coverage
than the 1:50,000 photo. Hence, in spite of its smaller ground coverage, the 1:10,000 photo
would be termed the large scale photo.
The most straightforward method for determining photo scale is to measure the corresponding
photo and ground distances between any two points. The scale S is then computed as the ratio
of the photo distance d to the ground distance D. In Fig. 2.10 the triangles ∆Lab and ∆LAB
are similar.
Hence, ab/AB = Lo/LO
or, d/D = f/H
S = d/D = focal length / flying height
where,
S = scale of photograph
d = distance on photograph
D = distance on ground
f = focal length
H = flying height
Hence, scale of a photo ∝ focal length of camera (f) and ∝ 1/flying height (H)

Fig. 2.10
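
As a small worked example of the scale relation S = f/H (with a hypothetical camera focal length
and flying height assumed purely for illustration):

# Photo scale from focal length and flying height, S = f / H (illustrative values assumed).
focal_length_m = 0.152        # 152 mm mapping camera (assumed)
flying_height_m = 3040.0      # flying height above the terrain (assumed)

scale = focal_length_m / flying_height_m           # 0.152 / 3040 = 1/20,000
print(f"Scale = 1:{1/scale:,.0f}")                  # Scale = 1:20,000

# Ground distance D from a measured photo distance d, using D = d / S.
photo_distance_m = 0.04                             # 4 cm measured on the photo
print(f"Ground distance = {photo_distance_m / scale:.0f} m")   # 800 m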
2.4.4 Planning aerial photography- End lap and Side lap:
Aerial photographs are taken using a camera fitted to the underside of an aircraft flying along a
line termed the flight line or flight strip, and the line traced on the ground directly beneath the
camera is called the nadir line. The point on the photograph where the camera’s optical axis
intersects the photo plane is termed the principal point. Lines drawn to connect marks located
along opposite sides of the photo (fiducial marks) intersect precisely at the principal point. The
point on the photo that falls on a line half-way between the principal point and the nadir point is
known as the isocentre. The ground distance between the photo centres (principal points) is
called the air base.
In aerial photography, the aircraft acquires a series of exposures along each strip of multiple
flight lines. Successive photographs are generally taken with some degree of overlap, which is
known as end lap (Fig. 2.11). Standard end lap is 60%, which may be 80-90% in special cases
such as in mountainous terrain. It ensures that each point of the ground appears in at least two
successive photographs essential for stereoscopic coverage. Stereoscopic coverage consists of
adjacent pairs of overlapping vertical photographs called stereo pairs. Besides end lap, the
photographs of a strip are taken with some overlap with those of the adjacent strip, known
as side lap (Fig. 2.12). It varies from 20% to 30% to ensure that no area of the ground is missed
in the photography.

Fig. 2.11 Photographic coverage along flight line: endlap.

Fig. 2.12. Positions of aerial photos: sidelap.
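
The overlap figures above translate directly into exposure spacing and flight-line spacing. The
following minimal sketch (all values assumed for illustration) computes the ground coverage of
one photo from its format and scale, then the air base from the end lap and the flight-line spacing
from the side lap:

# Minimal flight-planning sketch (illustrative values assumed throughout).
photo_side_m = 0.23          # 23 cm x 23 cm format photo (assumed)
scale_factor = 20000         # photo scale 1:20,000 (assumed)
end_lap = 0.60               # 60% forward overlap along the flight line
side_lap = 0.25              # 25% overlap between adjacent strips

ground_coverage_m = photo_side_m * scale_factor          # 4,600 m on a side
air_base_m = (1 - end_lap) * ground_coverage_m           # distance between exposures
strip_spacing_m = (1 - side_lap) * ground_coverage_m     # distance between flight lines

print(f"Ground coverage per photo  : {ground_coverage_m:.0f} m")
print(f"Air base (exposure spacing): {air_base_m:.0f} m")       # 1,840 m
print(f"Spacing of flight lines    : {strip_spacing_m:.0f} m")  # 3,450 m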


A truly vertical photograph is rarely obtained because of unavoidable angular rotations or tilts,
caused by atmospheric conditions (air pockets or currents), human error when the pilot fails to
maintain a steady flight, and imperfections in the camera mounting. There are three kinds of tilt,
as can be seen from Fig. 2.13.

1. Tilting forward and backwards (pitch)
2. Tilting sideways (roll)
3. Rotation about the vertical axis (yaw)

Fig. 2.13 (a) Roll, (b) Pitch, (c) Yaw tilting of aircraft and corresponding ground
coverage of aerial photograph
In order to understand the geometric characteristics of an aerial photograph it is also necessary
to understand the viewing perspective and projection. On a map the objects and features are both
planimetrically and geometrically accurate, because they are located in the same positions relative
to each other as they are on the ground or the surface of the earth; only the scale changes. In
aerial photography, on the other hand, a central or perspective projection system is used, as can
be seen from Fig. 2.14. Therefore, there are not only changes in scale but also changes in the
relative position and geometry of the objects, depending on the location of the camera.

Fig.2.14 Central or perspective projection.

2.5. STEREOSCOPIC VISION:


2.5.1. Definition of Stereoscopy:
Stereoscopy, sometimes called stereoscopic imaging, is a technique used to enable a three
dimensional effect, adding an illusion of depth to a flat image. In aerial photography, two
overlapping photographs of the same ground area, taken from two separate positions, form a
stereo pair used for three-dimensional viewing. The pair of stereoscopic photographs or images
thus obtained can be viewed stereoscopically. A stereoscope facilitates the stereo viewing process
by presenting the left image to the left eye and the right image to the right eye.
Stereoscopic vision is constructed from a stereo pair of images using the relative orientation or
tilt at the time of photography. Stereo viewing allows the human brain to judge and perceive
depth and volume; the 3D representation of the earth’s surface results in the collection of
geographic information with greater accuracy compared to monoscopic techniques.
2.5.2 Stereoscopic Vision:
In our daily life we unconsciously perceive and measure depth using our eyes. This stereo
effect is possible because we have two eyes, i.e. binocular vision. The perception of depth through
binocular vision is referred to as stereoscopic viewing, which means viewing an object from
two different locations. Monoscopic or monocular vision refers to viewing surrounding objects
with only one eye. In monocular vision, depth is perceived primarily from the relative sizes of
objects and from shadows; distant objects appear smaller and behind closer objects. In
stereoscopic vision, objects are viewed with both eyes, which are a little distant from each other
(approximately 65 mm); this allows objects to be viewed from two different positions and angles,
and thus a stereoscopic view is obtained. The angle between the lines of sight of the two eyes to
each object, known as the parallactic angle, helps our brain determine the relative distances
between objects. The smaller the parallactic angle, the greater the perceived distance (depth) of
the object. Fig. 2.15 shows human stereoscopic vision; since the parallactic angle Øa > Øb, the
brain automatically estimates the difference (Da - Db) in depth between objects A and B. This
concept of distance estimation in stereoscopic vision is applied to view a pair of overlapping
aerial photographs.

Fig. 2.15 Human stereoscopic vision


As an example, consider two photographs that overlap the same region, in which objects A, B
and C are situated at the same altitude and object D at a different altitude. The four objects will
be observed in a different sequence in the two photographs: a, b, d, c in the left photograph and
a, d, b, c in the right (Fig. 2.16). In the same photograph, segments ab and bc are equal since the
objects are at the same altitude, but segments ad and dc are not.

Fig. 2.16 Perception of relief from two aerial photographs.

2.5.3. Stereoscopes:
A stereoscope is used in conjunction with two aerial photographs taken from two different
positions of the same area, (known as a stereo-pair) to produce a 3-D image. There are two types
of stereoscopes: lens (or pocket) stereoscope and mirror stereoscope. Lens (or pocket)
stereoscope has a limited view and therefore restricts the area that can be inspected, whereas a
mirror stereoscope has a wide view and enables a much larger area of the stereo pair to be
viewed. The most obvious feature when using a stereoscope is the enhanced vertical relief. This
occurs because our eyes are only about 65 mm apart, but the air photos may be taken hundreds of
metres apart; hence the difference between the exposure positions is far greater than the distance
between our eyes. Such
an exaggeration also enables small features to become quite apparent and easily viewed.
A stereoscope (Fig.2.17) consists of a double optical system (lenses, mirrors, prisms, etc.)
mounted on a rigid frame supported on legs. In this way the distance d is fixed and kept equal to
the focal distance. Thus the optical system creates a virtual image at infinity and consequently
stereoscopic vision is obtained without eye strain.

Fig. 2.17 Lens and mirror stereoscopes


A simple lens stereoscope is made up of two achromatic convex lenses. The focal length is
equal to d corresponding to the height of the stereoscope above the plane on which the stereo
pair is placed. The lens spacing (y) can be varied within 45 to 75 mm to accommodate
individual eye spacing. The disadvantage of the lens stereoscope is that only the features just
underneath the lens are viewable, but it has some magnification power. A mirror stereoscope comprises
two metallized mirrors, two prisms, two lenses and two eyepieces having little or no
magnification power. The optical part is fixed on an arm, and the photographic pairs are arranged
on two different planes. Compared to the lens stereoscope, it facilitates analysis of several stereo
pairs consecutively, over the whole overlap region, without changing the arrangement.
2.5.4 Types of Stereoscopic Vision:
Stereoscopic vision can be of two types:
 Natural Stereoscopic Vision
 Artificial Stereoscopic Vision
2.5.4.1. Natural Stereoscopic Vision:
Natural stereoscopic vision arises partly from monocular vision, which relies on cues such as the
relative size of objects, overlapping of objects, convergence and accommodation of the eyes,
atmospheric haze, etc., while binocular vision is responsible for the true perception of depth.
Two slightly different images, seen by the two eyes simultaneously, are fused into one by the
brain, giving the sensation of a ‘model’ with three dimensions. The three-dimensional effect is
reduced beyond a viewing distance of about one metre. The distance between the two eyes, called
the ‘eye base’, also affects stereoscopic vision: the wider the eye base, the better the
three-dimensional effect.
2.5.4.2. Artificial Stereoscopic Vision:
Artificial stereoscopic vision can be achieved with certain aids, so that two-dimensional photographs can provide a three-dimensional effect. The image obtained is comparable to the image that would be seen if the two eyes were placed at two exposure stations on a flight line. Here the distance between the two exposure stations is called the 'airbase'.

Fig. 2.18 Converging angle when viewing objects at different distances

Relationship of accommodation i.e. changes of focus, and convergence (Fig: 2.18) or
divergence of visual axes is important. As the eyes focus on an object, they also turn so that
lines of sight intersect at the object. The angle of convergence for a nearer object is larger than that for a more distant object. A proper association between accommodation and convergence is necessary for efficient functioning of the eyes, and this association can be weakened or destroyed by improper use of the eyes. Visual illusions, colour vision, defects of focus, coordination defects in depth perception, etc., are important factors affecting photo interpretation. Stereoscopic vision, thus, is the observer's ability to resolve parallax differences between far and near images. Stereoscopic acuity depends upon the ability to perceive the smallest significant amounts of parallax. The brain's ability to convert parallax differences into a proper perception of depth depends upon the right eye seeing the object from the right side and the left eye seeing the same object from the left side. If this order is reversed, as happens when the relative positions of the aerial photographs are interchanged, closer objects appear farther away and farther objects appear closer; this phenomenon is called 'pseudo' stereo vision.
2.5.5 Requirements of stereoscopic vision:
If, instead of looking at the original scene, we observe photos of that scene taken from two different viewpoints, we can, under suitable conditions, obtain a three-dimensional impression from the two-dimensional photos. This impression may be very similar to the impression given by the original scene, but in practice this is rarely so.
In order to produce a spatial model, the two photographs of a scene must fulfil certain conditions:
 Both photographs must cover the same scene, with about 60% overlap.
 The time of exposure of both photographs should be the same.
 The scale of the two photographs should be approximately the same. Differences of up to 15% may be successfully accommodated; for continuous observation and measurement, differences greater than 5% may be disadvantageous.
 The brightness of both photographs should be similar.
 The base-height ratio must have an appropriate value. Normally the 'B/Z' or base-height ratio is up to 2.0; the ideal value is not known but is probably near 0.25. If this ratio is too small, say 0.02, the stereoscopic view will not provide a depth impression better than that of a single photo.
In the base-height ratio B/Z:
B is the distance between the two exposure stations, and
Z is the distance between an object and the line joining the two exposure stations.
The base-height ratio increases when the overlap decreases, and a larger viewing angle corresponds to a larger base-height ratio. Short-focal-length, wide-angle lens cameras give a better base-height ratio, which is important in natural resource surveys (Fig. 2.19).

Fig. 2.19 Base Height Ratio
The base-height ratio B/Z is also equal to b/c,
where b = the photo base, i.e. the distance between the principal points of two consecutive photographs, and
c = the principal distance of the camera.
If the photo base is larger than the eye base and the image is viewed stereoscopically without enlargement, the depth impression becomes exaggerated. Enlargement of the images, by binoculars, telescopes, etc., enlarges the parallaxes as well and thereby increases the depth impression. Depth perception is better when the aerial photographs are positioned in such a way that the shadows of objects fall towards the observer.

2.6. AIR-PHOTO INTERPRETATION:


Image interpretation of remote sensing data aims to extract qualitative and quantitative information from a photograph or image. It involves identification of various objects on the terrain, natural or artificial, which may appear as points, lines, or polygons. Interpretation depends on the way different features reflect or emit the incident electromagnetic radiation and on how this is recorded by the camera or sensor. In the very beginning, when digital images and computerized classification were not available, aerial photographs were analyzed only by visual interpretation. The accuracy of the interpretation depends on training, experience, the scale of the photograph, the geographic location of the study area, associated maps, ground observation data, etc. After the availability of satellite images, the data came to be handled by two processing approaches: analogue aerial photographs and digital satellite images, although satellite images can also be visually interpreted and aerial photographs can be processed by computers.
In an image or photograph, some objects may be readily identifiable while others may not; this depends on individual perception and experience. The detail to which an image or photograph can be analyzed depends on the resolution of the image and the scale of the photograph. Satellite images generally have a smaller scale than aerial photographs and cannot be analyzed stereoscopically.
2.6.1. Elements of Visual Interpretation:
In our daily life we interpret many photos and images, but the interpretation of aerial photographs and satellite images is different because of three important aspects:
(1) the portrayal of features from an overhead, often unfamiliar, perspective.
(2) the frequent use of wavelengths outside of the visible portion of the spectrum.
(3) the depiction of the earth’s surface at unfamiliar scales.
Eight fundamental parameters or elements are used in the interpretation of remote sensing
images or photographs. These are tone or color, texture, size, shape, pattern, shadow, site and
association. In some cases, a single such element is alone sufficient for successful identification;
in others, the use of several elements will be required.

Fig. 2.20 Ordering of image elements in image interpretation.


2.6.1.1 Tone or color: Tone is the relative brightness of the grey level on a black-and-white image or of the color on a color/FCC image. Tone is a measure of the intensity of the reflected or emitted radiation from objects on the terrain: objects with lower reflectance appear relatively dark and objects with higher reflectance appear bright. Fig. 2.21(a) represents a band imaged in the NIR region of the electromagnetic spectrum; rivers do not reflect in the NIR region and thus appear black, while vegetation reflects strongly and thus appears bright. Our eyes can discriminate only 16-20 grey levels in a black-and-white photograph, while hundreds of colors can be distinguished in a color photograph. In multispectral imaging, an optimal set of three bands is used to generate a color composite image. A False Color Composite (FCC) using the NIR, red and green bands is the most preferred combination for visual interpretation. In a standard FCC, the NIR band passes through the red channel, the red band through the green channel and the green band through the blue channel. Vegetation reflects strongly in the NIR region of the electromagnetic spectrum, therefore in a standard FCC vegetation appears red (Fig. 2.21(b)), which makes it more suitable for vegetation identification.

Fig. 2.21 Satellite image of area in (a) grey scale and in (b) standard FCC

2.6.1.2 Texture: Texture refers to the frequency of tonal variation in an image. Texture is
produced by an aggregate unit of features which may be too small to be clearly discerned
individually on the image. It depends on shape, size, pattern and shadow of terrain features.
Texture is always scale or resolution dependent. Objects with similar reflectance may differ in texture, which helps in their identification. As an example, in a high-resolution image grassland and tree crowns may have a similar tone, but grassland will have a smooth texture compared to trees. Smooth texture refers to little tonal variation and rough texture refers to abrupt tonal variation in an image or photograph.
2.6.1.3 Pattern: Pattern refers to the spatial arrangement of the objects. Objects both natural
and manmade have a pattern which aids in their recognition. The repetition of certain general
form or relationship in tones and texture creates a pattern, which is characteristic of this element
in image interpretation. In Fig. 2.22 it can easily be seen that the area at the bottom-left corner of the image is a plantation, where the trees are nearly equally spaced, whereas the upper-right and bottom-right corners show natural vegetation.


Fig.2.22 High resolution image showing different textures

2.6.1.4 Size: Size of objects on images must be considered in the context of the image scale or
resolution. It is important to assess the size of a target relative to other objects in the scene, as
well as its absolute size, to aid in the interpretation of that target. A quick approximation of target size can direct the interpretation to an appropriate result more quickly. The most commonly measured parameters are length, width, perimeter, area, and occasionally volume. For example,
if an interpreter had to distinguish zones of land use, and had identified an area with a number
of buildings in it, large buildings such as factories or warehouses would suggest commercial
property, whereas small buildings would indicate residential use.
2.6.1.5 Shape: Shape refers to the general form, configuration or outline of an individual object.
Shape is one of the most important single factors for recognizing object from an image.
Generally regular shapes, squares, rectangles, circles are signs of man-made objects, e.g.,
buildings, roads, and cultivated fields, whereas irregular shapes, with no distinct geometrical
pattern, are signs of a natural environment, e.g., a river or forest. A common case of misinterpretation is between roads and railway lines: roads can have sharp turns and perpendicular junctions, whereas railway lines do not. From the shape of the object in the following image, it can easily be said that the dark-blue colored object is a river.

Fig. 2.23 Satellite view of a part of a city

Fig. 2.24 Satellite image of an area

2.6.1.6 Shadow: Shadow is a helpful element in image interpretation, although it can also make some objects difficult to identify. Knowing the time of photography, we can estimate the solar elevation/illumination, which helps in estimating the heights of objects. The outline or shape of a shadow affords an impression of the profile view of an object, but objects within shadow become difficult to interpret. Shadow is also useful for enhancing or identifying topography and landforms, particularly in radar imagery.

Fig.2.25 Shadow of objects used for interpretation.


2.6.1.7 Association: Association refers to the occurrence of certain features in relation to other objects in the imagery. In an urban area, a smooth vegetation pattern generally indicates a playground or grassland rather than agricultural land (Fig. 2.25).

Fig. 2.25 Satellite image of an urban area


2.6.1.8 Site: Site refers to topographic or geographic location. It is also an important element in image interpretation when objects cannot be clearly identified using the previous elements. A very high reflectance feature in a Himalayan valley may be snow or cloud, but in Kerala one cannot say it is snow.

UNIT – III

Syllabus:
Photogrammetry- measurements on a single vertical aerial photograph, measurements on a
stereo-pair- vertical measurements by the parallax method; ground control for aerial
photography; satellite remote sensing, multispectral scanner- whiskbroom and push-broom
scanner; different types of resolutions; analysis of digital data- image restoration; image
enhancement; information extraction, image classification, unsupervised classification,
supervised classification, important consideration in the identification of training areas,
vegetation indices.

3.1 PHOTOGRAMMETRY:
Photogrammetry is the art, science, and technology of obtaining reliable information about
physical objects and the environment through processes of recording, measuring, and
interpreting photographic images and patterns of recorded radiant electromagnetic energy and
other phenomena.

3.1.1 Types of Photogrammetry:
Photogrammetry can be classified in several ways, but one standard method is to split the field based on the camera location during photography. On this basis we have Aerial Photogrammetry and Terrestrial (or Close-Range) Photogrammetry.
3.1.1.1 Aerial Photogrammetry:
In Aerial Photogrammetry, the camera is mounted in an aircraft and is usually pointed
vertically towards the ground. Multiple overlapping photos of the ground are taken as the
aircraft flies along a flight path. The aircraft traditionally have been fixed wing manned craft
but many projects now are done with drones and UAVs. Traditionally these photos were
processed in a stereo-plotter (an instrument that lets an operator see two photos at once in a
stereo view) but now are often processed by automated desktop systems.
3.1.1.2 Terrestrial (or Close-Range) Photogrammetry:
In Terrestrial and Close-range Photogrammetry, the camera is located on the ground and is hand-held, tripod-mounted or pole-mounted. Usually this type of photogrammetry is non-topographic, that
is, the output is not topographic products like terrain models or topographic maps, but instead
drawings, 3D models, measurements, or point clouds. Everyday cameras are used to model and
measure buildings, engineering structures, forensic and accident scenes, mines, earth-works,
stock-piles, archaeological artifacts, film sets, etc. In the computer vision community, this type
of photogrammetry is sometimes called Image-Based Modeling.
This includes calculating distances and lengths, objects heights and area measurements.

3.2 MEASUREMENTS ON A SINGLE VERTICAL AERIAL PHOTOGRAPH:
Photogrammetry has been around since the development of modern photography techniques. If
the scale of an image is known, distances or lengths of objects can be easily calculated by
measuring the distance on the photo and multiplying it by the scale factor.
 Scale:
Remember that scale is the ratio of the size or distance of a feature on the photo to its actual
size. Scale for aerial photos is generally expressed as a representative fraction (1 unit on the
photo equals "x" units on the ground). If the scale is known distances on the photograph can
easily be transformed into real-world ground distances.

 Calculating Distance and Area:


 Distance and Length:
If the scale of an aerial photograph is known, distances, lengths and areas of features can easily be calculated. You simply measure the distance on the photo (photo distance) and multiply the
distance by the scale factor. Remember that scale is always equal to the ratio of the photo
distance to the ground distance.
Example: The scale of an aerial photograph is 1:15,000. In the photo you measure the
length of a bridge to be 0.25 inches, what is the length of the bridge in feet in real life?

Fig. 3.1
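A worked solution for this example (the accompanying figure is assumed to show the same computation): ground length = photo length × scale denominator = 0.25 in × 15,000 = 3,750 in. Converting to feet, 3,750 in ÷ 12 in/ft = 312.5 ft, so the bridge is about 312.5 feet long.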

 Area:
It is important to remember that area is measured in square units. To determine rectangular area
it is length multiplied by width, so if you measure both and convert these distances remember
that if you are multiplying them together the resulting units are squared. For example, if an area
is 100 meters by 500 meters, it is 50,000 square meters. Now if you wanted to change that
number to square feet you wouldn't multiply by 3.28 (there are 3.28 feet per meter), you would
multiply by 10.76 (3.28 x 3.28).
Example: An aerial photograph has a scale of 1:10,000. On the photo, the length of a field
is measured as 10 mm and the width 7mm. How big (in Hectares) is the field in real-life?
Note that 10,000 square meters = 1 Hectare.

Fig. 3.2
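A worked solution (again assuming the figure shows the same computation): ground length = 10 mm × 10,000 = 100,000 mm = 100 m; ground width = 7 mm × 10,000 = 70,000 mm = 70 m; area = 100 m × 70 m = 7,000 square meters = 0.7 hectares.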
 Calculating Object Heights:
As with calculating scale, there are multiple methods to determine the height of tall objects (e.g.
trees or buildings) in aerial photos. In single aerial photos the two primary methods are the relief/radial displacement method and the shadow method.
 Relief/Radial Displacement Method:
The magnitude of the displacement in the image between the top and the bottom of an object is
known as its relief displacement and is related to the height of the object and the distance of the
object from the principal point. This method can only be used if the object being measured is far enough from the principal point for the displacement to be measurable and if both the top and bottom of the object are visible in the photo.

Example: The length of the displaced image of a building is measured at 2.01 mm and the radial distance to the principal point is 56.43 mm. If the flying height above the surface is 1220 m, what is the height of the building?
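A worked solution, assuming the standard relief displacement relation h = (d × H) / r, where d is the measured displacement, r is the radial distance from the principal point to the top of the displaced image, and H is the flying height above the surface (the original figure giving the solution is not reproduced here): h = (2.01 mm × 1220 m) / 56.43 mm ≈ 43.5 m.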

 Shadow Method:
If you can measure the length of a shadow and know the angle of the sun, the height of the
object can be calculated using simple trigonometry. If you know when and where the aerial
photo was taken you can determine the angle of the sun using the NOAA Solar Calculator.
When using this calculator you want to use the solar elevation angle (El) for your calculations.

Fig. 3.3
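As an illustration of the trigonometry involved (the numbers here are hypothetical, not taken from the text): if a shadow measures 20 m on the ground and the solar elevation angle is 35°, the object height is approximately h = shadow length × tan(El) = 20 m × tan 35° ≈ 14 m.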
3.3 MEASUREMENTS ON A STEREO-PAIR:
3.3.1 Parallax for Height Measurement using Aerial Photography:
 Parallax Concept:
Photogrammetry is capable of measuring the elevation of the earth's surface. Aerial photographs or stereo-pair satellite images can be used to measure elevation differences through the use of the parallax method. Newly launched satellites, such as WorldView-2, provide stereo-pair satellite images.
Parallax can be defined as the apparent displacement of a point due to a change in the point of view.
A simple childhood exercise illustrates parallax for the human eye: hold a finger in front of your eyes and note where it appears relative to a wall in the background when viewed with the right eye only; then look at it with the left eye only, and its apparent position relative to the wall changes. This relative change in appearance is parallax, and it occurs because the viewpoint for the finger has changed.

 Parallax of Aerial Photographs:
Parallax can be described with the help of the figure below (Fig. 3.4).

Fig. 3.4
In this figure, points P and Q on the surface are captured on two aerial photographs as p and q respectively.
 Consider that at one instant the airplane is at O1, vertically above point P. The image of P appears at p on the image plane.
 After some time, when the plane is at O2, P appears at p' on the image plane. Since the ground point P appears at p on the first image plane and at p' on the second, the shift pp' in the position of the image of P is the parallax of P.
 Similarly, for any other point Q, when the airplane is at O1 the image of Q appears at q on the image plane.
 After some time, when the plane is at O2, Q appears at q' on the image plane. Since the ground point Q appears at q on the first image plane and at q' on the second, the shift qq' in the position of the image of Q is the parallax of Q.
 Types of Parallax for Aerial Photographs
There are two types of parallax: absolute parallax and differential parallax.
 Absolute Parallax (X-Parallax/Horizontal Parallax): This parallax is in the X direction. It is the algebraic difference of the distances of the two images from their respective photograph nadirs, measured in a horizontal plane and parallel to the airbase.
 Differential Parallax (Y-Parallax): This parallax is in the Y direction. It is the difference between the perpendicular distances of the two images of a point from the vertical plane containing the airbase.
Y-parallax is an indication of tilt in either or both photographs, or of a difference in flying height.
Parallax is used for height determination by the following method:
∆h = (∆p × H') / pc
∆h = ha - hc

where
∆h = change in elevation between two points a and c
∆p = parallax of point a subtracted from parallax of point c
H' = flying height of airplane
pc = parallax of point c
While measuring height using parallax, the following are required:
 Any point being measured has to appear on both overlapping aerial photos.
 If the elevation (benchmark) of point c is known, then its parallax can be measured.
 Any point a's elevation can then be calculated relative to point c's known elevation.
 Parallax measurement thus facilitates computation of an elevation model of the ground using aerial photographs.
Numerical Example:
Question. Benchmark c has an elevation of 1545.32 ft., x coordinate on left photo of +74.12
mm and on right photo of -18.41 mm. Unknown point a has x coordinate on left photo of +65.78
mm and on right photo of -24.38 mm. If flying height above average ground is 3000 ft. what is
the elevation of point a?
Solution. Parallax is the change in x coordinates defined as parallel to the flight line.
Parallax of point c = 74.12 – (-18.41) = 92.53 mm
Parallax of point a = 65.78 – (-24.38) = 90.16 mm
Height is measured using the formula:
∆h = ha - hc = (∆p × H') / pc
Here all the values are calculated below:
∆p = pa – pc = 90.16 – 92.53 = -2.37 mm
∆h = (-2.37 mm) * (3000 ft.) / 92.53 mm
∆h = -76.84 ft.
By putting ∆h value in below equation
ha = hc + ∆h = 1545.32 + (-76.84)
ha= 1468.48 ft.
As a check, point a has a smaller parallax than point c, so it should indeed lie at a lower elevation than point c. This parallax concept is important in aerial photography as it can be used to measure the height of the ground.
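The same computation can be expressed as a short script. The following is only an illustrative sketch in Python; the function and variable names are chosen here for clarity and are not part of any standard library.

    def elevation_from_parallax(h_c, x_left_c, x_right_c, x_left_a, x_right_a, flying_height):
        """Estimate the elevation of point a from benchmark c using dh = (dp * H') / pc."""
        p_c = x_left_c - x_right_c        # parallax of benchmark c (mm)
        p_a = x_left_a - x_right_a        # parallax of unknown point a (mm)
        dp = p_a - p_c                    # parallax difference (mm)
        dh = dp * flying_height / p_c     # elevation difference (same unit as flying_height)
        return h_c + dh

    # Numerical example from the text: benchmark c at 1545.32 ft, H' = 3000 ft
    print(elevation_from_parallax(1545.32, 74.12, -18.41, 65.78, -24.38, 3000.0))
    # prints approximately 1468.48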

3.4 GROUND CONTROL FOR AERIAL PHOTOGRAPHY (GROUND CONTROL


POINTS (GCPS)):
Ground Control Points (GCP) are points on the ground with known coordinates in the spatial
coordinate system (i.e. both coordinates defining horizontal position and the altitude

coordinate). Their coordinates are obtained with traditional surveying methods in the field
(tachymetry, GNSS-measurement) or from other available sources.
A GCP on the ground determines the position of its aerial photo image in the coordinate system. To calculate the coordinates of each point on the aerial photograph, the coordinates of several ground control points are used and photogrammetric procedures are followed.
GCPs are necessary for orientation and placement of aerial photographs in the spatial coordinate
system, which is a prerequisite for the production of georeferenced metric and 3D models of
the earth’s surface (point cloud, DSM, DTM, orthophoto plan). Namely, computer processing
and analysis require spatial coordination models – from point cloud to orthophoto mosaics.

In practice – even for large areas of the field – it is sufficient to use from 5 to 10 ground control
points. Using a larger number of points does not significantly contribute to a higher accuracy.
The more naturally varied the terrain, the higher the number of ground control points necessary to achieve the desired accuracy.
In the field, ground control points are marked in a way that makes them visible later on aerial
photographs. They are then arranged along the edges of the area in question. In practice, all but one are placed on characteristic points of the terrain polygon; the remaining one is placed approximately in the middle of the polygon. Each ground control point must be visible in at least two aerial photographs, and ideally in five for optimum results. The points should therefore not be located right at the edge of the area; if they are, they will be visible in only a few aerial photographs.

3.5 SATELLITE REMOTE SENSING:


The fundamental principles of remote sensing derive from the characteristics and interactions
of electromagnetic radiation (EMR) as it propagates from source to sensor. The principles relate
to the following: 1) the source of energy and the type and amount of energy it provides; 2) the
absorption and scattering effects of the atmosphere on EMR; 3) the mechanisms of EMR
interaction with Earth surface features; and 4) the nature of sensor response as determined by
the type of sensor.
Most satellite sensors detect EMR electronically as a continuous stream of digital data. The data
are transmitted to ground reception stations, processed to create defined data products, and made

available for sale to users on a variety of digital data media. Once purchased, the digital image
data are readily amenable to quantitative analysis using computer-implemented digital image
processing techniques. Some of these techniques (such as data error compensations,
atmospheric corrections, calibration, and map registration) essentially involve preprocessing the
data for subsequent interpretation and analysis. Another group of techniques is designed to
selectively enhance the digital data and produce hard-copy image formats for interpreters to
study. For these images, some of the principles and techniques of air photo interpretation can
be applied to manual analysis of the image information content. A third major group of digital
processing techniques involves information extraction through the implementation of a wide
range of simple to complex mathematical and statistical operations on the numerical data values
in the image. The results of these operations provide output such as derived information
variables (that might relate to terrain brightness or vegetation condition), categorized land and
water features, or images showing changes over time.
A discussion of remote sensing technology would not be complete without mention of
geographic information systems (GIS). Satellite remote sensing represents a technology for
synoptic acquisition of spatial data and the extraction of scene-specific information. GIS
provides a computer-implemented spatially oriented database for evaluating the information in
conjunction with other spatially formatted data and information that may be acquired from
remote sensor data, maps, surveys, and other sources of spatially referenced information.

3.6 MULTISPECTRAL SCANNER:


Many electronic (as opposed to photographic) remote sensors acquire data using scanning
systems, which employ a sensor with a narrow field of view (i.e. IFOV) that sweeps over the
terrain to build up and produce a two-dimensional image of the surface. Scanning systems can
be used on both aircraft and satellite platforms and have essentially the same operating
principles. A scanning system used to collect data over a variety of different wavelength ranges
is called a multispectral scanner (MSS), and is the most commonly used scanning system.
There are two main modes or methods of scanning employed to acquire multispectral image
data - across-track scanning, and along-track scanning.

Fig. 3.5
3.6.1 WHISKBROOM SCANNER:
Across-track scanners scan the Earth in a series of lines. The lines are oriented perpendicular
to the direction of motion of the sensor platform (i.e. across the swath). Each line is scanned
from one side of the sensor to the other, using a rotating mirror (A). As the platform moves
forward over the Earth, successive scans build up a two-dimensional image of the Earth´s
surface. The incoming reflected or emitted radiation is separated into several spectral
components that are detected independently. The UV, visible, near-infrared, and thermal
radiation are dispersed into their constituent wavelengths. A bank of internal detectors (B),
each sensitive to a specific range of wavelengths, detects and measures the energy for each spectral band; the resulting electrical signals are then converted to digital data and recorded for subsequent computer processing.

The IFOV (C) of the sensor and the altitude of the platform determine the ground resolution
cell viewed (D), and thus the spatial resolution. The angular field of view (E) is the sweep of
the mirror, measured in degrees, used to record a scan line, and determines the width of the
imaged swath (F). Airborne scanners typically sweep large angles (between 90º and 120º), while satellites, because of their higher altitude, need to sweep only fairly small angles (10-20º)
to cover a broad region. Because the distance from the sensor to the target increases towards
the edges of the swath, the ground resolution cells also become larger and introduce geometric
distortions to the images. Also, the length of time the IFOV "sees" a ground resolution cell as
the rotating mirror scans (called the dwell time), is generally quite short and influences the
design of the spatial, spectral, and radiometric resolution of the sensor.

Fig. 3.6
3.6.2 PUSHBROOM SCANNER:
Along-track scanners also use the forward motion of the platform to record successive scan
lines and build up a two-dimensional image, perpendicular to the flight direction. However,
instead of a scanning mirror, they use a linear array of detectors (A) located at the focal plane
of the image (B) formed by lens systems (C), which are "pushed" along in the flight track
direction (i.e. along track). These systems are also referred to as pushbroom scanners, as the

motion of the detector array is analogous to the bristles of a broom being pushed along a floor.
Each individual detector measures the energy for a single ground resolution cell (D) and thus
the size and IFOV of the detectors determines the spatial resolution of the system. A separate
linear array is required to measure each spectral band or channel. For each scan line, the energy
detected by each detector of each linear array is sampled electronically and digitally recorded.
Along-track scanners with linear arrays have several advantages over across-track mirror
scanners. The array of detectors combined with the pushbroom motion allows each detector to
"see" and measure the energy from each ground resolution cell for a longer period of time (dwell
time). This allows more energy to be detected and improves the radiometric resolution. The
increased dwell time also facilitates smaller IFOVs and narrower bandwidths for each detector.
Thus, finer spatial and spectral resolution can be achieved without impacting radiometric
resolution. Because detectors are usually solid-state microelectronic devices, they are generally
smaller, lighter, require less power, and are more reliable and last longer because they have no
moving parts. On the other hand, cross-calibrating thousands of detectors to achieve uniform
sensitivity across the array is necessary and complicated.
Regardless of whether the scanning system used is either of these two types, it has several
advantages over photographic systems. The spectral range of photographic systems is restricted
to the visible and near-infrared regions while MSS systems can extend this range into the
thermal infrared. They are also capable of much higher spectral resolution than photographic
systems. Multi-band or multispectral photographic systems use separate lens systems to acquire
each spectral band. This may cause problems in ensuring that the different bands are comparable both spatially and radiometrically, and with registration of the multiple images. MSS systems
acquire all spectral bands simultaneously through the same optical system to alleviate these
problems. Photographic systems record the energy detected by means of a photochemical
process which is difficult to measure and to make consistent. Because MSS data are recorded
electronically, it is easier to determine the specific amount of energy measured, and they can
record over a greater range of values in a digital format. Photographic systems require a
continuous supply of film and processing on the ground after the photos have been taken. The
digital recording in MSS systems facilitates transmission of data to receiving stations on the
ground and immediate processing of data in a computer environment.

3.7 RESOLUTION:
The resolution of an image refers to the potential detail provided by the imagery. In remote
sensing we refer to three types of resolution: spatial, spectral and temporal.
3.7.1 Spatial Resolution refers to the size of the smallest feature that can be detected by a
satellite sensor or displayed in a satellite image. It is usually presented as a single value
representing the length of one side of a square. For example, a spatial resolution of 250m means
that one pixel represents an area 250 by 250 meters on the ground.

3.7.2 Spectral Resolution refers to the ability of a satellite sensor to measure specific
wavelengths of the electromagnetic spectrum. The finer the spectral resolution, the narrower
the wavelength range for a particular channel or band.
3.7.3 Temporal resolution refers to the time between images. The capability for satellites to
provide images of the same geographical area more frequently has increased dramatically since
the dawn of the space age.
3.8 ANALYSIS OF DIGITAL DATA:
To selectively enhance certain fine features in the data and to remove certain noise, the digital
data is subjected to various image processing operations. Image-processing may be grouped
into three functional categories: these are defined below together with lists of typical processing
techniques.
3.8.1 IMAGE RESTORATION compensates for data errors, noise and geometric distortions
introduced during the scanning, recording, and playback operations.
a. Restoring periodic line dropouts.
b. Restoring periodic line striping.
c. Filtering of random noise.
d. Correcting for atmospheric scattering.
e. Correcting geometric distortions.
3.8.2 IMAGE ENHANCEMENT alters the visual impact that the image has on the interpreter
in a fashion that improves the information content.
a. Contrast enhancement.
b. Intensity, hue, and saturation transformations.
c. Density slicing.
d. Edge enhancement.
e. Making digital mosaics.
f. Producing synthetic stereo images.
3.8.3 INFORMATION EXTRACTION utilizes the decision-making capability of the
computer to recognize and classify pixels on the basis of their digital signatures,
a. Producing principal-component images.
b. Producing ratio images.
c. Multispectral classification.
d. Producing change-detection images.

3.8.1 IMAGE RESTORATION: Image restoration is concerned with the removal or reduction of degradations introduced during the acquisition of images, e.g. noise, pixel value errors, out-of-focus blurring or camera motion blurring, using prior knowledge of the degradation phenomenon. This means it deals with modelling the degradation and applying the inverse process to reconstruct the image. Image restoration has a wide scope of usage (Fig. 3.7).

Fig. 3.7
3.8.1.1 RESTORATION TECHNIQUES:
1. Median Filter: This is a statistical method, as the name implies. Each pixel value is replaced by the median of the pixel values in its neighbourhood. It is mainly used to remove salt-and-pepper noise, is widely used and can reduce noise in images very effectively. This filtering removes noise while preserving edges, and it tends to avoid blurring the image, which is its advantage over simple smoothing (Table 3.1).
Table 3.1
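A minimal sketch of median filtering, assuming a NumPy array image and that the SciPy library is available (the image here is random, purely for illustration):

    import numpy as np
    from scipy.ndimage import median_filter

    # Hypothetical 8-bit grey-level image corrupted by salt-and-pepper noise
    noisy = np.random.randint(0, 256, (512, 512)).astype(np.uint8)

    # Replace each pixel by the median of its 3 x 3 neighbourhood
    denoised = median_filter(noisy, size=3)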

2. Adaptive Filter: In an adaptive filter, the behaviour changes based on the statistical characteristics of the image inside the filter region. It is a type of filter whose transfer function is controlled by a variable parameter. For the removal of impulse noise in images, these filters make use of the colour and grey space, in contrast to other filters. They give very good noise suppression, preserve edges better and hence yield better quality.
3. Linear Filters: In this, we replace each pixel by the linear combination of its neighbors. The
operations that are implemented include sharpening, smoothing and edge enhancement. This
type of filter is applied to reduce salt-and-pepper noise and Gaussian noise.
4. IBD (Iterative Blind De-convolution) Method: This technique was given by Ayers and Dainty (1988) and is a method of blind de-convolution. The method works with Fourier transforms, which keeps the computation relatively small. Image recovery is achieved with little or no prior knowledge of the PSF (the point spread function, i.e. the response of an imaging system to a point source or point object). It results in high resolution and quality; the drawback of this method is that convergence is not guaranteed.

5. Non-Negative and Support Constraints Recursive Inverse Filtering (NAS-RIF): This filtering technique was put forward by D. Kundur. The aim is to reconstruct a reliable estimate of the image from a blurred image. The algorithm estimates the target image by minimizing an error function defined over the support domain of the image. Its advantage is that only the support domain of the target area needs to be found, with care taken so that the estimate obtained remains non-negative.
6. Super-Resolution Restoration Algorithm based on Gradient Adaptive Interpolation:
The basic idea is that the local gradient of a pixel affects the interpolated pixel value in the edge areas of the image; the influence is inversely proportional to the local gradient of the pixel. Three subtasks are involved in this method: registration, fusion and deblurring.
7. Deconvolution Using a Sparse Prior: In this algorithm the deconvolution problem is formulated as finding the maximum a-posteriori estimate of the original image given the observation. Furthermore, a sparse prior on the spatial-domain image derivatives is exploited in the algorithm. It has been successfully applied to raw images.
8. Block Matching: In block matching, blocks with high mutual correlation are used, because the accuracy of the matching is significantly affected by the presence of noise. A block-similarity measure is utilized, and a coarse initial de-noising is performed in a local 2D transform domain. The image is divided into blocks and the noise or blur is removed from each block.
9. Wiener Filter: The Wiener filter incorporates both the degradation function and the statistical characteristics of the noise into the restoration process. The main objective of the method is to find an estimate of the uncorrupted image such that the mean square error between the estimate and the true image is minimized. The drawback of inverse and pseudo-inverse filtering is that they are noise sensitive; Wiener filtering is far less sensitive to noise, which is its advantage, and its response is better in the presence of noise.
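As an illustrative sketch (assuming SciPy is available; scipy.signal.wiener implements an adaptive, local-statistics form of the Wiener filter, which is one of several Wiener formulations):

    import numpy as np
    from scipy.signal import wiener

    noisy = np.random.random((256, 256))   # hypothetical noisy image
    restored = wiener(noisy, mysize=5)     # filter using 5 x 5 local neighbourhoods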
10. Deconvolution using Regularized Filter (DRF): This is another category of non-blind deconvolution technique. It can be used effectively when smoothness-like constraints are applied to the recovered image and only limited information about the noise is known. Using a regularized filter, the degraded image is restored by constrained least-squares restoration.
11. Lucy-Richardson Algorithm: Image restoration methods are divided into blind and non-blind deconvolution; in non-blind deconvolution the PSF is known. The Richardson-Lucy algorithm is the most popular technique in the fields of astronomy and medical imaging. The reason for its popularity is its ability to produce reconstructed images of good quality in the presence of high noise levels. Lucy and Richardson derived it in the early 1970s from Bayes' theorem. It is a nonlinear iterative method and is gaining more acceptance than linear methods because better results are obtained. The PSF is the inverse Fourier transform of the Optical Transfer Function (OTF), where the OTF gives the response of a linear, position-invariant system to an impulse in the frequency domain; conversely, the Fourier transform of the PSF is the OTF.

3.8.1.2 APPLICATIONS OF RESTORATION:


1. In astronomical applications, which are characterized by Poisson and Gaussian noise, image restoration has played a very important role in imaging.
2. Super-resolution (SR) techniques are also useful in medical imaging such as computerized tomography (CT) and magnetic resonance imaging (MRI): while the resolution quality of a single acquisition is limited, the acquisition of multiple images is possible. This can help the surgeon to operate more precisely on the exact part of the body.
3. Over the multispectral bands of satellite imagery, multispectral image restoration can be carried out in order to improve the resolution of the captured satellite images.

3.8.2 IMAGE ENHANCEMENT:


Enhancement is the modification of an image to alter its impact on the viewer. Generally, enhancement distorts the original digital values; therefore enhancement is not done until the restoration processes are completed.

3.8.2.1 Contrast Enhancement:


There is a strong influence of contrast ratio on resolving power and detection capability of
images. Techniques for improving image contrast are among the most widely used enhancement
processes. The sensitivity range of any remote sensing detector is designed to record a wide
range of terrain brightness from black basalt plateaus to white sea beds under a wide range of
lighting conditions. Few individual scenes have a brightness range that utilizes the full
sensitivity range of these detectors. To produce an image with the optimum contrast ratio, it is
important to utilize the entire brightness range of the display medium, which is generally film.
Fig. 3.8 (a) shows the typical histogram of the number of pixels that correspond to each DN of
an image with no modifications of original DNs. Three of the most useful methods of contrast
enhancement are described in the following sections.
3.8.2.1.1 Linear Contrast Stretch:
The simplest contrast enhancement is called a linear contrast stretch . A DN value in the low
end of the original histogram is assigned to extreme black and a value at the high end is assigned
to extreme white. The remaining pixel values are distributed linearly between these extremes,
as shown in the enhanced histogram of Fig. 3.8 (b). The improved contrast ratio of the image after a linear contrast stretch enhances the distinction between different features. Most image processing software displays an image only after applying a linear stretch by default. For colour images, the individual bands are stretched before being combined in colour.
3.8.2.1.2 Nonlinear Contrast Stretch:

Nonlinear contrast enhancement can be performed in different ways. Fig. 3.8 (c) illustrates a uniform distribution stretch (or histogram equalization), in which the original histogram has been redistributed to produce a uniform population density of pixels along the horizontal DN axis. This stretch applies the greatest contrast enhancement to the most populated range of brightness values in the original image (a sketch of this histogram-equalization operation is given after Fig. 3.8).

Fig. 3.8 (a) Original histogram; (b) histogram after linear contrast stretch; (c) histogram after uniform distribution (histogram equalization) stretch
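A compact sketch of histogram equalization with NumPy (one common formulation; the look-up table is built from the cumulative distribution of the DNs):

    import numpy as np

    def histogram_equalize(band):
        """Redistribute 8-bit DNs so the output histogram is approximately uniform."""
        hist, _ = np.histogram(band.flatten(), bins=256, range=(0, 256))
        cdf = hist.cumsum()
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise the CDF to 0-1
        lut = (cdf * 255).astype(np.uint8)                  # look-up table: old DN -> new DN
        return lut[band]

    band = np.random.randint(0, 256, (400, 400)).astype(np.uint8)
    equalized = histogram_equalize(band)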
3.8.2.2 Intensity, Hue and Saturation Transformations:
The additive system of primary colours (red, green, and blue, or RGB system) is well
established. An alternate approach to colour is the intensity, hue and saturation system (IHS),
which is useful because it presents colours more nearly as the human observer perceives them.
The IHS system is based on the colour sphere (Fig. 3.9), in which the vertical axis represents intensity, the radius represents saturation, and the circumference represents hue. The intensity (I) axis represents brightness variations and ranges from black (0) to white (255); no colour is associated with this axis. Hue (H) represents the dominant wavelength of colour. Hue values commence with 0 at the midpoint of red tones and increase counterclockwise around the circumference of the sphere to conclude with 255 adjacent to 0. Saturation (S) represents the purity of colour and ranges from 0 at the centre of the colour sphere to 255 at the circumference. A saturation of 0 represents a completely impure colour, in which all wavelengths are equally represented and which the eye perceives as a shade of grey that ranges from white to black depending on intensity.

Intermediate values of saturation represent pastel shades, whereas high values represent purer
and more intense colours. The range from 0 to 255 is used here for consistency with the eight-
bit scale; any range of values (0 to 100, for example) could as well be used as IHS coordinates.
When any three spectral bands of a sensor data are combined in the RGB system, the resulting
colour images typically lack saturation, even though the bands have been contrast-stretched.
The under-saturation is due to the high degree of correlation between spectral bands. High
reflectance values in the green band, for example, are accompanied by high values in the blue
and red bands, so pure colours are not produced.

Fig. 3.9 Intensity, Hue and Saturation colour coordinate system. Colour at point A has
the values: I=195, H=75 and S=135.
3.8.2.3 Density Slicing:
Density slicing converts the continuous grey tone of an image into a series of density intervals, or slices, each corresponding to a specified digital range. Slices may be displayed as areas bounded by contour lines. This technique emphasizes subtle grey-scale differences that may be imperceptible to the viewer.
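A brief sketch of density slicing with NumPy, assuming an 8-bit band; the slice boundaries below are arbitrary illustrative values:

    import numpy as np

    band = np.random.randint(0, 256, (300, 300)).astype(np.uint8)
    boundaries = [50, 100, 150, 200]          # DN values separating the slices
    slices = np.digitize(band, boundaries)    # each pixel receives a slice index from 0 to 4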
3.8.2.4 Edge Enhancement:
Most interpreters are concerned with recognizing linear features in images such as joints and
lineaments. Geographers map manmade linear features such as highways and canals. Some
linear features occur as narrow lines against a background of contrasting brightness; others are
the linear contact between adjacent areas of different brightness. In all cases, linear features are formed by edges. Some edges are marked by only subtle brightness differences that may be difficult to recognize. Contrast enhancement may emphasize the brightness differences associated with some linear features. This procedure, however, is not specific to linear features, because all elements of the scene are enhanced equally, not just the linear elements. Digital filters have been
developed specifically to enhance edges in images and fall into two categories: directional and
nondirectional.
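A small sketch of a nondirectional edge enhancement, assuming SciPy is available; the Laplacian is subtracted from the original image, which is one common way of adding edge detail back to the scene:

    import numpy as np
    from scipy.ndimage import laplace

    band = np.random.random((300, 300))   # hypothetical single-band image
    edges = laplace(band)                 # nondirectional (Laplacian) edge detection
    edge_enhanced = band - edges          # sharpen by restoring edge detail to the image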

3.8.2.5 Making Digital Mosaics:
Mosaics of images may be prepared by matching and splicing together individual images.
Differences in contrast and tone between adjacent images cause the checkerboard pattern that
is common on many mosaics. This problem can be largely eliminated by preparing mosaics
directly from the digital CCTs. Adjacent images are geometrically registered to each other by
recognizing ground control points (GCPs) in the regions of overlap. Pixels are then
geometrically adjusted to match the desired map projection. The next step is to eliminate from
the digital file the duplicate pixels within the areas of overlap. Optimum contrast stretching is
then applied to all the pixels, producing a uniform appearance throughout the mosaic.
3.8.2.6 Producing Synthetic Stereo Images:
Ground-control points may be used to register image pixel arrays to other digitized data sets,
such as topographic maps. This registration causes an elevation value to be associated with each
image pixel. With this information the computer can then displace each pixel in a scan line
relative to the central pixel of that scan line. Pixels to the west of the central pixel are displaced
westward by an amount that is determined by the elevation of the pixels. The same procedure
determines eastward displacement of pixels east of the central pixel. The resulting image
simulates the parallax of an aerial photograph. The principal point is then shifted, and a second
image is generated with the parallax characteristics of the overlapping image of a stereo pair.
A synthetic stereo model is superior to a model from side-lapping portions of adjacent Landsat
images because (1) The vertical exaggeration can be increased and (2) The entire image may be
viewed stereoscopically. Two disadvantages of computer synthesised stereo images are that
they are expensive and that a digitized topographic map must be available for elevation control.

3.8.3 INFORMATION EXTRACTION:


Image restoration and enhancement processes utilize computers to provide corrected and
improved images for study by human interpreters. The computer makes no decisions in these
procedures. However, processes that identify and extract information do utilize the computer's
decision-making capability to identify and extract specific pieces of information. A human
operator must instruct the computer and must evaluate the significance of the extracted
information.
3.8.3.1 Principal-Component Images:
For any pixel in a multispectral image, the DN values are commonly highly correlated from
band to band. This correlation is illustrated schematically in Fig. 3.10, which plots digital
numbers for pixels in TM bands 1 and 2. The elongate distribution pattern of the data points
indicates that as brightness in band 1 increases, brightness in band 2 also increases. A three-
dimensional plot (not illustrated) of three bands, such as 1, 2 and 3, would show the data points
in an elongate ellipsoid, indicating correlation of the three bands. This correlation means that if
the reflectance of a pixel in one band (IRS band 2, for example) is known, one can predict the

reflectance in adjacent bands (IRS bands 1 and 3 ) . The correlation also means that there is
much redundancy in a multispectral data set. If this redundancy could be reduced, the amount
of data required to describe a multispectral image could be compressed.

Fig. 3.10 Scatter Plot of IRS Band 1 (X1 axis) and Band 2 (X2 axis) showing Correlation
between these Bands.
The principal-components transformation has several advantages:
1. Most of the variation in a multispectral data set is compressed into one or two PC images.
2. Noise may be relegated to the less-correlated PC images.
3. Spectral differences between materials may be more apparent in PC images than in individual bands.
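A compact sketch of a principal-components transformation of a multiband image, assuming scikit-learn is available; each pixel is treated as a vector of band DNs:

    import numpy as np
    from sklearn.decomposition import PCA

    bands = np.random.random((6, 400, 400))            # hypothetical six-band image
    pixels = bands.reshape(6, -1).T                    # one row per pixel, one column per band
    pc = PCA(n_components=3).fit_transform(pixels)     # first three principal components
    pc_images = pc.T.reshape(3, 400, 400)              # back to image form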
3.8.3.2 Ratio Images:
Ratio images are prepared by dividing the DN in one band by the corresponding DN in another
band for each pixel, stretching the resulting value, and plotting the new values as an image. A
total of 15 ratio images plus an equal number of inverse ratios (reciprocals of the first 15 ratios)
may be prepared from six original bands. In a ratio image the black and white extremes of the
grey scale represent pixels with the greatest difference in reflectivity between the two spectral
bands. The darkest signatures are areas where the denominator of the ratio is greater than the
numerator. Conversely the numerator is greater than the denominator for the brightest
signatures. Where denominator and numerator are the same, there is no difference between the
two bands. For example, the spectral reflectance curve shows that the maximum reflectance of
vegetation occurs in IRS band 4 (reflected IR) and that reflectance is considerably lower in band
2 (green). The ratio image 4/2, results when the DNs in band 4 are divided by the DNs in band
2. The brightest signatures in this image correlate with the vegetation.
Like PC images, any three ratio images may be combined to produce a colour image by
assigning each image to a separate primary colour. The colour variations of the ratio colour
image express more geologic information and have greater contrast between units than do the

conventional colour images. Ratio images emphasize differences in slopes of spectral
reflectance curves between the two bands of the ratio. In the visible and reflected IR regions,
the major spectral differences of materials are expressed in the slopes of the curves; therefore
individual ratio images and ratio colour images enable one to extract reflectance variations. A
disadvantage of ratio images is that they suppress differences in albedo; materials that have
different albedos but similar slopes of their spectral curves may be indistinguishable in ratio
images. Ratio images also minimize differences in illumination conditions, thus suppressing the
expression of topography.
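A minimal sketch of a band-ratio image, assuming two co-registered NumPy bands; a small constant guards against division by zero:

    import numpy as np

    band4 = np.random.random((400, 400))   # hypothetical reflected-IR band
    band2 = np.random.random((400, 400))   # hypothetical green band

    ratio_4_2 = band4 / (band2 + 1e-6)     # bright where reflectance in band 4 exceeds band 2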
3.8.3.3 Multispectral Classification:
For each pixel in IRS or Landsat TM image, the spectral brightness is recorded for four or seven
different wavelength bands respectively. A pixel may be characterized by its spectral signature,
which is determined by the relative reflectance in the different wavelength bands. Multispectral
classification is an information-extraction process that analyses these spectral signatures and
then assigns pixels to categories based on similar signatures. By plotting the data points of each band at the centre of the spectral range of each multispectral band, we can generate a cluster diagram. The reflectance ranges of each band form the axes of a three-dimensional coordinate
system. Plotting additional pixels of the different terrain types produces three-dimensional
clusters or ellipsoids.
The surface of the ellipsoid forms a decision boundary, which encloses all pixels for that terrain
category. The volume inside the decision boundary is called the decision space. Classification
programs differ in their criteria for defining the decision boundaries. In many programs the
analyst is able to modify the boundaries to achieve optimum results. For the sake of simplicity
the cluster diagram is explained with only three axes. In actual practice the computer employs
a separate axis for each spectral band of data: four for IRS and six or seven for Landsat TM.
Once the boundaries for each cluster, or spectral class, are defined, the computer retrieves the
spectral values for each pixel and determines its position in the classification space. Should the
pixel fall within one of the clusters, it is classified accordingly. Pixels that do not fall within a
cluster are considered unclassified. In practice, the computer calculates the mathematical
probability that a pixel belongs to a class; if the probability exceeds a designated threshold
(represented spatially by the decision boundary), the pixel is assigned to that class. There are
two major approaches to multispectral classification viz., Supervised classification and
unsupervised classification.
In Supervised classification, the analyst defines on the image a small area, called a training site, which is representative of each terrain category, or class. Spectral values for each pixel in a training site are used to define the decision space for that class. After the clusters for each training site are defined, the computer then classifies all the remaining pixels in the scene. In

Unsupervised classification, the computer separates the pixels into classes with no direction
from the analyst.
3.8.3.3.1 Classification Algorithms:
Various classification methods may be used to assign an unknown pixel to one of the classes.
The choice of a particular classifier or decision rule depends on the nature of the input data and
desired output. PARAMETRIC classification algorithms assume that the observed measurement vectors Xc obtained for each class in each spectral band during the training phase of the supervised classification are Gaussian in nature; nonparametric classification algorithms make no such assumption. It is instructive to review the logic of several of the classifiers. The parallelepiped, minimum-distance and maximum-likelihood decision rules are the most frequently used classification algorithms (a sketch of the minimum-distance rule is given after the list below):
(a) The parallelepiped classification algorithm
(b) The minimum distance to means classification algorithm
(c) Maximum likelihood classification algorithm
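A minimal sketch of the minimum-distance-to-means decision rule, assuming the class mean vectors have already been derived from training data; a pixel is assigned to the class whose mean spectral vector is nearest:

    import numpy as np

    def minimum_distance_classify(pixels, class_means):
        """pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands).
        Returns, for every pixel, the index of the nearest class mean."""
        diffs = pixels[:, None, :] - class_means[None, :, :]
        dists = np.linalg.norm(diffs, axis=2)      # Euclidean distance to every class mean
        return np.argmin(dists, axis=1)

    pixels = np.random.random((1000, 4))        # hypothetical 4-band pixel vectors
    class_means = np.random.random((5, 4))      # hypothetical means of 5 training classes
    labels = minimum_distance_classify(pixels, class_means)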

3.8.3.4 Change-Detection Images:


Change-detection images provide information about seasonal or other changes. The information
is extracted by comparing two or more images of an area that were acquired at different times.
The first step is to register the images using corresponding ground-control points. Following
registration, the digital numbers of one image are subtracted from those of an image acquired
earlier or later. The resulting values for each pixel will be positive, negative, or zero; the latter
indicates no change. The next step is to plot these values as an image in which neutral grey tone
represents zero. Black and white tones represent the maximum negative and positive differences
respectively. Contrast stretching is employed to emphasize the differences. The agricultural
practice of seasonally alternating between cultivated and fallow fields can be clearly shown by
the light and dark tones in the difference image. Change-detection processing is also useful for
producing difference images for other remote sensing data, such as between night time and
daytime thermal IR images.
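As an illustration of this differencing step, the short Python sketch below (NumPy assumed) subtracts one co-registered image from another and rescales the result so that zero change plots as neutral grey; the image arrays are hypothetical random values standing in for real registered scenes.

import numpy as np

# Hypothetical 8-bit images of the same area at two dates, already registered to each other.
image_t1 = np.random.randint(0, 256, size=(512, 512)).astype(np.int16)
image_t2 = np.random.randint(0, 256, size=(512, 512)).astype(np.int16)

# Subtract the earlier image from the later one: each pixel becomes positive, negative or zero.
difference = image_t2 - image_t1

# Rescale so that zero difference maps to mid-grey (128) and clip to the 0-255 display range;
# a contrast stretch of the difference could be applied first to emphasize the changes.
difference_image = np.clip(128 + difference // 2, 0, 255).astype(np.uint8)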
3.9 IMAGE CLASSIFICATION:
Image classification refers to the task of extracting information classes from a multiband raster
image. The resulting raster from image classification can be used to create thematic maps.
Depending on the interaction between the analyst and the computer during classification, there
are two types of classification: supervised and unsupervised.
The classification process is a multi-step workflow; therefore, the Image Classification toolbar has been developed to provide an integrated environment to perform classifications with the
tools. Not only does the toolbar help with the workflow for performing unsupervised and
supervised classification, it also contains additional functionality for analyzing input data,
creating training samples and signature files, and determining the quality of the training samples

and signature files. The recommended way to perform classification and multivariate analysis
is through the Image Classification toolbar.
Supervised classification:
Supervised classification uses the spectral signatures obtained from training samples to classify
an image. With the assistance of the Image Classification toolbar, you can easily create
training samples to represent the classes you want to extract. You can also easily create a
signature file from the training samples, which is then used by the multivariate classification
tools to classify the image.
Unsupervised classification:
Unsupervised classification finds spectral classes (or clusters) in a multiband image without the
analyst’s intervention. The Image Classification toolbar aids in unsupervised classification by
providing access to the tools to create the clusters, capability to analyze the quality of the
clusters, and access to classification tools.

3.10 UNSUPERVISED CLASSIFICATION:


Unsupervised classification is a form of pixel based classification and is essentially computer
automated classification. The user specifies the number of classes and the spectral classes are
created solely based on the numerical information in the data (i.e. the pixel values for each of
the bands or indices). Clustering algorithms are used to determine the natural, statistical
grouping of the data. The pixels are grouped together into clusters based on their spectral similarity. The computer uses feature space to analyze and group the data into classes. Fig. 3.11 (a) illustrates how the computer might use feature space to group the data into ten classes.
While the process is basically automated, the user has control over certain inputs. This includes
the Number of Classes, the Maximum Iterations, (which is how many times the classification
algorithm runs) and the Change Threshold %, which specifies when to end the classification
procedure. After the data has been classified the user has to interpret, label and color code the
classes accordingly.
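A minimal sketch of this kind of clustering is given below, using the K-means algorithm from scikit-learn as one possible clustering method; the band stack, the number of classes and the iteration limit are hypothetical choices.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical multiband image: rows x columns x bands.
bands = np.random.rand(200, 200, 4)
pixels = bands.reshape(-1, 4)                 # one row of band values per pixel

# Ask for ten spectral classes; the algorithm iterates until it converges
# or reaches the maximum number of iterations.
kmeans = KMeans(n_clusters=10, max_iter=100, random_state=0).fit(pixels)

# Reshape the cluster labels back into an image; the analyst must still
# interpret, label and colour-code these classes.
classified = kmeans.labels_.reshape(200, 200)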

Fig. 3.11 (a) Unsupervised classification (b) Clustering algorithm
3.10.1 ADVANTAGES AND DISADVANTAGES:
Advantages:

Unsupervised classification is fairly quick and easy to run. There is no extensive prior
knowledge of area required, but you must be able to identify and label classes after the
classification. The classes are created purely based on spectral information, therefore they are
not as subjective as manual visual interpretation.
Disadvantages:

One of the disadvantages is that the spectral classes do not always correspond to informational
classes. The user also has to spend time interpreting and labelling the classes following the
classification. Spectral properties of classes can also change over time, so you can’t always use
the same class information when moving from one image to another.

3.11 SUPERVISED CLASSIFICATION:
In supervised classification the user or image analyst “supervises” the pixel classification process. The user specifies the various pixel values or spectral signatures that should be associated with each class. This is done by selecting representative sample sites of a known cover type called Training Sites or Areas. The computer algorithm then uses the spectral signatures from these training areas to classify the whole image. Ideally, the classes should not overlap, or should only minimally overlap with other classes (Fig. 3.12).
Fig. 3.12
In ENVI there are four different classification algorithms you can choose from in the supervised classification procedure. They are as follows:
 Maximum Likelihood: Assumes that the statistics for each class in each band are normally distributed and calculates the probability that a given pixel belongs to a specific class. Each pixel is assigned to the class that has the highest probability (that is, the maximum likelihood). This is the default.
 Minimum Distance: Uses the mean vectors for each class and calculates the Euclidean
distance from each unknown pixel to the mean vector for each class. The pixels are
classified to the nearest class.
 Mahalanobis Distance: A direction-sensitive distance classifier that uses statistics for
each class. It is similar to maximum likelihood classification, but it assumes all class
covariances are equal, and is therefore a faster method. All pixels are classified to the
closest training data.
 Spectral Angle Mapper: (SAM) is a physically-based spectral classification that uses
an n-Dimension angle to match pixels to training data. This method determines the
spectral similarity between two spectra by calculating the angle between the spectra and
treating them as vectors in a space with dimensionality equal to the number of bands.
This technique, when used on calibrated reflectance data, is relatively insensitive to
illumination and albedo effects.
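The angle calculation at the heart of SAM can be sketched as follows (NumPy assumed); the reference and unknown spectra are hypothetical six-band reflectance values.

import numpy as np

def spectral_angle(pixel, reference):
    # Angle (in radians) between a pixel spectrum and a reference (training) spectrum,
    # treating both as vectors in n-dimensional band space.
    cos_angle = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

# Hypothetical reflectance spectra over six bands.
reference_vegetation = np.array([0.05, 0.08, 0.04, 0.45, 0.30, 0.20])
unknown_pixel        = np.array([0.06, 0.09, 0.05, 0.40, 0.28, 0.18])

# A small angle means the pixel is spectrally similar to the training spectrum,
# largely independent of overall brightness (illumination and albedo).
print(spectral_angle(unknown_pixel, reference_vegetation))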

3.11.1 WHAT’S THE DIFFERENCE BETWEEN A SUPERVISED AND


UNSUPERVISED IMAGE CLASSIFICATION?
Two major categories of image classification techniques include unsupervised (calculated by
software) and supervised (human-guided) classification.
Unsupervised classification is where the outcomes (groupings of pixels with common
characteristics) are based on the software analysis of an image without the user providing

sample classes. The computer uses techniques to determine which pixels are related and groups
them into classes. The user can specify which algorithm the software will use and the desired
number of output classes but otherwise does not aid in the classification process. However, the
user must have knowledge of the area being classified when the groupings of pixels with
common characteristics produced by the computer have to be related to actual features on the
ground (such as wetlands, developed areas, coniferous forests, etc.).
Supervised classification is based on the idea that a user can select sample pixels in an image
that are representative of specific classes and then direct the image processing software to use
these training sites as references for the classification of all other pixels in the image. Training
sites (also known as testing sets or input classes) are selected based on the knowledge of the
user. The user also sets the bounds for how similar other pixels must be to group them together.
These bounds are often set based on the spectral characteristics of the training area, plus or
minus a certain increment (often based on “brightness” or strength of reflection in specific
spectral bands). The user also designates the number of classes that the image is classified into.
Many analysts use a combination of supervised and unsupervised classification processes to
develop final output analysis and classified maps.

3.12 VEGETATION INDEX:


A Vegetation Index (VI) is a spectral transformation of two or more bands designed to enhance
the contribution of vegetation properties and allow reliable spatial and temporal inter-
comparisons of terrestrial photosynthetic activity and canopy structural variations.
There are many Vegetation Indices (VIs), with many being functionally equivalent. Many of
the indices make use of the inverse relationship between red and near-infrared reflectance
associated with healthy green vegetation. Since the 1960s scientists have used satellite remote
sensing to monitor fluctuations in vegetation at the Earth's surface. Measurements of vegetation attributes include leaf area index (LAI), percent green cover, chlorophyll content, green biomass and absorbed photosynthetically active radiation (APAR).
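As an example of such a transformation, the widely used Normalized Difference Vegetation Index combines the red and near-infrared bands as NDVI = (NIR − Red) / (NIR + Red). A minimal sketch with hypothetical reflectance arrays (NumPy assumed):

import numpy as np

# Hypothetical red and near-infrared reflectance bands (values between 0 and 1).
red = np.array([[0.10, 0.08],
                [0.30, 0.25]])
nir = np.array([[0.50, 0.55],
                [0.32, 0.28]])

# NDVI ranges from -1 to +1; dense, healthy vegetation gives values close to +1.
ndvi = (nir - red) / (nir + red)
print(ndvi)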
3.12.1 Uses:
Vegetation indices have been used to:
 examine climate trends.
 estimate water content of soils remotely.
 monitor drought.
 schedule crop irrigation, crop management.
 monitor evaporation and plant transpiration.
 assess changes in biodiversity.
 classify vegetation.

3.12.2 List of Vegetation Indices:

3.12.2.1 Multispectral Vegetation Indices:


 Simple Ratio.
 Normalized Difference Vegetation Index (NDVI).
 Kauth-Thomas Tasseled Cap Transformation.
 Infrared Index.
 Perpendicular Vegetation Index.
 Greenness Above Bare Soil.
 Moisture Stress Index.
 Leaf Water Content Index (LWCI).
 MidIR Index.
 Soil-Adjusted Vegetation Index (SAVI).
 Modified SAVI.
 Atmospherically Resistant Vegetation Index.
 Soil and Atmospherically Resistant Vegetation Index.
 Enhanced Vegetation Index (EVI).
 New Vegetation Index.
 Aerosol Free Vegetation Index.
 Triangular Vegetation Index.
 Reduced Simple Ratio.
 Visible Atmospherically Resistant Index.
 Normalized Difference Built-Up Index.
 Weighted Difference Vegetation Index (WDVI).
 Fraction of absorbed photosynthetically active radiation (FAPAR).
 Normalized Difference Greenness index (NDGI).

3.12.2.2 Hyperspectral Vegetation Indices:


With the advent of hyperspectral data, vegetation indices have been developed specifically for
hyperspectral data.
 Discrete-Band Normalized Difference Vegetation Index.
 Yellowness Index.
 Photochemical Reflectance Index.
 Discrete-Band Normalized Difference Water Index.
 Red Edge Position Determination.
 Crop Chlorophyll Content Prediction.
 Moment distance index (MDI).

UNIT-IV
Syllabus:
Microwave remote sensing. GIS and basic components, different sources of spatial data, basic
spatial entities, major components of spatial data, Basic classes of map projections and their
properties.

4.1 MICROWAVE REMOTE SENSING:


Remote Sensing is a set of multidisciplinary techniques and methodologies that aim at
obtaining information about the environment through “remote” measurements.

In particular, microwave remote sensing uses electromagnetic radiation with a wavelength


between 1 cm and 1 m (commonly referred to as microwaves) as a measurement tool. Due to
the greater wavelength compared to visible and infrared radiation, microwaves exhibit the
important property of penetrating clouds, fog, and possible ash or powder coverages (for
example, in case of an erupting volcano or a collapsed building). This important property makes
this technique suitable for use in virtually any weather condition or environment.

4.1.1 MICROWAVE REMOTE SENSING SYSTEMS ARE CLASSIFIED INTO TWO


GROUPS:

1) Passive Systems collect the radiation that is naturally emitted by the observed surface. In
fact, objects emit energy at the microwave frequencies, although sometimes in an extremely
small amount. These systems are generally characterized by relatively low spatial resolutions.

2) Active Systems are characterized by the presence of their own source (transmitter) that
“lights up” the observed scene and, therefore, can be used both at night and day, independently
of the presence of sun.

Fig. 4.1

The sensor transmits a (radio) signal in the microwave bandwidth and records the part that is
backscattered by the target towards the sensor itself. The power of the backscattered signal
makes it possible to discriminate between different targets within the scene, while the time between the transmitted and the received signal is used to measure the distance of the target. A system that operates in this way is called RADAR (the name stands for Radio Detection And Ranging), and can provide a “microwave image” of the observed scene.
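The ranging part of this principle is simple to express: the slant range to the target is half the round-trip travel time multiplied by the speed of light. A short sketch (the echo delay below is a hypothetical value):

# Range from a radar echo: the pulse travels to the target and back,
# so the one-way distance is half the round-trip time multiplied by the speed of light.
SPEED_OF_LIGHT = 299_792_458.0          # metres per second

def slant_range(round_trip_time_s):
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A hypothetical echo delay of 5 milliseconds corresponds to roughly 750 km slant range.
print(slant_range(5e-3))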
The most commonly used microwave imaging sensor is the Synthetic Aperture Radar (SAR),
that is a radar system capable of providing high-resolution microwave images. They have
distinctive characteristics compared to common optical images acquired in the visible or
infrared bands; for this reason, radar and optical data can be complementary, as they carry on a
different informative contribution.

It is also important to highlight that the radar images can be obtained and made available to all
the community, especially to those responsible for land management (Ministries and
government agencies such as the Civil Protection authorities, public and local authorities, etc.),
only after a significant (in terms of time and computer resources) processing operation.

4.1.2 Types of Microwave Remote Sensors:


Microwave radiometers: Measure the emittance of EM energy within the microwave region of the EM spectrum, just like thermal IR sensors.
1) Non-imaging RADARs:
A) Altimeters – measure the elevation of the earth’s surface.
B) Scatterometers – detect variations in microwave backscatter from a large area; measure variations in surface roughness, used to estimate ocean wind speed.
2) Imaging RADARs:
A) Synthetic Aperture Radars – map variations in microwave backscatter at fine spatial scales (10 to 50 m), used to create an image; measure variations in surface roughness and surface moisture.
B) Microwave radiometers – measure the emittance of EM energy within the microwave region of the EM spectrum, just like thermal IR sensors.
4.1.3 MICROWAVE MEASUREMENT:

Fig. 4.2

Fig. 4.3
4.2 COMPONENTS OF GIS:
There is almost as much debate over the components of a GIS as there is about its definition.
At the simplest level, a GIS can be viewed as a software package with various tools to enter,
manipulate, analyze and output geographical data. At the other extreme, GIS components include the computer hardware, software, spatial data, data management and analysis procedures, and the people who operate it (Fig. 4.4). If the computer is located on a network, the network can also be considered a component of GIS since it enables data sharing among users.
Hence, GIS is the combination of all these six components organized to automate, manage, and
deliver information through geographic presentation.

Fig. 4.4 Component of GIS adapted from (Goodchild, Longley, Maguire, & Rhind, 2005)

4.2.1 Network:
Today, the most fundamental component of GIS is probably the network. Without the rapid development of IT and the network, no rapid communication or sharing of digital information could occur, except among a small group of people crowded around a computer monitor. Users connected to the Internet can zoom in to parts of the map, or pan to other parts, in their desktop web browser without ever needing to install specialized software or download large amounts of data. The Internet is increasingly integrated into GIS. The rapidly
growing application of Web GIS has the following capabilities:
 Displaying static maps which users can pan or zoom whilst online;

 Creating user-defined maps online which are in turn used to generate reports and new
maps from data on a server;
 Integrating users' local data with data from the Internet;

 Providing data that are kept secure at the server site;

 Providing maps through high-speed intranets within organizations;

 Providing maps and data across the Internet to a global audience.

The remaining five components of GIS are described below.


4.2.2 Hardware components:
The hardware components of a GIS consist of a computer, memory (CPU or workstations), data
storage devices, tape drives or others, scanners, digitizer, plotter, printers, global positioning
system (GPS) units, and other physical components (Fig. 4.5). The user controls the computer
and the peripherals via a visual display unit (VDU) or terminal.
 The disk drive unit provides space for storing map and document data in a digital
format and sends them to the computer.
 The plotter is used to present the results of the data processing.
 A tape or CD/DVD drive is used for storing data or programs.
 A scanner or digitizer is required to convert the analogue data into digital format.

Fig 4.5 Hardware components for GIS

The choice of hardware ranges from personal computers to multi-user supercomputers.
Computers should have essentially an efficient processor to run the software and sufficient
memory to store data. The essential hardware elements for effective GIS operations include:
a) The presence of a processor with sufficient power to run the software
b) Sufficient memory for the storage and backup of large volumes of data
c) A good quality, high resolution color graphics screen or monitor and
d) Data input and output devices, like keyboards, printers and plotters.
4.2.3 Software components:
GIS software components and sub-components have several functional elements to perform
different operations. All GIS software packages provide these functional elements; they differ mainly in their user interfaces.
GIS software functional elements (Components – Sub-components):
Data acquisition/Input – Digitizing; Editing
Data processing and pre-processing – Topology building; Projection transformation; Format conversion; Attribute assignment, etc.
Database management (storage and retrieval) – Data archival; Hierarchical modeling; Relational modeling; Attribute query; Object oriented database
Spatial manipulation and analysis – Measurement operations; Buffering; Overlay operations; Connectivity operations
Product generation: Graphical output and visualization – Scale transformation; Generalization; Topographic maps; Statistical maps
Fig. 4.6 also shows the workflow processes of GIS software, from a procedural perspective, that perform these functional elements, though it is important to view these elements as a continuing process.

Fig. 4.6 (B) Workflow process of GIS - procedural perspective
4.2.3.1 Data input:
Data input is the operation of encoding data and writing them on to the database for the GIS
use. Data input involves data acquisition – i.e. identification and collection of the required data.
It covers all aspects of transforming data captured from existing maps, field observations, and
sensors into a compatible digital format.

Fig: 4.7 Data input for GIS

Data input can be performed in the following stages:
 Acquire or capture spatial data

 Entering spatial and associated attributes of non-spatial data


 Linking the spatial to the non-spatial data using unique identifiers.
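A minimal sketch of these stages using the GeoPandas and pandas libraries (assumed to be available); the file names and the parcel_id identifier are hypothetical.

import geopandas as gpd
import pandas as pd

# Acquire/capture spatial data: read a hypothetical parcel boundary file.
parcels = gpd.read_file("parcels.shp")       # spatial features carrying a 'parcel_id' column

# Enter associated non-spatial attributes from a hypothetical table.
owners = pd.read_csv("owners.csv")           # attribute table with the same 'parcel_id' column

# Link the spatial and non-spatial data using the unique identifier.
parcels_with_attributes = parcels.merge(owners, on="parcel_id")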
4.2.3.2 Data pre-processing/ processing:
Data input in a computer may involve several steps known as pre-processing. The procedure is used to convert a dataset into a format compatible for permanent storage within the GIS database and establishes a consistent system for data recording. Often, a large proportion of GIS data requires some kind of processing and data manipulation to get a coordinated set of data layers. At this stage, the first digital map is constructed. The essential pre-processing
procedures include:

 Format conversion such as geo-referencing with geometric correction and resampling,


data generalization, and reduction (i.e. converting of GPS points into feature classes),
 Error detection and editing, edge matching and tiling,

 Merging of points into lines, and lines into polygons,

 Merging data storage and database management.

 Rectification/registration, interpolation, and photo-interpretation.


4.2.3.3 Data storage and database management:

The key idea to grasp about the GIS software component is the geographic database management system. The data management functions necessary in any GIS facilitate the storage, organization and retrieval of data using a database management system (DBMS). A database is a large, computerized collection of structured data, and a DBMS is the set of computer programs used to organize it: a software package that allows the user to set up, use and maintain a database. All GIS software, regardless of vendor, consists of a DBMS capable of handling, organizing, and integrating spatial data and attribute data.

Fig. 4.8 Data storage and database management system
The data in a DBMS have to be structured and organized in such a way that they can be handled by the computer and understood by the users. It should indicate the way in which data
about the position, linkages (topology), and attributes of geographical elements (such as points,
lines, areas, and more complex entities representing objects on the earth's surface) are structured
and organized. This function provides consistent method of data entry, update, deletion, and
retrieval. An ideal GIS DBMS should provide support for multiple users and multiple databases
to allow efficient updating, minimize redundant information and data independence, security
and integrity.
4.2.3.4 Data analysis and modelling:
This functional element is the most important capability of GIS as far as the user is concerned
and facilitates spatial analysis using spatial and non-spatial attributes. It involves working within
databases to derive new information using several basic and advanced tools. This subsystem
transforms spatial data, for example from one entity type (points, lines and areas) to another, and
performs spatial analysis. Transformation may involve converting of rasters to vectors or vice
versa. This is the distinguishing characteristics of GIS from other types of information systems.
This subsystem allows us for an in-depth study of the topological and geometric properties of
datasets. However, it is the most abused subsystem of a GIS due to a lack of understanding of the nature of the spatial data contained in the subsystem. The operational classes of GIS can be grouped as:
 Retrieval, (re)-classification, and measurement functions.
 Overlay operations involve the combination of two or more datasets.
 Connectivity operations include contiguity, proximity, network, and spread operators.
 Neighborhood functions include search operations, topographic function, and

interpolation.
 Modeling involves simplified representation and prediction of the reality (e.g. a land
use map).
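For instance, a simple buffering and overlay operation can be sketched with the Shapely library (assumed available); the well location, parcel geometry and buffer distance are hypothetical.

from shapely.geometry import Point, Polygon

# Hypothetical well location and land parcel, in projected map units.
well = Point(100.0, 200.0)
parcel = Polygon([(80, 180), (160, 180), (160, 260), (80, 260)])

# Buffering: a 50-unit protection zone around the well.
protection_zone = well.buffer(50.0)

# Overlay (intersection): the part of the parcel that falls inside the zone.
affected_area = parcel.intersection(protection_zone)
print(affected_area.area)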
4.2.4 Data output:
This functional element concerns the ways in which data are displayed and the results of analyses are reported to the users (Fig. 4.9). These output products can be available in a
variety of ways that includes statistical reports, maps, tables, figures, and graphics of various
kinds.

Fig. 4.9 Data output for GIS

The generated product may range from the ephemeral image on a cathode ray tube through
hard copy output drawn on printer or plotter to information recorded on magnetic media in
digital form. There are several professional GIS software packages in the market such as
ArcGIS, ILWIS, ERDAS, IDRISI, MAPINFO, GRASS, and the database systems of Oracle
and dBase.
4.2.5 People: People are the users of a Geographic Information System; they run the GIS software. Hardware and software have seen tremendous development, which has made it easy for people to run GIS software. Computers are also affordable, so more people are using them for GIS tasks. These tasks may range from creating a simple map to performing advanced GIS analysis. People are the main component of a successful GIS.
4.2.6 Methods: For successful GIS operation, a well-designed plan and business operation rules are important. Methods can vary between organizations. Every organization should document its process plan for GIS operation. These documents address a number of questions about the GIS methods: the number of GIS experts required, the GIS software and hardware, the process for storing the data, the type of DBMS (database management system) to use, and more. A well-designed plan will address all of these questions.

4.3 DIFFERENT SOURCES OF SPATIAL DATA:


Most projects begin with a search for base data. This listing will begin with a few sure-fire
sources of data that you can use to build a contextual dataset for a city. We will then work up
to strategies for finding more detailed data. One thing to always keep in mind: since you are
going to invest time in pulling data together to study a place, you should organize your data
well so that you and others can build on what you have done!
Discovering Geographic Data:
The great thing about geographically referenced data is that datasets compiled by independent
agencies can be combined together and will align with each other in a coherent way. In addition
to representing things graphically, GIS data includes attributes that reflect measurements,
classifications or other observations that may be critical to understanding what things are and
how they are related. Discovering these resources can be as easy as doing a web search. Many
cities and government agencies now have sections of their web sites focused on Geographic
Information Systems data. The likelihood of your finding free information about a particular
area is usually proportional to the importance of that area to a government agency, and the
willingness of that agency to make that information available.
Exploring GIS Data:
When you finally find some GIS data, you will want to understand whether the data is suitable
for your purposes or not. Critical aspects of a dataset should be addressed in the data
documentation, or Metadata for example:
 What is the subject of the dataset?
 What time period is represented?
 Who collected the data and why?
You should always be sure to save the metadata for datasets that you download from the web!
Without documentation the data that you download will be of little use later on.
General-Purpose GIS Data Resources:
Every project needs to begin with a study of the overall context. Data such as general shorelines and transportation networks provide this context.
OpenStreetMap – The People's Map:
OpenStreetMap is a project that engages the public in developing digital map data comparable with Google Maps. The great thing about this project is that the data are freely available for applications outside of the web browser -- so, for example, you can download GIS databases of the OpenStreetMap data from the Geofabrik website.

GIS Data from Libraries:

The Tufts Open Geo Portal is an exciting project that lets you search and discover thousands
of datasets from several libraries.
Data from National and International Mapping Agencies:
As our quest for data becomes more specific, the next most reliable source of data are national
and international mapping agencies. Some of these public service agencies are very secretive
or charge exorbitant prices for their data (for example, the British Ordnance Survey and its descendants around the world), which is a shame. But other agencies set a good example. For example:

Elevation Data: The US Federal Government has very good information at a useful level of detail. You can find loads of useful information, including low-cost aerial photography for the entire United States, at The National Map Viewer and Download Tool, which is a new and growing source of information regarding terrain and land cover nationwide, and aerial photography for selected areas.

Bathymetry Data: For U.S. coastal bathymetry, check out The NOAA Estuarine Bathymetry
download site.
A world-wide bathymetry dataset has been compiled by The General Bathymetric Chart of the Oceans (GEBCO). Their data is available for download from the British Oceanographic Data Centre, though this process is a bit convoluted and poorly documented. Users on the GSD network are encouraged to use our local copy of the GEBCO Bathymetry dataset with the Grid View extraction software located in goliath://geo/gebco_worldwide_bathymetry.
A technical tutorial on downloading and transforming elevation data is covered in more detail
on the web page, Digital Elevation Models.
More Global Sources:
A good Source of data for Europe and Africa is found at The United Nations Environment
Program GIS Data Bank.
Also see the The EDENext Data Portal and Links from the DIVA Project
Georeferenced Images:
A vast amount of information about places can be found in the form of digital images, either
scanned maps or digital aerial photographs. Many sources of these are discussed on The GIS Manual page on Geographic Images. Some of these images may have embedded georeferencing information that allows them to be aligned with other data in the GIS. If not, then images may be georeferenced using techniques discussed on the Georeferencing Images page.
Time Series Multispectral Satellite Images:
The NASA Mission to Planet Earth project has produced nicely georeferenced series of
relatively cloud free imagery covering the whole globe. The imagery, which can be downloaded

from The University of Maryland Global Land Cover Facility, is of fairly coarse resolution (30-
60 meter cell size) but contains higher spectral resolution than most imagery, with a channel for
infrared reflectance. These multi-channel images can be tricky to work with, but if you are very
patient, you can meld the various channels into useful graphical products and even do some
classification of land cover and land use change.
State and Provincial Agencies:
The more detailed the information you want, the less likely it is that you will find it for free on
the web. At the state level, you may get lucky and find a site like the Massachusetts GIS site. These days, even city governments are making their data available on the web. Find the GIS section
of the state, city, or county's official web site.
Detailed Municipal Data:
If you have clicked through some of the links provided above, you will have noticed that the
world of geographical data is not well organized. At a world-wide scope, you may find
generalized world-wide data. At a regional level, the data are more specific, but fragmented
across administrative domains -- both in the bureaucratic and the geographic sense. The same pattern
of fragmentation is exacerbated at the local level. The Open Government and Open Data
movements have helped to expose more geographic data of local interest on the web. For a few
examples, see:
 City of Cambridge GIS Data
 City of Boston Open Data Site (Many datasets of geographical interest!)
 City of Boston GIS Data

Compare and Contrast!


For local data, it may be necessary to contact local officials to request data. Studio instructors
should cultivate their local connections well in advance of the start of their studio to try to obtain
detailed local data. Obtaining local data involves a number of steps. First, you should have a
clear idea of the types of layers that are desired (e.g. building footprints, trees, edge of pavement,
property parcels, contours, aerial photography, etc.) Then think about what local agencies might
have these data. Of course, you would look on the web to see if data are available easily -- or to
find contact information of the probable custodians of the data. Then one would write letters or
make phone calls. When requesting data you should be very specific about what you would like
to find. If you simply ask for GIS data you are likely to get a file of zip code boundaries for the
area in question. Of course, you should also ask for metadata! A boiler-plate data request letter can be tailored to meet your purposes.

4.4 BASIC SPATIAL ENTITIES:
To work in a GIS environment, real world observations (objects or events that can be
recorded in 2D or 3D space) need to be reduced to spatial entities. These spatial entities can
be represented in a GIS as a vector data model or a raster data model.

Fig. 4.10 Vector and raster representations of a river feature.


4.4.1 Vector:
Vector features can be decomposed into three different geometric
primitives: points, polylines and polygons.
4.4.1.1 Point:

Fig. 4.11 Three point objects defined by their X and Y coordinate values.
A point is composed of one coordinate pair representing a specific location in a coordinate
system. Points are the most basic geometric primitives having no length or area. By definition
a point can’t be “seen” since it has no area; but this is not practical if such primitives are to
be mapped. So points on a map are represented using symbols that have both area and shape
(e.g. circle, square, plus signs).
We seem capable of interpreting such symbols as points, but there may be instances when
such interpretation may be ambiguous (e.g. is a round symbol delineating the area of a round
feature on the ground such as a large oil storage tank or is it representing the point location
of that tank?).
4.4.1.2 Polyline:

Fig. 4.12 A simple polyline object defined by connected vertices.

A polyline is composed of a sequence of two or more coordinate pairs called vertices. A
vertex is defined by coordinate pairs, just like a point, but what differentiates a vertex from
a point is its explicitly defined relationship with neighboring vertices. A vertex is connected
to at least one other vertex.
Like a point, a true line can’t be seen since it has no area. And like a point, a line is
symbolized using shapes that have a color, width and style (e.g. solid, dashed, dotted, etc…).
Roads and rivers are commonly stored as polylines in a GIS.
4.4.1.3 Polygon:

Fig. 4.13 A simple polygon object defined by an area enclosed by connected vertices.
A polygon is composed of three or more line segments whose starting and ending coordinate
pairs are the same. Sometimes you will see the words lattice or area used in lieu of
‘polygon’. Polygons represent both length (i.e. the perimeter of the area) and area. They also
embody the idea of an inside and an outside; in fact, the area that a polygon encloses is
explicitly defined in a GIS environment. If it isn’t, then you are working with a polyline
feature. If this does not seem intuitive, think of three connected lines defining a triangle: they
can represent three connected road segments (thus polyline features), or they can represent
the grassy strip enclosed by the connected roads (in which case an ‘inside’ is implied thus
defining a polygon).
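These three primitives can be created directly in most GIS programming environments; a minimal sketch using the Shapely library (assumed available), with hypothetical coordinates:

from shapely.geometry import Point, LineString, Polygon

point = Point(2.0, 3.0)                               # a single coordinate pair
polyline = LineString([(0, 0), (2, 3), (5, 4)])       # connected vertices
polygon = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])   # closed ring enclosing an area

# The point has no length or area, the polyline has a length, and the polygon has an area.
print(list(point.coords), polyline.length, polygon.area)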
4.4.2 Raster:

Fig. 4.14 A simple raster object defined by a 10x10 array of cells or pixels.
A raster data model uses an array of cells, or pixels, to represent real-world objects. Raster
datasets are commonly used for representing and managing imagery, surface temperatures,
digital elevation models, and numerous other entities.
A raster can be thought of as a special case of an area object where the area is divided into a
regular grid of cells. But a regularly spaced array of marked points may be a better analogy
since rasters are stored as an array of values where each cell is defined by a single coordinate
pair inside of most GIS environments.

Implicit in a raster data model is a value associated with each cell or pixel. This is in contrast
to a vector model that may or may not have a value associated with the geometric primitive.

4.5 MAJOR COMPONENTS OF SPATIAL DATA:


Spatial data comprise the relative geographic information about the earth and its features. A
pair of latitude and longitude coordinates defines a specific location on earth. Spatial data are
of two types according to the storing technique, namely, raster data and vector data.
Raster data are composed of grid cells identified by row and column. The whole geographic
area is divided into groups of individual cells, which represent an image. Satellite images,
photographs, scanned images, etc., are examples of raster data.

Vector data are composed of points, polylines, and polygons. Wells, houses, etc., are
represented by points. Roads, rivers, streams, etc., are represented by polylines. Villages and
towns are represented by polygons.

4.6 BASIC CLASSES OF MAP PROJECTIONS AND THEIR PROPERTIES:


4.6.1 Types of Map Projections:
Many types of map projections are being used for map making. They are basically classified
into four groups in accordance with the Map Projection Theory or the types of surfaces that are
tangent with the globe. The four categories are:
- Planar, Azimuthal or Zenithal projection
- Conic projection
- Cylindrical projection
- Mathematical or Conventional projection obtained from mathematical calculation.
4.6.1.1 Planar, Azimuthal or Zenithal projection: This type of map projection allows a flat sheet to touch the globe, with the light being cast from certain positions, including the centre of the Earth, the point opposite to the tangent area, and an infinite distance. This group of map
projections can be classified into three types: Gnomonic projection, Stereographic projection
and Orthographic projection.
1. Gnomonic projection:
The Gnomonic projection has its origin of light at the center of the globe. Less than half of the
sphere can be projected onto a finite map. It displays all the great circles as straight lines, and
parallels as curved lines. This type of map projection is not suitable for a large and wide area.
The disadvantage is that it does not maintain equal-area and conformal properties, particularly
in the areas distant from tangent points. However, it is typically used for piloting purposes, such as in navigation and aviation.
2. Stereographic projection:
The Stereographic projection has its origin of light on the globe surface opposite to the tangent
point. The created curved lines will be defined on more than half of the sphere. The meridians

are straight lines adjacent to one another in the central area and become more widely spaced at
the map periphery, while the parallels are circles. Shape is maintained in this type of projection,
making it applicable for aviation mapping.
3. Orthographic projection:
The Orthographic projection originates from the idea that if light is cast in straight, parallel rays past the globe towards a flat sheet that touches the polar regions, the equatorial region or certain areas above the globe’s surface, only a hemisphere of the globe will be depicted. The
scale of orthographic projection is most accurate at the tangent area. The more distant it is from
tangent points, the more errors will occur. This type of map projection is commonly used for
the Earth mapping.
These three types of map projections, however, are different in the position of light sources as
well as the tangent points, which include one at the pole, one on equatorial plane, and one at
diagonal position.
4.6.1.2 Conic projection: This type of projection uses a conic surface to touch the globe when
light is cast. When the cone is unrolled, the meridians will be in semicircle like the ribs of a fan.
The tangent areas of conic projection can be classified as central conical projection or tangent
cone, secant conical projection, and polyconic projection.
1. Central conical projection:
This simple map projection seats a cone over the globe then casts the light with the axis of the
cone overlapping that of a globe at tangent points. Drawing straight lines will create standard
parallel, with a correct scale at the tangent point. The areas distant from tangent points will be
more distorted. This type of projection is applicable for the mapping of a narrow long-shaped
space in east-west direction.
2. Secant conical projection:
The projection uses a conical surface to intersect the surface of a globe, creating two tangent
points and subsequently two parallels. This increases accuracy around the tangent areas. The
projection looks like a tangent cone with one standard parallel, which is a meridian that extends
straight out from the pole. The parallels are circular curves which have the pole as their shared center. The inventors of this popular map projection are Lambert and Albers, who also devised the Lambert conformal conic projection and Albers’ conic equal-area projection, respectively.
3. Polyconic projection:
The projection seats a series of cones over a globe with the axis of each cone lapping over the
axis of a globe, creating parallels in equal number to that of the tangent cones. The parallels are
arcs of circles that are not concentric, but are equally spaced along the central meridian. The
parallels and meridians are curves, except the equator which is a straight line. As both parallels
and meridians are more curved at the periphery, there is possibility that the scale distortion

grows. This type of map projection is commonly used for map-making in an area that extends
in north-south direction.
4.6.1.3 Cylindrical projection: This type of projection uses a cylinder as a tangent surface that
wraps around a globe, or to intersect the globe at certain positions. If the cylinder is unrolled
into a flat sheet, the parallels and meridians will be straight lines that create the right angles
where they intersect each other. The projection displays directions and shapes correctly. The
area close to tangent points will be more accurate. The more distant it is from tangent points,
the more distortion will be shown. This type of projection is typically used to map the world in
particular areas between 80 degrees north and 80 degrees south latitudes.
The cylindrical projection is classified into three types:
1. Cylindrical equal area projection:
The projection places a cylinder to touch a globe at normal positions. All the parallels and
meridians are straight lines crossing each other at the right angles. Every parallel is in the same
length as the equator on the globe. It is widely known as Lambert’s cylindrical equal area
projection.
2. Gall’s stereographic cylindrical projection:
Gall invented this type of map projection by using a cylinder to intersect the globe at the 45th
parallel north and south, resulting in less distortion around both poles. Parallels and meridians
are all straight lines intersecting each other at right angles. The parallel spacing increases in the
areas closer to the poles.
3. Mercator projection:
Mercator invented this type of projection in the 16th Century and it has been commonly used
ever since. This projection uses a cylinder to touch a globe at the equator plane and cast the
light for meridians and parallels to appear on cylindrical surface. Meridians are straight lines
and equally spaced, while parallels are also straight lines but their spacing increases as they get
closer to the poles.
Shapes are represented more accurately in tangent point areas. However, the closer to the poles,
the more distortion occurs. Therefore, it is not typically used to make a map in areas above 80
degrees north latitude and below 80 degrees south latitude.
The Mercator projection is being applied in varying patterns, such as by taking a cylinder to
touch a globe with the axis of cylinder intersecting that of the globe at the right angle, leaving
the cylinder to touch any single meridian. By that way, a central Meridian is created. When the
cylinder is unrolled, the area adjacent to the central meridian will have constant scales. This
type of projection is called Transverse Mercator projection, which is used in the making of
Thailand’s geographic map.

4.6.1.4 Mathematical or Conventional projection:
1. Mollweide homolographic projection:
This type of projection is commonly used to display different parts of the Earth. It maintains
area around the central meridian. The equator is a straight horizontal line intersecting the central
meridian at a right angle. Other meridians are curved lines, while other parallels are straight
lines. This map projection was initiated by Karl B. Mollweide in 1805. Its disadvantage is the
distortion at the Earth’s polar regions. However, there is more scale accuracy in the equatorial
regions. The projection is ideal for making global maps.
2. Sinusoidal projection or Samson Flamsteed projection:
All the parallels are straight lines perpendicular to a central meridian, while other lines are
curved like those in the Mollweide projection. The values of sine curves are used to create
meridians, making the meridian spacing wider than that of the Mollweide projection. The
Sinusoidal projection is typically used for map making of the equatorial regions such as in South
America and Africa.
3. Homolosine projection:
This type of equal-area projection is a combination of the Homolographic and the Sinusoidal.
Normally, the Sinusoidal projection is applied between the 40 degrees south and 40 degrees
north latitudes, grafted to the Homolographic in the areas out of the above mentioned range. As
the two projections cannot match perfectly, small kinks are seen on the meridians where the two projections join.

Fig. 4.15 Planar, conic and cylindrical projections

UNIT-V
Syllabus:
Methods of data input into GIS, Data editing, spatial data models and structures, Attribute data
management, integrating data (map overlay) in GIS, Application of remote sensing and GIS for
the management of land and water resources.

5.1 METHODS OF DATA INPUT INTO GIS:

Data input is the method of encoding data in computer-readable form and capturing the data in a GIS database. Data entry is usually the main hurdle in applying GIS, and the initial cost of creating a database is usually high. Generally, two types of data are entered into GIS systems: spatial data and non-spatial data.
Four types of data entry methods are commonly used in GIS. These are –
1. Keyboard entry.
2. Manual digitizing.
3. Automatic digitizing.
4. Conversion of digital data files.

1. KEYBOARD ENTRY: This method is also known as the keyboard encoding method. In most cases attribute data are input by keyboard, but spatial data are rarely entered this way. Both spatial and attribute data are inserted into the GIS system using the keyboard terminal of the computer.
Advantages: i) Very easy to use.
ii) Both spatial and attribute data can input.
iii) Most precise and accurate technique compared to others.

Disadvantages: i) This method becomes difficult to perform when number of entries is huge.

ii) This method is rarely used for storing spatial data.


2. MANUAL DIGITIZING: This is one of the most common and widely used techniques for inputting spatial data from maps into GIS. A digital base map is collected and georeferenced using GIS software. After completing the georeferencing process, all features of the map are digitized using a computer mouse; this method is called on-screen digitizing. Alternatively, the digitizer uses a puck (a mouse-like tool) or cursor to trace the points, lines and polygons of a hard copy map; this method is known as hard copy digitizing. The purpose of digitization is to capture the coordinates in the computer.


Advantages: i) Ability to copy the map correctly even when the source map is in poor condition.
ii) It has the ability to easily register and update existing data.
Disadvantages: i) Computers are at greater risk of errors when interpreting poor

quality images or information on maps.

ii) Digitization accuracy depends on the efficiency of the digitizer.

3. AUTOMATIC DIGITIZING: A variety of scanning devices exist for automatic digitizing. In this method, a digital image of the map is produced by moving an electronic detector across the surface of the map. This is a faster way of data entry into GIS compared to the other processes. Two types of scanners are mainly used in this process: the drum scanner and the flat-bed scanner.
Advantages: i) Fastest data entry method.
ii) Reduces data entry time.
Disadvantages: i) It is an expensive technique because scanners are very costly.
ii) Scanned data requires some amount of manual editing to create a
clean data layer.
4. CONVERSION OF DIGITAL DATA FILES: Over the last few years, this process has
become popular for data input. There are many government organizations and private companies on the market that prepare and sell digital data files, often in a format that can be read directly into a GIS.
These are the main data entry methods used in GIS. The choice of a specific data input
technology depends on various factors such as how data is collected, what accuracy is required
in the output, resources and available time, project cost, etc.

5.2 DATA EDITING:


The process of data encoding is so complex that an error free data input is next to impossible.
Data may have errors derived from the original source data or may be during encoding process.
There may be errors in co-ordinate data as well as inaccuracies and uncertainties in attribute
data. However, good practice in GIS involves continuous management of data quality, and it is
normal at this stage in the data stream to make special provision for the identification and
correction of errors. It is better to intercept errors before they contaminate the GIS database and
go on to infect (propagate) the higher levels of information that are generated. The process is
known as data editing or ‘cleaning’. Data editing includes – detection and correction of errors;
re-projection, transformation and generalization; and edge matching and rubber sheeting.
5.2.1 Detecting and correcting errors: Errors in input data may derive from three main
sources: errors in the source data; errors introduced during encoding; and errors propagated
during data transfer and conversion. Errors in source data may be difficult to identify. For
example, there may be subtle errors in a paper map source used for digitizing because of the
methods used by particular surveyors, or there may be printing errors in paper based records
used as source data. During encoding a range of errors can be introduced. During keyboard
encoding it is easy for an operator to make a typing mistake; during digitizing an operator may

encode the wrong line; and folds and stains can easily be scanned and mistaken for real
geographical features. During data transfer, conversion of data between different formats
required by different packages may lead to a loss of data. Errors in attribute data are relatively
easy to spot and may be identified using manual comparison with the original data. For example,
a forest area can be wrongly identified as agricultural land or if a railway line has been
erroneously digitized as a road, then the attribute database may be corrected accordingly.
Various methods, in addition to manual comparison, exist for the correction of attribute errors.
Errors in spatial data are often more difficult to identify and correct than errors in attribute data.
These errors take many forms, depending on the data model being used (vector or raster) and
the method of data capture. There is a possibility that certain types of error can help to identify
other problems with encoded data. For example, in an area data layer ‘dead-end nodes’ might
indicate missing lines, overshoots or undershoots.
The user can look for these features to direct editing rather than having to examine the whole
map. Most GIS packages will provide a suite of editing tools for the identification and removal
of errors in vector data.

Fig. 5.1 Examples of spatial errors


Corrections can be done interactively by the operator ‘on-screen’, or automatically by the GIS
software. However, visual comparison of the digitized data against the source document, either
on paper or on the computer screen, is a good starting point. This will reveal obvious omissions,
duplications and erroneous additions. Systematic errors such as overshoots in digitized lines
can be corrected automatically by some digitizing software, and it is important for data to be
absolutely correct if topology is to be created for a vector data set.

Table 5.1 Common spatial errors

Automatic corrections can save many hours of work but need to be used with care as incorrectly
specified tolerances may miss some errors or correct ‘errors’ that never existed in the first place.
Errors will also be present in raster data. In common with vector data, missing entities and noise
are particular problems. Data for some areas may be difficult to collect, owing to environmental
or cultural obstacles. Similarly, it may be difficult to get clear images of vegetation cover in an
area during a rainy season using certain sensors. Noise may be inadvertently added to the data,
either when they were first collected or during processing. This noise often shows up as
scattered pixels whose attributes do not conform to those of neighboring pixels. For example,
an individual pixel representing water may be seen in a large area of forest. While this may be
correct, it could also be the result of noise and needs to be checked. This form of error may be
removed by filtering. Filtering involves passing a filter (a small grid of pixels specified by the
user-often a 3 × 3 pixel square is used) over the noisy data set and recalculating the value of the
central (target) pixel as a function of all the pixel values within the filter. This technique needs
to be used with care as genuine features in the data can be lost if too large a filter is used.
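A minimal sketch of such a filter is shown below, using the 3 × 3 median filter from SciPy as one common choice of neighbourhood function; the raster is a hypothetical land-cover grid containing a single noisy pixel.

import numpy as np
from scipy.ndimage import median_filter

# Hypothetical land-cover raster: a forest block (class 1) with one isolated "water" pixel (class 2).
raster = np.ones((7, 7), dtype=np.int32)
raster[3, 3] = 2

# Pass a 3x3 filter over the raster; each cell is recalculated from its neighbourhood,
# so the isolated pixel is replaced by the surrounding forest class.
cleaned = median_filter(raster, size=3)
print(cleaned[3, 3])   # -> 1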
5.2.2 Re-projection, transformation and generalization: Once spatial and attribute data have
been encoded and edited, it may be necessary to process the data geometrically in order to
provide a common framework of reference. The scale and resolution of the source data are also
important and need to be taken into account when combining data from a range of sources into
a final integrated database.
Data derived from maps drawn on different projections will need to be converted to a common
projection system before they can be combined or analyzed. If not re-projected, data derived
from a source map drawn using one projection will not plot in the same location as data derived
from another source map using a different projection system. For example, if a coastline is
digitized from a navigation chart drawn in the Mercator projection (cylindrical) and the internal
state boundaries of the country are digitized from a map drawn using the Alber’s Equal Area
(conic) projection, then the state boundaries along the coast will not plot directly on top of the
coastline. In this case they will be offset and will need to be re-projected into a common
projection system before being combined.

Data derived from different sources may also be referenced using different co-ordinate systems.
The grid systems used may have different origins, different units of measurement or different
orientation. If so, it will be necessary to transform the co-ordinates of each of the input data sets
onto a common grid system. This is quite easily done and involves linear mathematical
transformations.
Some of the other methods commonly used are:
• Translation and scaling: One data set may be referenced in 1-metre co-ordinates while another
is referenced in 10-metre co-ordinates. If a common grid system of 1-metre coordinates is
required, then this is simply a case of multiplying the co-ordinates in the 10-metre data set by a factor of 10.
• Creating a common origin: If two data sets use the same co-ordinate resolution but do not
share the same origin, then the origin of one of the data sets may be shifted in line with the other
simply by adding the difference between the two origins (dx, dy) to its co-ordinates.
• Rotation: Map co-ordinates may be rotated using simple trigonometry to fit one or more data
sets onto a grid of common orientation.
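A minimal sketch of these three transformations with NumPy (assumed available); the coordinates, offsets and rotation angle below are hypothetical.

import numpy as np

coords = np.array([[12.0, 8.0],
                   [15.0, 11.0]])                 # hypothetical points in 10-metre units

scaled = coords * 10.0                            # scaling: 10-metre units to 1-metre units
shifted = scaled + np.array([500.0, 1200.0])      # common origin: add the offset (dx, dy)

theta = np.radians(30.0)                          # rotation: align the grid orientation
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
rotated = shifted @ rotation.T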
Data may be derived from maps of different scales. The accuracy of the output from a GIS
analysis can only be as good as the worst input data. Thus, if source maps of widely differing
scales are to be used together, data derived from larger-scale mapping should be generalized to
be comparable with the data derived from smaller-scale maps. This will also save processing
time and disk space by avoiding the storage of unnecessary detail. Data derived from large-scale
sources can be generalized once they have been input to the GIS. Routines exist in most vector
GIS packages for weeding out unnecessary points from digitized lines such that the basic shape
of the line is preserved. The simplest techniques for generalization delete points along a line at
a fixed interval (for example, every third point).
These techniques have the disadvantage that the shape of features may not be preserved. Most
other methods are based on the Douglas-Peucker algorithm. This involves the following stages:
i. Joining the start and end nodes of a line with a straight line.
ii. Examining the perpendicular distance from this straight line to individual vertices along the
digitized line.
iii. Discarding points within a certain threshold distance of the straight line.
iv. Moving the straight line to join the start node with the point on the digitized line that was
the greatest distance away from the straight line.
v. Repeating the process until there are no points left which are closer than the threshold
distance.
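A compact recursive sketch of the Douglas-Peucker idea is given below (Python). The sample line and threshold are invented for illustration, and the recursive form shown is the commonly described variant of the stages listed above.

```python
import math

def perp_distance(p, a, b):
    """Perpendicular distance from point p to the straight line joining a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(line, threshold):
    """Keep only the points needed to preserve the basic shape of the digitized line."""
    if len(line) < 3:
        return line
    # find the vertex farthest from the straight line joining the start and end nodes
    dists = [perp_distance(p, line[0], line[-1]) for p in line[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i_max - 1] <= threshold:
        return [line[0], line[-1]]            # all intermediate points are discarded
    # otherwise split at the farthest point and repeat the process on both halves
    left = douglas_peucker(line[:i_max + 1], threshold)
    right = douglas_peucker(line[i_max:], threshold)
    return left[:-1] + right

line = [(0, 0), (1, 0.2), (2, 1.5), (3, 0.1), (4, 0)]   # hypothetical digitized line
print(douglas_peucker(line, threshold=0.5))
```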

Fig. 5.2 Different forms of generalization

When it is necessary to generalize raster data the most common method employed is to
aggregate or amalgamate cells with the same attribute values. This approach results in a loss of
detail which is often very severe. A more sympathetic approach is to use a filtering algorithm.
If the main motivation for generalization is to save storage space, then, rather than resorting to
one of the two techniques outlined above, it may be better to use an appropriate data compaction
technique as this will result in a volume reduction without any loss in detail.
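As a hedged illustration of one common variant of cell aggregation, the sketch below (Python with NumPy; the 4 × 4 grid and the 2 × 2 block size are invented) amalgamates each block of cells into a single cell holding the most frequent attribute value in that block, which makes the loss of detail explicit.

```python
import numpy as np
from collections import Counter

grid = np.array([[1, 1, 2, 2],
                 [1, 3, 2, 2],
                 [4, 4, 5, 5],
                 [4, 4, 5, 1]])          # hypothetical attribute values

def aggregate(grid, block=2):
    """Generalize a raster by replacing each block of cells with its most common value."""
    rows, cols = grid.shape
    out = np.zeros((rows // block, cols // block), dtype=grid.dtype)
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            values = grid[i:i + block, j:j + block].ravel()
            out[i // block, j // block] = Counter(values).most_common(1)[0][0]
    return out

print(aggregate(grid))   # 2 x 2 generalized grid; note the loss of detail
```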
5.2.3 Edge Matching and Rubber Sheeting: When a study area extends across two or more
map sheets small differences or mismatches between adjacent map sheets may need to be
resolved.
Normally, each map sheet would be digitized separately and then the adjacent sheets joined
after editing, re-projection, transformation and generalization. The joining process is known as
edge matching and involves three basic steps.
First, mismatches at sheet boundaries must be resolved. Commonly, lines and polygon
boundaries that straddle the edges of adjacent map sheets do not meet up when the maps are
joined together. These must be joined together to complete features and ensure topologically
correct data. More serious problems can occur when classification methods vary between map
sheets. For example, different soil scientists may interpret the pattern and type of soils
differently, leading to serious differences on adjacent map sheets. This may require quite radical
reclassification and reinterpretation to attempt a smooth join between sheets. This problem may
also be seen in maps derived from multiple satellite images. If the satellite images were taken
at different times of the day and under different weather and seasonal conditions then the

classification of the composite image may produce artificial differences where images meet.
These can be seen as clear straight lines at the sheet edges.

Fig. 5.3 Example of edge matching.


Second, for use as a vector data layer, topology must be rebuilt as new lines and polygons have
been created from the segments that lie across map sheets. This process can be automated, but
problems may occur due to the tolerances used. If the tolerance is too large, small edge polygons
may be lost; if it is too small, lines and polygon boundaries may remain unjoined.
Finally, redundant map sheet boundary lines are deleted or dissolved. Note that although some
quasi-automatic scanning edge matching is available, in practice the presence of anomalies in
the data produced can require considerable human input to the process. Certain data sources
may give rise to internal distortions within individual map sheets. This is especially true for
data derived from aerial photography as the movement of the aircraft and distortion caused by
the camera lens can cause internal inaccuracies in the location of features within the image.
These inaccuracies may remain even after transformation and re-projection. These problems
can be rectified through a process known as rubber sheeting (or conflation). Rubber sheeting
involves stretching the map in various directions as if it were drawn on a rubber sheet. Objects
on the map that are accurately placed are ‘tacked down’ and kept still while others that are in
the wrong location or have the wrong shape are stretched to fit with the control points. These
control points are fixed features that may be easily identified on the ground and on the image.
Their true co-ordinates may be determined from a map covering the same area or from field
observations using GPS. Distinctive buildings, road or stream intersections, peaks or coastal
headlands may be useful control points. Fig. 5.4 illustrates the process of rubber sheeting. This
technique may also be used for re-projection where details of the base projection used in the
source data are lacking. Difficulties associated with this technique include the lack of suitable
control points and the processing time required for large and complex data sets. With too few
control points the process of rubber sheeting is insufficiently controlled over much of the map
sheet and may lead to unrealistic distortion in some areas.
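Full rubber sheeting applies local, piecewise adjustments between control points; the simplified sketch below (Python with NumPy, hypothetical control-point coordinates) fits only a single global affine transformation by least squares, but it illustrates how control points whose true positions are known from maps or GPS drive the correction of the remaining data.

```python
import numpy as np

# hypothetical control points: (x, y) as digitized -> true (X, Y) from a map or GPS survey
source = np.array([(10.0, 12.0), (85.0, 15.0), (50.0, 78.0), (90.0, 80.0)])
target = np.array([(11.2, 11.5), (86.0, 16.1), (49.0, 79.3), (91.5, 79.8)])

# solve target = [x, y, 1] @ A for a 3 x 2 affine matrix A by least squares
ones = np.ones((len(source), 1))
design = np.hstack([source, ones])
A, *_ = np.linalg.lstsq(design, target, rcond=None)

def transform(points):
    """Apply the fitted transformation to any other point in the data set."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    return pts @ A

print(transform([(30.0, 40.0)]))   # corrected position of a non-control point
```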

5.2.4 Geocoding address data: Geocoding is the process of converting an address into a point
location. Since addresses are an important component of many spatial data sets, geocoding
techniques have wide applicability during the encoding and preparation of data for analysis.

Fig. 5.4 Example of rubber sheeting


During geocoding the address itself, a postcode or another non-geographic descriptor (such as
place name, land owner or land parcel reference number) is used to determine the geographical
co-ordinates of a location. UK postcodes can be geocoded with an Ordnance Survey grid
reference. Several products are available that contain a single data record for each of the 1.6
million postcodes in the UK. In these files, each data record contains the OS Grid Reference
and local government ward codes for the first address in each postcode. Many GIS software
products can geocode US addresses, using the address, zip code or even place names. Address
matching is the process of geocoding street addresses to a street network. Locations are
determined based on address ranges stored for each street segment. Geocoding can be affected
by the quality of the data. Address data are frequently inconsistent: place names may be spelt
incorrectly, addresses may be written in different formats and different abbreviations exist for
words that appear frequently in addresses. For these reasons, the use of standards for address
data is particularly relevant to geocoding.
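A toy sketch of the address-range interpolation behind address matching is shown below (Python; the street segments, ranges and coordinates are invented). Real geocoders additionally standardize spellings, abbreviations and formats before matching.

```python
# hypothetical street segments: name, (low, high) address range, start and end coordinates
segments = [
    {"street": "MAIN ST", "range": (100, 198), "start": (0.0, 0.0), "end": (100.0, 0.0)},
    {"street": "MAIN ST", "range": (200, 298), "start": (100.0, 0.0), "end": (200.0, 10.0)},
]

def geocode(street, number):
    """Interpolate a point location along the segment whose address range contains the number."""
    street = street.strip().upper()                      # crude standardization
    for seg in segments:
        low, high = seg["range"]
        if seg["street"] == street and low <= number <= high:
            f = (number - low) / (high - low)            # relative position along the segment
            (x1, y1), (x2, y2) = seg["start"], seg["end"]
            return (x1 + f * (x2 - x1), y1 + f * (y2 - y1))
    return None

print(geocode("Main St", 150))   # roughly halfway along the first segment
```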

5.3 SPATIAL DATA MODELS AND STRUCTURES:


5.3.1 Spatial data models:
Let us first discuss the concept of a data model. A data model is, basically, a conceptual
representation of the data structures in a database, where data structures comprise data objects,
the relationships between data objects and the rules which regulate operations on the objects. In
other words, a data model represents a set of rules or guidelines which are used to convert
real world features into digitally and logically represented spatial objects. In GIS, data models
comprise the rules which are essential to define what is in an operational GIS and its supporting
system. The data model is the core of any GIS, giving a set of constructs for describing and
representing selected aspects of the real world in a computer.
You have already read that in GIS data models, all real world features are represented as points,
lines or arcs and polygons. Data modellers often use multiple models during the representation
of real world in a GIS environment (Fig. 5.5). First is reality, which consists of real world
phenomena such as natural and man-made features. Other three stages are conceptual, logical
and physical models. The conceptual model is the process of developing a graphical
representation from the real world. It determines the aspects of the real world to include and
exclude from the model and the level of detail to model each aspect. It is human-oriented and
partially structured. Logical model is the representation of reality in the form of diagrams and
lists. It has an implementation-oriented approach. Physical model presents the actual
implementation in a GIS environment and comprises tables which are stored as databases.
Physical model has specific implementation approach.
Geospatial data is a numerical representation which describes and supports the analysis of real
world features in GIS. A geospatial database is dynamic rather than static and supports a range
of functions such as organizing, storing, processing, analyzing and visualizing spatial data.
Geospatial data depicts the real world in two basic models: the object-based model
and the field-based model, as shown in Fig. 5.6.

Fig. 5.5 Stages of processing relevant to GIS data models

Fig. 5.6 Illustration representing an outline model
Object-Based Model: An object is a spatial feature with characteristics such as a spatial
boundary, application relevance and a feature description (attributes). Spatial objects represent
discrete features with well-defined or identifiable boundaries, for example, buildings, parks,
forest lands, geomorphological boundaries, soil types, etc. In this model, data can be obtained
by field surveying methods (chain-tape, theodolite and total station surveying, GPS/DGPS
survey) or laboratory methods (aerial photo interpretation, remote sensing image analysis and
on-screen digitization). Depending on the nature of the spatial objects we may represent them as
graphical elements of points, lines and polygons.
Field-Based Model: Spatial phenomena are real world features that vary continuously over
space with no specific boundary. Data for spatial phenomena may be organized as fields which
are obtained from direct or indirect sources. Direct data sources include aerial photos, remote
sensing imagery, scanning of hard copy maps, and field investigations made at selected sample
locations. We can also generate data indirectly by applying mathematical functions such as
interpolation, sampling or reclassification to the values at selected sample locations. For example,
a Digital Elevation Model (DEM) can be generated from
topographic data such as spot heights and contours that are usually obtained by indirect
measurements.

Note: The Digital Elevation Model (DEM) consists of an array of uniformly spaced elevation data. A DEM is
point based but it can be easily converted to raster data by placing each elevation point at the center of a cell.
A spatial database may be organized using either the object-based model or the field-based
model. In object-based databases, the spatial units are discrete objects which can be obtained
from field-based data by means of object recognition and mathematical interpolation. In the
object-based model, spatial data is mostly represented in the form of coordinate lists (i.e. vector
lines) and the representation is generally called the vector data model. When a spatial phenomena
database is structured on the field-based model in the form of a grid of square or rectangular cells,
the representation is generally called the raster data model. A geospatial database has two distinct
components: locations and attributes. Geographical features in the real world are very difficult to
capture exhaustively and may require large databases. GIS organizes reality through data
models. Each model tends to fit certain types of data and applications better than others.
All spatial data models fall into two basic categories: raster and vector.
Let us now discuss in brief about these two types of models.
5.3.1.1 Raster Data Models:
The raster data model is composed of a regular grid of cells in a specific sequence, and each cell
within the grid holds data. The conventional sequence is row by row, usually starting from the
top left corner. In this model the basic building block is the cell. Geographic features are
represented by cell positions within the grid, and every location corresponds to a cell.
Each cell contains a single value and is independently addressed with the value of an attribute.
One set of cells and associated values is a layer, and cells are arranged in layers. A data set can be
composed of many layers covering the same geographical area, e.g., water, paddy, forest,
cashew (Fig. 5.7). The representation of points, lines and polygons in grid format is presented in
Fig. 5.8. The raster model, which is most often used to represent continuously varying phenomena
such as elevation or climate, is also used to store pictures such as satellite images and
aircraft-based images. A raster image comprises a collection of grid cells rather like a scanned
map or photo.
5.3.1.2 Vector Data Models:
Vector data model comprises discrete features. Features can be discrete locations or events
(points), lines, or areas (polygons). This model uses the geometric objects of point, line and
polygon (Fig. 5.9). In the vector model, the point is the fundamental object. A point represents
anything that can be described as a discrete x, y location (e.g., hospital, temple, well, etc.). A line
or polyline (sequence of lines) is created by connecting a sequence of points. End points are
usually called nodes and the intermediate points are termed vertices. If we know the coordinates
of the nodes and vertices of a line or polyline we can compute its length. Lines are used to
represent features that are linear in nature, e.g., stream, rail, road, etc. A polygon is defined in
this model by a closed set of lines or polylines.

Fig. 5.7 Illustration of raster data; (a) raster grid matrix with their cell location and
coordinates, and (b) raster grid and its attribute table

Fig. 5.8 Representation of raster gird format; (a) point (cell), line (sequence of cells), and
polygon (zone of cells) features and (b) no data cells (black in color)

Areas are often referred to as polygons. A polygon can be represented by a sequence of nodes
where the last node is equal to the first node. Polygons, or areas identified as closed sets of lines,
are used to define features such as rock type, land use, administrative boundaries, etc.
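As a small illustration of this point and line representation, the following sketch (Python, hypothetical coordinates in metres) computes the length of a polyline by summing the distances between its successive nodes and vertices.

```python
import math

# hypothetical polyline: start node, two vertices, end node (x, y in metres)
polyline = [(0.0, 0.0), (3.0, 4.0), (6.0, 4.0), (6.0, 8.0)]

def polyline_length(points):
    """Sum the straight-line distances between consecutive points."""
    return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

print(polyline_length(polyline))   # 5 + 3 + 4 = 12 metres
```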

Fig. 5.9 Vector model represents point, line and polygon features
Points, lines and polygons are features which can be designated as a feature class in a geospatial
database. Each feature class pertains to a particular theme such as habitation, transportation,
forest, etc. Feature classes can be structured as layers or themes in the database (Fig. 5.10).
Feature class may be linked to an attribute table. Every individual geographic feature
corresponds to a record (row) in the attribute table (Fig. 5.10).

Fig. 5.10 Various themes organized as individual feature class


The simplest vector data model, which stores and organizes the data without establishing
relationships among the geographic features, is generally called the spaghetti model. In this
model, lines in the database overlap but do not intersect, just like spaghetti on a plate. The
polygon features are defined by lines which have no concept of start and end nodes or
intersection nodes. However, the polygons are hatched or colored manually to represent an
attribute. There is no attribute data attached to the geometry and, therefore, no data analysis is
possible in the spaghetti model (Fig. 5.11).

Fig. 5.11 Vector spaghetti data model; (a) Spaghetti data, (b) cleaned spaghetti data
and (c) polygons in spaghetti data

5.3.1.3 Comparison of Raster and Vector Data Models:


As you know raster and vector data models are important in a GIS. Each one has its own
strength. A comparison between these two types of data models is shown in Table 5.2.
Table 5.2 Comparison between raster and vector data models

5.3.1.4 Advantages and Disadvantages of Raster and Vector Data Models:
The raster and vector models represent two divergent views of the real world and of geospatial
data processing and analysis. Either model can obviously be used to solve different geospatial
problems, but for a given GIS application the geospatial data requirements should first be
determined. In order to understand the relationships between data
representation and analysis in GIS, it is necessary to know the relative advantages and
disadvantages of raster and vector models. Both raster and vector models for storing geospatial
data have unique advantages and disadvantages. It is generally agreed that the raster model is
best suited to integrated GIS analysis for various resource applications. Nowadays most
GIS packages are able to handle both models.
Advantages of Raster Data:
• data structure is simple.
• good for representing continuous surfaces.
• location specific data collection is easy.
• spatial analytical operations are faster.
• different forms of data are available (satellite images, field data, etc.), and
• mathematical modelling and quantitative analysis can be carried out easily due to the
inherent nature of raster images.
Disadvantages of Raster Data:
• data volumes are huge.
• poor representation for points, lines and areas.
• cartographic output quality may be low.
• difficult to effectively represent linear features (depends on the cell resolution). Hence, the
network analysis is difficult to establish.
• coordinate transformation is difficult and sometimes causes distortion of grid cell shape
• suffer from mixed pixel problem and missing or redundant data, and
• raster images generally have only one attribute or characteristic value for a feature or object,
therefore, limited scope to handle the attribute data.
Advantages of Vector Data:
• data structure is more compact.
• data can be represented with good resolution.
• it can clearly describe topology. Hence, good for proximity and network analysis.
• spatial adjustment of the data is easy with the utilization of techniques such as rubber sheeting,
affine, etc.
• graphic output at small as well as large scales gives good accuracy.
• geographic location of entities is accurate.
• updating and generalization of the entities are possible.

• easy handling of attribute data, and
• coordinate transformation techniques such as linear transformation, similarity transformation
and affine transformation could be done easily.
Disadvantages of Vector Data:
• data structures are complex.
• overlay analysis is difficult in processing. Often, this inherently limits the functionality for
large data sets, e.g., a large number of features.
• data collection may be expensive.
• high-resolution drawing, coloring, shading and display may be time consuming and
computationally demanding.
• technology of data preparation is expensive.
• representation of spatial variability is difficult, and
• spatial analysis and filtering within polygons is impossible.

5.3.2 Spatial Data Structure:


Structures that provide the information required for a computer to reconstruct the spatial data
model in digital form are defined as spatial data structures. Many GIS software packages have
specific capabilities for storing and manipulating attribute data in addition to spatial information.
However, the basic spatial data structures in GIS are mainly vector and raster.
5.3.2.1 Raster Data Structure:
Raster or grid data structure refers to the storage of the raster data for data processing and
analysis by the computer. Three commonly used structures are cell-by-cell encoding,
run-length encoding, and the quadtree.
Cell-By-Cell Encoding Data Structure:
This is the simplest raster data structure and is characterized by subdividing a geographic space
into grid cells. Each pixel or grid cell contains a value. A grid matrix and its cell values for a
raster are arranged into a file by row and column. Fig. 5.12 shows the cell-by-cell encoding data
structure. Digital Elevation Models (DEMs) are the best examples for this method of data
structure. In Fig. 5.12, value 1 represents the gray cells and 0 has no data. This cell-by-cell
encoding method can also be used for storage of data in satellite images. Most of satellite images
consist of multispectral bands and each pixel in a satellite image has more than one value.
Mainly three formats such as Band Sequential (BSQ), Band Interleaved by Lines (BIL), and
Band Interleaved by Pixels (BIP) are used to store data in a multiband/multispectral imagery.
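The difference between the BSQ, BIL and BIP orderings can be seen in the following sketch (Python with NumPy; a hypothetical image of 3 bands and 2 × 2 pixels).

```python
import numpy as np

# hypothetical multispectral image: 3 bands, 2 rows, 2 columns
image = np.arange(12).reshape(3, 2, 2)      # shape (band, row, column)

bsq = image.ravel()                          # Band Sequential: all of band 1, then band 2, then band 3
bil = image.transpose(1, 0, 2).ravel()       # Band Interleaved by Line: row 1 of every band, then row 2
bip = image.transpose(1, 2, 0).ravel()       # Band Interleaved by Pixel: all band values of each pixel together

print(bsq)   # 0 1 2 3 | 4 5 6 7 | 8 9 10 11
print(bil)   # 0 1 4 5 8 9 | 2 3 6 7 10 11
print(bip)   # 0 4 8 | 1 5 9 | 2 6 10 | 3 7 11
```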
Run-Length Encoding Data Structure:
Run-Length Encoding (RLE) algorithm was developed to handle the problem that a grid often
contains redundant or missing data.

Fig. 5.12 Cell-by-cell encoding data structure
When the raster data contain a large amount of redundant or missing data, the cell-by-cell
encoding method is not recommended. In the RLE method, adjacent cells along a row with the
same value are treated as a group called a run. If a whole row has only one class, it is stored as
that class once and the same attribute is kept without change. Instead of repeatedly storing the
same value for each cell, the value is stored once together with the number of cells that make up
the run. Fig. 5.13 explains the run-length encoding structure of a polygon. In the figure, the
starting cell and the end cell of each row delimit a group of identical cells, generally called a
run. The RLE data compression
method is used in many GIS packages and in standard image formats.
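A minimal sketch of run-length encoding for a single raster row is given below (Python; the example row is invented).

```python
def run_length_encode(row):
    """Encode a raster row as (value, run length) pairs instead of one value per cell."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [tuple(r) for r in runs]

row = [0, 0, 0, 1, 1, 1, 1, 0, 0]      # hypothetical row: 0 = no data, 1 = feature
print(run_length_encode(row))           # [(0, 3), (1, 4), (0, 2)]
```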

Fig. 5.13 Run-length encoding data structure


Quadtree Data Structure:
To compress the data as well as to save space relative to the original grid, the quadtree data
structure can be used (Fig. 5.13). A quadtree works by recursively dividing a grid into four
quadrants. Any quadrant whose cells do not all share the same attribute value is split again into
four half-size quadrants, and so on until each quadrant is homogeneous or the individual pixel
level is reached. The attribute value is then stored once for each homogeneous quadrant rather
than once for every cell it contains.
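A compact recursive sketch of quadtree subdivision is given below (Python; the 4 × 4 grid is invented). Homogeneous quadrants are stored as single values, while mixed quadrants are subdivided further.

```python
def build_quadtree(grid, r0, c0, size):
    """Return one value for a homogeneous quadrant, else four recursively built sub-quadrants."""
    values = {grid[r][c] for r in range(r0, r0 + size) for c in range(c0, c0 + size)}
    if len(values) == 1 or size == 1:
        return values.pop()                                 # leaf: one value for the whole quadrant
    half = size // 2
    return [build_quadtree(grid, r0, c0, half),             # NW quadrant
            build_quadtree(grid, r0, c0 + half, half),      # NE quadrant
            build_quadtree(grid, r0 + half, c0, half),      # SW quadrant
            build_quadtree(grid, r0 + half, c0 + half, half)]  # SE quadrant

grid = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]                      # hypothetical attribute grid
print(build_quadtree(grid, 0, 0, 4))       # [1, [0, 0, 0, 1], 1, 1]
```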

Fig. 5.13 Quadtree data structure
5.3.2.2 Vector Data Structure:
As you know, the description of geographical phenomena in the form of points, lines or
polygons is called a vector data structure. Vector data structures are now widely used in GIS
and computer cartography. This data structure has an advantage in deriving information from
digitization, and is more exact in representing complex features such as administrative
boundaries, land parcels, etc. In early GIS, vector files were simply lines with only starting and
ending points. A vector file may consist of a few long lines, many short lines, or a mix of the
two. The files are generally written in binary or ASCII (American Standard Code for
Information Interchange) code, a set of codes used to represent alphanumeric characters in
computer data processing. A program must therefore follow each line from one place to another
in the file in order to read the data into the system. Such unstructured vector data are called
cartographic spaghetti. Vector data in the spaghetti data model may not be directly usable for
analysis in a GIS. However, many systems still use this basic data structure because of its
standard format (e.g., a mapping agency's standard linear format).
To express the spatial relationships between features more accurately, the concept of topology
has evolved. Topology can describe the spatial relationships of adjacency, connectivity and
containment between spatial features. Topological data are useful for detecting and correcting
digitizing errors, e.g., two streams that do not connect perfectly at an intersection point.
Topology is also necessary for carrying out some types of spatial analysis such as network
and proximity analysis. There are commonly two data structures used in vector GIS data storage, viz.
topological and non-topological structures.
Let us now discuss about the two types of data structure.
a) Topological Data Structure:
A topological data structure is often referred to as an intelligent data structure because spatial
relationships between geographic features are easily derived when using it. For this reason the
topological vector data structure is important in undertaking complex data analysis. In a
topological data structure, lines cannot cross without a node, whereas in a non-topological data
structure (e.g., spaghetti) lines can cross without nodes.
The arc-node topological data structure is now used in most systems. In the arc-node data
structure, the arc is the basic unit of data storage and is also used when a polygon needs to be
reconstructed. Point data are stored in a separate coordinate file and linked to the arc file. An arc
is a line segment and its structure is given in Fig. 5.14. Nodes are the end points of the line
segment. The arc carries information not only about that particular arc but also about its neighbors in

Fig. 5.14 Topological structure of the arc


geographic space. It includes the arc number of the next connecting arc and the polygon numbers,
i.e. A: the left polygon (PL) and B: the right polygon (PR). Arcs form areas or polygons,
and the polygon identifier number is the key for constructing a polygon. Important topological
vector data structures include Topologically Integrated Geographic Encoding and Referencing
(TIGER) and the coverage data structure.
i) Topologically Integrated Geographic Encoding and Referencing (TIGER):
TIGER is an early application of topology to the preparation of geospatial data, created by the
US Bureau of the Census as an improvement on the Geographic Base File/Dual Independent Map
Encoding (GBF/DIME) data structure. This format was used in the 2000 census by the US
Bureau of the Census. In the TIGER database, points are called 0-cells, lines 1-cells, and areas
2-cells (Fig. 5.15). Each 1-cell represents a directed line which starts at one point and ends at
another, and carries information about the areas on both of its sides. Each 2-cell and 0-cell shares
the information of the 1-cells associated with it. The main advantage of this data structure is that
the user can easily identify an address on either the right or the left side of a street or road.

Fig. 5.15 Topology in TIGER database

ii) Coverage Data Structure:
The coverage data structure was adopted by GIS companies such as ESRI in their software
packages in the 1980s to distinguish GIS from CAD (Computer Aided Design). A coverage is a
topology-based vector data structure that can be a point, line or polygon coverage.
A point is a simple spatial entity which can be represented with topology. The point coverage
data structure contains feature identification numbers (ID) and pairs of x, y coordinates, as for
example A (2, 4) (Fig. 5.16). Data structure of line coverage is represented in Fig. 5.17. The
starting point of the arc is called from node (F-Node) and where it ends to node (T-Node). The
arc-node list represents the x, y coordinates of the nodes and the other points (vertices) that
generate each arc. For example, arc C consists of three line segments comprising F-Node at (7,
2), the T-Node at (2, 6) and vertex at (5, 2). Fig. 5.18 shows the relationship between polygons
and arcs (polygon/arc list), arcs and their left and right polygons (left poly/right poly list), and
the nodes and vertices (arc-coordinate list). Polygon ‘a’ is created with arcs A,B,G,H and I.
Polygon ‘c’ surrounded by polygon ‘a’ is an isolated polygon and consists of only one arc, i.e.
8. ‘o’ is the universal polygon which covers outside the map area. Arc A is a directed line from
node 1 to node 2 and has polygon ‘o’ as the polygon on the left and polygon ‘a’ as right polygon.
The common boundary between two polygons (o and a) is stored in the arccoordinate list once
only, and is not duplicated.
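The lists that make up the arc-node (coverage) structure can be mimicked with simple tables, as in this hedged sketch (Python dictionaries; arc C echoes the coordinates quoted in the text, while the remaining identifiers and values are illustrative).

```python
# arc-coordinate list: F-node first, vertices in between, T-node last
arc_coordinates = {
    "A": [(1.0, 1.0), (4.0, 1.0)],              # illustrative coordinates
    "C": [(7.0, 2.0), (5.0, 2.0), (2.0, 6.0)],  # F-node, vertex, T-node as quoted in the text
}

# arc topology: (from node, to node, left polygon, right polygon)
arc_topology = {
    "A": (1, 2, "o", "a"),                      # 'o' is the universal (outside) polygon
    "C": (3, 4, "a", "b"),                      # hypothetical neighbours
}

# polygon/arc list: which arcs build each polygon
polygon_arcs = {
    "a": ["A", "B", "G", "H", "I"],
}

# which polygon lies to the right of arc A?
print(arc_topology["A"][3])    # -> 'a'
```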

Fig. 5.16 Point coverage data structure

Fig. 5.17 Line coverage data structure

Fig. 5.18 Polygon coverage data structure


b) Non-Topological Data Structure:
A vector data structure that is common among GIS software is the Computer Aided Design
(CAD) data structure. The Drawing Exchange Format (DXF) is used by CAD packages (e.g.,
AutoCAD) for transferring data files. DXF does not support topology and arranges the data as
individual layers. This structure consists of listed elements, not features, defined by strings of
vertices that describe geographic features, e.g., points, lines, or areas. There is considerable
redundancy with this data model since the boundary segment between two
polygons will be stored twice, once for each feature. This format allows the user to draw each
layer using different line symbols, colors and text. In this structure polygons are independent,
and questions about the adjacency of features are difficult to answer. The CAD vector model
lacks the definition of spatial relationships between features that is provided by the topological
data model.
Since the 1990s almost all commercial GIS packages such as ArcGIS, MapInfo and Geomedia
have supported the non-topological data structure. The shape file (.shp) is a standard
non-topological data format used in GIS packages. Unlike the ArcInfo coverage, the geometry of
a shape file is stored in two files with the extensions .shp and .shx: the .shp file stores the feature
geometry and the .shx file maintains the spatial index of the feature geometry. The advantage of
the non-topological data structure, i.e. the shape file, lies in quicker display on the system than
topological data. Many software packages such as ArcGIS and MapInfo use the .shp file format.
Note: A shape file comprises points (pairs of x, y coordinates), lines (series of points) and polygons (series of lines).
There are no files describing the spatial relationships between geometric objects, and polygon boundaries are
duplicated in a shape file.
5.4 ATTRIBUTE DATA MANAGEMENT:
An attribute data management system (ADMS) has to fulfil the following requirements:
1. Attribute data input and management: Attribute data should be stored in the form of
tables. The ADMS should be able to accept, process and sort various types of data
automatically as well as guarantee data security at all times.
2. Attribute data query: The ADMS should provide multiple query schemes so that users
can quickly retrieve the information they require in various applications.
3. Attribute data dynamic updating: The ADMS should support different types of attribute
data updating, such as editing, erasing, deleting and adding, so that the data remain
current, accurate and reliable.
4. Statistics and analysis: The ADMS should provide statistical analysis and trend
prediction functions for various attribute data.
5. Database operation: The ADMS should be simple to learn and convenient to operate,
because users vary from beginners to sophisticated operators.
6. User interface: A friendly user interface, with elements such as a menu bar, pull-down
menus, dialog boxes, pop-up menus and toolbars.

5.4.1 Architecture of ADMS in GIS:

Fig. 5.19 The architecture of GeoStar GIS.


5.4.2 Functions of ADMS:
ADMS should have the following functions:
 Attribute data structure setup,
 Attribute data input,
 Attribute data editing, adding, deleting and similar operations,
 Attribute data processing,
 Attribute data query,
 Statistical analysis and prediction,
 Output,
 Data security maintenance.
5.4.3 Properties of ADMS:
 Perfect: The system should be as complete as possible. This means that it should be able
to implement many functions such as creating a new file, opening a file, creating a
database, querying, modifying attribute data (names, values), database conversion,
mapping, output and so on.
 Standard: The standardization of spatial data structures, data models and spatial data
sharing across multiple databases has become an important issue in the GIS software
industry. Thus, the system to be developed should conform to a common specification,
and the types of spatial objects should be defined as scientifically as possible.
 Advanced: The system should employ current, widely available software and hardware
environments and advanced technologies.
 Compatible: The developed software should run on IBM-compatible PCs and data should
be easily convertible.

 Effective: The software to be developed should save memory, be less time-consuming
and use optimal algorithms.
 Adaptive: The organized data should be sharable and communicable with other GIS and
callable by other GIS software.
 High quality: Input data should be reliable, renewable, current and accurate.
5.4.4 The Flowchart of ADMS in GeoStar:

Fig. 5.20 The designed architecture of ADMS in GeoStar.


Fig. 5.20 shows the designed architecture of ADMS in GeoStar, which consists of 7 modules:
file management, feature type setup, database operation, tabular output, statistical analysis,
database conversion and help. The advantages of this design are:
(1) Open: it allows the user to create client/server desktop applications.
(2) Completely seamless integration: built-in ODBC technology and SQL connectivity serve the
client/server database, which lets GIS users access tables of a relational database management
system based on a client/server architecture.
(3) Modular structure: the system was designed as a three-layer structure, so each module can
easily be added, deleted or moved according to users' requirements.

5.5 INTEGRATING DATA (MAP OVERLAY) IN GIS:

Overlay is a GIS operation that superimposes multiple data sets (representing different themes)
together for the purpose of identifying relationships between them. An overlay creates a

composite map by combining the geometry and attributes of the input data sets. Tools are
available in most GIS software for overlaying both vector and raster data.

Before the use of computers, a similar effect was achieved by Ian McHarg and others by
drawing maps of the same area at the same scale on clear plastic and physically laying them on
top of each other.
5.5.1 Overlay with Vector Data:
Feature overlays from vector data are created when one vector layer (points, lines, or polygons)
is merged with one or more other vector layers covering the same area with points, lines, and/or
polygons. A resultant new layer is created that combines the geometry and the attributes of the
input layers.

An example of overlay with vector data would be taking a watershed layer and laying over it a
layer of counties. The result would show which parts of each watershed are in each county.

Fig. 5.21 Map overlay

5.5.1.1 Polygon Overlay Functions:
Various GIS software packages offer a variety of polygon overlay tools, often with differing
names. Of these, the following three are used most commonly for the widest variety of purposes:
 Intersection, where the result includes all those polygon parts that occur in both input layers
and all other parts are excluded. It is roughly analogous to AND in logic and multiplication
in arithmetic.
 Union, where the result includes all those polygon parts that occur in either A or B (or
both), and is thus the sum of all the parts of both A and B. It differs from identity in that the
individual input layers are no longer identifiable. It is roughly analogous to OR in logic and
addition in arithmetic.
 Subtract, also known as Difference or Erase, where the result includes only those polygon
parts that occur in one layer but not in another. It is roughly analogous to AND NOT in
logic and subtraction in arithmetic.
The remainder are used less often, and in a narrower range of applications. If a tool is not
available, all of these could be derived from the first three in two or three steps.
 Symmetric Difference, also known as Exclusive Or, which includes polygons that occur
in one of the layers but not both. It can be derived as either (A union B) subtract (A intersect
B), or (A subtract B) union (B subtract A). It is roughly analogous to XOR in logic.
 Identity covers the extent of one of the two layers, with the geometry and attributes merged
in the area where they overlap. It can be derived as (A subtract B) union (A intersect B).
 Cover, also known as Update, is similar to union in extent, but in the area where the two
layers overlap, only the geometry and attributes of one of the layers is retained. It is called
"cover" because it looks like one layer is covering the other; it is called "update" because
its most common usage is when the covering layer represents recent changes that need to
replace polygons in the original layer, such as new zoning districts. It can be derived as A
union (B subtract A).
 Clip contains the same overall extent as the intersection, but only retains the geometry and
attributes of one of the input layers. It is most commonly used to trim one layer by a polygon
representing an area of interest for the task. It can be derived as A subtract (A subtract B).
It is important to note that these functions can change the original polygons and lines into new
polygons and lines and their attributes.
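Assuming the Shapely library is available (an assumption; it is not mentioned in the text), the basic overlay geometry can be sketched on two overlapping squares as below. A full GIS overlay additionally merges the attribute tables of the input layers.

```python
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])      # hypothetical layer A
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])      # hypothetical layer B

print(a.intersection(b).area)           # parts in both A and B            -> 4.0
print(a.union(b).area)                  # parts in A or B (or both)        -> 28.0
print(a.difference(b).area)             # subtract / erase: in A, not in B -> 12.0
print(a.symmetric_difference(b).area)   # in A or B, but not both          -> 24.0
```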
5.5.2 Overlay with Raster Data:
Raster overlay involves two or more different sets of data that derive from a common grid. The
separate sets of data are usually given numerical values. These values then are mathematically
merged together to create a new set of values for a single output layer. Raster overlay is often
used to create risk surfaces, sustainability assessments, value assessments, and other
procedures. An example of raster overlay would be to divide the habitat of an endangered
species into a grid, obtain data for multiple factors that affect the habitat, and then create a risk
surface to illustrate which sections of the habitat most need protection.
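A hedged NumPy sketch of a raster overlay producing a simple risk surface is shown below; the factor grids, weights and class breaks are invented for illustration.

```python
import numpy as np

# hypothetical factor grids on a common 3 x 3 raster (values already scaled 0-1)
slope_risk = np.array([[0.2, 0.4, 0.9],
                       [0.1, 0.5, 0.8],
                       [0.0, 0.3, 0.6]])
landuse_risk = np.array([[0.5, 0.5, 0.5],
                         [0.1, 0.1, 0.9],
                         [0.1, 0.9, 0.9]])

# weighted overlay: merge the layers mathematically into one output surface
risk = 0.6 * slope_risk + 0.4 * landuse_risk
print(np.round(risk, 2))

# reclassify into protection-priority classes (1 = low, 3 = high)
priority = np.digitize(risk, bins=[0.3, 0.6]) + 1
print(priority)
```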

5.6 APPLICATION OF REMOTE SENSING AND GIS FOR THE MANAGEMENT OF


LAND AND WATER RESOURCES:
5.6.1 Application of Remote Sensing and GIS in Land Resource Management:
Planning and development of urban areas with infrastructure, utilities, and services has its
legitimate importance and requires extensive and accurate LU/LC classification. Information
on changes in land resource classes, direction, area and pattern of LU/LC classes form a basis
for future planning. It is also essential that this information on LU/LC be available in the form
of maps and statistical data as they are very vital for spatial planning, management and
utilization of land. However, LU/LC classification is a time-consuming and expensive
process. In recent years, the significance of spatial data technologies, especially the
application of remotely sensed data and geographic information systems (GIS) has greatly
increased. Nowadays, remote sensing technology offers a quick and effective
approach to the classification and mapping of LU/LC changes over space and time. The
satellite remote sensing data with their repetitive nature have proved to be quite useful in
mapping LU/LC patterns and changes with time.
Quantifying the anthropogenic or human activity that governs the LU/LC changes has become
a key concept in the town planning profession. A major objective of planning analysis is to
determine how much space and what kind of facilities a community will need for activities, in
order to perform its functions. An inventory of land uses will show the kind and amount of
space used by the urban system.
LU/LC study using remote sensing technology is emerging as a new approach and has become a
crucial basic task for carrying through a series of important works, such as the prediction of
land-use change, prevention and management of natural disasters, protection of the environment
and, most importantly, analysis of the present development and future scope of development of
the nation. In recent years, with the advancement of remote sensing technology and geo-analysis
models, monitoring the status and dynamic change of LU/LC using remotely sensed digital
data has become one of the most rapid, credible and effective methods.
5.6.1.1 Study area:
The study area is Madurai city, Tamil Nadu (Fig. 5.22), one of the famous historical and cultural
cities in India. It is located in South Central Tamil Nadu, is the second largest city after Chennai
and is the headquarters of Madurai District. In 2011, the jurisdiction of the Madurai Corporation
was expanded from 72 wards to 100 wads covering area 151 Sq.Km, dividing into four regions
Zone I, II, III, IV. There has been rapid growth in Madurai from 1967 and it has gradually
increased over the years in Madurai and its surrounding areas. Most of the areas around Madurai

P a g e | 108
are least developed and are in the transformation stage. It extended geographically from 9o50’
North latitude to 10o North latitude and 78o02’ East longitude to 78º12’ East longitude, and
approximately 100 m above MSL. The terrain of the city is gradually sloped from the north to
south and west to east.
The River Vaigai is the prominent physical feature which bisects the city into north and south
zones, with the north zone sloping towards the Vaigai River and the south zone sloping away
from the river. The city became a municipality in 1867 and was upgraded to a corporation in
1971, after 104 years. The corporation limit was extended from 52.18 km² to 151 km² in 2011.
As per the 2011
census the population of the city is 15.35 lakhs. The area has been experiencing remarkable
land cover changes due to urban expansion, population pressure and various economic activities
in the recent years.

Fig. 5.22 Study Area Location

5.6.1.2 Methodology:
5.6.1.2.1 Data:
For this study, Landsat ETM+ (path 143, row 53) images were used (Table 5.3). Landsat images
were downloaded from USGS Earth Resources Observation Systems data center. A base map
of Madurai city was provided by Local Planning Authority of Madurai. The Landsat ETM+
image data consists of eight spectral bands, with the same spatial resolution as the first five
bands of the Landsat TM image. Its 6th and 8th (panchromatic) bands have resolutions of 60 m
and 15 m, respectively. All visible and infrared bands (except the thermal infrared) were

included in the analysis. Remote sensing image processing was performed using ERDAS
Imagine 9.1 software. Landsat data of 1999 and 2006, together with an SOI toposheet, were
selected and used to identify the spatial and temporal changes in the study area during the period
of study.
Table 5.3 LANDSAT Satellite Data used in the study

5.6.1.2.2 Image classification:


In this study, four LU/LC classes were considered: vegetation, built-up land, waste
land, and water area. The classes in the images were decided based on the LU/LC classification
system devised by the National Remote Sensing Agency (NRSA) for Indian conditions. The LU/LC
classes are presented in Table 5.4. For the study area, a supervised classification of the image
was performed using the signature files from an initial unsupervised classification, with the
maximum likelihood rule used as the parametric decision rule. The LU/LC classified maps for
1999 and 2006 were produced from the Landsat images and are given in Fig. 5.23.

Fig. 5.23 LU/LC Classified Images (a) 1999 (b) 2006.


Table 5.4 LU/LC Classification scheme considered for Madurai

5.6.1.3 LULC change analysis:
The LU/LC classification results from 1999 to 2006 are summarized in Table 5.5.
From 1999 to 2006, the built-up area increased by 17.09 %, while open land decreased
by 11.82 %. Fluctuations were observed in the vegetation and water areas due to
seasonal variation in the study area. All these land use changes are closely related to the
development of the regional economy and the population growth of the city. The trend of LU/LC
and urban change in the city is shown in Fig. 5.24.
Table 5.5 Summary of Areas for LU/LC Classes from 1999 to 2006

Fig. 5.24 Comparison of LU/LC from 1999 to 2006


5.6.2 Application of Remote Sensing and GIS in Water Resource Management:
5.6.2.1 Design of water supply reservoirs with insufficient data:
In many regions of the world, particularly in developing countries, water managers are often
faced with the problem of designing a water resources system, e.g. a water supply reservoir for a
city or an irrigation system, where little or no hydro-meteorological information is
available. Many failures of such water supply systems are due to the fact that they were designed
with inadequate hydrological data. In such situations usually an observation system consisting
of one or a few river gauges with equipment for velocity measurements is installed. The data
collected during the design period form the basis for the estimation of the required reservoir
storage capacity, in order to meet the demand.

At the beginning of the design period of the water project (if no hydrological data are available)
a hydrological network (at least one stream gauge) is installed which collects data in the form of
"ground truth" during a short period (the planning period), e.g. between one and three years only.
A mathematical model was developed which connects this ground truth to data obtained from
satellite (Meteosat) imagery. The parameters of the nonlinear mathematical model are calibrated
based on short-term simultaneous satellite and ground truth data. The principle of the technique
is illustrated in Fig. 5.25.

Fig. 5.25 Design of a water supply reservoir with the aid of satellite imagery (Meteosat)
The model works in two consecutive steps:
(a) estimation of monthly precipitation values with the aid of Meteosat infrared data and
(b) transformation of the monthly rainfall volumes into the corresponding runoff values with
the aid of the calibrated model.
The model was applied to the Tano River basin (16 000 km2) in Ghana, West Africa. Fig. 5.26
shows two consecutive infrared Meteosat images of the region of the Tano River in Ghana. The
spatial resolution of Meteosat is 5 km x 5 km, the temporal resolution is 30 minutes. The
relatively coarse spatial resolution allows the application of this technique only for larger
drainage basins, i.e. larger than 5000 km2. The high temporal resolution of Meteosat provides
48 images per day in three spectral channels.

Fig. 5.26 Cloud development, Tano River basin, Ghana, West Africa. Two successive
Meteosat IR images
An example of the model performance is given in Fig. 5.27, which shows the monthly runoff
of the Tano River in three different curves. One represents the observed runoff, the second
shows the runoff which was computed with the rainfall runoff model based on observed rainfall
data (ground truth) and the third curve represents monthly runoff values calculated based on
remote sensing information (Meteosat, IR data).
5.6.2.2 Reservoir sediment monitoring and control:
Dams built to store water for various purposes (e.g. water supply, irrigation, hydropower)
usually undergo sediment processes, i.e. erosion or silting up. It is very important to know the
state of sedimentation within a reservoir in order to either prevent deterioration of the reservoir
due to erosion or to restore the reservoir capacity in case of silting up (e.g. by excavation of
sediments). Both processes, erosion and silting up, are unfavorable for the use of the reservoir:
erosion since it endangers the structure of the dam itself, siltation since it reduces the available
storage capacity considerably. Since a reservoir is a three-dimensional body, the required
knowledge of the reservoir state can be gained only with the aid of monitoring techniques with
a high spatial resolution. Under such conditions remote sensing techniques are very useful, in
this case multi-frequency echo sounding taken from a boat in a multi-temporal fashion, which
allows detection of changes in the sediment conditions. In the example presented here, both
erosion and silting up of the reservoir occur. Fig. 5.28 shows the reservoir Lake Kemnade
in the Ruhr River valley in Germany. Twenty-five cross-sections can be seen, which are
observed periodically by echo sounder from a boat.

Fig. 5.27 Monthly runoff, Tano River basin (16 000 km2), Ghana, West Africa:
observed, computed from observed rainfall and from Meteosat IR data.
Fig. 5.29 shows a cross-section in the upper region of the lake eight years after construction.
We observe siltation in the center and the right hand part of the cross-section, while there is
some minor erosion in the left part of the section. While in the upper region and center part of
the lake siltation is dominant, near the dam and close to the weir erosion is the dominant process.
5.6.2.3 Flood forecasting and control:
In recent years, increasing damage and loss of life due to severe floods has been observed on all
continents of the Earth. This shows that the problem of flood alleviation still needs increasing
attention. Flood warning on the basis of flood forecasts is one way to reduce the problem;
another, and better, way is the reduction of floods with the aid of flood protection reservoirs. For
both purposes, flood warning and the operation of flood protection reservoirs, it is necessary to
have real-time flood forecasts available. The sooner these forecasts are available, the more useful
they are. In order to gain lead time for the forecast it is advisable to compute forecast
flood hydrographs on the basis of rainfall observed in real time. Since the variability of rainfall
in time and space is very high it is advisable to monitor rainfall with the aid of remote sensing
devices having a high resolution in time and space. For this purpose ground-based weather radar
operating on the basis of active microwave information is most useful.

Fig. 5.28 Lake Kemnade with cross-sections

Fig. 5.29 Reservoir cross-section with sediments (Lake Kemnade, Ruhr River, Germany)

Fig. 5.30 shows schematically the acquisition of rainfall information by radar and its
transformation into real time flood forecasts, which in turn may be used for the computation of
an optimum real time reservoir operation strategy. Fig. 5.31 shows two consecutive isohyet
maps of the Günz River
drainage basin in Germany observed by ground-based weather radar. This information has a
high resolution in time and space which can be used in order to compute a forecast flood
hydrograph in real time with the aid of a distributed system rainfall runoff model. With the aid
of observed (by radar) and forecasted rainfall it is possible to compute real time forecast flood
hydrographs. Fig. 5.32 shows such flood forecasts for different probabilities of non-exceedance
in comparison to the (later) observed flood hydrograph. Although such forecasts are by no
means perfect they are still useful for the computation of the optimum reservoir operating
strategy of flood protection reservoirs as shown in Fig. 5.33.

Fig. 5.30 Reservoir operation based on real time flood forecasts with the aid of radar
rainfall measurements

Fig. 5.31 Isohyets for the River Günz drainage basin, Germany, obtained from two
consecutive radar measurements
5.6.2.4 Hydropower scheduling:

In many regions of the world river runoff occurring in spring and summer originates from
snowmelt in mountainous regions. Thus the hydropower production during Spring and Summer
depends to a great extent on the quantity of snow which fell in the mountains during winter and
early spring. If therefore the quantity of snow and its water equivalent is known early during
the year it is possible to make forecasts of the expected runoff in the following months. If the
reservoirs feeding the hydropower plants are large enough, it is possible to optimize hydropower
production by scheduling the releases from the reservoirs to the power plants accordingly. This
technique was used very early, i.e. in the late seventies and early eighties, in Norway
(Østrem et al., 1981). Since most of the high mountain basins in Norway are not forested,
variations in the snow cover can easily be monitored with the aid of satellite data (e.g. NOAA).
During the main snowmelt period, May to July, a forecast of expected river flows to be used in
the power plants is of high interest for proper management of the plants.
5.6.2.5 Irrigation scheduling:
The allocation of water to the various farmers within an irrigation scheme is usually regulated by
certain fixed rules. These rules may allocate certain quantities to certain farmers or allocate
water in proportion to the irrigated area. Such rigid rules may be sub-optimal since they cannot
be adapted to the actual, real-time water demand of the crops according to their present state. It
is better to allocate water either to match crop water requirements or to maximize effectiveness.
In order to allocate water to match crop water requirements it is necessary to know the actual
water demand of the crop in real time. As long as the water supply meets the demand usually
no problems occur. If, however, crop water stress occurs the water allocation is certainly not

optimal. In order to improve the situation in real time it is necessary to detect crop water stress.
In order to do this crop water stress indices have to be defined and the major unknown
parameters in such an index should be detectable with the aid of remote sensing data. The
evapotranspiration of crops under stress is different from crops under normal conditions and
this difference can be detected with the aid of e.g. thermal infrared data.

Fig. 5.32 Flood forecast based on radar rainfall measurement and rainfall forecast
5.6.2.6 Groundwater exploration for water supply purposes:

Direct groundwater exploration or observations with the aid of remote sensing techniques is not
feasible due to the fact that most remote sensing techniques — with the exception of airborne
geophysics and radar — have no penetrating capabilities beyond the uppermost layer, i.e. less
than 1 m. Therefore, the use of remote sensing techniques in groundwater exploration is limited
to being a powerful additional tool to the standard geophysical methods. Therefore, the general
application of remote sensing in hydrogeology lies in the domain of image interpretation, i.e.
qualitative information, which is, however, very useful and may enable the groundwater
explorer to reduce the very expensive conventional techniques considerably. This qualitative or
indirect information, which can be obtained from remote sensing sources is e.g. (a) likely areas
for the existence of groundwater, (b) indicators of the existence of groundwater, (c) indicators

of regions of groundwater recharge and discharge and (d) areas where wells might be drilled.
These indicators are usually based on geologic and geomorphology structures or on multi-
temporal observations of surface water and on transpiring vegetation. Landsat visible and
infrared data are preferred for these purposes, but also other sensors including microwave
sensors are used. In the thermal infrared band temperature changes in multi-temporal imagery
may provide information on groundwater, e.g. areas containing groundwater being warmer than
the environment in certain seasons of the year. Shallow groundwater can be inferred by soil
moisture measurements and by changes in vegetation types and patterns. Groundwater recharge
and discharge areas within drainage basins can be inferred from soils, vegetation and shallow
or perched groundwater. Lineaments detected by Landsat or SPOT imagery are straight to
slightly curving lines formed in many different types of landscape. Many linear features which
are not continuous may be extended or joined during image analysis. It is assumed that lineaments
mark the location of joints and faults, which again are indicators of potential groundwater
resources. Also soil type and changes in vegetation types and patterns in the area may give
certain indications of the potential availability of groundwater. It should be stated, however,
that in the field of groundwater exploration remote sensing information can only add to the
conventional exploration techniques, but certainly cannot replace them.

Fig. 5.33 Optimal release policy for two parallel reservoirs. Flood of February 1970
