Classification of Benthic Features Using WorldView-2 Imagery
On the Cover:
Benthic classification created from the November 2012 water only radiance image
(12NOV26022344-M2AS-052717601090_01_P001_REC.tif) using 11 spectra and the Spectral
Angle Mapper algorithm in ENVI. The original classification was smoothed and generalized using a
3x3 majority filter passed over the data 3 times, and a 5x5 majority filter passed over once.
Prepared by:
Citation:
Table of Contents
On the Cover
List of Tables
List of Figures
Summary
Background
Organization
Methods
   Image Inputs
   Generalized Depth Invariant Index Calculation Procedure
   Models and Scripts
   Scalar Table Recommendation
   Image Classification
Results and Discussion
   Recommendations
Bibliography
Appendix 1: Image Pre-Processing Python Scripts
Appendix 2: ERDAS Imagine Command Sequence
Appendix 3: Depth Invariant Index Worksheet
   Section descriptions
List of Tables
Table 1 Band number/name correspondence

List of Figures
Figure 1 Conceptualization of the depth invariant index calculation
Figure 2 Summary of image bands used to derive depth invariant index
Figure 3 Generalized depth invariant index calculations
Figure 4 ERDAS Imagine AOI creation command sequence
Figure 5 Load and run a spatial model in ERDAS Imagine
Figure 1-1 Organization of Python image pre-processing scripts
Figure 2-1 ERDAS Imagine import and reprojection command sequence
Figure 3-1 Depth Invariant Index calculation worksheet
Summary
The objectives of this effort are to develop and document methods for satellite
mapping of benthic habitats using WorldView-2 imagery for Timor-Leste, as
described in Work Order 20150309_Order_WE-133F-15-SE-0518_3K3 LLC, under
Task 1.6 of Section 3, Required Work. The intent is to review and document
methods for deriving benthic habitat information from WorldView-2 satellite
imagery. The pilot area for evaluation consists of two images from the north
shore of Timor-Leste. Methods are developed using standard image processing
software (i.e. ENVI and ERDAS Imagine), Excel spreadsheets, and Python scripts.
The stated goal is to be able to identify the following habitat types: hard and soft
substrates, the presence of living algae or coral, deep sea (areas too deep to
derive meaningful habitat information from the imagery) and mangrove.
Two widely used methods were chosen as the most feasible approach to develop a
benthic habitat classification in Pacific island waters. These are the derivation of a
series of depth invariant layers, and an image-based supervised classification
procedure. In order to optimize creation of the depth invariant index, a “water-
only” image layer is produced using a Normalized Difference Water Index
(McFeeters, 1996; Xu, 2006). This layer masks and excludes all “non-water”
features, including mangroves. Therefore, mangroves are excluded from any
classification output using the depth invariant index. However, mangroves can be
identified and delineated using the described supervised classification procedure.
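For orientation, a minimal numpy sketch of the McFeeters (1996) index follows. The WorldView-2 band choice (band 3 for green, band 8 for NIR2), the zero threshold, and the function name are illustrative assumptions, not settings taken from the pre-processing scripts described later.

    import numpy as np

    def ndwi_water_mask(green, nir, threshold=0.0):
        """McFeeters (1996) NDWI: (green - NIR) / (green + NIR).
        Pixels with NDWI above the threshold are treated as water."""
        green = green.astype(np.float64)
        nir = nir.astype(np.float64)
        ndwi = (green - nir) / np.maximum(green + nir, 1e-10)
        return ndwi > threshold

    # Example with stand-in data for band 3 (green) and band 8 (NIR2)
    green = np.random.rand(100, 100)
    nir = np.random.rand(100, 100)
    water = ndwi_water_mask(green, nir)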
Background
Numerous authors have proposed and applied the depth invariant index in
nearshore waters as a means to remove the influence of the water column on the
characteristics of bottom feature radiance and reflectance. A sampling of these
includes: Andrefouet, et al. (2003); Blakey, et al. (2015); Ciampilini, et al.
(2015); Deidda and Sanna (2012); Doo, et al. (2012); El-Askary (2014);
Manessa, et al. (2014); Nieto (2013); Pahlevan, et al. (2006); and
Vanderstraete, et al. (2004).
Andrefouet, et al. (2003) provide a description of the concept, based on
Lyzenga’s (1981) work, as follows: “[…] Lyzenga showed that pixels of the same
bottom-type located at various unknown depths appear along a line in the
bidimensional histogram of two log-transformed visible bands. The slope of this
line is the ratio of diffuse attenuation of the two bands. Repeating this for
different bottom types at variable depth results in a series of parallel lines,
one for each bottom type. Projection of these lines onto an axis perpendicular
to their common direction results in a unitless depth-invariant bottom-index
where all pixels from a given bottom-type receive the same index-value
regardless of its depth.” The Bibliography section below provides a more
comprehensive collection of publications describing the method and its
application.
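In equation form, and consistent with the coefficient worksheet described in Appendix 3, the index for a band pair i, j can be sketched as follows (after Lyzenga, 1981, and UNESCO, 1999; the notation here is chosen for this report, not transcribed from those sources):

    DII_ij = ln(L_i) - (k_i/k_j) * ln(L_j)

    a = (var_i - var_j) / (2 * cov_ij)
    k_i/k_j = a + sqrt(a^2 + 1)

where L_i and L_j are the water-only radiance (or reflectance) values in bands i and j; var_i, var_j, and cov_ij are the variance and covariance of the log-transformed values over the shallow and deep subsets of a common bottom type; and k_i/k_j is the ratio of diffuse attenuation coefficients.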
A depth invariant index is calculated using band pairs from a multiband image.
With three bands, each pair is ratioed in turn, and the resulting index layers
can be combined into a new multiband image. It should be noted that this
method transforms image
radiance/reflectance data into a relative index value. This value cannot be directly
related to radiance or reflectance values. In addition, it is necessary to have
examples of similar bottom types (e.g. sand) present in both shallow and deep
water areas within the image. A lack of similar bottom types over a range of
depths will bias the resulting ratio values (Maritorena, 1996). Applied in an
appropriate area, the method has been shown to increase classification accuracies
(Mumby, et al., 1998; Collin, 2012).
Supervised classification procedures require a priori or user-selected spectral
information in order to “train” the chosen algorithm
to identify groupings of spectra that are thought to describe like features. A brief
description of the Minimum Distance and ENVI Spectral Angle Mapper (SAM)
classification sequences is provided.
Organization
The description of the methods used to calculate the depth invariant index
provides details on:
Image inputs
ERDAS Imagine command sequences
An explanation of the Microsoft Excel template for coefficient calculation
A brief discussion on image classification
Appendix 1 contains a description and the processing sequence for the use of
Python scripts for image preprocessing. Appendix 2 describes the ERDAS Imagine
command sequence used to import and reproject georeferenced WorldView-2 TIFF
images.
Appendix 3 contains a screenshot of the Bilko-inspired depth invariant index
calculation spreadsheet. Bilko is an educational image processing software suite
developed by the United Nations Educational, Scientific and Cultural Organization
(UNESCO, 1999).
Methods
Figure 1 provides a graphic view of the depth invariant index calculation process,
assuming that three bands of a multiband image are used. These bands are
labeled 1, 2 and 3 and correspond to the bands listed in Table 1. As this is an
iterative process, three depth invariant layers (band pairs) can be derived:
1 & 2, 1 & 3, and 2 & 3. Table 1 below provides WorldView-2 band numbers and
names.
Recommendation:
Figure 1 Conceptualization of the depth invariant index calculation
Figure 2 Summary of image bands used to derive depth invariant index
Image Inputs
Required image inputs for creating depth invariant index layers include selected
bands of either a raw, radiance or reflectance WorldView-2 image. Radiance
values were used to create the examples discussed in this summary. All images
used in the process should be:
Georeferenced
Deglinted
Masked for water only features (i.e. all non-water features are masked)
For each image to be processed, at least two image subsets (“AOI” in ERDAS
Imagine; “ROI” in ENVI) are needed to calculate the coefficients for the
index. The subsets should consist of largely homogeneous areas of known benthic
habitat in both shallow and deeper waters. Sandy areas are typically easier to
identify and are widely used.
The generalized process to follow for the calculation of depth invariant index
band pairs is illustrated in Figure 3. For clarity, this procedure is largely
drawn from the UNESCO (1999) Bilko tutorial. Bilko is an open source image
processing software package. The Depth Invariant Index calculation worksheet
contains the formulas used to calculate the various coefficients referred to in
Figure 3. In addition, the graphic found in Appendix 3 and in the associated
PowerPoint file (depth_invariant_background_and_initial_results.pptx)
illustrates typical results of these calculations.
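For readers who prefer code to the spreadsheet, the numpy sketch below mirrors the worksheet calculations (variance, covariance, attenuation coefficient “a”, the Ki/Kj ratio, and the additive offset). It is a minimal illustration, not a drop-in replacement for the worksheet or the ERDAS Imagine models; the guard constant and the function name are assumptions.

    import numpy as np

    def depth_invariant_index(band_i, band_j, sand_i, sand_j):
        """Lyzenga-style depth invariant index for one band pair.

        band_i, band_j : water-only, deglinted radiance arrays (full image)
        sand_i, sand_j : radiance samples of one bottom type (e.g. sand)
                         drawn from both the shallow and deep subsets
        """
        eps = 1e-10  # guards against log(0)
        xi = np.log(sand_i.ravel() + eps)   # linearized subset values
        xj = np.log(sand_j.ravel() + eps)

        # Attenuation coefficient "a" and the ratio Ki/Kj
        # (worksheet sections 6 and 7)
        a = (np.var(xi) - np.var(xj)) / (2.0 * np.cov(xi, xj)[0, 1])
        ki_kj = a + np.sqrt(a * a + 1.0)

        # Index over the full image, plus an additive offset so that all
        # values are positive (worksheet section 10)
        dii = np.log(band_i + eps) - ki_kj * np.log(band_j + eps)
        return dii - dii.min()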
Figure 3 Generalized depth invariant index calculations
Figure 4 describes the ERDAS Imagine command sequence for creating image
subsets. Image subsets containing deep and shallow water sand features are used
to calculate coefficients to correct for water column effects on benthic feature
spectra.
Models and Scripts
ERDAS Imagine spatial models have been created to automate portions of the
depth invariant index calculation and multiband image creation tasks.
Descriptions of the models follow. Figure 5 illustrates the steps to load and
run an ERDAS Imagine spatial model.
Python image pre-processing scripts have been slightly modified to automate the
creation of radiance, reflectance, and water only images. These files are listed in
Appendix 1, Image Pre-Processing Python Scripts.
Figure 5 Load and run a spatial model in ERDAS Imagine
Image Classification
A multiband depth invariant image was used for evaluation, derived from the
WorldView-2 image 12NOV26022344-M2AS-052717601090_01_P001_REC.tif,
corrected to top of atmosphere radiance, deglinted and masked for non-water
features. Three bands were combined, derived from the 1 & 2, 1 & 5, and 2 & 5
band pairs (Coastal, Blue and Red respectively - see Table 1).
The ENVI K-means module was used to generate unsupervised results, with 6 and
10 classes specified. The change threshold was set at 5%, the maximum number of
iterations was set to 10, and the maximum standard deviation from mean and
maximum distance error were left unset in order to ensure that all pixels were
processed.
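For readers without ENVI, a rough open-source analogue of this clustering step is sketched below using scikit-learn. The mapping is approximate: max_iter mirrors the iteration cap, but scikit-learn has no direct equivalent of ENVI's pixel change threshold, and the stand-in data are an assumption.

    import numpy as np
    from sklearn.cluster import KMeans

    # dii_stack: (bands, rows, cols) multiband depth invariant image;
    # random values stand in for the real layers here
    dii_stack = np.random.rand(3, 200, 200)
    pixels = dii_stack.reshape(dii_stack.shape[0], -1).T  # (n_pixels, bands)

    for n_classes in (6, 10):
        km = KMeans(n_clusters=n_classes, max_iter=10, n_init=10,
                    random_state=0)
        labels = km.fit_predict(pixels).reshape(dii_stack.shape[1:])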
ENVI Spectral Angle Mapper (SAM) and Minimum Distance supervised classification
procedures (under Classification => Endmember Collection in v5.0) were also
applied to the depth invariant evaluation image. These procedures require the use
of existing spectra in one of several formats that ENVI can read. Using the
evaluation image and the spectral profile tool, spectra were collected within the 5,
15 and 20-meter contours and saved as a spectral library. A variety of spectra
were chosen, with no particular knowledge of what the underlying features were.
These included “open water”, sand, and hard bottom. Ideally field data or expert
knowledge would guide the selection of spectra for representative features.
For the SAM module, the value for the maximum angle (in radians) was set to
0.100. This angle is used to determine the separation of groups of spectral
values. For the Minimum Distance algorithm, the maximum distance error field
was left empty, which allows for all pixels to be classified. No rule image was
created for either process. A rule image provides what can be thought of as
confidence intervals to guide the user in refining the spectral classification.
It should be noted that numerous classification algorithms are available; the
user should spend some time learning about them in order to make an informed
decision about the best process to use.
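A compact sketch of the SAM decision rule follows, assuming a (rows, cols, bands) image cube and a library of reference spectra. The 0.100 radian threshold matches the setting above, while the array layout and function name are assumptions rather than ENVI internals.

    import numpy as np

    def sam_classify(cube, library, max_angle=0.100):
        """Assign each pixel to the library spectrum with the smallest
        spectral angle; pixels whose best angle exceeds max_angle are
        left unclassified (-1)."""
        rows, cols, bands = cube.shape
        flat = cube.reshape(-1, bands).astype(np.float64)
        lib = np.asarray(library, dtype=np.float64)    # (n_spectra, bands)

        # Cosine of the angle between every pixel and every reference
        cos = flat @ lib.T
        cos /= np.linalg.norm(flat, axis=1, keepdims=True)
        cos /= np.linalg.norm(lib, axis=1)

        angles = np.arccos(np.clip(cos, -1.0, 1.0))    # radians
        best = angles.argmin(axis=1)
        best[angles.min(axis=1) > max_angle] = -1      # unclassified
        return best.reshape(rows, cols)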
Results and Discussion
Appendix 1 provides a listing and brief description of the Python scripts, developed
previously and modified for this application. Appendix 2 provides a graphic
illustrating the ERDAS Imagine image import and reprojection workflow. Appendix
3 includes a graphic of the DII calculation Microsoft Excel worksheet and a
description of each component part of that worksheet. In addition, graphics of
the calculation worksheet and ERDAS Imagine spatial models, as well as
classification results, are included in the associated PowerPoint presentation
(depth_invariant_background_and_initial_results.pptx).
Listed below are recommendations for specific aspects of the DII methodology.
Recommendations
Reprojection of WorldView-2 images – due to an idiosyncrasy in the
combination of Python and Imagine Spatial Models, WorldView-2 images
need to be reprojected from the geographic (WGS84) coordinate system to
the UTM projection. The procedure to accomplish this in Imagine is
described in Appendix 2, Figure 2-1.
Collin (2012) also suggests that results were improved by combining more
than three band pairs prior to classification. In addition, there may be
improvements achieved using combinations of band pairs and top of
atmosphere (TOA) radiance single bands.
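As an illustration of that suggestion, stacking additional band pairs (and, speculatively, TOA radiance single bands) into one feature image is straightforward in numpy; the layer names and stand-in data below are assumptions.

    import numpy as np

    rows, cols = 200, 200
    dii_12 = np.random.rand(rows, cols)     # stand-ins for real depth
    dii_15 = np.random.rand(rows, cols)     # invariant band-pair layers
    dii_25 = np.random.rand(rows, cols)
    toa_band1 = np.random.rand(rows, cols)  # a TOA radiance single band

    # Any number of layers can be stacked into one multiband feature image
    features = np.stack([dii_12, dii_15, dii_25, toa_band1], axis=0)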
Initial evaluation of the classification outputs indicated that better results
were obtained using the supervised methods. As implemented here, these
supervised methods were slightly non-traditional, in that spectral signatures
were derived from the images being processed, rather than from field work or
existing spectral libraries. It should be noted that these signatures are based
on band ratio indices, not purely radiance or reflectance spectra. In addition,
it is important to keep in mind that the boundaries of clusters of spectra that
represent habitat features are not always discrete. Oftentimes a gradient, or
area of transition, is apparent, due to the influence of the
radiance/reflectance of adjacent features on the target spectral signatures.
Bibliography
Asian Development Bank, 2014, State of the Coral Triangle: Timor-Leste, Asian
Development Bank, Mandaluyong City, Philippines, 57pp.
Bejerano, S., et al., 2010, Combining Optical and Acoustic Data to Enhance the
Detection of Caribbean Fore Reef Habitats, Remote Sensing of Environment,
v.114, pp.2768-2778
Boggs, G., et al., 2009, The Timor Leste Coastal/Marine Habitat Mapping for
Tourism and Fisheries Development Project, Project N.1, Marine and Coastal
Habitat Mapping in Timor Leste (North Coast) Final Report, Ministry of Agriculture
and Fisheries, Government of Timor Leste, 74pp.
Collin, A., and S. Planes, 2012, Enhancing Coral Health Detection Using Spectral
Diversity Indices from WorldView-2 Imagery and Machine Learners, Remote
Sensing, v.4, pp.3244-3264
El-Askary, H., 2014, Change Detection of Coral Reef Habitat Using Landsat-5 TM,
Landsat 7 ETM+ and Landsat 8 OLI data in the Red Sea (Hurghada, Egypt),
International Journal of Remote Sensing, v.35, n.6, pp.2327-2346
Grantham, H.S., 2011, National Ecological Gap Assessment for Timor-Leste 2010,
Prepared on behalf of the United Nations Development Program and the
Department of Protected Areas and National Parks of Timor-Leste by CNRM
Solutions Pty Ltd, Byron Bay, New South Wales, 151pp.
Hedley, J., et al., 2005, Technical Note: Simple and Robust Removal of Sun
Glint for Mapping Shallow-Water Benthos, International Journal of Remote
Sensing, v.26, n.10, pp.2107-2112
Kay, S., et al., 2009, Sun Glint Correction of High and Low Spatial Resolution
Images of Aquatic Scenes: A Review of Methods for Visible and Near-Infrared
Wavelengths, Remote Sensing, v.1, pp.697-730
Locke, R., 2011, Using Satellite Imagery to Create a Coastal Habitat Classification
for Use in Conservation Planning for the Three Kings Islands, Master’s Thesis,
Auckland University of Technology, School of Applied Science, Auckland, NZ,
123 pp.
McCoy, K., et al., 2015, Coral Reef Fish Biomass and Benthic Cover Along the
North Coast of Timor-Leste Based on Underwater Visual Surveys in June 2013,
PIFSC Data Report DR-15-004, Pacific Islands Fisheries Science Center, Honolulu,
HI, 33 pp.
McFeeters, S., 1996, The Use of the Normalized Difference Water Index (NDWI) in
the Delineation of Open Water Features, International Journal of Remote Sensing,
v.17, n.7, pp.1425-1432
Nadiah, N.Y., n.d., Coastal Habitats Mapping Using ALOS AVNIR-2 Satellite Data,
http://a-a-r-s.org/acrs/administrator/components/com_jresearch/files/publications/P_126_8-15-20.pdf
Nieto, P., 2013, Classifying Benthic Habitats and Deriving Bathymetry at the
Caribbean Netherlands Using Multispectral Imagery, Case Study of St. Eustatius,
Thesis Report GIRS-2013-18, Wageningen University and Research Centre, The
Netherlands, 98 pp.
Sagawa, T., n.d., A New Application Method for Lyzenga’s Optical Model, 5pp.
http://www.watercolumncorrection.com/documents/Sagawa-et-al.188.pdf
Turak, E., and L. Devantier, 2013, Reef-Building Corals in Timor Leste, Chapter 2,
in A Rapid Marine Biological Assessment of Timor-Leste, Conservation
International, pp.85-128
UNESCO, 1999, Applications of Satellite and Airborne Image Data to Coastal
Management, Lesson 5: Compensating for Variable Water Depth to Improve
Mapping of Underwater Habitats: Why it is Necessary, Coastal Regions and Small
Island Papers 4, UNESCO, Paris, France, 185 pp.
http://www.ncl.ac.uk/tcmweb/bilko/mod7_pdf.shtml
Vanderstraete, T., et al., 2004, Coral Reef Habitat Mapping in the Red Sea
(Hurghada, Egypt) Based on Remote Sensing, EARSeL eProceedings 3, 2/2004,
pp.191-207, http://www.eproceedings.org/static/vol03_2/03_2_vanderstraete1.pdf
Web pages
http://www.unesco.org/csi/pub/source/rs10.htm
http://www.ncl.ac.uk/tcmweb/bilko/module7_details.shtml
http://blog.conservation.org/2013/08/timor-leste-fish-survey-will-help-create-sustainable-fisheries/
http://globalreefrecord.org/regions/details/3
Appendix 1: Image Pre-Processing Python Scripts
A series of Python scripts, previously created, were modified slightly for this
application. There are two primary scripts, Main.py and Main_phase2.py. Each
of these scripts calls and sequentially runs a series of other
Python scripts. Figure 1-1 provides a graphic depiction of how these files are
organized, and a brief description of each. In addition, the original
ReadMe.txt file describes the input file requirements, as well as the file
structure and organization of the scripts.
The Main.py script sets the data processing path; creates top of atmosphere
(TOA) radiance and reflectance images; creates a mask file to separate water
from non-water portions of the image; creates scalar tables to deglint the
“raw” image; creates water only raw, radiance and reflectance images; and
deglints the “raw” image.
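The deglinting step is consistent with the regression approach of Hedley et al. (2005), cited in the Bibliography. The sketch below shows that approach for a single band; the function name and the bare-bones implementation are assumptions, not a transcription of the actual scripts.

    import numpy as np

    def deglint_band(band, nir, deep_band, deep_nir):
        """Regression-based sun glint removal for one visible band.

        band, nir           : full-image visible and NIR arrays
        deep_band, deep_nir : pixels from the deep water subset
                              (e.g. deepwaterraw.tif)
        """
        # Slope of the visible band regressed against NIR over deep water;
        # this is the kind of scalar the pre-processing scripts tabulate
        b = np.polyfit(deep_nir.ravel(), deep_band.ravel(), 1)[0]
        return band - b * (nir - deep_nir.min())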
Data requirements for the scripts run by Main.py include a “raw” WorldView-2
image; the metadata file for this image (*.IMD); and an image subset of deep
water for the scalar table and deglinting calculations. The resultant
image subset should be named deepwaterraw.tif. Also within the “data” folder are
two Perl scripts: russ_helper.pl and russ_v3.pl. These scripts extract coefficients
needed for transformation of the raw image data from the “*.IMD” image
metadata file.
Once these scripts have run, the user needs to create deep water image
subsets from the water only radiance and reflectance images for input to the next
set of Python scripts called by Main_phase2.py. These subset images should be
named deepwaterrad.tif and deepwaterreflectance.tif respectively and placed in
the “data” folder.
Main_phase2.py calls scripts that create deglinting scalar tables for the radiance
and reflectance images; deglints these images; and creates unadjusted
bathymetry images using the Stumpf method with the green (#3) and blue (#2)
bands.
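The “Stumpf method” presumably refers to the band-ratio bathymetry of Stumpf et al. (2003). A minimal sketch of the unadjusted ratio follows; the scaling constant n and the guard against taking the log of non-positive values are assumptions.

    import numpy as np

    def stumpf_relative_depth(blue, green, n=1000.0):
        """Unadjusted Stumpf log-ratio: proportional to depth but not yet
        scaled to metres (no tuning coefficients applied)."""
        eps = 1e-10
        return (np.log(n * np.maximum(blue, eps)) /
                np.log(n * np.maximum(green, eps)))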
The resulting deglinted, water only raw, radiance and reflectance images can then
be used to create shallow and deep water sand image subsets to calculate depth
invariant indices for benthic habitat classification.
Figure 1-1 Organization of Python image pre-processing scripts
Appendix 2: ERDAS Imagine Command Sequence
Appendix 3: Depth Invariant Index Worksheet
Figure 3-1 Depth Invariant Index Calculation Worksheet
Section descriptions
1. Radiance and/or reflectance values derived from ASCII exports of shallow and
deep water image subsets from the deglinted, water-only source image, and
linearized values derived from the natural log of the radiance/reflectance
values
2. Variance of radiance and/or reflectance values by band
3. Mean of radiance and/or reflectance values by band
4. Coefficient of variation of radiance/reflectance data by band
5. Covariance of all band pair combinations of bands 1 through 5
6. Attenuation coefficient (“a”) calculated for all band pairs
7. Ratio of attenuation coefficients (Ki/Kj) calculated for all band pairs
8. Depth invariance index calculation formula (Note: this formula may omit
linearization of band radiance/reflectance values if this step occurs in the pre-
processing procedure)
9. Example calculated depth invariant values (both initial and offset) using the
shallow and deep image subset radiance/reflectance values
10. Minimum values for calculated invariance values and additive offsets to ensure
that all depth invariant values are positive
11. Coefficient of variation for invariant index band pairs
12. Variation in radiance and/or reflectance accounted for by depth invariant
processing (Note: band pairs with largest values are typically used to create
multiband images for classification)
Necessary formulas are embedded in the appropriate sections and are visible by
clicking on a given cell.
Note: for sections 2, 3, 4, and 10, ensure that the range of values used in the
calculations matches the range of radiance and/or reflectance values in
section 1.