remote sensing
Article
The First Wetland Inventory Map of Newfoundland
at a Spatial Resolution of 10 m Using Sentinel-1 and
Sentinel-2 Data on the Google Earth Engine Cloud
Computing Platform
Masoud Mahdianpari 1,2, * , Bahram Salehi 3 , Fariba Mohammadimanesh 1,2 ,
Saeid Homayouni 4 and Eric Gill 2
1 C-CORE, 1 Morrissey Rd, St. John's, NL A1B 3X5, Canada
2 Department of Electrical and Computer Engineering, Memorial University of Newfoundland, St. John's, NL A1C 5S7, Canada; f.mohammadimanesh@mun.ca (F.M.); ewgill@mun.ca (E.G.)
3 Environmental Resources Engineering, College of Environmental Science and Forestry, State University of New York, NY 13210, USA; bsalehi@esf.edu
4 Department of Geography, Environment, and Geomatics, University of Ottawa, Ottawa, ON K1N 6N5, Canada; Saeid.Homayouni@uottawa.ca
* Correspondence: m.mahdianpari@mun.ca; Tel.: +1-709-986-0110
Received: 22 October 2018; Accepted: 20 December 2018; Published: 28 December 2018
Abstract: Wetlands are one of the most important ecosystems that provide a desirable habitat for
a great variety of flora and fauna. Wetland mapping and modeling using Earth Observation (EO)
data are essential for natural resource management at both regional and national levels. However,
accurate wetland mapping is challenging, especially on a large scale, given their heterogeneous
and fragmented landscape, as well as the spectral similarity of differing wetland classes. Currently,
precise, consistent, and comprehensive wetland inventories at the national or provincial scale are
lacking globally, with most studies focused on the generation of local-scale maps from limited remote
sensing data. Leveraging the Google Earth Engine (GEE) computational power and the availability of
high spatial resolution remote sensing data collected by Copernicus Sentinels, this study introduces
the first detailed, provincial-scale wetland inventory map of one of the richest Canadian provinces in
terms of wetland extent. In particular, multi-year summer Synthetic Aperture Radar (SAR) Sentinel-1
and optical Sentinel-2 data composites were used to identify the spatial distribution of five wetland
and three non-wetland classes on the Island of Newfoundland, covering an approximate area of
106,000 km2 . The classification results were evaluated using both pixel-based and object-based
random forest (RF) classifications implemented on the GEE platform. The results revealed the
superiority of the object-based approach relative to the pixel-based classification for wetland mapping.
Although the classification using multi-year optical data was more accurate compared to that of SAR,
the inclusion of both types of data significantly improved the classification accuracies of wetland
classes. In particular, an overall accuracy of 88.37% and a Kappa coefficient of 0.85 were achieved with
the multi-year summer SAR/optical composite using an object-based RF classification, wherein all
wetland and non-wetland classes were correctly identified with accuracies beyond 70% and 90%,
respectively. The results suggest a paradigm-shift from standard static products and approaches
toward generating more dynamic, on-demand, large-scale wetland coverage maps through advanced
cloud computing resources that simplify access to and processing of the “Geo Big Data.” In addition,
the resulting, much-needed inventory map of Newfoundland is of great interest to and can be
used by many stakeholders, including federal and provincial governments, municipalities, NGOs,
and environmental consultants, to name a few.
Remote Sens. 2019, 11, 43; doi:10.3390/rs11010043
www.mdpi.com/journal/remotesensing
Keywords: wetland; Google Earth Engine; Sentinel-1; Sentinel-2; random forest; cloud computing;
geo-big data
1. Introduction
Wetlands cover between 3% and 8% of the Earth’s land surface [1]. They are one of the most
important contributors to global greenhouse gas reduction and climate change mitigation, and they
greatly affect biodiversity and hydrological connectivity [2]. Wetland ecosystem services include flood- and storm-damage protection, water-quality improvement and renovation, aquatic and plant-biomass
productivity, shoreline stabilization, plant collection, and contamination retention [3]. However,
wetlands are being drastically converted to non-wetland habitats due to both anthropogenic activities,
such as intensive agricultural and industrial development, urbanization, reservoir construction,
and water diversion, as well as natural processes, such as rising sea levels, thawing of permafrost,
changes in precipitation patterns, and drought [1].
Despite the vast expanse and benefits of wetlands, there is a lack of comprehensive wetland
inventories in most countries due to the expense of conducting nation-wide mapping and the highly
dynamic, remote nature of wetland ecosystems [4]. These issues result in fragmented, partial, or outdated
wetland inventories in most countries worldwide, and some have no inventory available at all [5].
Although North America and some parts of Western Europe have some of the most comprehensive
wetland inventories, these are also incomplete and have considerable limitations related to the
resolution and type of data, as well as to developed methods [6]. These differences make these
existing inventories incomparable [1] and highlight the significance of long-term comprehensive
wetland monitoring systems to identify conservation priorities and sustainable management strategies
for these valuable ecosystems.
Over the past two decades, wetland mapping has gained recognition thanks to the availability
of remote sensing tools and data [6,7]. However, accurate wetland mapping using remote sensing
data, especially on a large-scale, has long proven challenging. For example, input data should be
unaffected/less affected by clouds, haze, and other disturbances to obtain an acceptable classification
result [4]. Such input data can be generated by compositing a large volume of satellite images collected
during a specific time period. This is of particular concern for distinguishing backscattering/spectrally
similar classes (e.g., wetland), wherein discrimination is challenging using a single image. Historically,
the cost of acquiring multi-temporal remote sensing data precluded such large-scale land cover
(e.g., wetland) mapping [8]. Although Landsat sensors have been collecting Earth Observation (EO)
data at frequent intervals since the mid-1980s [9], open access to its entire archive has only been available
since 2008 [8]. This is of great benefit for land cover mapping on a large-scale. However, much of
this archived data has been underutilized to date. This is because collecting, storing, processing,
and manipulating multi-temporal remote sensing data that cover a large geographic area over
three decades are infeasible using conventional image processing software on workstation PC-based
systems [10]. This is known as the “Geo Big Data” problem and it demands new technologies
and resources capable of handling such a large volume of satellite imagery from the data science
perspective [11].
Most recently, the growing availability of large-volume open-access remote sensing data and the
development of advanced machine learning tools have been integrated with recent implementations
of powerful cloud computing resources. This offers new opportunities for broader sets of applications
at new spatial and temporal scales in the geospatial sciences and addresses the limitation of existing
methods and products [12]. Specifically, the advent of powerful cloud computing resources, such as
NASA Earth Exchange, Amazon’s Web Services, Microsoft’s Azure, and Google cloud platform has
addressed these Geo Big Data problems. For example, Google Earth Engine (GEE) is an open-access,
cloud-based platform for parallel processing of petabyte-scale data [13]. It hosts a vast pool of
satellite imagery and geospatial datasets, and allows web-based algorithm development and results
visualization in a reasonable processing time [14–16]. In addition to its computing and storage
capacity, a number of well-known machine learning algorithms have been implemented, allowing
batch processing using JavaScript on a dedicated application programming interface (API) [17].
Notably, the development of advanced machine learning tools further contributes to handling
large multi-temporal remote sensing data [18]. This is because traditional classifiers, such as maximum
likelihood, insufficiently manipulate complicated, high-dimensional remote sensing data. Furthermore,
they assume that input data are normally distributed, which may not be the case [19]. However, advanced
machine learning tools, such as Decision Tree (DT), Support Vector Machine (SVM), and Random Forest
(RF), are independent of input data distribution and can handle large volumes of remote sensing data.
Previous studies have demonstrated that both RF [20] and SVM [21] outperformed DT for classifying
remote sensing data. RF and SVM also have relatively equal strength in terms of classification
accuracies [22]. However, RF is much easier to execute than SVM, given that the latter approach
requires the adjustment of a large number of parameters [23]. RF is also insensitive to noise and
overtraining [24] and has shown high classification accuracies in various wetland studies [19,25].
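The RF classifier's behaviour can be illustrated outside GEE as well. The following sketch uses scikit-learn's RandomForestClassifier on synthetic "pixel" feature vectors; the two classes, their feature values (a hypothetical VV backscatter/NDVI feature space), and all parameters are illustrative assumptions, not the authors' pipeline:

```python
# Illustrative sketch only: a random forest classifying synthetic
# feature vectors (hypothetical SAR backscatter + optical index values).
# This mirrors the behaviour of an RF classifier such as the one in GEE,
# but uses scikit-learn on made-up data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Two hypothetical classes with distinct distributions in a
# (VV backscatter in dB, NDVI) feature space.
bog = rng.normal(loc=[-15.0, 0.55], scale=0.5, size=(200, 2))
upland = rng.normal(loc=[-9.0, 0.80], scale=0.5, size=(200, 2))

X = np.vstack([bog, upland])
y = np.array([0] * 200 + [1] * 200)  # 0 = bog, 1 = upland

# RF needs little tuning: the number of trees is the main parameter.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

print(clf.score(X, y))
```

Because RF aggregates many decorrelated trees, it needs no assumption about the input distribution, which is the property the text highlights.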
Over the past three years, several studies have investigated the potential of cloud-computing
resources using advanced machine learning tools for processing/classifying the Geo Big Data in a
variety of applications. These include global surface water mapping [26], global forest-cover change
mapping [27], and cropland mapping [28], as well as studies focusing on land- and vegetation-cover
changes on a smaller scale [29,30]. These studies demonstrated the feasibility of characterizing elements
of the Earth's surface at national and global scales through advanced cloud computing platforms.
Newfoundland and Labrador (NL), a home for a great variety of flora and fauna, is one of the
richest provinces in terms of wetlands and biodiversity in Canada. Most recently, the significant value
of these ecosystems has been recognized by the Wetland Mapping and Monitoring System (WMMS)
project, launched in 2015. Accordingly, a few local wetland maps, each covering approximately 700 km2
of the province, were produced. For example, Mahdianpari et al. (2017) introduced a hierarchical
object-based classification scheme for discriminating wetland classes in the most easterly part of NL,
the Avalon Peninsula, using Synthetic Aperture Radar (SAR) observations obtained from ALOS-2,
RADARSAT-2, and TerraSAR-X imagery [19]. Later, Mahdianpari et al. (2018) proposed the modified
coherency matrix obtained from quad-pol RADARSAT-2 imagery to improve wetland classification
accuracy. They evaluated the efficiency of the proposed method in three pilot sites across NL, each of
which covers 700 km2 [31]. Most recently, Mohammadimanesh et al. (2018) investigated the potential
of interferometric coherence for wetland classification, as well as the synergy of coherence with SAR
polarimetry and intensity features for wetland mapping in a relatively small area in NL (the Avalon
Peninsula) [32]. These local-scale wetland maps exhibit the spatial distribution patterns and the
characteristics of wetland species (e.g., dominant wetland type). However, such small-scale maps
have been produced by incorporating different data sources, standards, and methods, making them of
limited use for rigorous wetland monitoring at the provincial, national, and global scales.
Importantly, precise, comprehensive, provincial-level wetland inventories that map small to large
wetland classes can significantly aid conservation strategies, support sustainable management, and
facilitate progress toward national/global scale wetland inventories [33]. Fortunately, new opportunities
for large-scale wetland mapping are obtained from the Copernicus programs by the European Space
Agency (ESA) [34]. In particular, the concurrent availability of the 12-day SAR Sentinel-1 and 10-day optical
Sentinel-2 (multi-spectral instrument, MSI) sensors provides an unprecedented opportunity to collect
high spatial resolution data for global wetland mapping. The main purpose of these Sentinel Missions
is to provide full, free, and open access data to facilitate the global monitoring of the environment
and to offer new opportunities to the scientific community [35]. This highlights the substantial role
of Sentinel observations for large-scale land surface mapping. Accordingly, the synergistic use of
Sentinel-1 and Sentinel-2 EO data offers new avenues to be explored in different applications, especially
for mapping phenomena with highly dynamic natures (e.g., wetland).
Notably, the inclusion of SAR data for land and wetland mapping is of great significance for
monitoring areas with nearly permanent cloud-cover. This is because SAR signals are independent of
solar radiation and the day/night condition, making them superior for monitoring geographic regions
with dominant cloudy and rainy weather, such as Newfoundland, Canada. Nevertheless, multi-source
satellite data are advantageous in terms of classification accuracy relative to the accuracy achieved
by a single source of data [36]. This is because optical sensors are sensitive to the reflective and
spectral characteristics of ground targets [37,38], whereas SAR sensors are sensitive to their structural,
textural, and dielectric characteristics [39,40]. Thus, a synergistic use of the two types of data offers
complementary information, which may be lacking when utilizing one source of data [41,42]. Several
studies have also highlighted the great potential of fusing optical and SAR data for wetland
classification [25,36,41].
This study aims to develop a multi-temporal classification approach based on open-access remote
sensing data and tools to map wetland classes, as well as the other land cover types, with high accuracy,
here piloting this approach for wetland mapping in Canada. Specifically, the main objectives of this
study were to: (1) leverage open-access SAR and optical images obtained from Sentinel-1 and Sentinel-2
sensors for the classification of wetland complexes; (2) assess the capability of the Google Earth
Engine cloud computing platform to generate custom land cover maps that are as effective in
discriminating wetland classes as standard land cover products; (3) compare the efficiency of both
pixel-based and object-based random forest classification; and (4) produce the first provincial-scale,
fine-resolution (i.e., 10 m) wetland inventory map in Canada. The results of this study demonstrate
a paradigm shift from standard static products and approaches toward generating more dynamic,
on-demand, large-scale wetland coverage maps through advanced cloud computing resources that
simplify access to and processing of a large volume of satellite imagery. Given the similarity of
wetland classes across the country, the developed methodology can be scaled up to map wetlands at
the national scale.
2. Materials and Methods
2.1. Study Area
The study area is the Island of Newfoundland, covering an approximate area of 106,000 km2,
located within the Atlantic sub-region of Canada (Figure 1). According to the Ecological Stratification
Working Group of Canada, "each part of the province is characterized by distinctive regional ecological
factors, including climatic, physiography, vegetation, soil, water, fauna, and land use" [43].
Figure 1. The geographic location of the study area with distribution of the training and testing
polygons across four pilot sites on the Island of Newfoundland.
In general, the Island of Newfoundland has a cool summer and a humid continental climate,
which is greatly affected by the Atlantic Ocean [43]. Black spruce forests that dominate the central
area, and balsam fir forests that dominate the western, northern, and eastern areas, are common on the
island [44]. Based on geography, the Island of Newfoundland can be divided into three zones, namely
the southern, middle, and northern boreal regions, and each is characterized by various ecoregions [45].
For example, the southern boreal zone contains the Avalon forest, Southwestern Newfoundland,
Maritime Barrens, and South Avalon-Burin Oceanic Barrens ecoregions. St. John’s, the capital city,
is located at the extreme eastern portion of the island, in the Maritime Barren ecoregion, and is the
foggiest, windiest, and cloudiest Canadian city.
All wetland classes characterized by the Canadian Wetland Classification System (CWCS), namely
bog, fen, marsh, swamp, and shallow-water [1], are found throughout the island. However, bog and fen
are the most dominant classes relative to the occurrence of swamp, marsh, and shallow-water. This is
attributed to the island climate, which facilitates peatland formation (i.e., extensive agglomeration of
partially-decomposed organic peat under the surface). Other land cover classes are upland, deep-water,
and urban/bare land. The urban and bare land classes, both having either an impervious surface or
exposed soil [46], include bare land, roads, and building facilities and, thus, are merged into one single
class in the final classification map.
Four pilot sites, which are representative of regional variation in terms of both landscape and
vegetation, were selected across the island for in-situ data collection (see Figure 1). The first pilot
site is the Avalon area, located in the south-east of the island in the Maritime Barren ecoregion,
which experiences an oceanic climate of foggy, cool summers, and relatively mild winters [47].
The second and third pilot sites are Grand Falls-Windsor, located in the north-central area of the
island, and Deer Lake, located in the northern portion of the island. Both fall within the Central
Newfoundland ecoregion and experience a continental climate of cool summers and cold winters [47].
The final pilot site is Gros Morne, located on the extreme west coast of the island, in the Northern
Peninsula ecoregion, and this site experiences a maritime-type climate with cool summers and mild
winters [47].
2.2. Reference Data
In-situ data were collected via an extensive field survey of the sites mentioned above in the
summers and falls of 2015, 2016 and 2017. Using visual interpretation of high resolution Google Earth
imagery, as well as the CWCS definition of wetlands, potential and accessible wetland sites were
flagged across the island. Accessibility via public roads, the public or private ownership of lands,
and prior knowledge of the area were also taken into account for site visitation. In-situ data were
collected to cover a wide range of wetland and non-wetland classes with a broad spatial distribution
across NL. One or more Global Positioning System (GPS) points, depending on the size of each wetland,
along with the location’s name and date were recorded. Several digital photographs and ancillary
notes (e.g., dominant vegetation and hydrology) were also recorded to aid in preparing the training
samples. During the first year of data collection (i.e., 2015), no limitation was set on the size of the
wetland, and this resulted in the production of several small-size classified polygons. To move forward
with a larger size, wetlands of size >1 ha (where possible) were selected during the years 2016 and 2017.
Notably, a total of 1200 wetland and non-wetland sites were visited during in-situ data collection at
the Avalon, Grand Falls-Windsor, Deer Lake, and Gros Morne pilot sites over three years. Such in-situ
data collection over a wide range of wetland classes across NL captured the variability of wetlands
and aided in developing robust wetland training samples. Figure 1 depicts the distribution of the
training and testing polygons across the Island.
Recorded GPS points were then imported into ArcMap 10.3.1, and polygons delineating the classified
wetlands were generated using a visual analysis of 50 cm resolution orthophotographs
and 5 m resolution RapidEye imagery. Next, polygons were sorted based on their size and alternately
assigned to either training or testing groups. Thus, the training and testing polygons were obtained
from independent samples to ensure robust accuracy assessment. This alternative assignment also
ensured that both the training (~50%) and testing (~50%) polygons had equal numbers of small
and large polygons, allowing similar pixel counts and taking into account the large variation of
intra-wetland size. Table 1 presents the number of training and testing polygons for each class.
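The sort-and-alternate split described above can be sketched in a few lines of Python; the polygon identifiers and areas below are hypothetical illustrative values, not the study's reference data:

```python
# Sketch of the train/test split described in the text: sort reference
# polygons by size, then alternately assign them to training and testing
# so both groups span small and large polygons with similar pixel counts.
# Polygon areas (in hectares) are made-up illustrative values.
polygons = [("p1", 0.8), ("p2", 4.2), ("p3", 1.5), ("p4", 9.0),
            ("p5", 2.7), ("p6", 0.4), ("p7", 6.1), ("p8", 3.3)]

by_size = sorted(polygons, key=lambda p: p[1])

training = by_size[0::2]  # every other polygon, starting with the 1st
testing = by_size[1::2]   # every other polygon, starting with the 2nd

print([p[0] for p in training])  # smallest, 3rd-smallest, ...
print([p[0] for p in testing])   # 2nd-smallest, 4th-smallest, ...
```

Because assignment alternates along the size-sorted list, each group receives roughly half of the polygons at every size level, which is what keeps the two groups comparable.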
Table 1. Number of training and testing polygons in this study.

Class              Training Polygons    Testing Polygons
bog                       92                   91
fen                       93                   92
marsh                     75                   75
swamp                     78                   79
shallow-water             55                   56
deep-water                17                   16
upland                    92                   92
urban/bare land           99                   98
total                    601                  599
2.3. Satellite Data, Pre-Processing, and Feature Extraction
2.3.1. SAR Imagery
A total of 247 and 525 C-band Level-1 Ground Range Detected (GRD) Sentinel-1 SAR images in
ascending and descending orbits, respectively, were used in this study. This imagery was acquired
during the interval between June and August of 2016, 2017 and 2018 using the Interferometric Wide
(IW) swath mode with a pixel spacing of 10 m and a swath of 250 km with average incidence angles
varying between 30° and 45°. As a general rule, Sentinel-1 collects dual- (HH/HV) or single- (HH)
polarized data over Polar Regions (i.e., sea ice zones) and dual- (VV/VH) or single- (VV) polarized
data over all other zones [48]. However, in this study, we took advantage of being close to the Polar
regions and, thus, both HH/HV and VV/VH data were available in our study region. Accordingly,
of 247 ascending SAR observations (VV/VH), 12, 120 and 115 images were collected in 2016, 2017 and
2018, respectively. Additionally, of 525 descending observations (HH/HV), 111, 260, and 154 images
were acquired in 2016, 2017 and 2018, respectively. Figure 2 illustrates the number of SAR observations
over the summers of the aforementioned years.
Figure 2. The total number of (a) ascending Synthetic Aperture Radar (SAR) observations (VV/VH)
and (b) descending SAR observations (HH/HV) during summers of 2016, 2017 and 2018. The color bar
represents the number of collected images.
Sentinel-1 GRD data were accessed through GEE. We applied the following pre-processing steps,
including updating orbit metadata, GRD border noise removal, thermal noise removal, radiometric
calibration (i.e., backscatter intensity), and terrain correction (i.e., orthorectification) [49]. These steps
resulted in generating the geo-coded backscatter intensity images. Notably, this is similar to the
pre-processing steps implemented in the ESA’s SNAP Sentinel-1 toolbox. The unitless backscatter
intensity images were then converted into normalized backscattering coefficient (σ0 ) values in dB
(i.e., the standard unit for SAR backscattering representation). Further pre-processing steps, including
incidence angle correction [50] and speckle reduction (i.e., 7 × 7 adaptive sigma Lee filter in this
study) [51,52], were also carried out on the GEE platform.
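The exact GEE implementation of these steps is not reproduced in the paper at this point, but the dB conversion and despeckling can be sketched with NumPy/SciPy. The Lee filter below is a simplified local-statistics variant standing in for the 7 × 7 adaptive sigma Lee filter used in the study (the sigma variant adds an outlier-rejection step omitted here), and the test image is synthetic:

```python
# Sketch (not the authors' GEE code): convert linear backscatter
# intensity to sigma-nought in dB, and despeckle with a basic 7x7
# Lee filter driven by local mean/variance statistics.
import numpy as np
from scipy.ndimage import uniform_filter

def to_db(intensity, eps=1e-10):
    """Linear backscatter intensity -> sigma-nought in dB."""
    return 10.0 * np.log10(np.maximum(intensity, eps))

def lee_filter(img, size=7):
    """Basic Lee filter: blend the local mean and the observed pixel,
    weighted by the ratio of signal variance to total local variance."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    var = sq_mean - mean * mean
    # Crude global noise-variance estimate: the mean local variance.
    noise_var = var.mean()
    weight = var / np.maximum(var + noise_var, 1e-12)
    return mean + weight * (img - mean)

# A constant (speckle-free) scene passes through essentially unchanged,
# and an intensity of 1.0 maps to 0 dB.
flat = np.ones((32, 32))
print(float(to_db(flat).max()))
print(float(np.abs(lee_filter(flat) - 1.0).max()))
```

In homogeneous regions the local variance approaches the noise variance, so the weight goes to zero and the filter returns the local mean; near strong edges the weight approaches one and the original pixel is preserved.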
Following the procedure described above, σ0VV, σ0VH, σ0HH, and σ0HV (i.e., backscatter coefficient
images) were extracted. Notably, σ0VV observations are sensitive to soil moisture and are able to
distinguish flooded from non-flooded vegetation [53], as well as various types of herbaceous wetland
classes (low, sparsely vegetated areas) [54]. This is particularly true for vegetation in the early stages
of growing, when plants have begun to grow in terms of height but have not yet developed their
canopy [53]. σ0VH observations can also be useful for monitoring wetland herbaceous vegetation.
This is because cross-polarized observations are produced by volume scattering within the vegetation
canopy and have a higher sensitivity to vegetation structures [55]. σ0HH is an ideal SAR observation for
wetland mapping due to its sensitivity to double-bounce scattering over flooded vegetation [41,56].
Furthermore, σ0HH is less sensitive to surface roughness compared to σ0VV, making the former
advantageous for discriminating water and non-water classes. In addition to SAR backscatter
coefficient images, a number of other polarimetric features were also extracted and used in this
study. Table 2 presents the polarimetric features extracted from the dual-pol VV/VH and HH/HV
Sentinel-1 images employed in this study. Figure 3a illustrates the span feature, extracted from
HH/HV data, for the Island of Newfoundland.
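With only the two intensity channels of each dual-pol pair available, the polarimetric features of Table 2 reduce to simple per-pixel arithmetic. A NumPy sketch with hypothetical intensity values (not real Sentinel-1 data):

```python
# Sketch of the dual-pol features in Table 2, computed from linear
# intensities |S_VV|^2 and |S_VH|^2. The values below are hypothetical.
import numpy as np

i_vv = np.array([0.040, 0.120])  # co-polarized intensity, |S_VV|^2
i_vh = np.array([0.010, 0.020])  # cross-polarized intensity, |S_VH|^2

span = i_vv + i_vh   # span, or total scattering power
diff = i_vv - i_vh   # difference of co- and cross-polarized power
ratio = i_vv / i_vh  # co-/cross-polarized power ratio

print(span, diff, ratio)
```

The same three expressions apply to the HH/HV pair by substituting |S_HH|^2 and |S_HV|^2.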
Table 2. A description of extracted features from SAR and optical imagery.

Data         Feature Description                                            Formula
Sentinel-1   vertically transmitted, vertically received SAR                σ0VV
             backscattering coefficient
             vertically transmitted, horizontally received SAR              σ0VH
             backscattering coefficient
             horizontally transmitted, horizontally received SAR            σ0HH
             backscattering coefficient
             horizontally transmitted, vertically received SAR              σ0HV
             backscattering coefficient
             span or total scattering power                                 |SVV|^2 + |SVH|^2, |SHH|^2 + |SHV|^2
             difference between co- and cross-polarized observations        |SVV|^2 - |SVH|^2, |SHH|^2 - |SHV|^2
             ratio                                                          |SVV|^2 / |SVH|^2, |SHH|^2 / |SHV|^2
Sentinel-2   spectral bands 2 (blue), 3 (green), 4 (red) and 8 (NIR)        B2, B3, B4, B8
             the normalized difference vegetation index (NDVI)              (B8 - B4) / (B8 + B4)
             the normalized difference water index (NDWI)                   (B3 - B8) / (B3 + B8)
             modified soil-adjusted vegetation index 2 (MSAVI2)             (2B8 + 1 - sqrt((2B8 + 1)^2 - 8(B8 - B4))) / 2
Figure 3. Three examples of extracted features for land cover classification in this study. The multi-year
summer composite of (a) span feature extracted from HH/HV Sentinel-1 data, (b) normalized difference
vegetation index (NDVI), and (c) normalized difference water index (NDWI) features extracted from
Sentinel-2 data.
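The three Sentinel-2 indices in Table 2 are simple per-pixel band computations; a NumPy sketch with hypothetical reflectance values (not taken from the study's imagery):

```python
# Sketch of the optical indices in Table 2. Band reflectances are
# hypothetical single-pixel values for illustration only.
import numpy as np

b3 = np.array([0.10])  # green (band 3)
b4 = np.array([0.05])  # red (band 4)
b8 = np.array([0.45])  # near-infrared (band 8)

ndvi = (b8 - b4) / (b8 + b4)
ndwi = (b3 - b8) / (b3 + b8)
msavi2 = (2 * b8 + 1 - np.sqrt((2 * b8 + 1) ** 2 - 8 * (b8 - b4))) / 2

print(ndvi, ndwi, msavi2)
```

As expected for a vegetated, non-water pixel, NDVI is strongly positive while NDWI is negative.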
2.3.2. Optical Imagery
Creating a 10 m cloud-free Sentinel-2 composite for the Island of Newfoundland over a short period
of time (e.g., one month) is a challenging task due to chronic cloud cover. Accordingly, the Sentinel-2
composite was created for three-months between June and August, during the leaf-on season for 2016,
2017 and 2018. This time period was selected since it provided the most cloud-free data and allowed
for maximum wall-to-wall data coverage. Furthermore, explicit wetland phenological information
could be preserved by compositing data acquired during this time period. Accordingly, monthly
composites and a multi-year summer composite were used to obtain cloud-free or near-cloud-free
wall-to-wall coverage.
Both Sentinel-2A and Sentinel-2B Level-1C data were used in this study. There were a total of 343,
563 and 1345 images in the summer of 2016, 2017 and 2018, respectively. The spatial distribution of all
Sentinel-2 observations during the summers of 2016, 2017 and 2018 are illustrated in Figure 4a. Notably,
a number of these observations were affected by cloud coverage. Figure 4b depicts the percentage
of cloud cover distribution during these time periods. To mitigate the limitation that arises due to
cloud cover, we applied a selection criterion on cloud percentage (<20%) when producing our cloud-free
composite. Next, the QA60 bitmask band (a quality flag band) provided in the metadata was used to
detect and mask out clouds and cirrus. Sentinel-2 has 13 spectral bands at various spatial resolutions,
including four bands at 10 m, six at 20 m, and three bands at 60 m spatial resolution. For this study,
only blue (0.490 µm), green (0.560 µm), red (0.665 µm), and near-infrared (NIR, 0.842 µm) bands were
used. This is because the optical indices selected in this study are based on the above-mentioned
optical bands (see Table 2) and, furthermore, all these bands are at a spatial resolution of 10 m.
Figure 4. (a) Spatial distribution of Sentinel-2 observations (total observations) during summers of
2016, 2017 and 2018 and (b) the number of observations affected by varying degrees of cloud cover (%)
in the study area for each summer.
In addition to optical bands (2, 3, 4 and 8), NDVI, NDWI and MSAVI2 indices were also extracted
(see Table 2). NDVI is one of the most well-known and commonly used vegetation indices for the
characterization of vegetation phenology (seasonal and inter-annual changes). Using the ratioing
operation (see Table 2), NDVI reduces several multiplicative noise sources, such as sun illumination
differences, cloud shadows, as well as some atmospheric attenuation and topographic variations,
within various bands of multispectral satellite images [57]. NDVI is sensitive to photosynthetically
active biomasses and can discriminate vegetation/non-vegetation, as well as wetland/non-wetland
classes. NDWI is also useful, since it is sensitive to open water and can discriminate water from land.
Notably, NDWI can be extracted using different bands of multispectral data [58], such as green and
shortwave infrared (SWIR) [59], red and SWIR [60], as well as green and NIR [61]. Although some
studies reported the superiority of SWIR for extracting the water index due to its lower sensitivity
to the sub-pixel non-water component [58], we used the original NDWI index proposed by [61] in
this study. This is because it should provide accurate results at our target resolution and, moreover,
it uses green and NIR bands of Sentinel-2 data, both of which are at a 10 m spatial resolution. Finally,
MSAVI2 was used because it addresses the limitations of NDVI in areas with a high degree of exposed
soil surface. Figure 3b,c demonstrates the multi-year summer composite of NDVI and NDWI features
extracted from Sentinel-2 optical imagery.
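For a single pixel, the three indices follow directly from the formulas in Table 2. A small sketch with hypothetical surface-reflectance values (scaled to 0–1):

```python
import math

# Hypothetical Sentinel-2 surface reflectances: green (B3), red (B4), NIR (B8).
b3, b4, b8 = 0.08, 0.05, 0.40

ndvi = (b8 - b4) / (b8 + b4)    # vegetation index (Table 2)
ndwi = (b3 - b8) / (b3 + b8)    # green/NIR water index of [61]
msavi2 = (2 * b8 + 1 - math.sqrt((2 * b8 + 1) ** 2 - 8 * (b8 - b4))) / 2

print(round(ndvi, 3), round(ndwi, 3), round(msavi2, 3))
```

Vegetated pixels push NDVI toward +1 and NDWI toward −1; open water reverses both signs, which is what makes NDWI useful for the shallow-water class.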
2.4. Multi-Year Monthly and Summer Composite
Although several studies have used the Landsat archive to generate nearly-cloud-free Landsat
composites of large areas (e.g., [62–64]), to the best of our knowledge, such compositing has not
yet been thoroughly examined for Sentinel-2 data. This is unfortunate, since the latter data offer both
improved temporal and spatial resolution relative to Landsat imagery, making them advantageous for
producing high-resolution land cover maps on a large scale. For example, Roy et al. (2010) produced
monthly, seasonal, and yearly composites using maximum NDVI and brightness temperature
obtained from Landsat data for the conterminous United States [64]. Recent studies have also used
different compositing approaches, such as seasonal [62] and yearly [63] composites obtained from
Landsat data in their analysis.
In this study, two different types of image composites were generated: Multi-year monthly
and summer composites. Due to the prevailing cloudy and rainy weather conditions in the study
area, it was impossible to collect sufficient cloud-free optical data to generate a full-coverage monthly
composite of Sentinel-2 data for classification purposes. However, we produced the monthly composite
(optical) for spectral signature analysis to identify the month during which the most semantic
information of wetland classes could be obtained. A multi-year summer composite was produced to
capture explicit phenological information appropriate for wetland mapping. As suggested by recent
research [65], the multi-year spring composite is advantageous for wetland mapping in Canada's
boreal regions. This is because such time-series data capture within-year surface variation. However,
in this study, the multi-year summer composite was used given that the leaf-on season begins in late
spring/early summer on the Island of Newfoundland.
Leveraging the GEE composite function, 10 m wall-to-wall, cloud-free composites of Sentinel-2
imagery, comprising original optical bands (2, 3, 4 and 8), NDVI, NDWI, and MSAVI2 indices, across
the Island of Newfoundland were produced. SAR features, including σ⁰_VV, σ⁰_VH, σ⁰_HH, σ⁰_HV, span, ratio,
and difference between co- and cross-polarized SAR features (see Table 2), were also stacked using
GEE’s array-based computational approach. Specifically, each monthly and summer season group of
images were stacked into a single median composite on a per-pixel, per band basis.
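The per-pixel, per-band median compositing described above can be mimicked outside GEE. A toy numpy sketch with three hypothetical acquisitions of one band, where cloud-masked pixels are set to NaN so the median ignores them:

```python
import numpy as np

# Toy time stack: 3 acquisitions of a 2 x 2 single-band tile (hypothetical values).
stack = np.array([
    [[0.10, 0.30], [0.20, 0.25]],
    [[0.12, np.nan], [0.22, 0.27]],   # NaN = cloud-masked observation
    [[0.50, 0.32], [0.18, 0.26]],     # first pixel is a bright outlier (haze)
])

# Median over the time axis, ignoring masked values: the composite is robust
# to residual clouds and haze that a mean would smear into the result.
composite = np.nanmedian(stack, axis=0)
print(composite)
```

This robustness to outliers is the main reason a median, rather than a mean, is the usual choice for such composites.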
2.5. Separability Between Wetland Classes
In this study, the separability between wetland classes was determined both qualitatively, using
box-and-whiskers plots, and quantitatively, using Jeffries–Matusita (JM) distance. The JM distance
indicates the average distance between the density function of two classes [66]. It uses both the first
order (mean) and second order (variance) statistical variables from the samples and has been illustrated
to be an efficient separability measure for remote sensing data [67,68]. Given normal distribution
assumptions, the JM distance between two classes is represented as
JM = 2(1 − e^(−B))    (1)
where B is the Bhattacharyya (BH) distance given by
B = (1/8)(μi − μj)^T [(Σi + Σj)/2]^(−1) (μi − μj) + (1/2) ln( |(Σi + Σj)/2| / √(|Σi||Σj|) )    (2)
where µi and Σi are the mean and covariance matrix of class i and µ j and Σ j are the mean and
covariance matrix of class j. The JM distance varies between 0 and 2, with values that approach 2
demonstrating a greater average distance between two classes. In this study, the separability analysis
was limited to extracted features from optical data. This is because a detailed backscattering analysis
of wetland classes using multi-frequency SAR data, including X-, C-, and L-band, has been presented
in our previous study [19].
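Equations (1) and (2) are straightforward to evaluate numerically. A sketch for two hypothetical Gaussian classes (numpy only; not the separability code used in this study):

```python
import numpy as np

def jm_distance(mu_i, cov_i, mu_j, cov_j):
    """Jeffries-Matusita distance between two Gaussian classes, Eqs. (1)-(2)."""
    d = np.asarray(mu_i, float) - np.asarray(mu_j, float)
    cov_avg = (cov_i + cov_j) / 2.0
    # Bhattacharyya distance, Eq. (2): a Mahalanobis-like mean term plus a
    # covariance-shape term.
    b = d @ np.linalg.inv(cov_avg) @ d / 8.0 + 0.5 * np.log(
        np.linalg.det(cov_avg) / np.sqrt(np.linalg.det(cov_i) * np.linalg.det(cov_j))
    )
    return 2.0 * (1.0 - np.exp(-b))   # Eq. (1), bounded by [0, 2]

cov = 0.01 * np.eye(2)
print(jm_distance([0.1, 0.2], cov, [0.1, 0.2], cov))   # identical classes -> 0
print(jm_distance([0.1, 0.2], cov, [0.8, 0.9], cov))   # well separated -> near 2
```

The saturation of Equation (1) toward 2 is why JM values are preferred over the raw Bhattacharyya distance as a separability measure.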
2.6. Classification Scheme
2.6.1. Random Forest
In this study, the random forest (RF) algorithm was used for both pixel-based and object-based
wetland classifications. RF is a non-parametric classifier, comprised of a group of tree classifiers,
and is able to handle high dimensional remote sensing data [69]. It is also more robust compared
to the DT algorithm and easier to execute relative to SVM [23]. RF uses bootstrap aggregating
(bagging) to produce an ensemble of decision trees by using a random sample from the given training
data, and determines the best splitting of the nodes by minimizing the correlation between trees.
Assigning a label to each pixel is based on the majority vote of trees. RF can be tuned by adjusting
two input parameters [70], namely the number of trees (Ntree), which is generated by randomly
selecting samples from the training data, and the number of variables (Mtry), which is used for
tree node splitting [71]. In this study, these parameters were selected based on (a) direction from
previous studies (e.g., [56,69,72]) and (b) a trial-and-error approach. Specifically, Mtry was assessed
for the following values (when Ntree was adjusted to 500): (a) One third of the total number of input
features; (b) the square root of the total number of input features; (c) half of the total number of input
features; (d) two thirds of the total number of input features; and (e) the total number of input features.
This resulted in marginal or no influence on the classification accuracies. Accordingly, the square root
of the total number of variables was selected for Mtry, as suggested by [71]. Next, by adjusting the
optimal value for Mtry, the parameter Ntree was assessed for the following values: (a) 100; (b) 200;
(c) 300; (d) 400; (e) 500; and (f) 600. A value of 400 was then found to be appropriate in this study,
as error rates for all classification models were constant beyond this point. The 601 training polygons
in different categories were used to train the RF classifier on the GEE platforms (see Table 1).
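The same tuning can be reproduced with any RF implementation. A sketch with scikit-learn as a stand-in for GEE's classifier, on synthetic data, where `max_features` plays the role of Mtry and `n_estimators` the role of Ntree:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the training samples: 300 pixels, 7 features, 2 classes.
X = rng.normal(size=(300, 7))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Mtry = sqrt(number of features) and Ntree = 400, the values selected above.
rf = RandomForestClassifier(n_estimators=400, max_features="sqrt", random_state=0)
rf.fit(X, y)
print(rf.score(X, y))   # accuracy of the fitted ensemble on its training data
```

In practice the Ntree sweep (100–600) is a loop over `n_estimators`, with accuracy evaluated on held-out polygons rather than on the training data as in this toy sketch.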
2.6.2. Simple Non-Iterative Clustering (SNIC) Superpixel Segmentation
Conventional pixel-based classification algorithms rely on the exclusive use of the spectral/
backscattering value of each pixel in their classification scheme. This results in “salt and pepper”
noise in the final classification map, especially when high-resolution images are employed [73].
An object-based algorithm, however, can mitigate the problem that arises during such image processing
by taking into account the contextual information within a given imaging neighborhood [74]. Image
segmentation divides an image into regions or objects based on specific parameters (e.g., geometric
features and scaled topological relations). In this study, the simple non-iterative clustering (SNIC) algorithm
was selected for superpixel segmentation (i.e., object-based) analysis [75]. The algorithm starts by
initializing centroid pixels on a regular grid in the image. Next, the dependency of each pixel relative
to the centroid is determined using its distance in the five-dimensional space of color and spatial
coordinates. In particular, the distance integrates normalized spatial and color distances to produce
effective, compact and approximately uniform superpixels. Notably, there is a trade-off between
compactness and boundary continuity, wherein larger compactness values result in more compact
superpixels and, thus, poor boundary continuity. SNIC uses a priority queue of candidate pixels that
are 4- or 8-connected to the currently growing superpixel cluster to select the next pixels to join the cluster.
The candidate pixel is selected based on the smallest distance from the centroid. The algorithm takes
advantage of both priority queue and online averaging to evolve the centroid once each new pixel
is added to the given cluster. Accordingly, SNIC is superior relative to similar clustering algorithms
(e.g., Simple Linear Iterative Clustering) in terms of both memory and processing time. This is
attributed to the introduction of connectivity (4- or 8-connected pixels) that results in computing fewer
distances during centroid evolution [75].
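The priority-queue growth with online centroid averaging can be condensed into a toy implementation. The following is a simplified sketch of the SNIC idea (single-band image, 4-connectivity, squared distances), not the reference implementation of [75]:

```python
import heapq
import numpy as np

def snic_sketch(image, seeds, compactness=1.0):
    """Grow superpixels from seed centroids with a global priority queue."""
    h, w = image.shape
    labels = -np.ones((h, w), dtype=int)
    # Running sums per centroid: [sum_intensity, sum_y, sum_x, count].
    cent = [[0.0, 0.0, 0.0, 0.0] for _ in seeds]
    heap = [(0.0, y, x, k) for k, (y, x) in enumerate(seeds)]
    heapq.heapify(heap)
    while heap:
        _, y, x, k = heapq.heappop(heap)
        if labels[y, x] != -1:          # already claimed by a closer cluster
            continue
        labels[y, x] = k
        c, cy, cx, n = cent[k]          # online centroid update on joining
        cent[k] = [c + image[y, x], cy + y, cx + x, n + 1]
        c, cy, cx, n = cent[k]
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                dc = image[ny, nx] - c / n                      # color term
                ds = (ny - cy / n) ** 2 + (nx - cx / n) ** 2    # spatial term
                heapq.heappush(heap, (dc * dc + compactness * ds, ny, nx, k))
    return labels

# Two flat intensity regions: each seed should claim one half of the image.
img = np.zeros((4, 8))
img[:, 4:] = 1.0
print(snic_sketch(img, seeds=[(2, 1), (2, 6)], compactness=0.01))
```

A larger `compactness` weights the spatial term more heavily, giving squarer superpixels at the cost of boundary adherence, exactly the trade-off noted above.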
2.6.3. Evaluation Indices
Four evaluation indices, including overall accuracy (OA), Kappa coefficient, producer accuracy,
and user accuracy were measured using the 599 testing polygons held back for validation purposes
(see Table 1). Overall accuracy determines the overall efficiency of the algorithm and can be measured
by dividing the total number of correctly-labeled samples by the total number of the testing samples.
The Kappa coefficient indicates the degree of agreement between the ground truth data and the
predicted values. Producer’s accuracy represents the probability that a reference sample is correctly
identified in the classification map. User’s accuracy indicates the probability that a classified pixel in
the land cover classification map accurately represents that category on the ground [76].
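All four indices derive from the confusion matrix. A compact sketch with a hypothetical two-class matrix (rows = reference, columns = predicted):

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, Kappa, producer's and user's accuracy per class."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    oa = np.trace(cm) / n                       # correctly labeled / all samples
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement from marginals
    kappa = (oa - pe) / (1 - pe)
    producers = np.diag(cm) / cm.sum(axis=1)    # reference samples correctly found
    users = np.diag(cm) / cm.sum(axis=0)        # mapped pixels that are correct
    return oa, kappa, producers, users

cm = [[40, 10],   # hypothetical counts: reference class A
      [5, 45]]    # reference class B
oa, kappa, pa, ua = accuracy_metrics(cm)
print(oa, round(kappa, 3), pa, ua)
```

Note that producer's and user's accuracy differ whenever omission and commission errors are unbalanced, which is why both are reported for each wetland class.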
Additionally, the McNemar test [77] was employed to determine the statistically significant
differences between various classification scenarios in this study. Particularly, the main goals were to
determine: (1) Whether a statistically significant difference exists between pixel-based and object-based
classifications based on either SAR or optical data; and (2) whether a statistically significant difference
exists between object-based classifications using only one type of data (SAR or optical data) and an
integration of two types of data (SAR and optical data). The McNemar test is non-parametric and is
based on the classification confusion matrix. The test is based on a chi-square (χ2 ) distribution with
one degree of freedom [78,79] and assumes the number of correctly and incorrectly identified pixels
are equal for both classification scenarios [77],
χ² = (f12 − f21)² / (f12 + f21)    (3)
where f12 denotes the number of pixels correctly identified by the first classifier but misclassified by
the second, and f21 denotes the reverse.
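Equation (3) and its p-value (chi-square with one degree of freedom) need only the two disagreement counts. A sketch with hypothetical counts, using the closed-form survival function via `math.erfc`:

```python
import math

def mcnemar(f12, f21):
    """McNemar chi-square, Eq. (3), and its p-value for 1 degree of freedom."""
    chi2 = (f12 - f21) ** 2 / (f12 + f21)
    # Survival function of a chi-square(1) variable: P(X > chi2) = erfc(sqrt(chi2/2)).
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Hypothetical disagreement counts between two classifiers.
chi2, p = mcnemar(f12=60, f21=30)
print(chi2, round(p, 5))
```

A p-value below 0.05 rejects the null hypothesis that the two classifiers make errors at the same rate.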
2.7. Processing Platform
The GEE cloud computing platform was used for both the pixel-based and superpixel RF
classification in this study. Both Sentinel-1 and Sentinel-2 data hosted within the GEE platform
were used to construct composite images. The zonal boundaries and the reference polygons were
imported into GEE using Google fusion tables. A JavaScript API in the GEE code editor was used for
pre-processing, feature extraction, and classification in this study. Accordingly, we generated 10 m
spatial resolution wetland maps of Newfoundland for our multi-year seasonal composites of optical,
SAR, and integration of both types of data using pixel-based and object-based approaches.
3. Results
3.1. Spectral Analysis of Wetland Classes Using Optical Data
To examine the discrimination capabilities of different spectral bands and vegetation indices,
spectral analysis was performed for all wetland classes. Figures 5–7 illustrate the statistical distribution
of reflectance, NDVI, NDWI, and MSAVI2 values for the multi-year monthly composites of June, July,
and August, respectively, using box-and-whisker plots.
Figure 5. Box-and-whisker plot of the multi-year June composite illustrating the distribution of
reflectance, NDVI, NDWI, and MSAVI2 values for wetland classes obtained using pixel values extracted
from training datasets. Note that black, horizontal bars within boxes illustrate median values,
boxes demonstrate the lower and upper quartiles, and whiskers extend to minimum and maximum values.
As shown, all visible bands poorly distinguish spectrally similar wetland classes, especially
the bog, fen, and marsh classes. The shallow-water class, however, can be separated from other classes
using the red band in August (see Figure 7). Among the original bands, NIR represents clear
advantages when discriminating the shallow-water from other classes (see Figures 5–7), but is not
more advantageous for classifying herbaceous wetland classes. Overall, vegetation indices are
superior when separating wetland classes compared to the original bands.
Figure 6. Box-and-whisker plot of the multi-year July composite illustrating the distribution of
reflectance, NDVI, NDWI, and MSAVI2 for wetland classes obtained using pixel values extracted
from training datasets.
As illustrated in Figures 5–7, the shallow-water class is easily distinguishable from other classes
using all vegetation indices. The swamp and bog classes are also separable using the NDVI index from
all three months. Although both NDVI and MSAVI2 are unable to discriminate herbaceous wetland
classes using the June composite, the classes of bog and fen are distinguishable using the NDVI index
obtained from the July and August composites.
Figure 7. Box-and-whisker plot of the multi-year August composite illustrating the distribution of
reflectance, NDVI, NDWI, and MSAVI2 for wetland classes obtained using pixel values extracted
from training datasets.
The mean JM distances obtained from the multi-year summer composite for wetland classes are
represented in Table 3.
Table 3. Jeffries–Matusita (JM) distances between pairs of wetland classes from the multi-year summer
composite for extracted optical features in this study.

Optical Features | d1 | d2 | d3 | d4 | d5 | d6 | d7 | d8 | d9 | d10
blue | 0.002 | 0.204 | 0.470 | 1.153 | 0.232 | 0.299 | 1.218 | 0.520 | 1.498 | 0.380
green | 0.002 | 0.331 | 0.391 | 0.971 | 0.372 | 0.418 | 1.410 | 0.412 | 1.183 | 0.470
red | 0.108 | 0.567 | 0.570 | 1.495 | 0.546 | 0.640 | 1.103 | 0.634 | 1.391 | 0.517
NIR | 0.205 | 0.573 | 0.515 | 1.395 | 0.364 | 0.612 | 1.052 | 0.649 | 1.175 | 1.776
NDVI | 0.703 | 0.590 | 0.820 | 1.644 | 0.586 | 0.438 | 1.809 | 0.495 | 1.783 | 1.938
NDWI | 0.268 | 0.449 | 0.511 | 1.979 | 0.643 | 0.519 | 1.792 | 0.760 | 1.814 | 1.993
MSAVI2 | 0.358 | 0.509 | 0.595 | 1.763 | 0.367 | 0.313 | 1.745 | 0.427 | 1.560 | 1.931
all | 1.098 | 1.497 | 1.561 | 1.999 | 1.429 | 1.441 | 1.999 | 1.614 | 1.805 | 1.999

Note: d1: Bog/Fen, d2: Bog/Marsh, d3: Bog/Swamp, d4: Bog/Shallow-water, d5: Fen/Marsh, d6: Fen/Swamp,
d7: Fen/Shallow-water, d8: Marsh/Swamp, d9: Marsh/Shallow-water, and d10: Swamp/Shallow-water.
According to the JM distance, shallow-water is the most separable class from other wetland
classes. In general, all wetland classes, excluding shallow-water, are hardly distinguishable from
each other using a single optical feature and, in particular, bog and fen are the least separable classes.
However, the synergistic use of all features considerably increases the separability between wetland
classes, with JM values exceeding 1.4 in most cases; however, bog and fen remain hardly discernible in
this case.
3.2. Classification
The overall accuracies (OA) and Kappa coefficients of different classification scenarios are presented
in Table 4. Overall, the classification results using optical imagery were more advantageous relative
to SAR imagery. As illustrated, the optical imagery resulted in approximately 4% improvements in
both the pixel-based and object-based approaches. Furthermore, object-based classifications were
found to be superior to pixel-based classifications using optical (~6.5% improvement) and SAR
(~6% improvements) imagery in comparative cases. It is worth noting that the accuracy assessment in
this study was carried out using the testing polygons well distributed across the whole study region.
Table 4. Overall accuracies and Kappa coefficients obtained from different classification scenarios in
this study.

Classification | Data Composite | Scenario | Overall Accuracy (%) | Kappa Coefficient
pixel-based | SAR | S1 | 73.12 | 0.68
pixel-based | Optic | S2 | 77.16 | 0.72
object-based | SAR | S3 | 79.14 | 0.74
object-based | Optic | S4 | 83.79 | 0.80
object-based | SAR + optic | S5 | 88.37 | 0.85
The McNemar test revealed that the difference between the accuracies of pixel-based and object-based
classifications was statistically significant when either SAR (p = 0.023) or optical (p = 0.012) data were
compared (see Table 5). There was also a statistically very significant difference between object-based
classifications using SAR vs. SAR/optical data (p = 0.0001) and optical vs. SAR/optical data (p = 0.008).
Table 5. The results of the McNemar test for different classification scenarios in this study.

Scenarios | χ² | p-Value
S1 vs. S3 | 5.21 | 0.023
S2 vs. S4 | 6.27 | 0.012
S3 vs. S5 | 9.27 | 0.0001
S4 vs. S5 | 7.06 | 0.008
Figure 8 demonstrates the classification maps using SAR and optical multi-year summer composites
for Newfoundland obtained from pixel- and object-based RF classifications. They illustrate the
distribution of land cover classes, including both wetland and non-wetland classes, identifiable at
a 10 m spatial resolution. In general, the classified maps indicate fine separation of all land cover
units, including bog and fen, shallow- and deep-water, and swamp and upland, as well as other land
cover types.
Figure 8. The land cover maps of Newfoundland obtained from different classification scenarios,
including (a) S1, (b) S2, (c) S3 and (d) S4 in this study.
Figure 9 depicts the confusion matrices obtained from different methods, wherein the diagonal
elements are the producer's accuracies. The user's accuracies of land cover classes using different
classification scenarios are also demonstrated in Figure 10. Overall, the classification of wetlands has
lower accuracies compared to those of the non-wetland classes. In particular, the classification of
swamp has the lowest producer's and user's accuracies among wetland (and all) classes in this study.
In contrast, the classification accuracies of bog and shallow-water are higher (both user's and
producer's accuracies) than the other wetland classes.
Figure 9. The confusion matrices obtained from different classification scenarios, including (a) S1,
(b) S2, (c) S3 and (d) S4 in this study.
Notably, all methods successfully classified the non-wetland classes with producer's accuracies
beyond 80%. Among the first four scenarios, the object-based classification using optical imagery
(i.e., S4) was the most successful approach for classifying the non-wetland classes, with producer's
and user's accuracies exceeding 90% and 80%, respectively. The wetland classes were also identified
with high accuracies in most cases (e.g., bog, fen, and shallow-water) in S4.
Figure 10. The user's accuracies for various land cover classes in different classification scenarios in
this study.
The object-based approach, due to its higher accuracies, was selected for the final classification
scheme in this study, wherein the multi-year summer SAR and optical composites were integrated
(see Figure 11).
The final land cover map is noiseless and accurately represents the distribution of all land cover
classes on a large-scale. As shown, the classes of bog and upland are the most prevalent wetland and
non-wetland classes, respectively, in the study area. These observations agree well both with field
notes recorded by biologists during the in-situ data collection and with visual analysis of aerial and
satellite imagery. Figure 11 also illustrates several insets from the final land cover map in this study.
The visual interpretation of the final classified map by ecological experts demonstrated that most land
cover classes were correctly distinguished across the study area. For example, ecological experts noted
that bogs appear as a reddish color in optical imagery (true color composite). As shown in Figure 11,
most bog wetlands are accurately identified in all zoomed areas. Furthermore, small water bodies
(e.g., small ponds) and the perimeter of deep water bodies are correctly mapped as belonging to the
shallow-water class. The upland and urban/bare land classes were also correctly distinguished.
The confusion matrix for the final classification map is illustrated in Figure 12. Despite the
presence of confusion among wetland classes, the results obtained from the multi-year SAR/optical
composite were extremely positive, taking into account the complexity of distinguishing similar
wetland classes. As shown in Figure 12, all non-wetland classes and shallow-water were correctly
identified with producer’s accuracies beyond 90%. The most similar wetland classes, namely bog and
fen, were classified with producer’s accuracies exceeding 80%. The other two wetland classes were
also correctly identified with a producer’s accuracy of 78% for marsh and 70% for swamp.
Figure 11. The final land cover map for the Island of Newfoundland obtained from the object-based
Random Forest (RF) classification using the multi-year summer SAR/optical composite. An overall
accuracy of 88.37% and a Kappa coefficient of 0.85 were achieved. A total of six insets and
their corresponding optical images (i.e., Sentinel-2) were also illustrated to appreciate some of the
classification details. Please also see Supplementary Materials for details of the final classification map.
Figure 12. The confusion matrix for the final classification map obtained from the object-based RF
classification using the multi-year summer SAR/optical composite (OA: 88.37%, K: 0.85).
4. Discussion
In general, the results of the spectral analysis demonstrated the superiority of the NIR band
compared to the visible bands (i.e., blue, green, and red) for distinguishing various wetland classes.
This was particularly true for shallow-water, which was easily separable using NIR. This is logical,
given that water and vegetation exhibit strong absorption and reflection, respectively, in this region
of the electromagnetic spectrum. NDVI was found to be the most useful vegetation index. This
finding is potentially explained by the high sensitivity of NDVI to photosynthetically active
biomasses [57]. Furthermore, the results of the spectral analysis of wetland classes indicated that class separability
using the NDVI index is maximized in July, which corresponds to the peak growing season in
Newfoundland. According to the box-and-whisker plots and the JM distances, the spectral similarities
of wetland classes are slightly concerning, as they revealed the difficulties in distinguishing similar
wetland classes using a single optical feature, which is in agreement with a previous study [80].
However, the inclusion of all optical features significantly increased the separability between
wetland classes.
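The separability analysis above can be reproduced with two small pieces: NDVI computed from the red and NIR bands, and the Jeffries–Matusita (JM) distance between two classes' feature distributions. A minimal univariate sketch, assuming Gaussian class distributions (the bog/fen NDVI values below are hypothetical, not the paper's field data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, per pixel."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def jm_distance(x, y):
    """Jeffries-Matusita distance between two 1-D feature samples,
    assuming each class is Gaussian. Ranges from 0 (identical
    distributions) to 2 (fully separable)."""
    m1, m2 = x.mean(), y.mean()
    v1, v2 = x.var(ddof=1), y.var(ddof=1)
    v = (v1 + v2) / 2
    # Bhattacharyya distance for univariate Gaussians.
    b = (m1 - m2) ** 2 / (8 * v) + 0.5 * np.log(v / np.sqrt(v1 * v2))
    return 2 * (1 - np.exp(-b))

rng = np.random.default_rng(0)
bog = rng.normal(0.55, 0.05, 500)  # hypothetical July NDVI samples
fen = rng.normal(0.65, 0.05, 500)
d = jm_distance(bog, fen)
```

With heavily overlapping distributions such as these, the JM distance stays well below 2, mirroring the bog/fen confusion reported above; it approaches 2 only for classes with clearly separated signatures.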
As shown in Figure 9, confusion errors occurred among all classes, especially those of wetlands
using the pixel-based classification approach. Notably, the highest confusion was found between the
swamp and upland classes in some cases. The upland class is characterized by dry forested land,
and swamps are specified as woody (forested) wetland. This results in similarities in both the visual
appearance and spectral/backscattering signatures for these classes. With regard to SAR signatures,
for example, the dominant scattering mechanism for both classes is volume scattering, especially
when the water table is low in swamp [81], which contributes to the misclassification between the
two. This is of particular concern when shorter wavelengths (e.g., C-band) are employed, given their
shallower penetration depth relative to that of longer wavelengths (e.g., L-band).
Confusion was also common among the herbaceous wetland classes, namely bog, fen, and marsh.
This is attributable to the heterogeneity of the landscape in the study area. As field notes suggest,
the herbaceous wetland classes were found adjacent to each other without clear cut borders, making
them hardly distinguishable. This is particularly severe for bog and fen, since both have very similar
ecological and visual characteristics. For example, both are characterized by peatlands, dominated by
ecologically similar vegetation types of Sphagnum in bogs and Graminoid in fens.
Another consideration when interpreting the classification accuracies for different wetland classes
is the availability of the training samples/polygons for the supervised classification. As shown
in Table 1, for example, bogs have a larger number of training polygons compared to the swamp
class. This is because NL has a moist and cool climate [43], which contributes to extensive peatland
formation. Accordingly, bog and fen were potentially the most visited wetland classes during in-situ
data collection. This resulted in the collection of a larger number of training samples/polygons for
these classes. On the other hand, the swamp class is usually found in physically smaller areas relative
to those of other classes; for example, in transition zones between wetland and other land cover classes.
As such, they may have been dispersed and mixed with other land cover classes, making them difficult
to distinguish by the classifier.
Comparison of the classification accuracies using optical and SAR images (i.e., S1 vs. S2 and S3 vs.
S4) indicated, according to all evaluation indices in this study, the superiority of the former relative
to the latter for wetland mapping in most cases. This suggests that the phenological variations in
vegetative productivity captured by optical indices (e.g., NDVI), as well as the contrast between water
and non-water classes captured by the NDWI index are more efficient for wetland mapping in our
study area than the extracted features from dual-polarimetric SAR data. This finding is consistent with
the results of a recent study [12] that employed optical, SAR, and topographic data for predicting the
probability of wetland occurrence in Alberta, Canada, using the GEE platform. However, it should be
acknowledged that the lower success of SAR compared to optical data is, at least, partially related to the
fact that the Sentinel-1 sensor does not collect full-polarimetric data at the present time. This hinders
the application of advanced polarimetric decomposition methods that demand full-polarimetric data.
Several studies highlighted the great potential of polarimetric decomposition methods for identifying
similar wetland classes by characterizing their various scattering mechanisms using such advanced
approaches [19,56].
Despite the superiority of optical data relative to SAR, the highest classification accuracy was
obtained by integrating multi-year summer composites of SAR and optical imagery using the
object-based approach (see Table 4(S5)). In particular, this classification scenario demonstrates an
improvement of about 9% and 4.5% in overall accuracy compared to the object-based classification
using the multi-year summer SAR and optical composites, respectively. This is because optical
and SAR data are based on angular and range measurements, respectively, and collect information about the
chemical and physical characteristics of wetland vegetation, respectively [82]; thus, the inclusion of
both types of observations enhances the discrimination of backscattering/spectrally similar wetland
classes [41,42]. Accordingly, it was concluded that the multi-year summer SAR/optical composite is
very useful for improving overall classification accuracy by capturing chemical, biophysical, structural,
and phenological variations of herbaceous and woody wetland classes. This was later reaffirmed via the
confusion matrix (see Figure 12) of the final classification map, wherein confusion decreased compared
to classifications based on either SAR or optical data (see Figure 9). Furthermore, the McNemar
test indicated a statistically significant difference (p < 0.05) for object-based
classifications using SAR vs. optical/SAR (S3 vs. S5) and optical vs. optical/SAR (S4 vs. S5) models
(see Table 5).
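The McNemar test used for these pairwise comparisons needs only the two discordant counts: samples one classifier got right and the other wrong, and vice versa. A minimal sketch with continuity correction (the counts below are hypothetical, not the paper's actual S4/S5 tallies):

```python
import math

def mcnemar(b, c):
    """McNemar test for two classifiers evaluated on the same samples.
    b = samples correct under model 1 only, c = correct under model 2
    only (the discordant cells). Returns the chi-square statistic with
    continuity correction and its two-sided p-value (1 d.o.f.)."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Chi-square(1) survival function via the normal tail.
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: the SAR/optical model corrects 120 samples the
# optical-only model missed, while breaking 70 it had gotten right.
chi2, p = mcnemar(70, 120)
```

A p-value below 0.05, as here, indicates that the disagreement between the two maps is not symmetric chance error; with b ≈ c the test correctly reports no significant difference.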
Notably, the multi-year summer SAR/optical composite improved the producer’s accuracies of
marsh and swamp classes. Specifically, the inclusion of SAR and optical data improved the producer’s
accuracies of marsh in the final classification map by about 14% and 11% compared to the object-based
classification using SAR and optical imagery on their own, respectively. This, too, occurred to a
lesser degree for swamp, wherein the producer’s accuracies improved in the final classified map by
about 12% and 10% compared to those of object-based classified maps using optical and SAR imagery,
respectively. The accuracies for other wetland classes, namely bog and fen, were also improved
by about 4% and 5%, respectively, in this case relative to the object-based classification using the
multi-year optical composite.
Despite significant improvements in the producer’s accuracies for some wetland classes
(e.g., marsh and swamp) using the SAR/optical data composite, marginal to no improvements were
obtained in this case for the non-wetland classes compared to classification based only on optical data.
In particular, the use of SAR data does not offer substantial gains beyond the use of optical imagery
for distinguishing typical land cover classes, such as urban and deep-water, nor does it present any
clear disadvantages. Nevertheless, combining both types of observations addresses the limitation
that arises due to the inclement weather in geographic regions with near-permanent cloud cover,
such as Newfoundland. Therefore, the results reveal the importance of incorporating multi-temporal
optical/SAR data for classification of backscattering/spectrally similar land cover classes, such as
wetland complexes. Accordingly, given the complementary advantages of SAR and optical imagery,
the inclusion of both types of data still offers a potential avenue for further research in land cover
mapping on a large scale.
The results demonstrate the superiority of object-based classification compared to the pixel-based
approach in this study. This is particularly true when SAR imagery was employed, as the producer’s
accuracies for all wetland classes were lower than 70% (see Figure 9a). Despite applying speckle
reduction, residual speckle noise can remain and degrade classification accuracy. In contrast
to the pixel-based approach, object-based classification benefits from both backscattering/spectral
information, as well as contextual information within a given neighborhood. This further enhances
semantic land cover information and is very useful for the classification of SAR imagery [31].
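The core of the object-based gain described above is feature aggregation: per-pixel backscatter/spectral values are averaged within each segment, which suppresses residual speckle and folds neighborhood context into each feature vector before it reaches the classifier. A minimal numpy sketch of that aggregation step (the tiny image and segment labels are invented for illustration; the paper's actual segmentation and RF setup are described elsewhere in the text):

```python
import numpy as np

def object_features(image, segments):
    """Average each band over every segment to build one feature
    vector per image object. image: (H, W, B) float array;
    segments: (H, W) integer segment labels starting at 0."""
    h, w, b = image.shape
    labels = segments.ravel()
    n_seg = labels.max() + 1
    counts = np.bincount(labels, minlength=n_seg)
    means = np.empty((n_seg, b))
    for band in range(b):
        sums = np.bincount(labels, weights=image[..., band].ravel(),
                           minlength=n_seg)
        means[:, band] = sums / counts
    return means  # one row per object, ready for an RF classifier

# Tiny example: a 2x4 single-band image split into two segments.
img = np.array([[[1.0], [3.0], [10.0], [12.0]],
                [[2.0], [2.0], [11.0], [11.0]]])
seg = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1]])
feats = object_features(img, seg)
```

Averaging turns the noisy per-pixel values (1–3 and 10–12) into two stable object means, which is exactly why speckle-affected SAR pixels classify better at the object level.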
As noted in a previous study [83], the image mosaicking technique over a long time-period may
increase classification errors in areas of high inter-annual change, causing a signal of seasonality to
be overlooked. Although this image mosaicking technique is essential for addressing the limitation
of frequent cloud cover for land cover mapping using optical remote sensing data across a broad
spatial scale, this was mitigated in this study to a feasible extent. In particular, to diminish the
effects of multi-seasonal observations, the mosaicked image in this study was produced from the
multi-year summer composite rather than the multi-year, multi-seasonal composite. The effectiveness
of using such multi-year seasonal (e.g., either spring or summer) composites has been previously
highlighted, given the potential of such data to capture surface condition variations beneficial for
wetland mapping [65]. The overall high accuracy of this technique obtained in this study further
corroborates the value of such an approach for mapping wetlands at the provincial-level.
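The multi-year summer compositing described above amounts to a per-pixel median over all cloud-masked summer observations, so one anomalous value (e.g., an undetected cloud) does not dominate any pixel. A minimal numpy sketch of that reduction (the toy reflectance stack is invented; the study's actual compositing ran on Sentinel collections in GEE):

```python
import numpy as np

def summer_median_composite(stack):
    """Per-pixel median over a stack of cloud-masked summer scenes
    shaped (T, H, W, B), ignoring masked (NaN) observations.
    Compositing several summers fills cloud gaps while avoiding the
    seasonal mixing of a multi-season mosaic."""
    return np.nanmedian(np.asarray(stack, float), axis=0)

# Three hypothetical summer acquisitions of a 1x2 single-band tile;
# NaN marks a cloud-masked pixel, 0.90 an undetected bright cloud.
stack = np.array([[[[0.30], [np.nan]]],
                  [[[0.34], [0.20]]],
                  [[[0.90], [0.22]]]])
comp = summer_median_composite(stack)
```

The median keeps the first pixel near its typical value despite the 0.90 outlier, and the NaN-aware reduction fills the masked pixel from the remaining summers, which is the behavior that makes such composites robust for province-wide mapping.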
Although the classification accuracies obtained from our previous studies were slightly better
in some cases (e.g., [19,31]), our previous studies involve more time and resources when compared
with the current study. For example, our previous study [19] incorporated multi-frequency (X-, C-,
and L-bands), multi-polarization (full-polarimetric RADARSAT-2) SAR data to produce local-scale
wetland inventories. However, the production of such inventories demanded significant levels of labor,
in terms of data preparation, feature extraction, statistical analysis, and classification. Consequently,
updating wetland inventories using such methods on a regular basis for a large scale is tedious and
expensive. In contrast, the present study relies on open access, regularly updated remotely sensed
imagery collected by the Sentinel Missions at a 10 m spatial resolution, which is of great value for
provincial- and national-scale wetland inventory maps that can be efficiently and regularly updated.
As mentioned earlier, GEE is an ideal platform that hosts Sentinel-1 and Sentinel-2 data and offers
advanced processing functionality. This removes the need to download a large number of satellite
images, which are already available in “analysis ready” formats [34], and, as such, offers significant
time savings [84]. Despite these benefits, limitations with GEE are related to both the lack of
atmospherically-corrected Sentinel-2 data within its archive and the parallel method of the atmospheric
correction at the time of this research. This may result in uncertainty due to the bidirectional reflectance
effects caused by variations in sun, sensor, and surface geometries during satellite acquisitions [12].
Such an atmospheric correction algorithm has been carried out in local applications, such as the
estimation of forest aboveground biomass [85], using the Sentinel-2 processing toolbox. Notably,
Level-2A Sentinel-2 bottom-of-atmosphere (BOA) data that are atmospherically-corrected are of
great value for extracting the most reliable temporal and spatial information, but such data are
not yet available within GEE. Recent research, however, reported the potential of including BOA
Sentinel-2 data in the near future into the GEE archive [12]. Although the high accuracies of wetland
classifications in this study indicated that the effects of top-of-atmosphere (TOA) reflectance could be
negligible, a comparison between TOA and BOA Sentinel-2 data for wetland mapping is suggested for
future research.
In the near future, the addition of more machine learning tools and EO data to the GEE API
and data catalog, respectively, will further simplify information extraction and data processing. For
example, the availability of deep learning approaches through the potential inclusion of TensorFlow
in the GEE platform will offer unprecedented opportunities for several remote sensing tasks [13].
Currently, however, employing state-of-the-art classification algorithms across broad spatial scales
requires downloading data for additional local processing tasks and uploading data back to GEE
due to the lack of functionality for such processing at present. Downloading such a large amount
of remote sensing data is time consuming, given bandwidth limitations, and further, its processing
demands a powerful local processing machine. Nevertheless, full exploitation of deep learning
methods for mapping wetlands at hierarchical levels requires abundant, high-quality representative
training samples.
The approaches presented in this study may be extended to generate a reliable, hierarchical,
national-scale Canadian wetland inventory map and are an essential step toward global-scale wetland
mapping. However, more challenges are expected when the study area is extended to the national-scale
(i.e., Canada) with more cloud cover, more fragmented landscapes, and various dominant wetland
classes across the country [86]. Notably, the biggest challenge in producing automated, national-scale
wetland inventories is collecting a sufficient amount of high-quality training and testing samples to
support dependable coding, rapid product delivery, and accurate wetland mapping on a large scale.
Although using GEE for discriminating wetland and non-wetland samples could be useful, it is
currently inefficient for identifying hierarchical wetland ground-truth data. There are also challenges
related to inconsistency in terms of wetland definitions at the global-scale that can vary by country
(e.g., Canadian Wetland Classification System, New Zealand, and East Africa) [1]. However, given
recent advances in cloud computing and big data, these barriers are eroding and new opportunities
for more comprehensive and dynamic views of the global extent of wetlands are arising. For example,
the integration of Landsat and Sentinel data using the GEE platform will address the limitations
of cloud cover and lead to production of more accurate, finer category wetland classification maps,
which are of great benefit for hydrological and ecological monitoring of these valuable ecosystems [87].
The results of this study suggest the feasibility of generating provincial-level wetland inventories
by leveraging the opportunities offered by cloud-computing resources, such as GEE. The current
study will contribute to the production of regular, consistent, provincial-scale wetland inventory
maps that can support biodiversity and sustainable management of Newfoundland and Labrador’s
wetland resources.
5. Conclusions
Cloud-based computing resources and open-access EO data have caused a remarkable paradigm shift
in the field of land cover mapping by replacing the production of standard static maps with those
that are more dynamic and application-specific, thanks to recent advances in geospatial science.
Leveraging the computational power of the Google Earth Engine and the availability of high spatial
resolution remote sensing data collected by Copernicus Sentinels, the first detailed (category-based),
provincial-level wetland inventory map was produced in this study. In particular, multi-year summer
Sentinel-1 and Sentinel-2 data were used to map a complex series of small and large, heterogeneous
wetlands on the Island of Newfoundland, Canada, covering an approximate area of 106,000 km².
Multiple classification scenarios, including those that were pixel- versus object-based, were considered
and the discrimination capacities of optical and SAR data composites were compared. The results
revealed the superiority of object-based classification relative to the pixel-based approach. Although
classification accuracy using the multi-year summer optical composite was found to be more accurate
than the multi-year summer SAR composite, the inclusion of both types of data (i.e., SAR and optical)
significantly improved the accuracies of wetland classification. An overall classification accuracy of
88.37% was achieved using an object-based RF classification with the multi-year (2016–2018) summer
optical/SAR composite, wherein wetland and non-wetland classes were distinguished with accuracies
beyond 70% and 90%, respectively.
This study further contributes to the development of Canadian wetland inventories, characterizes
the spatial distribution of wetland classes over a previously unmapped area with high spatial resolution,
and importantly, augments previous local-scale wetland map products. Given the relatively similar
ecological characteristics of wetlands across Canada, future work could extend this study by examining
the value of the presented approach for mapping areas containing wetlands with similar ecological
characteristics and potentially those with a greater diversity of wetland classes in other Canadian
provinces and elsewhere. Further extension of this study could also focus on exploring the efficiency
of a more diverse range of multi-temporal datasets (e.g., the 30 years Landsat dataset) to detect and
understand wetland dynamics and trends over time in the province of Newfoundland and Labrador.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-4292/11/1/43/
s1: the 10 m wetland extent product, which maps a complex series of small and large wetland classes
accurately and precisely.
Author Contributions: M.M. and F.M. designed and performed the experiments, analyzed the data, and wrote
the paper. B.S., S.H., and E.G. contributed editorial input and scientific insights to further improve the paper.
All authors reviewed and commented on the manuscript.
Funding: This project was undertaken with the financial support of the Research & Development Corporation
of Government of Newfoundland and Labrador (now InnovateNL) under Grant to M. Mahdianpari (RDC
5404-2108-101) and the Natural Sciences and Engineering Research Council of Canada under Grant to B. Salehi
(NSERC RGPIN2015-05027).
Acknowledgments: Field data were collected by various organizations, including Ducks Unlimited Canada,
Government of Newfoundland and Labrador Department of Environment and Conservation, and Nature
Conservancy Canada. The authors thank these organizations for the generous financial support and for providing such
valuable datasets. The authors would like to thank the Google Earth Engine team for providing cloud-computing
resources and European Space Agency (ESA) for providing open-access data. Additionally, the authors would
like to thank anonymous reviewers for their helpful comments and suggestions.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Tiner, R.W.; Lang, M.W.; Klemas, V.V. Remote Sensing of Wetlands: Applications and Advances; CRC Press: Boca Raton, FL, USA, 2015.
2. Mitsch, W.J.; Bernal, B.; Nahlik, A.M.; Mander, Ü.; Zhang, L.; Anderson, C.J.; Jørgensen, S.E.; Brix, H. Wetlands, carbon, and climate change. Landsc. Ecol. 2013, 28, 583–597. [CrossRef]
3. Mitsch, W.J.; Gosselink, J.G. The value of wetlands: Importance of scale and landscape setting. Ecol. Econ. 2000, 35, 25–33. [CrossRef]
4. Gallant, A.L. The Challenges of Remote Monitoring of Wetlands. Remote Sens. 2015, 7, 10938–10950. [CrossRef]
5. Maxa, M.; Bolstad, P. Mapping northern wetlands with high resolution satellite images and LiDAR. Wetlands 2009, 29, 248. [CrossRef]
6. Tiner, R.W. Wetlands: An overview. In Remote Sensing of Wetlands; CRC Press: Boca Raton, FL, USA, 2015; pp. 20–35.
7. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Homayouni, S. Unsupervised Wishart Classification of Wetlands in Newfoundland, Canada Using PolSAR Data Based on Fisher Linear Discriminant Analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 305. [CrossRef]
8. Wulder, M.A.; Masek, J.G.; Cohen, W.B.; Loveland, T.R.; Woodcock, C.E. Opening the archive: How free data has enabled the science and monitoring promise of Landsat. Remote Sens. Environ. 2012, 122, 2–10. [CrossRef]
9. Xie, Y.; Sha, Z.; Yu, M. Remote sensing imagery in vegetation mapping: A review. J. Plant Ecol. 2008, 1, 9–23. [CrossRef]
10. Teluguntla, P.; Thenkabail, P.; Oliphant, A.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Yadav, K.; Huete, A. A 30-m landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2018, 144, 325–340. [CrossRef]
11. Shelestov, A.; Lavreniuk, M.; Kussul, N.; Novikov, A.; Skakun, S. Exploring Google earth engine platform for Big Data Processing: Classification of multi-temporal satellite imagery for crop mapping. Front. Earth Sci. 2017, 5, 17. [CrossRef]
12. Hird, J.N.; DeLancey, E.R.; McDermid, G.J.; Kariyeva, J. Google Earth Engine, open-access satellite data, and machine learning in support of large-area probabilistic wetland mapping. Remote Sens. 2017, 9, 1315. [CrossRef]
13. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [CrossRef]
14. Sazib, N.; Mladenova, I.; Bolten, J. Leveraging the Google Earth Engine for Drought Assessment Using Global Soil Moisture Data. Remote Sens. 2018, 10, 1265. [CrossRef]
15. Aguilar, R.; Zurita-Milla, R.; Izquierdo-Verdiguier, E.; de By, R.A. A Cloud-Based Multi-Temporal Ensemble Classifier to Map Smallholder Farming Systems. Remote Sens. 2018, 10, 729. [CrossRef]
16. de Lobo Lobo, F.; Souza-Filho, P.W.M.; de Moraes Novo, E.M.L.; Carlos, F.M.; Barbosa, C.C.F. Mapping Mining Areas in the Brazilian Amazon Using MSI/Sentinel-2 Imagery (2017). Remote Sens. 2018, 10, 1178. [CrossRef]
17. Kumar, L.; Mutanga, O. Google Earth Engine Applications since Inception: Usage, Trends, and Potential. Remote Sens. 2018, 10, 1509. [CrossRef]
18. Waske, B.; Fauvel, M.; Benediktsson, J.A.; Chanussot, J. Machine learning techniques in remote sensing data analysis. In Kernel Methods for Remote Sensing Data Analysis; Wiley Online Library: Hoboken, NJ, USA, 2009; pp. 3–24.
19. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31. [CrossRef]
20. Thanh Noi, P.; Kappas, M. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors 2018, 18, 18. [CrossRef]
21. Huang, C.; Davis, L.S.; Townshend, J.R.G. An assessment of support vector machines for land cover classification. Int. J. Remote Sens. 2002, 23, 725–749. [CrossRef]
22. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222. [CrossRef]
23. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104. [CrossRef]
24. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300. [CrossRef]
25. Whyte, A.; Ferentinos, K.P.; Petropoulos, G.P. A new synergistic approach for monitoring wetlands using Sentinels-1 and 2 data with object-based machine learning algorithms. Environ. Model. Softw. 2018, 104, 40–54. [CrossRef]
26. Pekel, J.-F.; Cottam, A.; Gorelick, N.; Belward, A.S. High-resolution mapping of global surface water and its long-term changes. Nature 2016, 540, 418. [CrossRef] [PubMed]
27. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R. High-resolution global maps of 21st-century forest cover change. Science 2013, 342, 850–853. [CrossRef] [PubMed]
28. Xiong, J.; Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Poehnelt, J.; Congalton, R.G.; Yadav, K.; Thau, D. Automated cropland mapping of continental Africa using Google Earth Engine cloud computing. ISPRS J. Photogramm. Remote Sens. 2017, 126, 225–244. [CrossRef]
29. Tsai, Y.; Stow, D.; Chen, H.; Lewison, R.; An, L.; Shi, L. Mapping Vegetation and Land Use Types in Fanjingshan National Nature Reserve Using Google Earth Engine. Remote Sens. 2018, 10, 927. [CrossRef]
30. Huang, H.; Chen, Y.; Clinton, N.; Wang, J.; Wang, X.; Liu, C.; Gong, P.; Yang, J.; Bai, Y.; Zheng, Y. Mapping major land cover dynamics in Beijing using all Landsat images in Google Earth Engine. Remote Sens. Environ. 2017, 202, 166–176. [CrossRef]
31. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Brisco, B.; Mahdavi, S.; Amani, M.; Granger, J.E. Fisher Linear Discriminant Analysis of coherency matrix for wetland classification using PolSAR imagery. Remote Sens. Environ. 2018, 206, 300–317. [CrossRef]
32. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Motagh, M.; Brisco, B. An efficient feature optimization for wetland mapping by synergistic use of SAR intensity, interferometry, and polarimetry data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 450–462. [CrossRef]
33. Ozesmi, S.L.; Bauer, M.E. Satellite remote sensing of wetlands. Wetlands Ecol. Manag. 2002, 10, 381–402. [CrossRef]
34. d’Andrimont, R.; Lemoine, G.; van der Velde, M. Targeted Grassland Monitoring at Parcel Level Using Sentinels, Street-Level Images and Field Observations. Remote Sens. 2018, 10, 1300. [CrossRef]
35. Aschbacher, J.; Milagro-Pérez, M.P. The European Earth monitoring (GMES) programme: Status and perspectives. Remote Sens. Environ. 2012, 120, 3–8. [CrossRef]
36. Bwangoy, J.-R.B.; Hansen, M.C.; Roy, D.P.; De Grandi, G.; Justice, C.O. Wetland mapping in the Congo Basin using optical and radar remotely sensed data and derived topographical indices. Remote Sens. Environ. 2010, 114, 73–86. [CrossRef]
37. Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens. 2018, 10, 1119. [CrossRef]
38. Rezaee, M.; Mahdianpari, M.; Zhang, Y.; Salehi, B. Deep convolutional neural network for complex wetland classification using optical remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3030–3039. [CrossRef]
39. Amarsaikhan, D.; Saandar, M.; Ganzorig, M.; Blotevogel, H.H.; Egshiglen, E.; Gantuyal, R.; Nergui, B.; Enkhjargal, D. Comparison of multisource image fusion methods and land cover classification. Int. J. Remote Sens. 2012, 33, 2532–2550. [CrossRef]
40. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Brisco, B. An assessment of simulated compact polarimetric SAR data for wetland classification using random Forest algorithm. Can. J. Remote Sens. 2017, 43, 468–484. [CrossRef]
41. van Beijma, S.; Comber, A.; Lamb, A. Random forest classification of salt marsh vegetation habitats using quad-polarimetric airborne SAR, elevation and optical RS data. Remote Sens. Environ. 2014, 149, 118–129. [CrossRef]
42. Zhang, J. Multi-source remote sensing data fusion: Status and trends. Int. J. Image Data Fusion 2010, 1, 5–24. [CrossRef]
43. Ecological Stratification Working Group. A National Ecological Framework for Canada; Agriculture and Agri-Food Canada, Research Branch, Centre for Land and Biological Resources Research, and Environment Canada, State of the Environment Directorate, Ecozone Analysis Branch: Ottawa/Hull, QC, Canada, 1996.
44. South, R. Biogeography and Ecology of the Island of Newfoundland; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1983; Volume 48, ISBN 9061931010.
45. Meades, S.J. Ecoregions of Newfoundland and Labrador; Parks and Natural Areas Division, Department of Environment and Conservation, Government of Newfoundland and Labrador: Corner Brook, NL, Canada, 1990.
46. Zhang, X.; Wu, B.; Ponce-Campos, G.; Zhang, M.; Chang, S.; Tian, F. Mapping up-to-Date Paddy Rice Extent at 10 M Resolution in China through the Integration of Optical and Synthetic Aperture Radar Images. Remote Sens. 2018, 10, 1200. [CrossRef]
Remote Sens. 2019, 11, 43
47. Marshall, I.B.; Schut, P.; Ballard, M. A National Ecological Framework for Canada: Attribute Data; Environmental Quality Branch, Ecosystems Science Directorate, Environment Canada and Research Branch, Agriculture and Agri-Food Canada: Ottawa, ON, Canada, 1999.
48. Sentinel-1 Observation Scenario—Planned Acquisitions—ESA. Available online: https://sentinel.esa.int/web/sentinel/missions/sentinel-1/observation-scenario (accessed on 13 November 2018).
49. Sentinel-1 Algorithms. Google Earth Engine API. Google Developers. Available online: https://developers.google.com/earth-engine/sentinel1 (accessed on 13 November 2018).
50. Gauthier, Y.; Bernier, M.; Fortin, J.-P. Aspect and incidence angle sensitivity in ERS-1 SAR data. Int. J. Remote Sens. 1998, 19, 2001–2006. [CrossRef]
51. Lee, J.-S.; Wen, J.-H.; Ainsworth, T.L.; Chen, K.-S.; Chen, A.J. Improved sigma filter for speckle filtering of SAR imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 202–213.
52. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F. The effect of PolSAR image de-speckling on wetland classification: Introducing a new adaptive method. Can. J. Remote Sens. 2017, 43, 485–503. [CrossRef]
53. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Brisco, B.; Motagh, M. Multi-temporal, multi-frequency, and multi-polarization coherence and SAR backscatter analysis of wetlands. ISPRS J. Photogramm. Remote Sens. 2018, 142, 78–93. [CrossRef]
54. Baghdadi, N.; Bernier, M.; Gauthier, R.; Neeson, I. Evaluation of C-band SAR data for wetlands mapping. Int. J. Remote Sens. 2001, 22, 71–88. [CrossRef]
55. Steele-Dunne, S.C.; McNairn, H.; Monsivais-Huertero, A.; Judge, J.; Liu, P.-W.; Papathanassiou, K. Radar remote sensing of agricultural canopies: A review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2249–2273. [CrossRef]
56. de Almeida Furtado, L.F.; Silva, T.S.F.; de Moraes Novo, E.M.L. Dual-season and full-polarimetric C band SAR assessment for vegetation mapping in the Amazon várzea wetlands. Remote Sens. Environ. 2016, 174, 212–222. [CrossRef]
57. Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective 2/e; Pearson Education: Delhi, India, 2009.
58. Ji, L.; Zhang, L.; Wylie, B. Analysis of dynamic thresholds for the normalized difference water index. Photogramm. Eng. Remote Sens. 2009, 75, 1307–1317. [CrossRef]
59. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [CrossRef]
60. Rogers, A.S.; Kearney, M.S. Reducing signature variability in unmixing coastal marsh Thematic Mapper scenes using spectral indices. Int. J. Remote Sens. 2004, 25, 2317–2335. [CrossRef]
61. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [CrossRef]
62. Flood, N. Seasonal composite Landsat TM/ETM+ images using the medoid (a multi-dimensional median). Remote Sens. 2013, 5, 6481–6500. [CrossRef]
63. Griffiths, P.; van der Linden, S.; Kuemmerle, T.; Hostert, P. A pixel-based Landsat compositing algorithm for large area land cover mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2088–2101. [CrossRef]
64. Roy, D.P.; Ju, J.; Kline, K.; Scaramuzza, P.L.; Kovalskyy, V.; Hansen, M.; Loveland, T.R.; Vermote, E.; Zhang, C. Web-enabled Landsat Data (WELD): Landsat ETM+ composited mosaics of the conterminous United States. Remote Sens. Environ. 2010, 114, 35–49. [CrossRef]
65. Wulder, M.; Li, Z.; Campbell, E.; White, J.; Hobart, G.; Hermosilla, T.; Coops, N. A National Assessment of Wetland Status and Trends for Canada's Forested Ecosystems Using 33 Years of Earth Observation Satellite Data. Remote Sens. 2018, 10, 1623. [CrossRef]
66. Swain, P.H.; Davis, S.M. Remote sensing: The quantitative approach. IEEE Trans. Pattern Anal. Mach. Intell. 1981, 713–714. [CrossRef]
67. Padma, S.; Sanjeevi, S. Jeffries Matusita based mixed-measure for improved spectral matching in hyperspectral image analysis. Int. J. Appl. Earth Obs. Geoinf. 2014, 32, 138–151. [CrossRef]
68. Schmidt, K.S.; Skidmore, A.K. Spectral discrimination of vegetation types in a coastal wetland. Remote Sens. Environ. 2003, 85, 92–108. [CrossRef]
69. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [CrossRef]
70. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Larsen, G.; Peddle, D.R. Mapping land-based oil spills using high spatial resolution unmanned aerial vehicle imagery and electromagnetic induction survey data. J. Appl. Remote Sens. 2018, 12, 036015. [CrossRef]
71. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [CrossRef]
72. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; English, J.; Chamberland, J.; Alasset, P.-J. Monitoring surface changes in discontinuous permafrost terrain using small baseline SAR interferometry, object-based classification, and geological features: A case study from Mayo, Yukon Territory, Canada. GIScience Remote Sens. 2018, 1–26. [CrossRef]
73. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [CrossRef]
74. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [CrossRef]
75. Achanta, R.; Süsstrunk, S. Superpixels and polygons using simple non-iterative clustering. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4895–4904.
76. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [CrossRef]
77. McNemar, Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 1947, 12, 153–157. [CrossRef] [PubMed]
78. de Leeuw, J.; Jia, H.; Yang, L.; Liu, X.; Schmidt, K.; Skidmore, A.K. Comparing accuracy assessments to infer superiority of image classification methods. Int. J. Remote Sens. 2006, 27, 223–232. [CrossRef]
79. Dingle Robertson, L.; King, D.J. Comparison of pixel- and object-based classification in land cover change mapping. Int. J. Remote Sens. 2011, 32, 1505–1529. [CrossRef]
80. Adam, E.; Mutanga, O.; Rugege, D. Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review. Wetlands Ecol. Manag. 2010, 18, 281–296. [CrossRef]
81. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Brisco, B.; Motagh, M. Wetland Water Level Monitoring Using Interferometric Synthetic Aperture Radar (InSAR): A Review. Can. J. Remote Sens. 2018, 1–16. [CrossRef]
82. Chen, B.; Xiao, X.; Li, X.; Pan, L.; Doughty, R.; Ma, J.; Dong, J.; Qin, Y.; Zhao, B.; Wu, Z. A mangrove forest map of China in 2015: Analysis of time series Landsat 7/8 and Sentinel-1A imagery in Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2017, 131, 104–120. [CrossRef]
83. Kelley, L.; Pitcher, L.; Bacon, C. Using Google Earth Engine to Map Complex Shade-Grown Coffee Landscapes in Northern Nicaragua. Remote Sens. 2018, 10, 952. [CrossRef]
84. Jacobson, A.; Dhanota, J.; Godfrey, J.; Jacobson, H.; Rossman, Z.; Stanish, A.; Walker, H.; Riggio, J. A novel approach to mapping land conversion using Google Earth with an application to East Africa. Environ. Model. Softw. 2015, 72, 1–9. [CrossRef]
85. Vafaei, S.; Soosani, J.; Adeli, K.; Fadaei, H.; Naghavi, H.; Pham, T.D.; Tien Bui, D. Improving accuracy estimation of forest aboveground biomass based on incorporation of ALOS-2 PALSAR-2 and Sentinel-2A imagery and machine learning: A case study of the Hyrcanian forest area (Iran). Remote Sens. 2018, 10, 172. [CrossRef]
86. Dong, J.; Xiao, X.; Menarguez, M.A.; Zhang, G.; Qin, Y.; Thau, D.; Biradar, C.; Moore, B., III. Mapping paddy rice planting area in northeastern Asia with Landsat 8 images, phenology-based algorithm and Google Earth Engine. Remote Sens. Environ. 2016, 185, 142–154. [CrossRef] [PubMed]
87. Wulder, M.A.; White, J.C.; Masek, J.G.; Dwyer, J.; Roy, D.P. Continuity of Landsat observations: Short term considerations. Remote Sens. Environ. 2011, 115, 747–751. [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).