Mapping Burned Areas in Tropical Forests Using a Novel Machine Learning Framework
Abstract
1. Introduction
2. Material: Study Area and Input Data
3. Methods: RAPT Framework and Validation Procedure
3.1. RAPT Framework
3.1.1. Stage 1: Identifying Burn-Scars from Spectral Data
3.1.2. Stage 2: Identifying Confident Burned Pixels
3.1.3. Stage 3: Spatial Growing and Final Classification
3.1.4. Generating Global Product: Additional Data Processing Considerations
3.2. Validation Procedure
3.2.1. Sampling Design
- The images should have a low scene-level cloud cover fraction according to the Landsat metadata (less than 25%).
- The post-event image should be within 100 days of the date of the burn. If the selected post-event Landsat image has more than 25% total cloud cover, the Landsat scene is excluded for that year.
- The pre-event image should be within 100 days of the date of the burn. If the selected image has more than 25% cloud cover, we look for a clean image in the same season of the previous year. If a Landsat image with less than 25% cloud cover still cannot be found, the scene is excluded for that year (a minimal sketch of this selection procedure follows this list).
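The selection logic above can be summarized in a short script. The following is a minimal sketch assuming a simple list of candidate Landsat acquisitions with scene-level cloud cover and acquisition date taken from the metadata; the `LandsatScene` record and the `pick_image` helper are illustrative names, not part of the validation tooling described in the paper.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LandsatScene:
    acq_date: date        # acquisition date from the Landsat metadata
    cloud_cover: float    # scene-level cloud cover, in percent

def pick_image(candidates, burn_date, max_cloud=25.0, window_days=100,
               allow_previous_year=False):
    """Return the candidate closest to the burn date that lies within
    `window_days` of it and has less than `max_cloud` percent cloud cover;
    optionally fall back to the same season of the previous year (used for
    pre-event images). Pre/post directionality is omitted for brevity."""
    def best_in_window(center):
        window = [s for s in candidates
                  if abs((s.acq_date - center).days) <= window_days
                  and s.cloud_cover < max_cloud]
        return min(window, key=lambda s: abs((s.acq_date - center).days),
                   default=None)

    image = best_in_window(burn_date)
    if image is None and allow_previous_year:
        image = best_in_window(burn_date - timedelta(days=365))
    return image  # None means the scene is excluded for this year
```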
3.2.2. Cloud Mask
3.2.3. Reference Data
3.2.4. Accuracy Measures
4. Results: Validation Using Landsat-Based Reference Maps and Comparison with MCD64 Product
5. Discussion of Results
5.1. Reasons for Higher Producer’s Accuracy of RAPT Compared to MCD64
- MCD64 requires manual selection of parameters: MCD64 uses a number of hand-crafted rules and parameters during pre-processing to select good quality training samples (for example, an active fire hotspot is included as a burned training sample only if the loss in vegetation exceeds a manually fixed threshold), and during classification to detect high quality (precise) burned area (for example, a manually fixed threshold on the posterior probability is used to identify confident burned locations). Since the values of these thresholds are not selected in a data-adaptive fashion, the choice of these parameters can be too conservative for the tropics.
- RAPT balances omission and commission errors: The RAPT framework performs a principled selection of the decision threshold on the class probability to maximize the product of the producer’s and user’s accuracy of the burned area product. This improves detection performance compared to previous approaches that relied on a manually fixed threshold, which (i) requires human effort and (ii) may not jointly optimize user’s and producer’s accuracy, i.e., it may increase one at the expense of the other.
- RAPT makes use of a seasonality-aware classifier: The classifier used in the RAPT approach takes into account the yearly context of the burn scar signature instead of just a per-time-step signature. The longer temporal context helps improve the precision of the detected burned area while maintaining higher coverage. Moreover, using a longer temporal context to identify fire scars makes RAPT more robust to poor data quality, as scars often remain detectable for multiple months. MCD64, on the other hand, misses burned locations due to a lack of good quality reflectance data around the time of the burn event. This problem is particularly relevant for tropical areas with high average cloud cover and where burning releases large quantities of particulate matter into the atmosphere. The MCD64 product declares some pixels on certain dates to be unburned because the data quality at those pixels in a 20-day neighborhood was too poor to make predictions. The product includes 5 bits of quality flags at a monthly scale for each pixel in the MODIS scene. Specifically, when bit 1 takes a value of 1, it indicates that there was sufficient valid data in the reflectance time series for the grid cell to be processed by MCD64. We consider this bit for 3 time steps: the time step corresponding to the only-RAPT burned event for the pixel and the next two time steps (a minimal sketch of this tally follows this list). In Table 3 we report, for each MODIS tile, the fraction of only-RAPT burned pixels that have 0, 1, 2 or 3 time steps with good quality flags according to bit 1 of the MCD64 product. We observe that about 19% of only-RAPT burned pixels have no good quality time steps around the burn date, and about 20% have only 1 good quality time step. Similarly, in Table 4 we report the fraction of MCD64 burned pixels that have 0, 1, 2 or 3 time steps with good quality flags in each MODIS tile. We observe that about 88% of MCD64 burned pixels have 2 or more good quality time steps around the burn date. Thus, overall, 39% of all locations detected only by the RAPT algorithm have poor quality data (0 or only 1 good quality time step), whereas only 12% of all locations detected by MCD64 have poor quality data, which suggests that data quality plays a role in the performance of MCD64.
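The quality-flag tally described above can be expressed as a short routine. This is a minimal sketch assuming the MCD64 QA values have already been read into a NumPy array indexed by pixel and monthly time step, and that bit 1 (counting from bit 0 as the least significant bit) is the valid-data flag; the array names are illustrative assumptions.

```python
import numpy as np

def count_good_quality_steps(qa, burn_step):
    """Count how many of the 3 monthly time steps starting at the RAPT burn
    date have MCD64 QA bit 1 set (sufficient valid reflectance data).

    qa        : (n_pixels, n_months) integer array of MCD64 QA values.
    burn_step : (n_pixels,) index of the month of the RAPT-detected burn.
    """
    n_pixels = qa.shape[0]
    good = np.zeros(n_pixels, dtype=int)
    for offset in range(3):                              # burn month + next two
        step = np.clip(burn_step + offset, 0, qa.shape[1] - 1)
        bit1 = (qa[np.arange(n_pixels), step] >> 1) & 1  # extract QA bit 1
        good += bit1
    return good                                          # values in {0, 1, 2, 3}

# Fractions of pixels with 0, 1, 2 or 3 good quality time steps (as in Tables 3 and 4):
# fractions = np.bincount(count_good_quality_steps(qa, burn_step), minlength=4) / qa.shape[0]
```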
5.2. Sources of Errors in the RAPT Product
5.2.1. Errors of Omission
- For regions where RAPT misses large patches of burned events that are present in the reference maps, the Landsat image composites often show a burn scar that is quite diffuse and lighter in intensity (see Figure 5 and Figure 6). These lighter intensity burn scars are most likely due to burns from which the vegetation recovered within a few months. The RAPT algorithm tends to miss fires with fast vegetation recovery, and these quickly recovering fire events are therefore the major source of omission errors in the RAPT product.
- The RAPT algorithm trains a separate classification model for each MODIS tile to account for geographical variations in the spectral characteristics of burned and unburned pixels. However, some MODIS tiles exhibit considerable variability within the tile itself. In such cases, the Stage 1 RAPT classifier often learns to identify the dominant type of burn spectral signature and, as a consequence, has a high omission error rate on burned pixels with non-dominant burn spectral signatures.
- Another source of omission errors in RAPT is the dependence on the Active Fire signal to identify confident burned pixels. We observe that some burn events identified by the Stage 1 classifier are incorrectly eliminated in Stage 2 because of the complete absence of Active Fire detections in the pixels of the burn event. Since none of the pixels belonging to these events have an Active Fire presence, no confident burned pixels are identified in Stage 2. As a result, the event cannot be recovered in Stage 3 (spatial growing) and is completely missed in the RAPT output (see the sketch after this list). This type of error typically affects burn events of small spatial extent, since the probability of a complete absence of the Active Fire signal is very small for large burn events.
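To make this Stage 2/Stage 3 dependence concrete, the following is a minimal sketch under the assumption that Stage 1 detections and Active Fire presence are binary maps on the same 500 m grid, and that spatial growing recovers the full connected patch around each confident pixel; the use of `scipy.ndimage.label` and the function name `stages_2_and_3` are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def stages_2_and_3(stage1_burned, active_fire):
    """Keep only Stage 1 burn patches that contain at least one Active Fire
    detection (Stage 2 seeds), then grow over the whole connected patch (Stage 3).

    stage1_burned : (rows, cols) boolean map from the Stage 1 classifier.
    active_fire   : (rows, cols) boolean Active Fire presence for the year.
    """
    # Connected patches of Stage 1 detections (8-connectivity)
    patches, _ = ndimage.label(stage1_burned, structure=np.ones((3, 3)))

    # Stage 2: a patch is confident only if it overlaps an Active Fire pixel
    seeds = stage1_burned & active_fire
    confident_ids = np.unique(patches[seeds])
    confident_ids = confident_ids[confident_ids > 0]

    # Stage 3: spatial growing recovers the full extent of each confident patch.
    # Patches with no Active Fire anywhere are dropped entirely, which produces
    # the omission errors described above (typically small events).
    return np.isin(patches, confident_ids)
```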
5.2.2. Errors of Commission
- The RAPT algorithm trains a classification model to distinguish between burned and unburned pixels within the forest land cover class. Therefore, when it is applied to MODIS pixels that are only partially forest and partially another land cover class such as cropland, it may spuriously detect the pixel as burned. We observe that some of the commission errors occur in such pixels of mixed land cover types.
- The RAPT product relies on the ability of a statistical model to distinguish between burned and unburned pixels based on their spectral data. Soil, smoke and other atmospheric conditions add considerable variability to the spectral characteristics of burned pixels, and sometimes result in spurious burn events if an Active Fire signal is present in the spatial vicinity.
5.3. Complementarity of RAPT and MCD64 Products
6. Concluding Remarks
Supplementary Materials
Acknowledgments
Author Contributions
Conflicts of Interest
Appendix A. Details of the Classification Procedure
Appendix A.1. Data Pre-Processing
- Level-2 multispectral data from MODIS (MOD09A1), along with Active Fire hotspot data (MOD14A2) and landcover data (MCD12Q1), are obtained for each MODIS tile from http://earthexplorer.usgs.gov/.
- The multispectral reflectance data and Active Fire data are available at a temporal resolution of 8 days (46 time steps per year). Landcover data are available as a yearly product from 2001 to 2012. The landcover map is used to determine the forested pixels each year. Since burning might alter the landcover signature of a pixel, we use the landcover map from the previous year to determine forested pixels. Landcover classes 1 to 5 correspond to different kinds of forests, but we train a single model for all of them. Also, since landcover maps are only available until 2012, the 2012 map is used for 2013 and 2014.
- The multispectral reflectance data and landcover data are available at a spatial resolution of 500 m, while Active Fire data are available at a spatial resolution of 1 km. Since Active Fire is used to label the 500 m resolution features for training the classifier, we downscale the Active Fire data from 1 km to 500 m resolution, i.e., at each of the 46 time steps in the year, a 500 m pixel is assigned an Active Fire detection if the corresponding 1 km pixel had an Active Fire at that time step.
- To handle errors in the MODIS landcover product, we only consider pixels that the product has consistently marked as forest for multiple years. We define a confident forest mask consisting of pixels that were marked as forest in all of the first 4 years (2001-2004). All maps are produced for pixels that belong to this confident forest set and were also marked as forest in the year preceding the year under consideration.
- Each of the 7 bands in the reflectance data is z-normalized, i.e., the mean and the standard deviation of each band are computed across all pixels (and time steps) in the tile, and the reflectance values are transformed as $z = (r - m)/s$, where $r$ is the reflectance value in a given band and $m$ and $s$ are the mean and the standard deviation of the reflectance values in that band for all pixels in the scene across time (a minimal sketch of these pre-processing steps follows this list).
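A minimal sketch of these pre-processing steps (Active Fire downscaling, confident forest masking, and per-band z-normalization), assuming the inputs have already been read into NumPy arrays; the array shapes and function names are illustrative assumptions rather than the actual RAPT code.

```python
import numpy as np

def downscale_active_fire(af_1km):
    """Replicate each 1 km Active Fire pixel onto the 2x2 block of 500 m pixels
    it covers, independently at each of the 46 time steps.
    af_1km : (46, rows, cols) boolean array."""
    return np.repeat(np.repeat(af_1km, 2, axis=1), 2, axis=2)

def confident_forest_mask(landcover, year_index):
    """Pixels marked forest (classes 1-5) in all of 2001-2004 and also forest in
    the year before the year under consideration (the 2012 map is reused for
    2013-2014).
    landcover  : (12, rows, cols) MCD12Q1 class labels, index 0 = 2001.
    year_index : index of the year under consideration (0 = 2001, ..., 13 = 2014)."""
    forest = (landcover >= 1) & (landcover <= 5)
    prev = min(max(year_index - 1, 0), forest.shape[0] - 1)
    return forest[:4].all(axis=0) & forest[prev]

def z_normalize(reflectance):
    """z-normalize each of the 7 bands over all pixels and time steps in the tile.
    reflectance : (46, 7, rows, cols) surface reflectance array."""
    m = reflectance.mean(axis=(0, 2, 3), keepdims=True)
    s = reflectance.std(axis=(0, 2, 3), keepdims=True)
    return (reflectance - m) / s
```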
Appendix A.2. Training the Classification Model
- One model is trained for each MODIS tile. The classifier operates on the set of features corresponding to all 46 time steps of a year for a given pixel and outputs the probability that the pixel experienced burn activity in that year.
- The training set consists of an equal number of samples of both classes: burned (positive) and unburned (negative). Every sample corresponds to the data for a 500 m pixel in some year of the 14-year window (2001-2014). For each sample $i$ in the training set, we provide the collection of reflectance data for all 46 time steps and a label, which is 1 if an Active Fire hotspot was observed at the pixel during that year and 0 otherwise.
- To limit confusion from boundary pixels while constructing the training set, we mask out pixels that are either at the boundary of a spatially coherent Active Fire occurrence or that belong to an Active Fire patch that is too small (fewer than ten 500 m pixels in size).
- The classification model has two parameter vectors, a per-band weight vector $\beta$ and a per-time-step weight vector $w$. Given the training set, we learn the parameters using the following procedure:
- (a) Initialize $\beta$ to an 8-dimensional vector (7-dimensional feature space + bias term) drawn from a standard Gaussian distribution (mean = 0, std. deviation = 1). Similarly, initialize $w$ to a 47-dimensional vector (46 time steps + bias term) drawn from a standard Gaussian distribution (mean = 0, std. deviation = 1).
- (b) Until the change in the magnitude of $\beta$ and $w$ from the previous iteration is less than 1%, repeat the following updates, where $\eta$ is the learning rate:
$$\beta \leftarrow \beta + \eta \frac{\partial O}{\partial \beta}, \qquad w \leftarrow w + \eta \frac{\partial O}{\partial w}.$$
Here $\frac{\partial O}{\partial \beta}$ and $\frac{\partial O}{\partial w}$ are the gradients of the objective function $O$ (the log-likelihood of the training labels $y_i$) with respect to $\beta$ and $w$, computed as
$$\frac{\partial O}{\partial \beta} = \sum_{i} (y_i - p_i) \sum_{t=1}^{46} w_t \, s_{i,t} (1 - s_{i,t}) \, x_{i,t}, \qquad \frac{\partial O}{\partial w_t} = \sum_{i} (y_i - p_i) \, s_{i,t}.$$
In the above update equations, $p_i$ is the probability of location $i$ in the training set being burned in the given year and $s_{i,t}$ is the per-time-step score for location $i$ at time step $t$. Given the values of $\beta$ and $w$ from the previous iteration, these can be computed as
$$s_{i,t} = \sigma\!\left(\beta^\top x_{i,t}\right), \qquad p_i = \sigma\!\left(\sum_{t=1}^{46} w_t \, s_{i,t} + w_0\right),$$
where $x_{i,t}$ is the 8-dimensional feature vector (7 normalized reflectance bands plus a constant bias term) of location $i$ at time step $t$ and $\sigma(z) = 1/(1 + e^{-z})$ is the logistic function.
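The training loop above can be sketched in a few lines of NumPy, assuming the two-level logistic model and a Bernoulli log-likelihood objective as described; the learning rate, array shapes and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rapt_stage1(X, y, lr=0.01, tol=0.01, max_iter=1000, seed=0):
    """Train the two-level logistic classifier sketched in Appendix A.2.

    X : (n_samples, 46, 8) array -- 7 z-normalized bands + bias per time step.
    y : (n_samples,) array of 0/1 labels derived from Active Fire hotspots.
    """
    rng = np.random.default_rng(seed)
    beta = rng.standard_normal(8)    # per-band weights + bias
    w = rng.standard_normal(47)      # per-time-step weights + bias

    for _ in range(max_iter):
        s = sigmoid(X @ beta)                # (n, 46) per-time-step scores
        p = sigmoid(s @ w[:46] + w[46])      # (n,) burn probability per pixel-year
        err = y - p                          # prediction error

        # Gradients of the Bernoulli log-likelihood objective O
        grad_w = np.concatenate([s.T @ err, [err.sum()]])
        grad_beta = ((err[:, None] * w[:46] * s * (1 - s))[:, :, None] * X).sum(axis=(0, 1))

        beta_new = beta + lr * grad_beta
        w_new = w + lr * grad_w

        # Stop when both parameter vectors change by less than 1% in magnitude
        if (np.linalg.norm(beta_new - beta) < tol * np.linalg.norm(beta) and
                np.linalg.norm(w_new - w) < tol * np.linalg.norm(w)):
            beta, w = beta_new, w_new
            break
        beta, w = beta_new, w_new
    return beta, w
```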
Appendix A.3. Prediction
- Once the model parameters have been learned, the probability that a pixel burned in a given year is given by $p = \sigma\!\left(\sum_{t=1}^{46} w_t \, \sigma(\beta^\top x_t) + w_0\right)$, where $x_t$ is the 8-dimensional feature vector of the pixel at time step $t$.
- For each forested pixel, we now have the probability that it burned during each of the 14 years from 2001-14.
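Continuing the sketch above, prediction simply applies the learned parameters to the features of every forested pixel-year; the helper below reuses the illustrative names from the training sketch.

```python
def predict_burn_probability(X, beta, w):
    """Burn probability for each pixel-year, given features X of shape (n, 46, 8)."""
    s = sigmoid(X @ beta)               # per-time-step scores
    return sigmoid(s @ w[:46] + w[46])  # probability the pixel burned that year

# Example: probabilities for every forested pixel in each year 2001-2014
# probs_by_year = {year: predict_burn_probability(features[year], beta, w)
#                  for year in range(2001, 2015)}
```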
Appendix A.4. Fixing the Threshold
- The Stage 1 classification maps for each year are a binary output obtained by thresholding the probability values at each pixel for that year. The threshold is chosen so as to maximize the product of user’s and producer’s accuracy on the training set.
- For each threshold $\tau$ in the set of choices $\{0.01, 0.02, \ldots, 0.99\}$, the product (up to a constant factor) of user’s and producer’s accuracy is estimated from the Active Fire based training labels as
$$\mathrm{UA}(\tau) \times \mathrm{PA}(\tau) = \frac{TP(\tau)}{TP(\tau) + FP(\tau)} \times \frac{TP(\tau)}{TP(\tau) + FN(\tau)} \propto \frac{TP(\tau)^2}{TP(\tau) + FP(\tau)},$$
where $TP(\tau)$, $FP(\tau)$ and $FN(\tau)$ are the numbers of true positives, false positives and false negatives at threshold $\tau$, and the proportionality holds because $TP(\tau) + FN(\tau)$ (the total number of positive training labels) does not depend on $\tau$.
- The threshold that maximizes this product is used to binarize the probability maps and produce the Stage 1 classification output (a minimal sketch of this search follows this list).
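A minimal sketch of this threshold search, assuming Stage 1 probabilities and Active Fire based labels for the training pixels are available as NumPy arrays; the exhaustive scan over 0.01, 0.02, ..., 0.99 follows the description above, while the function name is an illustrative assumption.

```python
import numpy as np

def select_threshold(probs, labels, thresholds=np.arange(0.01, 1.00, 0.01)):
    """Pick the probability threshold that maximizes the product of user's and
    producer's accuracy against the Active Fire based training labels.

    probs  : (n,) Stage 1 burn probabilities for the training pixels.
    labels : (n,) 0/1 Active Fire based training labels.
    """
    best_t, best_score = None, -np.inf
    for t in thresholds:
        pred = probs >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        if tp == 0:
            continue
        ua = tp / (tp + fp)   # user's accuracy (precision)
        pa = tp / (tp + fn)   # producer's accuracy (recall)
        if ua * pa > best_score:
            best_score, best_t = ua * pa, t
    return best_t
```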
Appendix A.5. Stage 2
Appendix A.6. Stage 3
References
ID | Tile | Landsat Scene | Pre-Date | Post-Date | Cloud Cover (%) | User’s Acc. (RAPT) | Prod’s Acc. (RAPT) | Dice Coef. (RAPT) | Overall Acc. (RAPT) | User’s Acc. (MCD64) | Prod’s Acc. (MCD64) | Dice Coef. (MCD64) | Overall Acc. (MCD64) | Number of Burned Pixels | Number of Unburned Pixels
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | h11v09 | 230066 | 18 June 2010 | 8 October 2010 | 2 | 0.63 | 0.63 | 0.63 | 0.99 | 0.86 | 0.37 | 0.52 | 0.99 | 359 | 34,408 |
2 | h12v09 | 230066 | 18 June 2010 | 8 October 2010 | 2 | 0.63 | 0.44 | 0.52 | 0.99 | 0.77 | 0.02 | 0.03 | 0.99 | 855 | 103,964 |
3 | h11v10 | 233068 | 7 June 2010 | 27 September 2010 | 28 | 0.61 | 0.29 | 0.4 | 0.93 | 0.91 | 0.16 | 0.27 | 0.93 | 5941 | 69,437 |
4 | h12v09 | 227065 | 12 June 2004 | 2 October 2004 | 28 | 0.79 | 0.6 | 0.68 | 0.98 | 0.87 | 0 | 0.01 | 0.96 | 3646 | 92,248 |
5 | h12v10 | 229070 | 26 June 2004 | 16 October 2004 | 12 | 0.78 | 0.26 | 0.39 | 0.95 | 0.83 | 0.35 | 0.49 | 0.95 | 3534 | 52,315 |
6 | h11v10 | 232070 | 16 June 2010 | 6 October 2010 | 9 | 0.29 | 0.74 | 0.42 | 0.85 | 0.34 | 0.33 | 0.33 | 0.90 | 3606 | 46,017 |
7 | h29v09 | 118062 | 17 June 2004 | 21 September 2004 | 26 | 0.35 | 0.57 | 0.44 | 0.97 | 0.41 | 0.63 | 0.5 | 0.97 | 1110 | 53,239 |
8 | h29v09 | 118062 | 14 May 2009 | 18 August 2009 | 41 | 0.32 | 0.3 | 0.31 | 0.99 | 0.24 | 0.26 | 0.25 | 0.99 | 329 | 35,745 |
9 | h11v10 | 231070 | 13 July 2005 | 1 October 2005 | 74 | 0.88 | 0.61 | 0.72 | 0.99 | 0.87 | 0.42 | 0.56 | 0.99 | 652 | 29,259 |
10 | h11v10 | 231070 | 25 June 2010 | 2 December 2010 | 83 | 0.18 | 0.99 | 0.31 | 0.94 | 0.18 | 0.46 | 0.26 | 0.96 | 198 | 14,056 |
11 | h12v09 | 228066 | 6 June 2005 | 12 October 2005 | 85 | NaN | 0 | NaN | 1 | NaN | 0 | NaN | 1 | 4 | 17,616 |
12 | h11v10 | 002067 | 8 September 2004 | 13 October 2005 | 29 | 0.7 | 0.31 | 0.43 | 0.92 | 0.79 | 0.04 | 0.07 | 0.90 | 4391 | 38,524 |
13 | h11v09 | 002067 | 8 September 2004 | 13 October 2005 | 4 | 0.58 | 0.41 | 0.48 | 0.89 | 0.78 | 0.12 | 0.2 | 0.88 | 5445 | 37,084 |
14 | h11v10 | 232068 | 4 July 2005 | 9 November 2005 | 31 | 0.28 | 0.84 | 0.42 | 0.97 | 0.21 | 0.09 | 0.13 | 0.99 | 854 | 76,257 |
15 | h11v09 | 002066 | 9 July 2005 | 13 October 2005 | 4 | 0.43 | 0.57 | 0.49 | 0.97 | 0.59 | 0.15 | 0.24 | 0.98 | 3368 | 129,531 |
16 | h12v09 | 225066 | 28 April 2010 | 21 October 2010 | 20 | 0.56 | 0.68 | 0.62 | 0.98 | 0.54 | 0.49 | 0.52 | 0.98 | 2441 | 101,855 |
17 | h30v09 | 110062 | 11 July 2001 | 18 October 2002 | 40 | 0.42 | 0.67 | 0.51 | 0.91 | 0.5 | 0.61 | 0.55 | 0.93 | 1245 | 16,808 |
18 | h10v09 | 006066 | 31 May 2004 | 23 September 2005 | 74 | NaN | 0 | NaN | 1 | NaN | 0 | NaN | 1 | 93 | 31,097 |
19 | h11v08 | 232058 | 4 April 2006 | 20 March 2007 | 13 | 0.49 | 0.41 | 0.45 | 0.95 | 0.62 | 0.06 | 0.1 | 0.95 | 2046 | 37,003 |
MODIS Tile | RAPT Only | Common | MCD64 Only |
---|---|---|---|
h10v08 | 13,040 | 326 | 3344 |
h11v08 | 51,858 | 7744 | 6011 |
h10v09 | 23,686 | 695 | 1299 |
h11v09 | 194,456 | 40,055 | 16,976 |
h12v09 | 266,621 | 107,571 | 35,199 |
h13v09 | 85,442 | 11,139 | 5405 |
h11v10 | 342,575 | 109,382 | 50,166 |
h12v10 | 254,563 | 116,862 | 97,935 |
h13v10 | 6738 | 4607 | 2514 |
h27v08 | 12,170 | 1373 | 3256 |
h28v08 | 121,498 | 20,450 | 11,927 |
h29v08 | 38,930 | 1749 | 10,859 |
h28v09 | 76,559 | 13,479 | 22,401 |
h29v09 | 94,143 | 39,015 | 32,004 |
h30v09 | 11,532 | 1493 | 1155 |
h31v09 | 1469 | 158 | 6919 |
Total | 1,595,280 | 476,098 | 307,370 |
Tile | RAPT Only | Fraction of Locations with 0 Good Quality Timesteps | Fraction of Locations with 1 Good Quality Timestep | Fraction of Locations with 2 Good Quality Timesteps | Fraction of Locations with 3 Good Quality Timesteps
---|---|---|---|---|---
h10v08 | 13,040 | 0.46 | 0.26 | 0.10 | 0.18 |
h11v08 | 51,858 | 0.06 | 0.25 | 0.13 | 0.57 |
h10v09 | 23,686 | 0.21 | 0.23 | 0.17 | 0.39 |
h11v09 | 194,456 | 0.07 | 0.12 | 0.23 | 0.58 |
h12v09 | 266,621 | 0.26 | 0.34 | 0.22 | 0.19 |
h13v09 | 85,442 | 0.51 | 0.20 | 0.13 | 0.15 |
h11v10 | 342,575 | 0.19 | 0.25 | 0.15 | 0.41 |
h12v10 | 254,563 | 0.06 | 0.08 | 0.15 | 0.71 |
h13v10 | 6738 | 0.07 | 0.20 | 0.17 | 0.57 |
h27v08 | 12,170 | 0.06 | 0.08 | 0.13 | 0.73 |
h28v08 | 121,498 | 0.23 | 0.17 | 0.20 | 0.41 |
h29v08 | 38,930 | 0.11 | 0.12 | 0.18 | 0.59 |
h28v09 | 76,559 | 0.25 | 0.21 | 0.20 | 0.33 |
h29v09 | 94,143 | 0.31 | 0.25 | 0.17 | 0.27 |
h30v09 | 11,532 | 0.00 | 0.03 | 0.07 | 0.90 |
h31v09 | 1469 | 0.28 | 0.16 | 0.20 | 0.36 |
Total | 1,595,280 | 0.1876 | 0.2045 | 0.1789 | 0.4290 |
Tile | MCD64 | Fraction of Locations with 0 Good Quality Timesteps | Fraction of Locations with 1 Good Quality Timestep | Fraction of Locations with 2 Good Quality Timesteps | Fraction of Locations with 3 Good Quality Timesteps
---|---|---|---|---|---
h10v08 | 3670 | 0.00 | 0.20 | 0.25 | 0.55 |
h11v08 | 13,755 | 0.00 | 0.14 | 0.16 | 0.71 |
h10v09 | 1994 | 0.00 | 0.07 | 0.33 | 0.60 |
h11v09 | 57,031 | 0.00 | 0.07 | 0.24 | 0.69 |
h12v09 | 142,770 | 0.00 | 0.16 | 0.41 | 0.43 |
h13v09 | 16,544 | 0.00 | 0.18 | 0.29 | 0.54 |
h11v10 | 159,548 | 0.00 | 0.10 | 0.21 | 0.68 |
h12v10 | 214,797 | 0.00 | 0.07 | 0.17 | 0.76 |
h13v10 | 7121 | 0.00 | 0.23 | 0.12 | 0.66 |
h27v08 | 4629 | 0.00 | 0.10 | 0.17 | 0.74 |
h28v08 | 32,377 | 0.00 | 0.13 | 0.21 | 0.66 |
h29v08 | 12,608 | 0.00 | 0.05 | 0.11 | 0.84 |
h28v09 | 35,880 | 0.00 | 0.27 | 0.30 | 0.42 |
h29v09 | 71,019 | 0.00 | 0.20 | 0.23 | 0.56 |
h30v09 | 2648 | 0.00 | 0.05 | 0.12 | 0.82 |
h31v09 | 7077 | 0.00 | 0.05 | 0.14 | 0.81 |
Total | 783,468 | 0 | 0.12 | 0.24 | 0.64 |