Assessment of Convolution Neural Networks for Wetland Mapping with Landsat in the Central Canadian Boreal Forest Region
Abstract
1. Introduction
Objectives
2. Materials and Methods
2.1. Study Area and Training Data Sources
2.1.1. Alberta Merged Wetland Inventory (AMWI)
2.1.2. Alberta Biodiversity Monitoring Institute Photo-Plots (ABMI_PP)
2.1.3. AMWI and ABMI_PP Class Distributions
2.2. Sampling
2.2.1. Spatial Extension Sampling Assessment
2.2.2. Regional Sampling Assessment
2.3. Landsat and Elevation Data
2.4. Machine and Deep-Learning Methods
2.4.1. CNN Configurations
2.4.2. CNN Training Strategies
2.4.3. Random Forest Comparison and Accuracy Measures
2.4.4. Applying CNNs to Generate Maps
3. Results
3.1. Examination of ResNet Depth and Input Image Size
3.2. Comparison of ResNets and MSRE with Object Size
3.3. Inclusion of Seasonal Image Composites in the MSRE
3.4. MSRE and Random Forest Comparison for Spatial Extension Accuracy Assessment
3.5. MSRE and Random Forest Comparison for Region Plot Sampling Assessment
4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
| Method | OA (6-class) | Kappa (6-class) | Mean_F1 (6-class) | Bog F1 | Fen F1 | Marsh F1 | Water F1 | Swamp F1 | Upland F1 | OA (3-class) | Kappa (3-class) | Mean_F1 (3-class) | Wet F1 | Water F1 (3-class) | Upland F1 (3-class) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MSRE | 0.624 | 0.538 | 0.601 | 0.628 | 0.480 | 0.417 | 0.952 | 0.442 | 0.689 | 0.862 | 0.749 | 0.843 | 0.889 | 0.952 | 0.689 |
| (SD) | 0.006 | 0.008 | 0.009 | 0.006 | 0.009 | 0.064 | 0.006 | 0.012 | 0.013 | 0.004 | 0.006 | 0.005 | 0.004 | 0.006 | 0.013 |
| RF_Spatial_CV | 0.605 | 0.519 | 0.583 | 0.616 | 0.488 | 0.359 | 0.955 | 0.449 | 0.632 | 0.847 | 0.718 | 0.822 | 0.878 | 0.955 | 0.632 |
| (SD) | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.004 | 0.000 | 0.001 | 0.002 | 0.000 | 0.001 | 0.001 | 0.000 | 0.000 | 0.002 |
| RF_Spatial | 0.601 | 0.512 | 0.574 | 0.610 | 0.482 | 0.318 | 0.955 | 0.447 | 0.631 | 0.847 | 0.718 | 0.822 | 0.878 | 0.955 | 0.631 |
| (SD) | 0.000 | 0.001 | 0.001 | 0.001 | 0.001 | 0.005 | 0.001 | 0.002 | 0.001 | 0.001 | 0.001 | 0.000 | 0.000 | 0.001 | 0.001 |
| RF_Pixel_CV | 0.578 | 0.485 | 0.555 | 0.605 | 0.467 | 0.316 | 0.944 | 0.430 | 0.565 | 0.827 | 0.676 | 0.791 | 0.864 | 0.944 | 0.565 |
| (SD) | 0.000 | 0.000 | 0.000 | 0.002 | 0.001 | 0.002 | 0.001 | 0.001 | 0.002 | 0.000 | 0.001 | 0.000 | 0.000 | 0.001 | 0.002 |
| RF_Pixel | 0.573 | 0.479 | 0.547 | 0.598 | 0.460 | 0.292 | 0.944 | 0.424 | 0.567 | 0.827 | 0.677 | 0.792 | 0.864 | 0.944 | 0.567 |
| (SD) | 0.001 | 0.002 | 0.002 | 0.003 | 0.003 | 0.009 | 0.001 | 0.003 | 0.002 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.002 |

Test samples: Bog 3236, Fen 5497, Marsh 2549, Water 5163, Swamp 5451, Upland 5834 (3-class: Wet 16,733, Water 5163, Upland 5834).
Training samples: Bog 3241, Fen 5428, Marsh 2272, Water 4353, Swamp 4527, Upland 5792 (3-class: Wet 15,468, Water 4353, Upland 5792).
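Mean_F1 in these tables is consistent with the unweighted average of the per-class F1 scores; for the MSRE six-class row above, for example, (0.628 + 0.480 + 0.417 + 0.952 + 0.442 + 0.689) / 6 ≈ 0.601, matching the reported Mean_F1.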
| Method | OA (6-class) | Kappa (6-class) | Mean_F1 (6-class) | Bog F1 | Fen F1 | Marsh F1 | Water F1 | Swamp F1 | Upland F1 | OA (3-class) | Kappa (3-class) | Mean_F1 (3-class) | Wet F1 | Water F1 (3-class) | Upland F1 (3-class) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MSRE | 0.683 | 0.619 | 0.673 | 0.683 | 0.438 | 0.826 | 0.944 | 0.468 | 0.679 | 0.874 | 0.746 | 0.844 | 0.907 | 0.944 | 0.679 |
| (SD) | 0.003 | 0.004 | 0.007 | 0.012 | 0.036 | 0.009 | 0.003 | 0.019 | 0.008 | 0.003 | 0.002 | 0.001 | 0.003 | 0.003 | 0.008 |
| RF_Spatial_CV | 0.650 | 0.580 | 0.641 | 0.664 | 0.418 | 0.775 | 0.937 | 0.472 | 0.577 | 0.854 | 0.692 | 0.803 | 0.896 | 0.937 | 0.577 |
| (SD) | 0.000 | 0.000 | 0.000 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.002 | 0.000 | 0.001 | 0.001 | 0.000 | 0.001 | 0.002 |
| RF_Spatial | 0.645 | 0.574 | 0.636 | 0.660 | 0.413 | 0.775 | 0.935 | 0.463 | 0.569 | 0.851 | 0.687 | 0.799 | 0.893 | 0.935 | 0.569 |
| (SD) | 0.000 | 0.000 | 0.000 | 0.001 | 0.001 | 0.002 | 0.001 | 0.003 | 0.002 | 0.001 | 0.001 | 0.001 | 0.000 | 0.001 | 0.002 |
| RF_Pixel_CV | 0.623 | 0.548 | 0.613 | 0.653 | 0.385 | 0.742 | 0.929 | 0.447 | 0.521 | 0.838 | 0.656 | 0.778 | 0.885 | 0.929 | 0.521 |
| (SD) | 0.000 | 0.000 | 0.000 | 0.001 | 0.001 | 0.002 | 0.001 | 0.001 | 0.001 | 0.000 | 0.001 | 0.000 | 0.000 | 0.001 | 0.001 |
| RF_Pixel | 0.619 | 0.543 | 0.609 | 0.649 | 0.382 | 0.741 | 0.926 | 0.440 | 0.517 | 0.836 | 0.652 | 0.775 | 0.883 | 0.926 | 0.517 |
| (SD) | 0.001 | 0.001 | 0.001 | 0.001 | 0.003 | 0.002 | 0.001 | 0.001 | 0.003 | 0.001 | 0.002 | 0.002 | 0.001 | 0.001 | 0.003 |

Test samples: Bog 7113, Fen 7683, Marsh 8000, Water 7527, Swamp 7661, Upland 7931 (3-class: Wet 30,457, Water 7527, Upland 7931).
Training samples: Bog 7346, Fen 7519, Marsh 8000, Water 7337, Swamp 7603, Upland 7957 (3-class: Wet 30,468, Water 7337, Upland 7957).
| Method | OA (6-class) | Kappa (6-class) | Mean_F1 (6-class) | Bog F1 | Fen F1 | Marsh F1 | Water F1 | Swamp F1 | Upland F1 | OA (3-class) | Kappa (3-class) | Mean_F1 (3-class) | Wet F1 | Water F1 (3-class) | Upland F1 (3-class) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MSRE | 0.686 | 0.611 | 0.691 | 0.620 | 0.591 | 0.645 | 0.966 | 0.509 | 0.814 | 0.905 | 0.821 | 0.902 | 0.925 | 0.966 | 0.814 |
| (SD) | 0.011 | 0.014 | 0.013 | 0.057 | 0.018 | 0.026 | 0.008 | 0.019 | 0.010 | 0.008 | 0.013 | 0.008 | 0.009 | 0.008 | 0.010 |
| RF_Spatial_CV | 0.666 | 0.586 | 0.663 | 0.608 | 0.569 | 0.556 | 0.963 | 0.496 | 0.786 | 0.888 | 0.791 | 0.887 | 0.911 | 0.963 | 0.786 |
| (SD) | 0.009 | 0.013 | 0.004 | 0.037 | 0.007 | 0.041 | 0.009 | 0.010 | 0.007 | 0.006 | 0.014 | 0.005 | 0.006 | 0.009 | 0.007 |
| RF_Spatial | 0.662 | 0.581 | 0.659 | 0.606 | 0.567 | 0.547 | 0.963 | 0.488 | 0.784 | 0.887 | 0.789 | 0.886 | 0.910 | 0.963 | 0.784 |
| (SD) | 0.009 | 0.013 | 0.005 | 0.038 | 0.006 | 0.044 | 0.010 | 0.008 | 0.006 | 0.006 | 0.015 | 0.005 | 0.005 | 0.010 | 0.006 |
| RF_Pixel_CV | 0.642 | 0.556 | 0.635 | 0.595 | 0.543 | 0.486 | 0.957 | 0.471 | 0.759 | 0.874 | 0.764 | 0.872 | 0.899 | 0.957 | 0.759 |
| (SD) | 0.009 | 0.013 | 0.004 | 0.029 | 0.008 | 0.053 | 0.012 | 0.006 | 0.006 | 0.006 | 0.015 | 0.006 | 0.006 | 0.012 | 0.006 |
| RF_Pixel | 0.637 | 0.550 | 0.630 | 0.590 | 0.538 | 0.474 | 0.957 | 0.463 | 0.757 | 0.873 | 0.762 | 0.871 | 0.899 | 0.957 | 0.757 |
| (SD) | 0.010 | 0.014 | 0.005 | 0.032 | 0.009 | 0.054 | 0.011 | 0.004 | 0.007 | 0.006 | 0.017 | 0.005 | 0.006 | 0.011 | 0.007 |

Test samples: Bog 2913, Fen 4613, Marsh 1289, Water 2946, Swamp 4714, Upland 4962 (3-class: Wet 13,529, Water 2946, Upland 4962).
Training samples: Bog 6914, Fen 10,433, Marsh 3520, Water 6902, Swamp 10,250, Upland 10,946 (3-class: Wet 31,117, Water 6902, Upland 10,946).
| Classified \ Reference | Bog | Fen | Marsh | Water | Swamp | Upland | Row Total | User’s Accuracy |
|---|---|---|---|---|---|---|---|---|
| Bog | 1785 | 444 | 0 | 0 | 498 | 11 | 2738 | 0.65 |
| Fen | 538 | 2797 | 197 | 6 | 1102 | 199 | 4839 | 0.58 |
| Marsh | 1 | 142 | 785 | 38 | 133 | 50 | 1149 | 0.68 |
| Water | 1 | 13 | 96 | 2878 | 11 | 11 | 3010 | 0.96 |
| Swamp | 564 | 961 | 107 | 11 | 2384 | 608 | 4635 | 0.51 |
| Upland | 24 | 256 | 104 | 13 | 586 | 4083 | 5066 | 0.81 |
| Column Total | 2913 | 4613 | 1289 | 2946 | 4714 | 4962 | | |
| Producer’s Accuracy | 0.61 | 0.61 | 0.61 | 0.98 | 0.51 | 0.82 | | |
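The accuracy measures reported above follow directly from such a confusion matrix: user’s accuracy is the diagonal count divided by the row (classified) total, producer’s accuracy is the diagonal count divided by the column (reference) total, overall accuracy is the diagonal sum divided by the grand total, and per-class F1 combines user’s and producer’s accuracy. The sketch below (assuming NumPy is available; the class ordering is taken from the table) recomputes these figures from the printed counts; the resulting overall accuracy (≈0.686) and kappa (≈0.61) are consistent with the MSRE six-class row of the preceding table, whose test-sample counts match the column totals here.

```python
import numpy as np

# Confusion matrix from the table: rows = classified (map), columns = reference.
# Class order: Bog, Fen, Marsh, Water, Swamp, Upland.
cm = np.array([
    [1785,  444,    0,    0,  498,   11],
    [ 538, 2797,  197,    6, 1102,  199],
    [   1,  142,  785,   38,  133,   50],
    [   1,   13,   96, 2878,   11,   11],
    [ 564,  961,  107,   11, 2384,  608],
    [  24,  256,  104,   13,  586, 4083],
], dtype=float)

total = cm.sum()
diag = np.diag(cm)
row_tot = cm.sum(axis=1)           # classified (map) totals
col_tot = cm.sum(axis=0)           # reference totals

users_acc = diag / row_tot         # commission-side accuracy, as in the table
producers_acc = diag / col_tot     # omission-side accuracy, as in the table

oa = diag.sum() / total                        # overall accuracy
pe = (row_tot * col_tot).sum() / total ** 2    # expected chance agreement
kappa = (oa - pe) / (1 - pe)                   # Cohen's kappa

# Per-class F1 treats user's accuracy as precision and producer's accuracy as recall.
f1 = 2 * users_acc * producers_acc / (users_acc + producers_acc)

print(f"OA = {oa:.3f}, kappa = {kappa:.3f}, mean F1 = {f1.mean():.3f}")
for name, ua, pa in zip(["Bog", "Fen", "Marsh", "Water", "Swamp", "Upland"],
                        users_acc, producers_acc):
    print(f"{name}: user's = {ua:.2f}, producer's = {pa:.2f}")
```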
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Pouliot, D.; Latifovic, R.; Pasher, J.; Duffe, J. Assessment of Convolution Neural Networks for Wetland Mapping with Landsat in the Central Canadian Boreal Forest Region. Remote Sens. 2019, 11, 772. https://doi.org/10.3390/rs11070772