Using the MSFNet Model to Explore the Temporal and Spatial Evolution of Crop Planting Area and Increase Its Contribution to the Application of UAV Remote Sensing
Abstract
1. Introduction
1.1. Remote Sensing Image Applications
1.2. Deep Learning-Based Remote Sensing Image Segmentation Technology
1.3. Main Work of This Paper
- (1) A multi-scale fusion neural network (MSFNet) is proposed that extracts local and global features of remote sensing images simultaneously and fully exploits the information contained in high-resolution imagery (an illustrative sketch of such a fusion block follows this list);
- (2) By introducing a MAML structure optimized with particle swarm optimization (PSO), good results can be achieved on new remote sensing images from different time series after fine-tuning on only a small amount of data;
- (3) The continuous time series of crop maps obtained by semantic segmentation, combined with an analysis of local low-carbon emission-reduction policies and natural climatic conditions, makes large-scale periodic crop monitoring and the formulation of planting strategies feasible;
- (4) Subsequent research can quickly obtain high-resolution image data at different heights and angles through UAV remote sensing. Combined with the methods proposed in this study, comprehensive coverage and detailed analysis of large-scale agricultural areas can be achieved.
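Contribution (1) rests on fusing fine local detail with broader spatial context. As an illustration only, the following is a minimal PyTorch sketch of a two-branch fusion block in the spirit of the cascade feature fusion described in Section 3.1.2; the actual MSFNet branch depths, channel widths, and number of cascade stages are not reproduced here, and all names and hyperparameters below are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeFusionBlock(nn.Module):
    """Illustrative two-branch fusion: a full-resolution (local) branch and a
    downsampled, dilated (global-context) branch whose features are upsampled
    and merged with a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Local branch: keeps full resolution, small receptive field.
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # Global branch: operates on a 1/4-resolution copy with dilation,
        # giving a larger effective receptive field.
        self.global_branch = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.local(x)
        small = F.interpolate(x, scale_factor=0.25, mode="bilinear",
                              align_corners=False)
        global_feat = self.global_branch(small)
        global_feat = F.interpolate(global_feat, size=local_feat.shape[-2:],
                                    mode="bilinear", align_corners=False)
        # Concatenate local and global features, then fuse channel-wise.
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))

if __name__ == "__main__":
    block = CascadeFusionBlock(in_ch=3, out_ch=16)
    y = block(torch.randn(1, 3, 256, 256))
    print(y.shape)  # torch.Size([1, 16, 256, 256])
```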
2. Study Area and Data
2.1. Study Area
2.2. Data Collection and Pre-Processing
3. Methods
3.1. Multi-Scale Fusion Network Model (MSFNet)
3.1.1. Network Architecture
3.1.2. Cascade Feature Fusion
3.2. PSML Model
3.2.1. Model-Agnostic Meta-Learning (MAML)
3.2.2. Particle Swarm Optimization to Optimize MAML
3.2.3. The Fusion of MAML and MSFNet
- (a) Assume a parameterized model fθ with parameters θ and a distribution over tasks p(T). First, randomly initialize the model parameters θ;
- (b) Several batches of tasks are randomly sampled from p(T), i.e., Ti ~ p(T); for example, three tasks T1, T2, and T3 may be selected;
- (c) Inner loop: for each task Ti drawn from p(T), sample a few data points and prepare a support (meta-training) set and a query (meta-testing) set;
- (d) Outer loop: the query (meta-test) set is used to minimize the loss with respect to the parameters obtained from the inner loop. This process derives an optimal set of initialization parameters θ* that enhance the model’s ability to adapt quickly to new tasks (a minimal code sketch of this double loop is given after this list).
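Steps (a)–(d) are the standard MAML double loop. As a minimal, self-contained sketch (assuming PyTorch ≥ 2.0 for torch.func.functional_call), the fragment below adapts a toy classifier on randomly generated tasks; the model, task sampler, learning rates, and step counts are placeholders, not the MSFNet configuration or the PSO-tuned values used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

# Toy stand-in for the segmentation backbone; the paper would use MSFNet here.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
meta_params = {k: v.detach().clone().requires_grad_(True)
               for k, v in model.named_parameters()}              # step (a): initialize theta
meta_opt = torch.optim.Adam(list(meta_params.values()), lr=1e-3)  # outer-loop rate (beta)
inner_lr = 0.01                                                    # inner-loop rate (alpha)

def sample_task(n=16):
    """Placeholder task sampler: random binary-classification data."""
    x = torch.randn(n, 10)
    y = (x.sum(dim=1) > 0).long()
    return (x[:n // 2], y[:n // 2]), (x[n // 2:], y[n // 2:])  # support set, query set

for step in range(100):                      # meta-training iterations
    meta_opt.zero_grad()
    meta_loss = torch.zeros(())
    for _ in range(3):                       # step (b): a batch of tasks T1..T3
        (xs, ys), (xq, yq) = sample_task()
        adapted = meta_params
        for _ in range(1):                   # step (c): inner-loop adaptation on the support set
            loss = F.cross_entropy(functional_call(model, adapted, (xs,)), ys)
            grads = torch.autograd.grad(loss, list(adapted.values()), create_graph=True)
            adapted = {k: p - inner_lr * g
                       for (k, p), g in zip(adapted.items(), grads)}
        # Step (d): evaluate the adapted parameters on the query (meta-test) set.
        meta_loss = meta_loss + F.cross_entropy(functional_call(model, adapted, (xq,)), yq)
    meta_loss.backward()                     # gradients flow back to the meta-initialization
    meta_opt.step()
```

In the PSML variant described in Section 3.2.2, PSO would presumably search over quantities such as the inner-loop learning rate rather than fixing them by hand; that search is not shown here.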
3.3. Experimental Setup
4. Results
5. Discussion
5.1. Analysis of Factors Influencing the Changes of Crop Area
5.2. Prediction Analysis of Crop Planting Area Change
5.3. The Advantages and Disadvantages of the Model
5.4. Contribution to UAV Remote Sensing
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Class | Number of Pixels |
|---|---|
| Alfalfa | 123,925 |
| Lettuce | 9008 |
| Sugar beets | 18,632 |
| Onions | 15,374 |
| Durum wheat | 21,452 |
| Other hay | 24,395 |
| Method | Alfalfa | Other Hay | Durum Wheat | Lettuce | Onions | Sugar Beets | Background | AA | F1 Score | mIoU |
|---|---|---|---|---|---|---|---|---|---|---|
| U-Net | 92.096% | 81.883% | 82.279% | 88.165% | 86.158% | 89.191% | 89.044% | 87.675% | 81.992% | 77.254% |
| PSPNet | 89.136% | 81.600% | 88.878% | 86.062% | 87.196% | 87.692% | 92.295% | 87.960% | 85.343% | 78.502% |
| DeepLabv3+ | 91.752% | 90.453% | 81.456% | 89.612% | 86.951% | 90.586% | 90.761% | 88.795% | 87.398% | 80.045% |
| MSFNet | 92.429% | 87.475% | 83.634% | 88.584% | 88.174% | 91.108% | 90.144% | 89.359% | 88.835% | 81.130% |
| MAML + U-Net | 93.164% | 89.162% | 83.946% | 90.432% | 87.967% | 94.852% | 91.364% | 90.126% | 88.357% | 79.521% |
| PSML + U-Net | 93.843% | 90.912% | 87.691% | 92.867% | 93.458% | 91.475% | 91.654% | 91.700% | 89.156% | 85.943% |
| MAML + MSFNet | 94.216% | 94.363% | 87.417% | 93.649% | 94.074% | 97.337% | 93.194% | 93.705% | 88.699% | 87.699% |
| BAML + MSFNet | 96.193% | 92.764% | 92.584% | 93.895% | 95.716% | 95.146% | 93.254% | 94.221% | 89.638% | 88.167% |
| PSML + MSFNet | 98.936% | 88.743% | 91.701% | 94.879% | 94.872% | 94.919% | 96.891% | 94.902% | 91.901% | 90.557% |