
Weed mapping in multispectral drone imagery using lightweight vision transformers

Published: 28 December 2023
Abstract

    In precision agriculture, non-invasive remote sensing can be used to observe crops and weeds in the visible and non-visible spectra. This paper proposes a novel approach to weed mapping using lightweight Vision Transformers. The method uses a lightweight Transformer architecture to process high-resolution aerial images acquired by drones and performs semantic segmentation to distinguish between crops and weeds. It also employs specific architectural designs to enable transfer learning from RGB-pretrained weights in a multispectral setting. For this purpose, the WeedMap dataset, acquired by drones equipped with multispectral cameras, was used. The experimental results demonstrate the effectiveness of the proposed method, which exceeds the state of the art. Our approach also enables more efficient mapping, allowing farmers to quickly and easily identify infested areas and prioritize their control efforts. These results encourage the use of drones as versatile flying computer-vision devices for herbicide management, thereby improving crop yields. The code is available at https://github.com/pasqualedem/LWViTs-for-weedmapping.
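
    A common way to realize this kind of RGB-to-multispectral transfer is to inflate the first patch-embedding convolution: the pretrained 3-channel filters are kept, and the extra spectral bands are initialized from the channel-wise mean of the RGB filters. The following PyTorch sketch illustrates the idea; the function name, kernel shape, and mean-initialization strategy are illustrative assumptions, not necessarily the authors' exact design.

        import torch
        import torch.nn as nn

        def inflate_patch_embed(rgb_weight: torch.Tensor, in_channels: int) -> torch.Tensor:
            # Expand a pretrained RGB kernel of shape (out, 3, k, k) to
            # `in_channels` input bands; the extra bands start as the mean
            # of the RGB filters so the pretrained statistics are preserved.
            mean_filter = rgb_weight.mean(dim=1, keepdim=True)       # (out, 1, k, k)
            extra = mean_filter.repeat(1, in_channels - 3, 1, 1)     # new spectral bands
            return torch.cat([rgb_weight, extra], dim=1)             # (out, in_channels, k, k)

        # Hypothetical usage: adapt a 3-channel stem to 5 multispectral
        # bands (e.g. R, G, B, NIR, RE, as in WeedMap's RedEdge imagery).
        rgb_stem = nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3)
        ms_stem = nn.Conv2d(5, 64, kernel_size=7, stride=4, padding=3)
        with torch.no_grad():
            ms_stem.weight.copy_(inflate_patch_embed(rgb_stem.weight, in_channels=5))
            ms_stem.bias.copy_(rgb_stem.bias)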

    Highlights

    A new weed mapping approach is proposed for images captured by a drone.
    The approach uses a lightweight Transformer that performs semantic segmentation (see the fine-tuning sketch after this list).
    New techniques are proposed for transfer learning from RGB in a multispectral setting.
    The method outperforms the state-of-the-art on the WeedMap dataset.
    This technology can lead to more sustainable agricultural practices.
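
    For the segmentation model itself, a lightweight Transformer such as SegFormer can be fine-tuned for a three-class background/crop/weed task. The sketch below uses the Hugging Face transformers library; the checkpoint name, tile size, and class count are assumptions for illustration and do not reproduce the authors' training pipeline.

        import torch
        from transformers import SegformerForSemanticSegmentation

        # Hypothetical 3-class setup: background, crop, weed.
        model = SegformerForSemanticSegmentation.from_pretrained(
            "nvidia/mit-b0",              # a lightweight SegFormer-B0 backbone
            num_labels=3,
            ignore_mismatched_sizes=True,  # decode head is trained from scratch
        )

        pixel_values = torch.randn(2, 3, 512, 512)    # a batch of RGB tiles
        labels = torch.randint(0, 3, (2, 512, 512))   # per-pixel class ids

        outputs = model(pixel_values=pixel_values, labels=labels)
        outputs.loss.backward()                        # one fine-tuning step
        preds = outputs.logits.argmax(dim=1)           # logits come at 1/4 resolution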


          Published In

          Neurocomputing, Volume 562, Issue C, December 2023, 308 pages

          Publisher

          Elsevier Science Publishers B.V., Netherlands

          Author Tags

          1. Computer vision
          2. Deep learning
          3. Drones
          4. Precision agriculture
          5. Semantic segmentation
          6. Weed mapping
          7. UAV

          Qualifiers

          • Research-article
