
WeedGan: a novel generative adversarial network for cotton weed identification

  • Original article
  • Published in The Visual Computer

Abstract

Recently, precision weed management has emerged as a promising solution for reducing the use of herbicides, which are hazardous to crops and human health. Accurate identification of weeds at an early stage is therefore an urgent need in current agricultural practice. Despite recent progress, developing an efficient weed identification system for real field scenarios remains a serious challenge. To this end, a number of deep learning-based methods have been introduced in the literature; however, these methods require large volumes of annotated images, which are rarely available. To address this gap, this paper introduces WeedGan, a novel method for generating realistic synthetic images. WeedGan adopts the concept of federated learning to reduce the computational load by introducing two discriminators, and a new loss function is defined for efficient training of the generator. Extensive experiments have been performed to validate the performance of WeedGan. First, the quality of the images generated by WeedGan was assessed on a cotton weed dataset in terms of FID score and discriminator accuracy, and compared against four state-of-the-art GAN models: DC-GAN, W-GAN, Info-GAN, and ViT-GAN. Second, classification performance on the generated data was evaluated using seven state-of-the-art transfer learning-based methods on the original, basic augmented, and WeedGan-augmented datasets. The experimental results demonstrate that WeedGan outperformed all the considered methods, achieving an FID score of 282.76. Moreover, the classification performance on the WeedGan-augmented dataset was the highest, with 97.82% testing and 99.87% training accuracy using DenseNet121.
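To make the dual-discriminator idea concrete, the following is a minimal, illustrative PyTorch sketch; it is not the authors' released code, and the toy network shapes, learning rates, and the simple averaging of the two discriminators' feedback are assumptions for exposition only. For reference, the FID metric mentioned above compares the statistics of real and generated images as FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}), where (μ, Σ) are the mean and covariance of Inception features; lower is better.

import torch
import torch.nn as nn

latent_dim = 100
img_numel = 3 * 64 * 64  # flattened 64x64 RGB image

# Toy generator: noise vector -> flattened image (illustrative only).
generator = nn.Sequential(
    nn.Linear(latent_dim, img_numel),
    nn.Tanh(),
)

def make_discriminator():
    # Toy discriminator: flattened image -> real/fake logit.
    return nn.Sequential(nn.Linear(img_numel, 1))

d1, d2 = make_discriminator(), make_discriminator()

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(list(d1.parameters()) + list(d2.parameters()), lr=2e-4)

def train_step(real):
    # real: (batch, 3, 64, 64) image tensor
    batch = real.size(0)
    real_flat = real.view(batch, -1)
    ones = torch.ones(batch, 1)
    zeros = torch.zeros(batch, 1)

    # 1) Update both discriminators on real vs. generated samples.
    fake = generator(torch.randn(batch, latent_dim)).detach()
    loss_d = sum(bce(d(real_flat), ones) + bce(d(fake), zeros)
                 for d in (d1, d2)) / 2
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Update the generator against the averaged feedback of both
    #    discriminators (a stand-in for the paper's custom generator loss).
    fake = generator(torch.randn(batch, latent_dim))
    loss_g = (bce(d1(fake), ones) + bce(d2(fake), ones)) / 2
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test with random "images":
# losses = train_step(torch.randn(8, 3, 64, 64))

Averaging the two discriminators' losses is just one simple way to combine their feedback; the paper defines its own loss function for training the generator.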




Data availability

The data that support the findings of this study are openly available at https://www.kaggle.com/datasets/yuzhenlu/cottonweedid15
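For readers who want to reproduce the data setup, a hedged sketch using the official Kaggle API client follows (this assumes `pip install kaggle` and a configured ~/.kaggle/kaggle.json API token; the target directory name is arbitrary):

from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads the API token from ~/.kaggle/kaggle.json
api.dataset_download_files("yuzhenlu/cottonweedid15", path="data", unzip=True)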


Author information

Corresponding author

Correspondence to Ashish Kumar Tripathi.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sharma, V., Tripathi, A.K., Mittal, H. et al. WeedGan: a novel generative adversarial network for cotton weed identification. Vis Comput 39, 6503–6519 (2023). https://doi.org/10.1007/s00371-022-02742-5

