Leaf-Counting in Monocot Plants Using Deep Regression Models
Abstract
1. Introduction
2. Materials and Methods
2.1. Overview
2.2. Network Architecture
2.3. Image Preprocessing
2.4. Image Skeleton
2.5. Image Augmentation
- Original plant image (Figure 4a);
- Vertically compress the original image (Figure 4b);
- Horizontally compress the original image (Figure 4c);
- Vertically flip the original image (Figure 4d);
- Vertically flip and compress the original image (Figure 4e);
- Vertically flip and horizontally compress the original image (Figure 4f);
- Horizontally flip the original image (Figure 4g);
- Rotate the original image by 180 degrees (Figure 4h);
- Rotate the original image by 90 degrees clockwise (Figure 4i);
- Rotate the original image by 90 degrees clockwise and then vertically flip it (Figure 4j);
- Rotate the original image by 90 degrees counterclockwise (Figure 4k);
- Rotate the original image by 90 degrees counterclockwise and then vertically flip it (Figure 4l).
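The twelve variants above are all standard geometric transforms, so they can be sketched with plain NumPy array operations. This is a minimal illustration, not the paper's implementation: the compression factor of 0.5 and the compression axis applied after flipping in variants (e) and (f) are assumptions.

```python
import numpy as np

def compress_vertical(img, factor=0.5):
    """Vertically compress by nearest-neighbor row subsampling."""
    h = max(1, int(img.shape[0] * factor))
    rows = np.linspace(0, img.shape[0] - 1, h).astype(int)
    return img[rows]

def compress_horizontal(img, factor=0.5):
    """Horizontally compress by nearest-neighbor column subsampling."""
    w = max(1, int(img.shape[1] * factor))
    cols = np.linspace(0, img.shape[1] - 1, w).astype(int)
    return img[:, cols]

def augment_12(img):
    """Return the 12 variants (a)-(l) listed above."""
    vflip = np.flipud(img)
    return [
        img,                           # (a) original
        compress_vertical(img),        # (b) vertical compression
        compress_horizontal(img),      # (c) horizontal compression
        vflip,                         # (d) vertical flip
        compress_vertical(vflip),      # (e) flip then compress (axis assumed)
        compress_horizontal(vflip),    # (f) flip then horizontal compression
        np.fliplr(img),                # (g) horizontal flip
        np.rot90(img, 2),              # (h) 180-degree rotation
        np.rot90(img, -1),             # (i) 90 degrees clockwise
        np.flipud(np.rot90(img, -1)),  # (j) 90 CW, then vertical flip
        np.rot90(img, 1),              # (k) 90 degrees counterclockwise
        np.flipud(np.rot90(img, 1)),   # (l) 90 CCW, then vertical flip
    ]
```

Applied to every training image, this multiplies the dataset size by twelve without changing the ground-truth leaf count of any sample.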
2.6. Dataset Description
2.7. Implementation
2.8. Evaluation Metrics
2.8.1. Quantitative Evaluation Metrics
2.8.2. Qualitative Evaluation Metrics
3. Results
3.1. Experimental Dataset Configuration
3.1.1. Sorghum Dataset Configuration
3.1.2. Maize Dataset Configuration
3.2. Results from Sorghum Dataset S1 and Maize Dataset M1
3.3. Results from Sorghum Dataset S2 and Maize Dataset M2
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Farjon, G.; Itzhaky, Y.; Khoroshevsky, F.; Bar-Hillel, A. Leaf counting: Fusing network components for improved accuracy. Front. Plant Sci. 2021, 12, 1063.
- Buzzy, M.; Thesma, V.; Davoodi, M.; Mohammadpour Velni, J. Real-time plant leaf counting using deep object detection networks. Sensors 2020, 20, 6896.
- Praveen Kumar, J.; Domnic, S. Rosette plant segmentation with leaf count using orthogonal transform and deep convolutional neural network. Mach. Vis. Appl. 2020, 31, 6.
- Minervini, M.; Fischbach, A.; Scharr, H.; Tsaftaris, S. Plant Phenotyping Datasets. 2015. Available online: http://www.Plant-phenotyping.org/datasets (accessed on 4 January 2023).
- Minervini, M.; Fischbach, A.; Scharr, H.; Tsaftaris, S.A. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognit. Lett. 2016, 81, 80–89.
- Jiang, B.; Wang, P.; Zhuang, S.; Li, M.; Li, Z.; Gong, Z. Leaf counting with multi-scale convolutional neural network features and fisher vector coding. Symmetry 2019, 11, 516.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
- Cui, S.L.; Tian, F. Face recognition method based on SIFT feature and fisher. Comput. Eng. 2009, 35, 195–197.
- Miao, C.; Guo, A.; Thompson, A.M.; Yang, J.; Ge, Y.; Schnable, J.C. Automation of leaf counting in maize and sorghum using deep learning. Plant Phenome J. 2021, 4, e20022.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Palais des Congrès de Montréal, Montreal, QC, Canada, 7–12 December 2015; Volume 28.
- Xiang, L.; Bao, Y.; Tang, L.; Ortiz, D.; Salas-Fernandez, M.G. Automated morphological traits extraction for sorghum plants via 3D point cloud data analysis. Comput. Electron. Agric. 2019, 162, 951–961.
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017.
- Gaillard, M.; Miao, C.; Schnable, J.; Benes, B. Sorghum segmentation by skeleton extraction. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 296–311.
- Abed, S.H.; Al-Waisy, A.S.; Mohammed, H.J.; Al-Fahdawi, S. A modern deep learning framework in robot vision for automated bean leaves diseases detection. Int. J. Intell. Robot. Appl. 2021, 5, 235–251.
- Zhang, C.; Zhou, P.; Li, C.; Liu, L. A convolutional neural network for leaves recognition using data augmentation. In Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Liverpool, UK, 26–28 October 2015; pp. 2143–2150.
- Montavon, G.; Samek, W.; Müller, K.R. Methods for interpreting and understanding deep neural networks. Digit. Signal Process. 2018, 73, 1–15.
- Yu, Y.; Liang, S.; Samali, B.; Nguyen, T.N.; Zhai, C.; Li, J.; Xie, X. Torsional capacity evaluation of RC beams using an improved bird swarm algorithm optimised 2D convolutional neural network. Eng. Struct. 2022, 273, 115066.
- Kaliyar, R.K.; Goswami, A.; Narang, P.; Sinha, S. FNDNet—A deep convolutional neural network for fake news detection. Cogn. Syst. Res. 2020, 61, 32–44.
- Mo, H.; Chen, B.; Luo, W. Fake faces identification via convolutional neural network. In Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security, Innsbruck, Austria, 20–22 June 2018; pp. 43–47.
- Kang, K.; Li, H.; Yan, J.; Zeng, X.; Yang, B.; Xiao, T.; Zhang, C.; Wang, Z.; Wang, R.; Wang, X.; et al. T-CNN: Tubelets with convolutional neural networks for object detection from videos. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 2896–2907.
- Strubell, E.; Ganesh, A.; McCallum, A. Energy and policy considerations for deep learning in NLP. arXiv 2019, arXiv:1906.02243.
- Pouyanfar, S.; Chen, S.C.; Shyu, M.L. An efficient deep residual-inception network for multimedia classification. In Proceedings of the 2017 IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 373–378.
- Yang, S.; Lin, G.; Jiang, Q.; Lin, W. A dilated inception network for visual saliency prediction. IEEE Trans. Multimed. 2019, 22, 2163–2176.
- Das, D.; Santosh, K.; Pal, U. Truncated inception net: COVID-19 outbreak screening using chest X-rays. Phys. Eng. Sci. Med. 2020, 43, 915–925.
- Alom, M.Z.; Yakopcic, C.; Nasrin, M.S.; Taha, T.M.; Asari, V.K. Breast cancer classification from histopathological images with inception recurrent residual convolutional neural network. J. Digit. Imaging 2019, 32, 605–617.
- Yu, Y.; Samali, B.; Rashidi, M.; Mohammadi, M.; Nguyen, T.N.; Zhang, G. Vision-based concrete crack detection using a hybrid framework considering noise effect. J. Build. Eng. 2022, 61, 105246.
- Farooq, M.; Hafeez, A. Covid-resnet: A deep learning framework for screening of covid19 from radiographs. arXiv 2020, arXiv:2003.14395.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Da Silva, N.B.; Gonçalves, W.N. Regression in Convolutional Neural Networks applied to Plant Leaf Counting. In Proceedings of the Anais do XV Workshop de Visão Computacional. SBC, São Bernardo do Campo, Brazil, 9–11 September 2019; pp. 49–54.
- Krishnamoorthy, N.; Prasad, L.N.; Kumar, C.P.; Subedi, B.; Abraha, H.B.; Sathishkumar, V. Rice leaf diseases prediction using deep neural networks with transfer learning. Environ. Res. 2021, 198, 111275.
- Du, S.; Lindenbergh, R.; Ledoux, H.; Stoter, J.; Nan, L. AdTree: Accurate, detailed, and automatic modelling of laser-scanned trees. Remote Sens. 2019, 11, 2074.
- Guo, Z.; Hall, R.W. Parallel thinning with two-subiteration algorithms. Commun. ACM 1989, 32, 359–373.
- Ajiboye, A.; Abdullah-Arshah, R.; Qin, H.; Isah-Kebbe, H. Evaluating the effect of dataset size on predictive model using supervised learning technique. Int. J. Comput. Syst. Softw. Eng. 2015, 1, 75–84.
- Taylor, L.; Nitschke, G. Improving deep learning with generic data augmentation. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1542–1547.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–8 December 2012; Volume 25.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 1–48.
- Kuznichov, D.; Zvirin, A.; Honen, Y.; Kimmel, R. Data augmentation for leaf segmentation and counting tasks in rosette plants. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
- Ge, Y.; Bai, G.; Stoerger, V.; Schnable, J.C. Temporal dynamics of maize plant growth, water use, and leaf water content using automated high throughput RGB and hyperspectral imaging. Comput. Electron. Agric. 2016, 127, 625–632.
- Vanderlip, R. How a Sorghum Plant Develops; Technical Report, Cooperative Extension Service; Kansas State University: Manhattan, KS, USA, 1979.
- Dobrescu, A.; Valerio Giuffrida, M.; Tsaftaris, S.A. Leveraging multiple datasets for deep leaf counting. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2072–2079.
- Itzhaky, Y.; Farjon, G.; Khoroshevsky, F.; Shpigler, A.; Bar-Hillel, A. Leaf counting: Multiple scale regression and detection using deep CNNs. In Proceedings of the BMVC, Newcastle, UK, 3–6 September 2018; Volume 328.
- Minervini, M.; Abdelsamea, M.M.; Tsaftaris, S.A. Image-based plant phenotyping with incremental learning and active contours. Ecol. Inform. 2014, 23, 35–48.
- Scharr, H.; Minervini, M.; French, A.P.; Klukas, C.; Kramer, D.M.; Liu, X.; Luengo, I.; Pape, J.M.; Polder, G.; Vukadinovic, D.; et al. Leaf segmentation in plant phenotyping: A collation study. Mach. Vis. Appl. 2016, 27, 585–606.
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 818–833.
- Dobrescu, A.; Valerio Giuffrida, M.; Tsaftaris, S.A. Understanding deep neural networks for regression in leaf counting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2921–2929.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
- Dobrescu, A.; Giuffrida, M.V.; Tsaftaris, S.A. Doing more with less: A multitask deep learning approach in plant phenotyping. Front. Plant Sci. 2020, 11, 141.
| Batch Size | Learning Rate | Optimizer | Loss Function |
|---|---|---|---|
| 16 | 0.0001 | RMSprop | Mean Squared Error |
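Under these hyperparameters, one training step can be sketched as below. This is an illustration only: a toy linear regressor stands in for the paper's actual network (described in Section 2.2), the data are synthetic, and the RMSprop decay and epsilon values are common library defaults rather than values stated here; only batch size 16, learning rate 0.0001, RMSprop, and the MSE loss come from the table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters from the table; decay/eps are assumed defaults.
batch_size, lr = 16, 1e-4
decay, eps = 0.9, 1e-8

w = rng.normal(size=64)                 # toy model weights
ms = np.zeros_like(w)                   # RMSprop running mean of squared gradients

X = rng.normal(size=(batch_size, 64))   # one synthetic feature batch
y = X @ rng.normal(size=64)             # synthetic leaf-count targets

losses = []
for _ in range(200):
    pred = X @ w
    losses.append(np.mean((pred - y) ** 2))      # mean squared error
    grad = 2 * X.T @ (pred - y) / batch_size     # MSE gradient
    ms = decay * ms + (1 - decay) * grad ** 2    # RMSprop accumulator
    w -= lr * grad / (np.sqrt(ms) + eps)         # RMSprop update
```

RMSprop divides each gradient component by the root of its running squared average, so the small 0.0001 learning rate still yields useful per-weight step sizes.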
| Dataset & Method | RMSE | Accuracy | |
|---|---|---|---|
| S1O (ours) | 0.17 | 0.98 | 0.99 |
| S1S (ours) | 0.17 | 0.99 | 0.99 |
| S1O_6A (ours) | 0.10 | 0.99 | 0.99 |
| S1S_6A (ours) | 0.10 | 0.99 | 0.99 |
| M1O (ours) | 0.35 | 0.86 | 0.90 |
| M1S (ours) | 0.29 | 0.91 | 0.91 |
| Leaf-count-net+FV [6] | 0.57 | - | - |
| Regression-CNN [9] | 0.96 | 0.87 | 0.45 |
| Regression-CNN [9] | 1.06 | 0.64 | 0.39 |
| Faster-RCNN [9] | 1.00 | 0.88 | 0.56 |
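For reference, the RMSE values reported in these tables can be computed from predicted versus ground-truth leaf counts as sketched below. The accuracy helper is an assumption: the table does not define its accuracy columns, and the within-tolerance convention (fraction of plants whose rounded predicted count is within a given number of leaves of the truth) is merely a common choice in this literature.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between count vectors."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def count_accuracy(y_true, y_pred, tolerance=0):
    """Fraction of plants whose rounded predicted count is within
    `tolerance` leaves of the ground truth (assumed convention)."""
    diff = np.abs(np.rint(y_pred) - np.asarray(y_true, dtype=float))
    return float(np.mean(diff <= tolerance))

# Hypothetical counts for five plants, for illustration only.
true_counts = [4, 5, 6, 7, 8]
pred_counts = [4.1, 5.0, 5.8, 7.4, 9.2]
print(rmse(true_counts, pred_counts))
print(count_accuracy(true_counts, pred_counts, tolerance=0))
print(count_accuracy(true_counts, pred_counts, tolerance=1))
```

Because the network regresses a continuous value, rounding to the nearest integer before comparing is needed for any exact-match style accuracy.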
| Dataset & Method | RMSE | Accuracy | |
|---|---|---|---|
| S2O (ours) | 0.48 | 0.94 | 0.77 |
| S2S (ours) | 0.46 | 0.94 | 0.77 |
| S2O_6A (ours) | 0.36 | 0.96 | 0.85 |
| S2S_6A (ours) | 0.34 | 0.97 | 0.88 |
| S2O_12A (ours) | 0.35 | 0.97 | 0.87 |
| S2S_12A (ours) | 0.33 | 0.97 | 0.91 |
| M2O (ours) | 0.56 | 0.63 | 0.65 |
| M2S (ours) | 0.51 | 0.68 | 0.73 |
| Regression-CNN [9] | 1.28 | 0.79 | 0.33 |
| Regression-CNN [9] | 1.06 | 0.64 | 0.39 |
| Faster-RCNN [9] | 1.33 | 0.78 | 0.43 |
Xie, X.; Ge, Y.; Walia, H.; Yang, J.; Yu, H. Leaf-Counting in Monocot Plants Using Deep Regression Models. Sensors 2023, 23, 1890. https://doi.org/10.3390/s23041890