Article

Wheat Powdery Mildew Detection with YOLOv8 Object Detection Model

by Eray Önler 1,* and Nagehan Desen Köycü 2

1 Biosystem Engineering Department, Faculty of Agriculture, Tekirdag Namık Kemal University, Tekirdag 59030, Türkiye
2 Plant Protection Department, Faculty of Agriculture, Tekirdag Namık Kemal University, Tekirdag 59030, Türkiye
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7073; https://doi.org/10.3390/app14167073
Submission received: 9 July 2024 / Revised: 8 August 2024 / Accepted: 9 August 2024 / Published: 12 August 2024

Abstract

Wheat powdery mildew is a fungal disease that significantly impacts wheat yield and quality. Controlling this disease requires the use of resistant varieties, fungicides, crop rotation, and proper sanitation. Precision agriculture focuses on the strategic use of agricultural inputs to maximize benefits while minimizing effects on the environment and human health. Object detection using computer vision enables selective spraying of pesticides, allowing for targeted application. Traditional detection methods rely on manually crafted features, while deep learning-based methods use deep neural networks to learn features autonomously from the data. You Only Look Once (YOLO) and other one-stage detectors are advantageous due to their speed and competitive accuracy. This research aimed to design a model to detect powdery mildew in wheat using digital images. Multiple YOLOv8 models were trained with a custom dataset of images collected from trial areas at Tekirdag Namik Kemal University. The YOLOv8m model demonstrated the highest performance, with precision, recall, F1, mAP50, and mAP50-95 values of 0.796, 0.749, 0.770, 0.765, and 0.383, respectively.

1. Introduction

Wheat powdery mildew, caused by the fungal pathogen Blumeria graminis f. sp. tritici, is a prevalent airborne disease that poses a persistent, global problem. This obligate biotrophic pathogen relies on living host tissue for survival and propagation [1]. The pathogen spreads rapidly and adapts quickly because of its short life cycle. Powdery mildew symptoms first appear as white powdery spots on wheat leaves. These spots can enlarge and spread to cover the entire leaf. As a result, crop loss may occur as the plant becomes more prone to lodging. Furthermore, because the disease limits photosynthesis, it can also cause kernel quality loss [2].
Researchers have shown that it causes yield losses of up to 34% and, in extreme cases, up to 50% when appropriate protection methods are not used [3]. There are cost-effective approaches to controlling powdery mildew, such as planting resistant varieties, using fungicides, rotating crops, and maintaining excellent sanitation practices. However, fungicides are most commonly used since new virulent pathogen races are continuously emerging [4].
In today’s precision agriculture approach, the proper use of agricultural inputs, such as pesticides, is crucial for sustainability [5]. The goal is to apply these inputs in the right amount, maximizing their benefits while minimizing their negative effects on the environment and human health [6]. Selective spraying, which targets only diseased areas, is an important precision agriculture technique [7,8,9]. In selective spraying applications, diseased areas are detected with various types of sensors, the most common being digital camera (2D) sensors and other computer vision-based systems [10,11,12]. Object detection is a computer vision method that identifies and locates objects in images or videos. This task is challenging because objects can vary in shape, size, and color and may be partially occluded by other objects [13]. Object detection is used in various fields, including self-driving cars, security cameras, facial recognition, and medical imaging for tumor identification [14,15,16,17,18]. In agriculture, computer vision-based object detection is used to identify and locate objects such as pests, diseases, weeds, and crop damage [19,20,21,22,23,24,25]. This technology has the potential to reduce pesticide use, leading to improved crop yields and reduced environmental pollution. In addition, object detection can be used to monitor crop growth and yield [26,27].
The field of object detection in agriculture has progressed significantly as computer vision techniques continue to improve, resulting in more precise and efficient object recognition [28,29,30]. Object detection can be approached using two basic methods: classical and deep learning-based techniques. Conventional detection methods rely on features manually designed for a given purpose, whereas deep learning techniques employ deep neural networks to learn features autonomously from data. Research has shown that deep learning-based approaches are superior to conventional methods in terms of accuracy [13,31]. Deep learning methods are classified into one-stage and two-stage detectors [19]. Two-stage detectors, such as R-CNN and its variants (Fast R-CNN, Faster R-CNN), first generate region proposals and then classify these regions. Although they achieve high accuracy, they tend to be slow. In contrast, one-stage detectors prioritize speed by performing localization and classification simultaneously [32,33].
You Only Look Once (YOLO) is a prominent one-stage object detection algorithm that has revolutionized real-time object detection. YOLO employs a single neural network to simultaneously predict bounding boxes and class probabilities for multiple objects within an image [34]. Unlike traditional approaches, YOLO divides the input image into a grid and predicts bounding boxes based on the content of each grid cell. Real-time performance is achieved by sacrificing some fine-grained localization accuracy, but the approach remains competitive in detecting objects, making it a popular choice for various applications [31].
Over the past few years, researchers have released multiple YOLO versions, each surpassing its predecessor. YOLOv2 introduced batch normalization and anchor boxes, while YOLOv3 integrated a deeper backbone network (Darknet-53) and multi-scale predictions. YOLOv4 improved performance through novel data augmentation techniques and architectural modifications. YOLOv5 demonstrated significant improvements in both speed and accuracy while being more user-friendly for developers. The most recent versions, including YOLOv7 and YOLOv8, have pushed these limits further, providing state-of-the-art performance in terms of speed and accuracy [35,36,37].
YOLO and other deep learning-based object recognition techniques have significantly influenced agricultural applications. They have helped implement more effective and precise identification of plant diseases, pests, and crop health problems. Consequently, they have helped advance intelligent farming systems and precision-based agriculture methods. As these algorithms progress, they have the potential to improve crop management and optimize productivity in the agricultural industry [31,38].
In this study, an object detection model was developed to identify wheat powdery mildew disease in digital images of wheat leaves. For this purpose, YOLOv8 models of different sizes were trained to accurately detect and locate regions affected by powdery mildew. To train, validate, and test the model, wheat leaf images were collected from the trial area of the Faculty of Agriculture at Tekirdag Namik Kemal University. Data augmentation methods were used to enrich the collected data and make the model more robust.

2. Materials and Methods

The study was carried out in the experimental fields of Tekirdag Namik Kemal University, which has a pedoclimatic environment characterized by a combination of vertisols and brown forest soils [39]. With a mean temperature of 16.7 °C in May and a moderate precipitation of 37.2 mm, this region offers a fertile and stable environment suitable for diverse agricultural activities [39,40]. The study used the Flamura-85 variety of bread wheat [41]. In this environment, the dataset was collected and used to train a one-stage object detection model based on supervised deep learning. The dataset was annotated by outlining the regions of affected leaves within the images. The methods employed in building the model are depicted in Figure 1.

2.1. Data Collection

Wheat powdery mildew images were collected using the back camera of an iPhone 8 (Apple, Cupertino, CA, USA). A total of 224 images were collected in real field environments during the growing season in May, between 14:00 and 15:00, at Namik Kemal University, Tekirdag, Türkiye. The image resolution was 4032 × 3024 pixels. All images were acquired under natural illumination without camera flash or artificial lighting. Figure 2 shows some examples from the dataset. The data include a variety of images containing both diseased and non-diseased wheat leaves. Powdery mildew infection within these images ranges from early to advanced stages, ensuring that the model is trained to recognize the disease at different levels of severity. This variation in disease presentation is crucial for developing a robust system capable of performing accurately under real-world conditions.

2.2. Data Augmentation

Data augmentation is a technique used to artificially increase the size of a dataset by creating new data points that are similar to existing ones. It improves the performance of machine learning models by providing more data to learn from and can also reduce overfitting, which occurs when a model learns the training data too well and cannot generalize to new data. Data augmentation thus increases the diversity of the training data and improves model performance [42]. With data augmentation, the number of images was increased to 896. The Albumentations 1.3.1 library was used for data augmentation [43], applying random crop, flip, and contrast methods. The results of the augmentation methods applied to a selected image are shown in Figure 3.
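As an illustration of how such a pipeline can be assembled with Albumentations, the sketch below applies the random crop, flip, and contrast transforms named above; the crop size, probabilities, and the synthetic input image are assumptions, not the exact settings used in this study.

```python
import albumentations as A
import numpy as np

# Illustrative augmentation pipeline; parameter values are assumptions.
augment = A.Compose([
    A.RandomCrop(height=2048, width=2048, p=1.0),   # random crop
    A.HorizontalFlip(p=0.5),                        # flip
    A.RandomBrightnessContrast(p=0.5),              # contrast adjustment
])

# A random array stands in for a 4032 x 3024 field photograph.
image = np.random.randint(0, 256, (3024, 4032, 3), dtype=np.uint8)
augmented_image = augment(image=image)["image"]
print(augmented_image.shape)                        # (2048, 2048, 3)
```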

2.3. Data Annotation

To train object detection models, the desired targets in the images must be labeled by enclosing them in rectangular bounding boxes. The labeling was performed in YOLO format, in which the coordinates and dimensions of the boxes labeled in each image are stored in a text file. Labeling was carried out with the open-source LabelIMG tool [44]. The initial dataset comprised 224 images, which was increased to 896 images through augmentation. For our study, we used this expanded dataset as the primary dataset. A total of 896 images (3889 instances) were manually labeled (Figure 4), with an average of 4.34 labels per image.
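For reference, a YOLO-format label file stores one object per line as "class x_center y_center width height", with coordinates normalized to the image size. The following minimal reader, with a hypothetical example label, is a sketch of how such annotations can be converted back to pixel coordinates.

```python
# Minimal reader for YOLO-format label files; the example label is hypothetical.
def read_yolo_labels(path, img_w, img_h):
    """Return (class_id, x_min, y_min, x_max, y_max) boxes in pixel coordinates."""
    boxes = []
    with open(path) as f:
        for line in f:
            cls, xc, yc, w, h = (float(v) for v in line.split())
            xc, yc, w, h = xc * img_w, yc * img_h, w * img_w, h * img_h
            boxes.append((int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2))
    return boxes

with open("example_label.txt", "w") as f:
    f.write("0 0.5 0.5 0.25 0.40\n")          # one powdery-mildew box, class 0

print(read_yolo_labels("example_label.txt", 640, 640))
# [(0, 240.0, 192.0, 400.0, 448.0)]
```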

2.4. Data Preprocessing

The dataset was randomly split into 75% training, 20% validation, and 5% testing. Larger input images generally lead to longer training times in object detection models. For this reason, the images were resized to 640 × 640 pixels.
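A minimal sketch of this split-and-resize step is shown below; the directory names and random seed are assumptions, and the corresponding label files would be copied alongside the images in the same way.

```python
import random
from pathlib import Path
from PIL import Image

# Illustrative split/resize script; directory layout is an assumption.
random.seed(0)
images = sorted(Path("dataset/images").glob("*.jpg"))
random.shuffle(images)

n = len(images)
splits = {
    "train": images[: int(0.75 * n)],                 # 75%
    "val": images[int(0.75 * n): int(0.95 * n)],      # 20%
    "test": images[int(0.95 * n):],                   # 5%
}

for split, files in splits.items():
    out_dir = Path(f"dataset/{split}/images")
    out_dir.mkdir(parents=True, exist_ok=True)
    for img_path in files:
        img = Image.open(img_path).resize((640, 640))  # downscale to 640 x 640
        img.save(out_dir / img_path.name)
```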

2.5. Object Detection Model

YOLOv8, proposed in 2023, is a state-of-the-art one-stage object detection model that detects objects in real time. It is based on the You Only Look Once (YOLO) algorithm and is faster and more accurate than previous versions of YOLO, detecting a wide variety of objects. The YOLOv8 architecture is shown in Figure 5 [45]. The network architecture of YOLOv8 includes three primary elements: the backbone, the neck, and the head. The backbone, based on CSPDarknet53, is responsible for extracting features from input images. It employs a series of convolutional layers and cross-stage partial (CSP) connections to efficiently learn hierarchical features [46]. The neck, which utilizes a Path Aggregation Network (PANet), facilitates information flow between different scales by fusing features from various backbone levels [47]. These fused features enhance the model’s ability to detect objects at multiple scales. The head is responsible for the final predictions, including bounding box coordinates and class probabilities [46].
Throughout the network, YOLOv8 incorporates several key components. The C2f (CSP Concatenation and Feature Fusion) module enhances feature fusion and reduces computational complexity [48,49]. Convolutional (Conv) layers are fundamental building blocks that perform feature extraction and transformation. The Spatial Pyramid Pooling–Fast (SPPF) layer, an optimized version of SPP, aggregates features at multiple scales, improving the model’s ability to handle objects of various sizes [50,51]. This architecture allows YOLOv8 to achieve a balance between accuracy and computational efficiency, making it suitable for real-time object detection tasks across diverse applications.

2.6. Model Training Hardware

During the training process, the model was trained on Google Colab using a Tesla V100-SXM2-16GB GPU. The Ultralytics YOLOv8.0.20 release was used for YOLOv8. All code was implemented in Python 3.10.11 using PyTorch 2.0.1.

2.7. Model Selection and Training

The YOLOv8 family consists of models with different depths and widths: YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x. A transfer learning approach was used to train the models: models previously trained on a large dataset are fine-tuned with custom data to solve a similar problem. The developed model was trained using the training and validation datasets. Model training was carried out for 100 epochs. The model was optimized using Stochastic Gradient Descent (SGD) with a learning rate of 0.01, momentum of 0.937, and weight decay of 0.001. A batch size of 16 was used during training to improve performance and ensure efficient convergence.
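A training call of this kind can be expressed with the Ultralytics API roughly as follows; the dataset configuration file name is a placeholder, and only the hyperparameters reported above are set explicitly.

```python
from ultralytics import YOLO

# Sketch of a transfer-learning training run; "powdery_mildew.yaml" is a
# hypothetical dataset configuration file.
model = YOLO("yolov8m.pt")          # COCO-pretrained weights for transfer learning
results = model.train(
    data="powdery_mildew.yaml",     # paths to train/val images and class names
    epochs=100,
    imgsz=640,
    batch=16,
    optimizer="SGD",
    lr0=0.01,
    momentum=0.937,
    weight_decay=0.001,
)
metrics = model.val()               # evaluate on the validation split
```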

2.8. Model Evaluation Metrics

To evaluate model performance, the metrics most commonly used in object detection studies were preferred [19,52]. Precision, Recall, IoU, mAP, and F1 are defined in Formulas (1)–(5).
Precision is a performance metric used in classification tasks to assess the correctness of a model’s positive predictions. It represents the proportion of correctly predicted positive instances out of all positive instances predicted. Precision focuses on the quality of positive predictions, specifically the ability to avoid false positives. A higher precision value indicates a model that is more precise at correctly identifying positive instances, minimizing the occurrence of false positives. Precision is particularly important when the cost or impact of false positives is high, as it ensures that the model’s positive predictions are reliable and accurate.
$$\text{Precision} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Positive}} \tag{1}$$
Recall, also known as sensitivity or true positive rate, is a performance metric used in classification tasks to quantify the model’s capacity to accurately detect positive instances. It quantifies the proportion of true positive instances correctly identified out of all actual positive instances in the dataset. Recall focuses on the completeness of positive predictions, emphasizing the model’s ability to avoid false negatives. A higher recall value indicates a model that is more effective at capturing a larger portion of positive instances, minimizing the occurrence of false negatives. Recall is particularly important in identifying all positive instances, ensuring that the model has a low chance of missing relevant information.
$$\text{Recall} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Negative}} \tag{2}$$
IoU is a common metric used in object detection tasks to assess the accuracy of bounding box predictions. It calculates the ratio of their intersection area to their union area to measure the overlap between the predicted bounding box and the actual bounding box. A value of 1 indicates a flawless match between the predicted and ground truth bounding boxes. A higher IoU indicates greater localization precision. A commonly used IoU threshold of 0.5 is used to determine whether a predicted bounding box should be considered a true positive or a false positive. It provides a quantitative measure of the object detection model’s ability to locate objects in images.
$$\text{IoU} = \frac{\text{Object} \cap \text{Detected Box}}{\text{Object} \cup \text{Detected Box}} \tag{3}$$
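For axis-aligned boxes, the IoU computation reduces to a few lines; the sketch below illustrates it for two hypothetical boxes given as (x_min, y_min, x_max, y_max).

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    # Intersection rectangle
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    # Union = sum of areas minus intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted box partially overlapping a ground-truth box
print(iou((10, 10, 110, 110), (50, 50, 150, 150)))   # approx. 0.22
```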
mAP is a performance metric used in object detection and image retrieval tasks. It assesses the precision and robustness of a model’s predictions across multiple object classes or query images. It measures the trade-off between precision and recall for each class or query, representing the ability to correctly identify relevant instances while minimizing false positives. It is a useful metric for evaluating and comparing object detection or retrieval models, as a higher mAP indicates superior overall performance. mAP50 is computed at an IoU threshold of 0.5 by averaging the average precision (AP) over all classes. mAP50-95 is computed across a range of IoU thresholds from 0.5 to 0.95, capturing the model’s capacity to accurately detect objects with varying degrees of overlap with the ground truth bounding boxes. A greater mAP50-95 value indicates superior localization and classification performance across the range of IoU thresholds.
$$\text{mAP} = \frac{1}{N} \sum_{i=1}^{N} AP_i \tag{4}$$
The F1 score is a commonly used metric in classification tasks to evaluate model performance considering both precision and recall. By calculating the harmonic mean of precision and recall, it provides a balance between these two measures. The F1 score ranges from 0 to 1, with 1 representing the best performance achievable. It is especially beneficial when the data are unbalanced or when false positives and false negatives have distinct outcomes. A higher F1 score indicates that a model achieves both high precision and high recall, making it a useful metric to evaluate the overall efficacy of a classification model.
$$F1 = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Precision} + \text{Recall}} \tag{5}$$
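The following sketch computes precision, recall, and F1 from hypothetical true positive, false positive, and false negative counts, illustrating how the three metrics relate.

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts at an IoU threshold of 0.5
print(detection_metrics(tp=75, fp=19, fn=25))   # approx. (0.80, 0.75, 0.77)
```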
We used Eigen-CAM [53] to interpret our model, visualizing the most important regions for decision-making processes with color gradients ranging from blue to red, where red indicates the most important regions.
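The core of Eigen-CAM is a projection of a convolutional feature map onto its first principal component. The NumPy sketch below illustrates that computation on a random feature map standing in for a YOLOv8 backbone activation; it is a conceptual illustration, not the exact tooling used in this study.

```python
import numpy as np

def eigen_cam(activations):
    """Eigen-CAM saliency from a conv feature map of shape (C, H, W):
    project the (H*W, C) activation matrix onto its first principal component."""
    c, h, w = activations.shape
    flat = activations.reshape(c, -1).T          # (H*W, C)
    flat = flat - flat.mean(axis=0)              # center before SVD
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = (flat @ vt[0]).reshape(h, w)           # projection on first singular vector
    cam = np.maximum(cam, 0)                     # keep positive evidence
    return cam / (cam.max() + 1e-8)              # normalize to [0, 1] for a heat map

# A random feature map stands in for a real backbone activation.
np.random.seed(0)
heatmap = eigen_cam(np.random.rand(256, 20, 20).astype(np.float32))
print(heatmap.shape)                             # (20, 20)
```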

3. Results and Discussion

The results of our study on the training dataset are displayed in Table 1. According to the findings, YOLOv8 is an effective model for detecting wheat powdery mildew. The YOLOv8n model is ideally suited to applications where fast inference speed is essential, whereas the YOLOv8m model achieves a balance between accuracy and inference speed, making it an excellent option for applications where both factors are essential. These results support that YOLO models are suitable for disease detection [54,55]. Object detection models must detect objects in real time for spraying applications. At the same time, the size of the model should be small enough to work in mobile applications [56,57].
Recent studies have also highlighted the effectiveness of deep learning models at detecting plant diseases. For example, a study on powdery mildew detection in strawberries using RGB images demonstrated the potential of deep learning to reduce unnecessary chemical applications [4]. Another study on agricultural object detection with YOLO models emphasized the rapid advancements and multidisciplinary nature of these applications. These findings are consistent with our results, underscoring the importance of real-time detection and model efficiency [31].
There are studies on disease detection with images captured in the laboratory environment. However, models trained with images in a controlled laboratory environment cannot provide successful results under variable illumination and background heterogeneity under real production conditions [56,58,59]. Since the images we used in model training were captured in an outdoor production environment, our model is robust against this variance.
Spectral and hyperspectral imaging have been used in most powdery mildew detection studies [60,61]. Although these methods have advantages, such as extending image inspection beyond human perception [62], they have disadvantages for the end user in terms of price and ease of access. Therefore, a mobile and cost-effective application is needed [63].
The transfer learning approach made it possible to achieve high accuracy rates with a dataset of 896 images in 100 training epochs. These results show the benefit of the transfer learning approach [4,64].
High precision shows that most of the model’s predictions are correct; however, the lower recall shows that the model misses some diseased areas. Increasing the dataset size and further hyperparameter tuning could improve model performance in future studies.
High precision, recall, and F1 scores demonstrated accurate detection of the majority of ground truth objects. In addition, an mAP50 of 0.81 and an mAP50-95 of 0.41 indicate that the model is an effective object detector (Figure 6).
The detector estimates bounding box coordinates together with their confidence scores. Figure 7 shows that as the confidence threshold increases, precision increases while recall decreases. This inverse relationship between precision and recall is used to optimize the object detector. The F1 score is the harmonic mean of precision and recall, and operating points with high confidence scores are generally preferred. The confidence threshold that provides the optimal balance between precision and recall was therefore identified from the graphs below. Selecting 0.6 as the confidence threshold, beyond which the F1 score falls sharply, remains a logical option; the precision of the model also decreases at confidence scores below 0.6.
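Such an operating point can also be located programmatically from the precision and recall curves; the sketch below uses synthetic placeholder curves rather than the measured values shown in Figure 7.

```python
import numpy as np

# Placeholder curves: precision rises and recall falls as confidence increases.
confidences = np.linspace(0.05, 0.95, 19)
precision = np.linspace(0.55, 0.98, 19)
recall = np.linspace(0.90, 0.10, 19)

# F1 as the harmonic mean of precision and recall at each threshold
f1 = 2 * precision * recall / (precision + recall + 1e-8)

best = confidences[np.argmax(f1)]
print(f"confidence threshold maximising F1 (illustrative): {best:.2f}")
```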
According to these results, the model with the highest detection success was saved and tested in the test dataset. The test dataset consists of 45 images. Some sample detections performed on the test dataset are shown in Figure 8. The model demonstrates high detection accuracy in different scenarios; for example, different overlapping leaves, multiple objects, and leaves containing powdery mildew at different densities.
Figure 9 illustrates the use of Eigen-CAM visualizations to understand the model’s predictive detection of wheat powdery mildew. The data are structured into four rows, with each row corresponding to a unique wheat leaf. The leftmost column presents the original images. The middle column shows the model’s detection of the disease. The rightmost column displays Eigen-CAM visualizations, which use color gradients to indicate the most important regions for the model’s decision-making process, with red areas representing the most critical regions. When comparing the model predictions and Eigen-CAM visualizations, it becomes evident that the model accurately directs its attention to the appropriate regions.
When the selected model was used on the test dataset, the precision, recall, F1, mAP50, and mAP50-95 values were 0.796, 0.749, 0.770, 0.765, and 0.383, respectively. False Positive and False Negative errors were observed for various reasons, such as changes in natural light and blurring of the background caused by the camera’s autofocus feature (Figure 10).
To address these issues, additional images covering varying lighting conditions can be collected to build a more diverse training dataset and improve detection efficacy. Adaptive histogram equalization can be applied as a preprocessing step to enhance image contrast and reduce the impact of lighting variations. Turning off the camera’s autofocus feature during image collection would prevent background blurring, allowing more accurate detection of diseased areas that might otherwise be missed. Future research could also focus on predicting the severity of the disease rather than only detecting diseased regions. These steps would help reduce false positive and false negative errors, improving the overall accuracy and reliability of the model in the real-time detection of powdery mildew.
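As an example of the suggested preprocessing step, the sketch below applies CLAHE to the lightness channel of an image with OpenCV; the clip limit, tile size, and synthetic input image are assumptions rather than validated settings.

```python
import cv2
import numpy as np

# Sketch of adaptive histogram equalization (CLAHE) on the lightness channel;
# a random array stands in for a field photograph.
image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                # equalize lightness only
equalized = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
print(equalized.shape)                               # (480, 640, 3)
```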

4. Conclusions

In this study, an object detection model based on deep learning was trained to detect leaves with powdery mildew in wheat. Several YOLOv8 models were trained using a custom RGB image dataset. The YOLOv8m model was chosen for its superior performance on training and validation datasets. When applied to the test dataset, the YOLOv8m model achieved precision, recall, F1, mAP50, and mAP50-95 values of 0.796, 0.749, 0.770, 0.765, and 0.383, respectively. This model detects powdery mildew on wheat leaves with high accuracy. Its ability to identify diseased areas in images makes it suitable for integration into precision spraying equipment or mobile field research applications.
Our approach using 2D digital camera sensors aims to reach a wider audience, particularly in resource-limited settings where spectral imaging is financially unfeasible. By offering a simpler method, we strive to democratize technology for agricultural disease detection, allowing more users to benefit from advancements in the field. This inclusivity can lead to a more widespread monitoring and management of crop health, ultimately improving agricultural production outcomes. Our model addresses the immediate need for early and accurate disease detection in agriculture, essential for effective disease management and control. This research contributes to the development of future precision agriculture technologies, such as selective spraying systems. It also supports comprehensive disease monitoring in fields.
However, the study has certain limitations. The model’s performance can vary under different field conditions, such as varying light and shadow, which were not fully represented in the dataset. Additionally, although the dataset size was increased to 896 images, the model may be unable to capture the full spectrum of phenotypic expressions of powdery mildew. This limitation can potentially impact the model’s generalizability to all wheat growth stages and environmental conditions.

Author Contributions

Conceptualization, E.Ö. and N.D.K.; methodology, E.Ö. and N.D.K.; software, E.Ö.; validation, E.Ö.; formal analysis, E.Ö. and N.D.K.; investigation, E.Ö. and N.D.K.; resources, E.Ö. and N.D.K.; data curation, E.Ö.; writing—original draft, E.Ö.; writing—review and editing, E.Ö. and N.D.K.; visualization, E.Ö. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in Zenodo at https://doi.org/10.5281/zenodo.13137587.

Acknowledgments

We are grateful to Oğuz Bilgin and Alpay Balkan for granting us permission to photograph their wheat trial fields at Tekirdag Namik Kemal University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, S.; Lin, D.; Zhang, Y.; Deng, M.; Chen, Y.; Lv, B.; Li, B.; Lei, Y.; Wang, Y.; Zhao, L.; et al. Genome-edited powdery mildew resistance in wheat without growth penalties. Nature 2022, 602, 455–460. [Google Scholar] [CrossRef] [PubMed]
  2. Zeng, X.; Luo, Y.; Zheng, Y.; Duan, X.; Zhou, Y. Detection of latent infection of wheat leaves caused by Blumeria graminis f. sp. tritici using nested PCR. J. Phytopathol. 2010, 158, 227–235. [Google Scholar] [CrossRef]
  3. Ateş Sönmezoğlu, Ö.; Yıldırım, A.; Türk, Ü.; Yanar, Y. Identification of Powdery Mildew (Blumeria graminis f. sp. tritici) Resistance in Some Durum Wheat Landraces. Eur. J. Sci. Technol. 2019, 17, 944–950. [Google Scholar] [CrossRef]
  4. Shin, J.; Chang, Y.K.; Heung, B.; Nguyen-Quang, T.; Price, G.W.; Al-Mallahi, A. A deep learning approach for RGB image-based powdery mildew disease detection on strawberry leaves. Comput. Electron. Agric. 2021, 183, 106042. [Google Scholar] [CrossRef]
  5. Mavridou, E.; Vrochidou, E.; Papakostas, G.A.; Pachidis, T.; Kaburlasos, V.G. Machine vision systems in precision agriculture for crop farming. J. Imaging 2019, 5, 89. [Google Scholar] [CrossRef] [PubMed]
  6. Kumar, N.H.; Shashank, C.D.; Adithya, N.; Galla, A.; Likeith, B.; Deepak, G. A Comprehensive Survey on Weed Identification in Agriculture using Machine Learning. In Proceedings of the IEEE International Conference on Artificial Intelligence and Applications (ICAIA) Alliance Technology Conference (ATCON-1), Bangalore, India, 21–22 April 2023; pp. 1–6. [Google Scholar]
  7. Mahmud, M.S.A.; Abidin, M.S.Z.; Emmanuel, A.A.; Hasan, H.S. Robotics and automation in agriculture: Present and future applications. Appl. Model. Simul. 2020, 4, 130–140. [Google Scholar]
  8. Liu, B.; Bruch, R. Weed detection for selective spraying: A review. Curr. Robot. Rep. 2020, 1, 19–26. [Google Scholar] [CrossRef]
  9. Das, G.P.; Gould, I.; Zarafshan, P.; Heselden, J.; Badiee, A.; Wright, I.; Pearson, S. Applications of robotic and solar energy in precision agriculture and smart farming. In Solar Energy Advancements in Agriculture and Food Production Systems; Academic Press: Cambridge, MA, USA, 2022; pp. 351–390. [Google Scholar]
  10. Salazar-Gomez, A.; Darbyshire, M.; Gao, J.; Sklar, E.I.; Parsons, S. Towards practical object detection for weed spraying in precision agriculture. arXiv 2021, arXiv:2109.11048. [Google Scholar]
  11. Redolfi, J.A.; Felissia, S.F.; Bernardi, E.; Araguás, R.G.; Flesia, A.G. Learning to Detect Vegetation Using Computer Vision and Low-Cost Cameras. In Proceedings of the IEEE International Conference on Industrial Technology (ICIT), Buenos Aires, Argentina, 26–28 February 2020; pp. 791–796. [Google Scholar]
  12. Alam, M.; Alam, M.S.; Roman, M.; Tufail, M.; Khan, M.U.; Khan, M.T. Real-time machine-learning based crop/weed detection and classification for variable-rate spraying in precision agriculture. In Proceedings of the IEEE 7th International Conference on Electrical and Electronics Engineering (ICEEE), Antalya, Turkey, 14–16 April 2020; pp. 273–280. [Google Scholar]
  13. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
  14. Su, S.; Li, Y.; He, S.; Han, S.; Feng, C.; Ding, C.; Miao, F. Uncertainty quantification of collaborative detection for self-driving. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 5588–5594. [Google Scholar]
  15. Karbalaie, A.; Abtahi, F.; Sjöström, M. Event detection in surveillance videos: A review. Multimed. Tools Appl. 2022, 81, 35463–35501. [Google Scholar] [CrossRef]
  16. Li, D.; Su, H.; Jiang, K.; Liu, D.; Duan, X. Fish face identification based on rotated object detection: Dataset and exploration. Fishes 2022, 7, 219. [Google Scholar] [CrossRef]
  17. Tsuneki, M. Deep learning models in medical image analysis. J. Oral Biosci. 2022, 64, 312–320. [Google Scholar] [CrossRef] [PubMed]
  18. Zhou, T.; Ye, X.; Lu, H.; Zheng, X.; Qiu, S.; Liu, Y. Dense convolutional network and its application in medical image analysis. BioMed Res. Int. 2022, 25, 2384830. [Google Scholar] [CrossRef]
  19. Önler, E. Real time pest detection using YOLOv5. Int. J. Agric. Nat. Sci. 2021, 14, 232–246. [Google Scholar]
  20. Lee, D.I.; Lee, J.H.; Jang, S.H.; Oh, S.J.; Doo, I.C. Crop Disease Diagnosis with Deep Learning-Based Image Captioning and Object Detection. Appl. Sci. 2023, 13, 3148. [Google Scholar] [CrossRef]
  21. Habib, M.; Sekhra, S.; Tannouche, A.; Ounejjar, Y. The Identification of Weeds and Crops Using the Popular Convolutional Neural Networks. In Proceedings of the International Conference on Digital Technologies and Applications 2023, Fez, Morocco, 27–28 January 2023; pp. 484–493. [Google Scholar]
  22. Li, R.; Li, Y.; Qin, W.; Abbas, A.; Li, S.; Ji, R.; Wu, Y.; He, Y.; Yang, J. Lightweight Network for Corn Leaf Disease Identification Based on Improved YOLO v8s. Agriculture 2024, 14, 220. [Google Scholar] [CrossRef]
  23. Zhong, Z.; Yun, L.; Cheng, F.; Chen, Z.; Zhang, C. Light-YOLO: A Lightweight and Efficient YOLO-Based Deep Learning Model for Mango Detection. Agriculture 2024, 14, 140. [Google Scholar] [CrossRef]
  24. Zhu, R.; Hao, F.; Ma, D. Research on Polygon Pest-Infected Leaf Region Detection Based on YOLOv8. Agriculture 2023, 13, 2253. [Google Scholar] [CrossRef]
  25. Susheel, K.S.; Nadu, T.; Rajkumar, R. A Review on Cutting Edge Technologies In Crop Pests And Diseases Detection. J. Data Acquis. Process. 2023, 38, 640. [Google Scholar]
  26. Cho, S.; Kim, T.; Jung, D.H.; Park, S.H.; Na, Y.; Ihn, Y.S.; Kim, K. Plant growth information measurement based on object detection and image fusion using a smart farm robot. Comput. Electron. Agric. 2023, 207, 107703. [Google Scholar] [CrossRef]
  27. Ren, G.; Wu, H.; Bao, A.; Lin, T.; Ting, K.C.; Ying, Y. Mobile robotics platform for strawberry temporal–spatial yield monitoring within precision indoor farming systems. Front. Plant Sci. 2023, 14, 1162435. [Google Scholar] [CrossRef] [PubMed]
  28. Li, X.; Xiao, S.; Kumar, P.; Demir, B. Data-driven few-shot crop pest detection based on object pyramid for smart agriculture. J. Electron. Imaging 2023, 32, 052403. [Google Scholar] [CrossRef]
  29. Edan, Y.; Adamides, G.; Oberti, R. Agriculture Automation. In Springer Handbook of Automation; Springer: Cham, Switzerland, 2023; pp. 1055–1078. [Google Scholar]
  30. Wosner, O.; Farjon, G.; Bar-Hillel, A. Object Detection in Agricultural Contexts: A Multiple Resolution Benchmark and Comparison to Human. Comput. Electron. Agric. 2021, 189, 106404. [Google Scholar] [CrossRef]
  31. Badgujar, C.M.; Poulose, A.; Gan, H. Agricultural Object Detection with You Only Look Once (YOLO) Algorithm: A Bibliometric and Systematic Literature Review. Comput. Electron. Agric. 2024, 223, 109090. [Google Scholar] [CrossRef]
  32. Ariza-Sentís, M.; Vélez, S.; Martínez-Peña, R.; Baja, H.; Valente, J. Object Detection and Tracking in Precision Farming: A Systematic Review. Comput. Electron. Agric. 2024, 219, 108757. [Google Scholar] [CrossRef]
  33. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  34. Mujkic, E.; Christiansen, M.P.; Ravn, O. Object Detection for Agricultural Vehicles: Ensemble Method Based on Hierarchy of Classes. Sensors 2023, 23, 7285. [Google Scholar] [CrossRef]
  35. Nepal, U.; Eslamiat, H. Comparing YOLOv3, YOLOv4 and YOLOv5 for Autonomous Landing Spot Detection in Faulty UAVs. Sensors 2022, 22, 464. [Google Scholar] [CrossRef]
  36. Bektaş, J. Evaluation of YOLOv8 Model Series with HOP for Object Detection in Complex Agriculture Domains. Int. J. Pure Appl. Sci. 2024, 10, 162–173. [Google Scholar] [CrossRef]
  37. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo algorithm developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
  38. Huang, Y.; Qian, Y.; Wei, H.; Lu, Y.; Ling, B.; Qin, Y. A Survey of Deep Learning-Based Object Detection Methods in Crop Counting. Comput. Electron. Agric. 2023, 215, 108425. [Google Scholar] [CrossRef]
  39. Sari, H.; Özcan, O. Soil Properties of The Quarry Areas in Suleymanpaşa-Tekırdag. Alınteri Zirai Bilim. Derg. 2018, 33, 75–83. [Google Scholar] [CrossRef]
  40. Mgm Turkish State Meteorological Service. Available online: https://www.mgm.gov.tr/veridegerlendirme/il-ve-ilceler-istatistik.aspx?k=A&m=TEKIRDAG (accessed on 1 August 2024).
  41. Köycü, N.D. Effect of Fungicides on Spike Characteristics in Winter Wheat Inoculated with Fusarium culmorum. Food Addit. Contam. Part A 2022, 39, 1001–1008. [Google Scholar] [CrossRef] [PubMed]
  42. Önler, E. Image augmentation in agriculture using the albumentations library. In New Trends in Agriculture, Forestry and Aquaculture Sciences; Duvar Publishing: Istanbul, Turkey, 2022; pp. 89–104. [Google Scholar]
  43. Buslaev, A.; Iglovikov, V.I.; Khvedchenya, E.; Parinov, A.; Druzhinin, M.; Kalinin, A.A. Albumentations: Fast and flexible image augmentations. Information 2020, 11, 125. [Google Scholar] [CrossRef]
  44. Tzutalin, D. LabelImg Free Software: MIT License; MIT: Cambridge, MA, USA, 2015. [Google Scholar]
  45. Wang, G.; Chen, Y.; An, P.; Hong, H.; Hu, J.; Huang, T. UAV-YOLOv8: A Small-Object-Detection Model Based on Improved YOLOv8 for UAV Aerial Photography Scenarios. Sensors 2023, 23, 7190. [Google Scholar] [CrossRef] [PubMed]
  46. Terven, J.; Córdova-Esparza, D.-M.; Romero-González, J.-A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
  47. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. Available online: https://openaccess.thecvf.com/content_cvpr_2018/html/Liu_Path_Aggregation_Network_CVPR_2018_paper.html (accessed on 1 August 2024).
  48. Sen, C.; Singh, P.; Gupta, K.; Jain, A.K.; Jain, A.; Jain, A. UAV Based YOLOV-8 Optimization Technique to Detect the Small Size and High Speed Drone in Different Light Conditions. In Proceedings of the 2nd International Conference on Disruptive Technologies (ICDT), Greater Noida, India, 15–16 March 2024. [Google Scholar] [CrossRef]
  49. Luo, D.; Xue, Y.; Deng, X.; Yang, B.; Chen, H.; Mo, Z. Citrus Diseases and Pests Detection Model Based on Self-Attention YOLOV8. IEEE Access 2023, 11, 139872–139881. [Google Scholar] [CrossRef]
  50. Zhang, W. Research on 10 Kinds of Beverage Target Detection Based on YOLOv8. In Proceedings of the IEEE 6th International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), Shenyang, China, 15–17 December 2023. [Google Scholar] [CrossRef]
  51. Liu, M.; Li, R.; Hou, M.; Zhang, C.; Hu, J.; Wu, Y. SD-YOLOv8: An Accurate Seriola Dumerili Detection Model Based on Improved YOLOv8. Sensors 2024, 24, 3647. [Google Scholar] [CrossRef]
  52. Kocakanat, K.; Serif, T. Turkish Traffic Sign Recognition: Comparison of Training Step Numbers and Lighting Conditions. Avrupa Bilim ve Teknoloji Dergisi 2021, 28, 1469–1475. [Google Scholar] [CrossRef]
  53. Jiang, P.-T.; Zhang, C.-B.; Hou, Q.; Cheng, M.-M.; Wei, Y. LayerCAM: Exploring Hierarchical Class Activation Maps for Localization. IEEE Trans. Image Process. 2021, 30, 5875–5888. [Google Scholar] [CrossRef]
  54. Lippi, M.; Bonucci, N.; Carpio, R.F.; Contarini, M.; Speranza, S.; Gasparri, A. A yolo-based pest detection system for precision agriculture. In Proceedings of the IEEE 29th Mediterranean Conference on Control and Automation (MED), Online, 22–25 June 2021; pp. 342–347. [Google Scholar]
  55. Soeb, M.J.A.; Jubayer, M.F.; Tarin, T.A.; Al Mamun, M.R.; Ruhad, F.M.; Parven, A.; Mubarak, N.M.; Karri, S.L.; Meftaul, I.M. Tea leaf disease detection and identification based on YOLOv7 (YOLO-T). Sci. Rep. 2023, 13, 6078. [Google Scholar] [CrossRef]
  56. Khan, F.; Zafar, N.; Tahir, M.N.; Aqib, M.; Waheed, H.; Haroon, Z. A mobile-based system for maize plant leaf disease detection and classification using deep learning. Front. Plant Sci. 2023, 14, 1079366. [Google Scholar] [CrossRef] [PubMed]
  57. Zhao, K.; Zhao, L.; Zhao, Y.; Deng, H. Study on Lightweight Model of Maize Seedling Object Detection Based on YOLOv7. Appl. Sci. 2023, 13, 7731. [Google Scholar] [CrossRef]
  58. Bansal, H.; Grover, A. Leaving reality to imagination: Robust classification via generated datasets. arXiv 2023, arXiv:2302.02503. [Google Scholar]
  59. Miao, Z.; Yu, X.; Li, N.; Zhang, Z.; He, C.; Li, Z.; Deng, C.; Sun, T. Efficient tomato harvesting robot based on image processing and deep learning. Precis. Agric. 2023, 24, 254–287. [Google Scholar] [CrossRef]
  60. Khan, I.H.; Liu, H.; Li, W.; Cao, A.; Wang, X.; Liu, H.; Cheng, T.; Tian, Y.; Zhu, Y.; Cao, W. Early detection of powdery mildew disease and accurate quantification of its severity using hyperspectral images in wheat. Remote Sens. 2021, 13, 3612. [Google Scholar] [CrossRef]
  61. Xuan, G.; Li, Q.; Shao, Y.; Shi, Y. Early diagnosis and pathogenesis monitoring of wheat powdery mildew caused by blumeria graminis using hyperspectral imaging. Comput. Electron. Agric. 2022, 197, 106921. [Google Scholar] [CrossRef]
  62. Yako, M.; Yamaoka, Y.; Kiyohara, T.; Hosokawa, C.; Noda, A.; Tack, K.; Spooren, N.; Hirasawa, T.; Ishikawa, A. Video-rate hyperspectral camera based on a CMOS-compatible random array of Fabry–Pérot filters. Nat. Photonics 2023, 17, 218–223. [Google Scholar] [CrossRef]
  63. Urbieta, M.; Urbieta, M.; Pereyra, M.; Laborde, T.; Villarreal, G.; Del Pino, M. A scalable offline AI-based solution to assist the diseases and plague detection in agriculture. J. Decis. Syst. 2023, 33, 459–476. [Google Scholar] [CrossRef]
  64. Goyal, L.; Sharma, C.M.; Singh, A.; Singh, P.K. Leaf and spike wheat disease detection & classification using an improved deep convolutional architecture. Inform. Med. Unlocked 2021, 25, 10064. [Google Scholar]
Figure 1. Model development workflow.
Figure 2. Image samples from the dataset.
Figure 3. Image augmentation examples.
Figure 4. Screenshot of image labeling with LabelIMG.
Figure 5. YOLOv8 architecture.
Figure 6. YOLOv8m model results in training.
Figure 7. YOLOv8m evaluation metrics against the confidence score.
Figure 8. Powdery mildew detection results in some test images.
Figure 9. Eigen-CAM visualization of powdery mildew detection.
Figure 10. Powdery mildew misclassification examples.
Table 1. Training results of object detection models.

Model      Inference Speed (ms)   Model Size (MB)   mAP50   Precision   Recall   F1 Score
YOLOv8n    0.5                    6.2               0.76    0.95        0.74     0.82
YOLOv8s    1.1                    22.5              0.77    0.98        0.77     0.85
YOLOv8m    2.3                    52.0              0.81    0.98        0.77     0.87
YOLOv8l    3.4                    87.6              0.81    0.98        0.77     0.87
YOLOv8x    5.8                    136.7             0.81    0.98        0.79     0.87