Lemon Leaf Disease Detection Using Deep Learning Techniques
Abstract:
Global agriculture relies heavily on lemon production to meet both food and economic needs. Lemon trees, however, can become infected with a number of diseases that, if not detected and treated promptly, can severely reduce yield and quality. Symptoms of disease are generally observed on the leaves, stems, and flowers; here, the leaves of the affected plant are used to identify and classify the disease. A leaf image is first captured with a smartphone and then processed to determine the condition of the plant. Identification of plant disease follows steps such as loading the image of the plant leaf, histogram equalization to enhance image contrast, segmentation, and evaluation using a confusion matrix and feature plot. The system under consideration employs a convolutional neural network (CNN) architecture trained on an extensive dataset of lemon leaf images, including both healthy leaves and leaves affected by common ailments such as anthracnose, citrus canker, and citrus greening.
1. Introduction

Citrus production is one of the most important agricultural activities in the world, and lemons are essential to the citrus sector. However, lemon trees can be affected by a number of diseases that seriously reduce production, quality, and commercial viability. Citrus canker and citrus greening, two of the most common diseases that plague lemon trees, are caused by bacterial pathogens. Early and efficient diagnosis of these diseases is essential for successful crop protection and disease management, making disease management a critical aspect of lemon cultivation.

Plants are the multicellular photosynthetic eukaryotes of the kingdom Plantae. Impairment of the normal state of a plant interrupts and modifies pivotal functions such as respiration and photosynthesis. Plant diseases interfere with processes such as the absorption and translocation of water, minerals, and nutrients. Flower and fruit development, plant growth, cell division, and cell enlargement are some of the processes affected by plant disease. All plant species, irrespective of how they are sown and reared, are subject to various kinds of disease. Plant diseases occur depending on the pathogens present and the environmental conditions, and are usually caused by fungi, bacteria, phytoplasmas, viruses, viroids, and nematodes. Certain plant varieties show resistance towards these pathogens, while others are susceptible to outbreaks and are therefore affected. Plant diseases are also a consequence of habitat loss and poor land management. They can devastate natural ecosystems and biological communities, thereby aggravating environmental problems. Identifying the cause and symptoms of these diseases, and knowing when and how to control them effectively, remains a great challenge. The severity of the diseases caused by these pathogens ranges from mild symptoms to the decline of the infected plant, depending on the aggressiveness of the pathogen, the resistance of the plant, the environmental conditions, the duration of infection, and other such factors. Symptoms vary with the infecting pathogen and with the plant part that is infected.

In tropical countries such as India and Malaysia, where environmental conditions are particularly favourable for disease, incomes are low, and knowledge of and investment in crop health management are minimal, crop losses tend to be greatest. Crop losses can make communities more dependent on imported and processed foods, often replacing a properly balanced, healthy diet with foods that create various health problems. The quantity, quality, and yield of agricultural produce are reduced as a direct result of disease. When plant diseases are caused by micro-organisms such as fungi, predicting the pathogen's life cycle is difficult and not always accurate.

2. Literature Survey

Many studies have been conducted on plant disease prediction. Anand H. Kulkarni et al. [1] developed a model for early and accurate plant disease identification using an artificial neural network (ANN) and several image processing techniques. The suggested approach uses a Gabor filter for feature extraction and an ANN-based classifier for classification, and it produces superior outcomes with an identification rate of up to 91%. The ANN-based classifier uses a blend of texture, colour, and disease characteristics to identify disorders across different plants. Revathi et al. [2] developed a model for the identification of visual disorders in plants. The plant is photographed and the image prepared for digital processing; features such as colour space, texture, and edges are then extracted and supplied to the classifiers. This study seeks to detect infection of cotton leaves using image processing techniques. Sharada P. Mohanty et al. [3] developed a model for disease identification using a deep learning method. The dataset used includes a collection of pictures of several kinds of crops, along with their attributes and healthy reference images. Two architectures, AlexNet and GoogLeNet, were used, the latter offering a 99% accuracy rate; however, their model showed a low classification rate for photos taken against contrasting backgrounds. Savita N. Ghaiwat et al. [4] presented a review of the various classification methods that can be used to categorize plant leaf diseases. The k-nearest-neighbour technique appears to be the simplest method of class prediction for the test scenario provided, while finding the correct parameters for an SVM appears to be one of its limitations when the training data are not linearly separable. Sindhuja Sankaran et al. [5] reviewed techniques for discovering plant diseases in their early stages from early symptoms, providing analytical information about the representation of the plant based on its colour properties; in addition, the K-means clustering algorithm was used to determine the extent of the illness, and fuzzy logic was finally applied for disease classification. Mrunalini R. Badnakhe et al. [6] discuss a method for categorising and identifying various plant diseases. A machine learning-based recognition system would be highly helpful for the Indian economy since it saves time, money, and effort. The Color Co-occurrence Method is suggested for feature-set extraction, and neural networks are used to detect leaf diseases automatically. The suggested method can considerably aid the precise detection of leaf diseases and appears to be a useful method in cases of stem and root infection while requiring less computational work.

3) Methodology

3.1) Data collection:


The Leaf Diseases Dataset is a collection of optical images of healthy and unhealthy leaves from different lemon plants. Unhealthy leaves in the dataset show spots, powdery mildew, and leaf scorch. Each image in the dataset is labelled with its corresponding plant species and disease. The availability of such datasets is crucial for the development of automated systems for the detection and diagnosis of plant diseases. With the increasing global demand for food, it is essential to optimize crop yield and reduce losses due to disease. Automated systems that can accurately and efficiently detect and diagnose plant diseases can help farmers identify and treat infected plants before the disease spreads, ultimately leading to increased crop yield and quality.
Fig-1 Diseased lemon leaves

3.2) Data preprocessing:

It is essential to use high-quality and well-balanced image data when applying the CNN technique to image classification. It is therefore essential to apply appropriate image enhancement and balancing mechanisms to improve the quality of the images presented to the model. This module examines the image processing techniques applied before training models such as LeNet-4, VGG-16, MobileNet-V3, and ResNet-152. For instance, the enhancement step divides each image into tiles or contextual regions, computes a histogram for each tile, and then redistributes the output according to a specified histogram distribution parameter.

Fig-2 Flow chart for distribution of parameters

3.2.1) Image enhancement:


Adaptive Histogram Equalization (AHE)
Adaptive histogram equalization differs from ordinary histogram equalization in that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image. It is therefore suitable for improving local contrast and enhancing the definition of edges in each region of an image.
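As a concrete illustration, the snippet below applies OpenCV's contrast-limited variant of adaptive histogram equalization (CLAHE) to the lightness channel of a leaf image. The file name, tile grid size, and clip limit are assumptions made for this sketch; the paper does not state the exact parameters used.

```python
# Sketch: adaptive (contrast-limited) histogram equalization with OpenCV.
# "leaf.jpg", the clip limit, and the tile grid size are illustrative assumptions.
import cv2

# Read the leaf image and work on the lightness channel so colour is preserved
bgr = cv2.imread("leaf.jpg")
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

# CLAHE computes a histogram per tile and redistributes lightness locally
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)

# Merge the equalized lightness channel back and save the enhanced image
enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("leaf_enhanced.jpg", enhanced)
```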

Fig-3 Enhanced images of lemon diseased leaves.

Fig-4 Initial image and adaptive-histogram-equalized image

3.3) Data Classification:


Database Name                Disease Class   Number of Images
Fruit Disease Images (FDI)   Black spot      19
                             Canker          78
                             Greening        16
                             Healthy         22
                             Scab            15
Leaf Disease Images (LDI)    Black spot      171
                             Canker          163
                             Greening        204
                             Healthy         58
                             Melanose        13
Total Images                                 759

Table 1 Number of samples in the two datasets before augmentation
Database Name                Disease Class   Number of Images   Number of Images after Augmentation
Fruit Disease Images (FDI)   Black spot      19                 1209
                             Canker          78                 1678
                             Greening        16                 1589
                             Healthy         22                 1400
                             Scab            15                 1590
Leaf Disease Images (LDI)    Black spot      171                1031
                             Canker          163                927
                             Greening        204                1007
                             Healthy         58                 800
                             Melanose        13                 980
Total Images                                 759                12211

Table 2 Number of samples in the two datasets after augmentation
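The paper does not list the exact augmentation operations used to grow the datasets in Table 2, so the following Keras sketch only illustrates one plausible way to generate augmented 224 x 224 leaf images from class sub-folders; the directory layout and parameter values are assumptions.

```python
# Illustrative augmentation pipeline (Keras); parameters are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=30,        # random rotations
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    zoom_range=0.2,           # random zoom
    horizontal_flip=True,     # mirror images
    rescale=1.0 / 255,        # normalise pixel values
)

# Flows batches of augmented 224x224 images from class sub-folders,
# e.g. data/LDI/Black_spot, data/LDI/Canker, ... (hypothetical paths).
train_gen = datagen.flow_from_directory(
    "data/LDI", target_size=(224, 224), batch_size=32, class_mode="categorical"
)
```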

Steps to train the images (a worked sketch follows this list):

Step 1: Start with images whose classes are known in advance.
Step 2: Find the feature set for each image and label it.
Step 3: Take the next image as input and identify its features.
Step 4: Extend the binary SVM to a multi-class SVM.
Step 5: Train the SVM using a kernel function.
Step 6: Find the class of the input image.
Step 7: Depending on the result, assign a label to the next image and add these features to the database.
Step 8: Repeat Steps 3 to 7 for all images to be used in the database.
Step 9: The output is the class of the input image.
Step 10: To find the accuracy of the system (the LeNet-4 in this case), a random set of inputs is chosen from the database for training and testing. Two separate sets are generated for training and testing; the steps for training and testing are the same, and the test is performed afterwards.
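As a hedged illustration of Steps 1-10, the sketch below trains a multi-class SVM with scikit-learn on placeholder feature vectors; the synthetic data and the RBF kernel are assumptions standing in for the paper's actual feature set and kernel choice.

```python
# Multi-class SVM sketch; the data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder data: in practice X would hold the extracted feature vectors
# (e.g. colour/texture descriptors) and y the known disease labels (Steps 1-2).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 5, size=200)

# Step 10: random split into separate training and test sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Steps 4-5: SVC extends the binary SVM to multi-class internally (one-vs-one)
# and is trained with a kernel function (here an RBF kernel).
clf = SVC(kernel="rbf")
clf.fit(X_tr, y_tr)

# Steps 6-9: predict the class of unseen images and measure accuracy
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```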

PROPOSED METHODOLOGY:
A convolutional neural network (CNN) is a machine learning method from deep learning. CNNs are trained using large collections of images, from which they can learn feature representations covering the whole collection. As an easier way to perform classification without extra time and effort, they can also be trained in advance as feature extractors. The following steps are involved (a sketch of this pipeline follows the list):

Step 1: Start with the input image.
Step 2: Create feature maps by applying filters (convolution, mean, median, and average filters).
Step 3: Apply a ReLU function to increase non-linearity.
Step 4: Apply a pooling layer to each feature map.
Step 5: Flatten the pooled feature maps into a single long vector.
Step 6: Feed the vector into a fully connected layer.
Step 7: Process the features; the final fully connected layer provides the votes for the classes we require.
Step 8: Train for many epochs through forward propagation and back propagation; this repeats until a neural network with the trained features is obtained.
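The following Keras sketch mirrors the steps above (convolution, ReLU, pooling, flattening, fully connected layers, softmax voting); the layer sizes and the five output classes are assumptions made only for illustration.

```python
# Minimal CNN sketch of the pipeline described in Steps 1-8.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),              # Step 1: input image
    layers.Conv2D(32, (3, 3), activation="relu"),   # Steps 2-3: convolution + ReLU
    layers.MaxPooling2D((2, 2)),                    # Step 4: pooling
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                               # Step 5: flatten to a vector
    layers.Dense(128, activation="relu"),           # Step 6: fully connected layer
    layers.Dense(5, activation="softmax"),          # Step 7: class "votes"
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Step 8: model.fit(train_gen, epochs=..., validation_data=val_gen) would train
# the network with forward and back propagation over many epochs.
```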

3.4) Result Evaluation:


LeafNet
LeafNet is a term often used to refer to deep learning models specifically designed for leaf
classification and recognition tasks. These tasks involve identifying plant species based on
images of their leaves. LeafNet models are particularly valuable in fields such as botany,
agriculture, ecology, and environmental science for tasks like plant species identification,
disease detection, and biodiversity monitoring. LeafNet architectures typically consist of
convolutional neural networks (CNNs) or other deep learning architectures tailored for image
classification tasks. These networks are designed to effectively learn hierarchical
representations of leaf images, enabling accurate classification. LeafNet models require large
datasets of labeled leaf images for training. These datasets may include images of leaves from
various plant species, captured under different conditions and angles to ensure robustness and
generalization of the model. Preprocessing techniques such as resizing, cropping, and
augmentation are often applied to the leaf images before feeding them into the network.
These techniques help improve the model's ability to generalize to unseen data and enhance
its performance.
Transfer learning is commonly used in LeafNet models, where pretrained models (e.g.,
models trained on ImageNet) are fine-tuned on leaf-specific datasets. This approach leverages
the knowledge learned from large-scale datasets and accelerates the training process for leaf
classification tasks. LeafNet models are evaluated based on metrics such as accuracy, precision,
recall, and F1-score, which measure the model's ability to correctly classify leaves into their
respective species or categories. LeafNet models find applications in various domains,
including plant taxonomy, agriculture, forestry, environmental monitoring, and plant disease
diagnosis. They enable researchers and practitioners to efficiently identify plant species, study
vegetation dynamics, and assess ecological patterns and processes. Overall, LeafNet models
play a crucial role in automating and streamlining leaf-related tasks, contributing to
advancements in plant science, biodiversity conservation, and sustainable agriculture. As deep
learning techniques continue to evolve, we can expect further improvements in the accuracy,
efficiency, and applicability of LeafNet models in real-world scenarios.
Classify Image using ResNet-152
ResNet is an abbreviation for Residual Network. ResNet has many variants that use the same concept but have different numbers of layers; ResNet-152 is the name given to the variant with 152 neural network layers. ResNet-152 is thus a deep convolutional neural network with 152 layers. A pretrained version of the network, trained on more than a million images from the ImageNet database, can be loaded. Pretrained networks can classify images into over 1000 object categories, so the network has learned detailed feature representations for a wide range of images. The network accepts image input of 224 by 224 pixels. The depth of a network is defined as the number of sequential convolutional or fully connected layers. We then add another 10 layers to obtain the highest accuracy on our dataset, giving 162 layers in total, so we call the resulting network Residual Network-162 (ResNet-162). In this paper, we assess the performance of ResNet-162 in image classification, demonstrating that the ImageNet-pretrained, extended network achieves an accuracy of around 93.5%.
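A minimal sketch of this idea, assuming a Keras workflow, is shown below: an ImageNet-pretrained ResNet-152 backbone is extended with a small custom classification head. The exact ten additional layers are not specified in the paper, so the head shown here is an assumption.

```python
# Sketch: extend a pretrained ResNet-152 backbone with a custom head.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet152

base = ResNet152(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pretrained ImageNet features frozen at first

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)     # assumed extra layers
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(5, activation="softmax")(x)  # five lemon-leaf classes assumed

model = models.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```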

DESIGN OBSERVATIONS:

TABLE 3 Confusion matrix for multiclass ResNet-152

Total images taken    Black spot   Citrus canker   Greening   Healthy leaf   Melanose
Black spot (7)        6            1               0          0              0
Citrus canker (19)    1            18              0          0              0
Greening (7)          1            1               5          0              0
Healthy leaf (16)     1            5               0          10             0
Melanose (9)          1            1               0          5              2

The accuracy obtained for the multiclass ResNet-152 is 90%.

TABLE 4 Confusion matrix for ResNet-162

Total images taken    Black spot   Citrus canker   Greening   Healthy leaf   Melanose
Black spot (7)        7            0               0          0              0
Citrus canker (19)    2            17              0          0              1
Greening (7)          0            0               7          0              0
Healthy leaf (16)     0            1               0          15             0
Melanose (9)          0            0               0          5              4

The accuracy obtained from the confusion matrix of ResNet-162 is 93.5%.

Fig-5 Confusion matrix image view of detecting a disease spot of lemon leaf.

4) RESULT ANALYSIS:
A total of 4000 leaf images were used in this study; Table 3 contains information about the data used in each class. The images were rescaled to 224 x 224 pixels, and the dataset was split 60:20:20 into training, validation, and test sets. Five transfer learning models, namely VGG-16, MobileNet-V3, LeNet-4, ResNet-152, and ResNet-162, were also implemented to analyse the performance of the proposed ResNet-162 model.
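A 60:20:20 split of this kind can be reproduced, for example, with scikit-learn; the arrays below are placeholders for the image indices and labels, not the paper's actual data.

```python
# Illustrative 60:20:20 train/validation/test split with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(4000)                      # stand-in for 4000 image indices
y = np.random.randint(0, 5, size=4000)   # stand-in class labels

# First hold out 40%, then split that portion half-and-half into validation and test
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 2400 800 800
```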
4.1) Evaluation Matrix
For the evaluation of the transfer learning models, performance parameters such as specificity, recall, accuracy, precision, false positive rate (FPR), false negative rate (FNR), and F1-score are determined. The confusion matrix is used to evaluate the parameters of the individual models. Accuracy gives the percentage of correct predictions. Precision estimates the proportion of positive classifications that are correct. Specificity evaluates the percentage of correctly predicted negative classes, whereas recall estimates the percentage of correctly predicted positive classes. The F1-score measures the balance between precision and recall. The performance parameters are given by the equations below.

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 * Precision * Recall / (Precision + Recall)
Accuracy = (TP + TN) / (TP + FN + TN + FP)
Specificity = TN / (TN + FP)

TPR = TP / (Actual Positives) = TP / (TP + FN)
FNR = FN / (Actual Positives) = FN / (TP + FN)
TNR = TN / (Actual Negatives) = TN / (TN + FP)
FPR = FP / (Actual Negatives) = FP / (TN + FP)
In this context, True Positive (TP) refers to cases where the model correctly identifies leaf
disease in plants which actually have the disease, while True Negative (TN) indicates cases
where the model correctly identifies the absence of leaf disease in plants which do not have it.
Conversely, a False Positive (FP) arises when the model forecasts the existence of leaf disease in
plants which do not have it, and False Negative (FN) arises when the model predicts the
absence of leaf disease in plants which have it. These elements are then used to calculate the
performance parameters.
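For illustration, the helper below derives these per-class quantities from a multi-class confusion matrix; the example matrix is a toy input, not the paper's reported results.

```python
# Sketch: per-class metrics from a multi-class confusion matrix (rows = true classes).
import numpy as np

def per_class_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as the class but actually another class
    fn = cm.sum(axis=1) - tp          # actually the class but predicted as another
    tn = cm.sum() - (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # also the TPR
    specificity = tn / (tn + fp)      # also the TNR
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / cm.sum()
    return precision, recall, specificity, f1, accuracy

cm = np.array([[7, 0, 0], [1, 8, 1], [0, 2, 6]])   # toy 3-class example
print(per_class_metrics(cm))
```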

4.2) Performance of pre-trained transfer learning models.


We began by training and evaluating five transfer learning models, computing their performance parameters using the above equations. Figure 6 depicts the performance results.

Fig-6 Bar graph for all of the methodologies used.

Transfer learning models can be evaluated using various performance measures. Among the models, the customized ResNet-162 performed consistently well, with an accuracy of 93.5%. The remaining models, ResNet-152, MobileNet-V3, VGG-16, AlexNet, and LeNet-4, achieved 90%, 83%, 87%, 67%, and 78%, respectively.
4.3) Performance of proposed ResNet-162
Among the models employed to predict lesions on leaves, the ResNet-162 transfer learning model shows the most potential. Nevertheless, its accuracy of 93.5% still implies an incorrect prediction rate of about 9.0% on the validation data, and such an accuracy level is unsatisfactory for diagnostic purposes. To lessen the likelihood of diagnostic errors, the model's accuracy must be improved to a level as close to 100% as possible while avoiding overfitting. This paper therefore advances a refined version of the model, ResNet-162, fine-tuned over 300 epochs. During training, the accuracy and loss values of the model were recorded for each epoch. As the figures below demonstrate, the model's accuracy rises moderately with each epoch while the loss value falls. At the 300th epoch, the final accuracy of the ResNet-162 model is 93.5% on the training data and 90.62% on the validation data, with lowest loss values of 8.21 for training and 9.38 for validation.

Fig-7 Scalar markers for ResNet-162

The figure shows the ResNet-162 architecture's accuracy against the number of epochs for a 15% validation set, along with its learning rate. The time per epoch varies from 1.17 to 0.5 seconds.

Fig-8 Radar markers for ResNet-162


Fig-9 Confusion matrix for ResNet-152 and ResNet-162

This shows the accuracy achieved by the ResNet-162 architecture with its learning rate; the time varies from 2.29 to 2.44 seconds per epoch. It also shows the accuracy for the ResNet-34 architecture with a 20% validation set, where the time varies from 1.11 to 1.22 seconds per epoch.

All models used here performed much better than random guessing, even with different backgrounds in the pictures, such as human hands, soil, or other distracting objects. The results also indicate that the models were not over-fitted to the datasets, because the training/validation split had only a tiny impact on the overall accuracies reported.

The confusion matrix from the mango dataset allows a more detailed analysis by showing how model performance varies with the different disease representations in the images. In the first confusion matrix plot, for the 15% validation split, the rows show the true classes and the columns show the predicted classes. The diagonal cells reflect the proportion of instances for which the trained network correctly predicts the class of the observations, i.e. the proportion for which the true and predicted classes match. The off-diagonal cells reflect where the network made inconsistent predictions. The proportions shown in the on- and off-diagonal cells indicate that the highest reported prediction accuracy is 0.95 for golmich, when the 20% validation set is used with the ResNet-152 architecture, and 0.95 for redrust, when the 15% validation set is used with ResNet-162.
5) CONCLUSION:
In this paper, we have introduced the basic knowledge of deep learning and presented a comprehensive review of recent research on plant leaf disease recognition using deep learning. Provided sufficient data are available for training, deep learning techniques are capable of recognizing plant leaf diseases with high accuracy. The importance of collecting large datasets with high variability, of data augmentation, transfer learning, and visualization of CNN activation maps in improving classification accuracy, and the importance of hyperspectral imaging for early detection of plant disease have been discussed. At the same time, there are also some shortcomings. Most of the deep learning frameworks proposed in the literature detect well on their own datasets but poorly on other datasets; that is, the models have poor robustness. More robust deep learning models are therefore needed to adapt to diverse disease datasets. In most of the research, the lemon leaf disease dataset was used to evaluate the performance of the deep learning models. Although this dataset contains many images of several plant species with their diseases, it has limitations; some studies are already using hyperspectral images of diseased leaves, and some deep learning frameworks are used for the early detection of plant leaf diseases. We conclude in this paper that, with the ReLU activation function, the multi-class classification accuracy of ResNet-162 is the highest, at 93.5%.

References:
[1] Anand H. Kulkarni, Ashwin Patil R. K., "Applying image processing technique to detect plant diseases", International Journal of Modern Engineering Research, vol. 2, no. 5, pp. 3661-3664, Sep.-Oct. 2012.
[2] Revathi, P., Hemalatha, M. (2012), "Classification of cotton leaf spot diseases using image processing edge detection techniques", pp. 169-173.
[3] J. Clerk Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892, pp. 68-73.
[4] Sharada P. Mohanty, David P. Hughes, and Marcel Salathe, "Using deep learning for image-based plant disease detection", Front. Plant Sci. 7:1419, 2016.
[5] Savita N. Ghaiwat, Parul Arora, "Detection and classification of plant leaf diseases using image processing techniques: a review", International Journal of Recent Advances in Engineering & Technology, vol. 2, no. 3, 2014.
[6] Sindhuja Sankaran, Ashish Mishra, and Cristina Davis, "A review of advanced techniques for detecting plant diseases", Computers and Electronics in Agriculture, 72 (2010), pp. 1-13.
[7] Mrunalini R. Badnakhe and Prashant R. Deshmukh, "An application of K-means clustering and artificial intelligence in pattern recognition for crop diseases", International Conference on Advancements in Information Technology, IPCSIT, vol. 20, 2011.
[8] Jayaprakash Sethupathy and S. Veni, "OpenCV based disease identification of mango leaves", Amrita Vishwa Vidyapeetham, Coimbatore, India.
[9] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016), "Rethinking the inception architecture for computer vision", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas), pp. 2818-2826.
[10] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., and Fei-Fei, L. (2014), "Large-scale video classification with convolutional neural networks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Columbus), pp. 1725-1732.
[11] Y. Kang et al., "Climate change impacts on crop yield, crop water productivity and food security – a review", Prog. Nat. Sci. (2009).
[12] R. Caserta et al., "Citrus biotechnology: what has been done to improve disease resistance in such an important crop?", Biotechnol. Res. Innov. (2019).
[13] X. Deng et al., "Citrus greening detection using visible spectrum imaging and C-SVC", Comput. Electron. Agric. (2016).
[14] M. Zhang et al., "Automatic citrus canker detection from leaf images captured in field", Pattern Recognit. Lett. (2011).
[15] M. Sharif et al., "Detection and classification of citrus diseases in agriculture based on optimized weighted segmentation and feature selection", Comput. Electron. Agric. (2018).
[16] G. Stegmayer et al., "Automatic recognition of quarantine citrus diseases", Expert Syst. Appl. (2013).
[17] T.U. Rehman et al., "Current and future applications of statistical machine learning algorithms for agricultural machine vision systems", Comput. Electron. Agric. (2019).
[18] H. Ali et al., "Symptom based automated detection of citrus diseases using color histogram and textural descriptors", Comput. Electron. Agric. (2017).
[19] S. Khan et al., "A review on the application of deep learning in system health management", Mech. Syst. Signal Process. (2018).
[20] S.-H. Wang et al., "Classification of Alzheimer's disease based on eight-layer convolutional neural network with leaky rectified linear unit and max pooling", J. Med. Syst. (2018).

[21] K.P. Ferentinos, "Deep learning models for plant disease detection and diagnosis", Comput. Electron. Agric. (2018).
[22] C.R. Rahman et al., "Identification and recognition of rice diseases and pests using convolutional neural networks", Biosyst. Eng. (2020).
[23] J. Lu et al., "An in-field automatic wheat disease diagnosis system", Comput. Electron. Agric. (2017).
[24] J. Ma et al., "A recognition method for cucumber diseases using leaf symptom images based on deep convolutional neural network", Comput. Electron. Agric. (2018).
[25] G. G. et al., "Identification of plant leaf diseases using a nine-layer deep convolutional neural network", Comput. Electr. Eng. (2019).
[26] J. Gu et al., "Recent advances in convolutional neural networks", Pattern Recognit. (2018).
[27] M. Khanramaki et al., "Citrus pests classification using an ensemble of deep learning models", Comput. Electron. Agric. (2021).
[28] A. Michele et al., "MobileNet convolutional neural networks and support vector machines for palmprint recognition", Procedia Comput. Sci. (2019).
[29] S. Rajpal et al., "Using handpicked features in conjunction with ResNet-50 for improved detection of COVID-19 from chest X-ray images", Chaos, Solitons & Fractals (2021).
[30] D. Kim et al., "Citrus black spot detection using hyperspectral imaging", Int. J. Agric. Biol. Eng. (2014).
[31] C.A. Deutsch et al., "Increase in crop losses to insect pests in a warming climate", Science (2018).
[32] K. Khanchouch et al., "Major and emerging fungal diseases of citrus in the Mediterranean region".
[33] S. Savary et al., "Crop health and its global impacts on the components of food security", Food Secur. (2017).
[34] J.F. Sundström et al., "Future threats to agricultural food production posed by environmental degradation, climate change, and animal and plant diseases – a risk analysis in three economic and climate settings", Food Secur. (2014).
[35] L. Sun et al., "Citrus genetic engineering for disease resistance: past, present and future", Int. J. Mol. Sci. (2019).
[36] H. Jia et al., "Genome editing of the disease susceptibility gene CsLOB1 in citrus confers resistance to citrus canker", Plant Biotechnol. J. (2017).
[37] Sachin D. Khirade et al., "Plant disease detection using image processing", 2015 International Conference on Computing Communication Control and Automation, pp. 768-771, 2015.
[38] Anand R. et al., "An application of image processing techniques for detection of diseases on brinjal leaves using K-means clustering method", 2016 Fifth International Conference on Recent Trends in Information Technology, 2016.
[39] Sushil R. Kamlapurkar, "Detection of plant leaf disease using image processing approach", International Journal of Scientific and Research Publications, vol. 6, issue 2, February 2016.
[40] T.Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, "Focal loss for dense object detection", IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318-327, 2020, doi: 10.1109/TPAMI.2018.2858826.
