
International Journal of Computer Applications (0975 – 8887)

Volume 144 – No.12, June 2016

Detection and Recognition of Diseases from Paddy Plant Leaf Images
K. Jagan Mohan, Assistant Professor, Dept. of CSE, Annamalai University, Annamalainagar - 608002
M. Balasubramanian, Assistant Professor, Dept. of CSE, Annamalai University, Annamalainagar - 608002
S. Palanivel, Professor, Dept. of CSE, Annamalai University, Annamalainagar - 608002

ABSTRACT
In the agricultural field, paddy cultivation plays a vital role, but its growth is affected by various diseases. If the diseases are not identified at an early stage, production decreases. The main goal of this work is to develop an image processing system that can identify and classify the various paddy plant diseases affecting the cultivation of paddy, namely brown spot disease, leaf blast disease and bacterial blight disease. The work is divided into two parts: paddy plant disease detection and recognition of paddy plant diseases. In disease detection, the disease-affected portion of the paddy plant is first identified using Haar-like features and an AdaBoost classifier. The detection accuracy rate is found to be 83.33%. In disease recognition, the paddy plant disease type is recognized using the Scale Invariant Feature Transform (SIFT) feature and two classifiers, namely k-Nearest Neighbours (k-NN) and Support Vector Machine (SVM). With this approach one can detect the disease at an early stage and thus take the necessary steps in time to minimize the loss of production. The disease recognition accuracy rate is 91.10% using SVM and 93.33% using k-NN.

Keywords
Pre-processing, Disease detection, Disease recognition, Haar-like features, AdaBoost classifier, SIFT features, k-NN classifier.

1. INTRODUCTION
India is an agriculture-based country with many people working in the agriculture industry. The agricultural sector plays an important role in economic development by providing rural employment. Paddy is one of the nation's most important products, as it is one of India's staple food and cereal crops; because of that, many efforts have been taken to ensure its safety, one of them being crop management of paddy plants. Paddy plants are affected by various fungal and bacterial diseases. This work focuses on recognizing three paddy plant diseases, namely Brown Spot Disease (BSD), Leaf Blast Disease (LBD) and Bacterial Blight Disease (BBD). Proper detection and recognition of the disease is very important in applying the required fertilizer. BSD is one of the most common fungal diseases and among the most damaging paddy plant diseases. Its lesions are at first small, circular and dark brown to purple-brown. Fully developed lesions are round to oval with a light brown to gray centre, surrounded by a reddish brown margin caused by the toxin produced by the fungus. BBD is caused by the bacterium Xanthomonas oryzae; it causes wilting of seedlings and yellowing and drying of leaves. LBD is caused by the fungus Magnaporthe oryzae. Initial symptoms appear as white to gray-green lesions or spots with dark green borders. Older lesions on the leaves are elliptical or spindle-shaped and whitish to gray at the centre with red to brownish margins; some resemble a diamond shape, wide in the middle and pointed at either end.

The main objective of this work is to develop a system for classifying the paddy plant diseases using image processing techniques.

2. RELATED WORK
P. R. Rothe [1] proposed a work based on a pattern recognition system for the identification and classification of three cotton leaf diseases, namely Bacterial Blight, Myrothecium and Alternaria. Image segmentation is done using an active contour model. An adaptive neuro-fuzzy inference system used Hu's moments as features for the training procedure. The classification accuracy is 85 percent.

Viraj A. Gulhane [2] addressed the problem of diagnosis of diseases on cotton leaves using Principal Component Analysis and a Nearest Neighbourhood classifier. It covers diseases like Blight, Leaf Necrosis, Gray Mildew, Alternaria and Magnesium Deficiency. The classification accuracy is 95 percent.

Rong Zhou et al. [3] presented a novel method for robust and early Cercospora leaf spot detection in sugar beet using a hybrid algorithm of template matching and support vector machine. The technique is robust and practical for early detection and continuous quantization under natural light conditions. Sometimes the recognition becomes complex because of unpredictable changes in the outer environment.

John William Orillo et al. [4] applied digital image processing techniques to eliminate the manual inspection of diseases in rice plants that usually occur in Philippine farmlands, namely bacterial leaf blight, brown spot and rice blast. The disease identification accuracy is 100 percent over 134 sample images.

Auzi Asfarian et al. [5] attempted to identify the four major paddy diseases in Indonesia, namely leaf blast, brown spot, bacterial leaf blight and tungro. Fractal descriptors are used to analyse the texture of the lesions. The disease identification accuracy is 83 percent.

Kholis Majid et al. [6] developed a mobile application for a paddy plant disease identification system using fuzzy entropy and a probabilistic neural network classifier that runs on the Android mobile operating system. It covers four sorts of diseases, namely brown spot, leaf blast, tungro and bacterial leaf blight. The accuracy of paddy disease identification is 91.46 percent.

3. OUTLINE OF THE WORK
This recognition system involves feature extraction and classification of diseases using k-NN and SVM. The proposed work is described in Section 4. Disease detection is described in Section 5. Feature extraction is described in Section 6. Image classification


is described in Section 7. Experimental results are described in Section 8. The conclusion and future scope are described in Section 9.

4. PROPOSED WORK
The block diagram of the proposed system is shown in Fig. 1. It is an image recognition system for identifying paddy plant diseases that first involves disease detection and then disease recognition. This work does not involve any colour or shape feature extraction techniques; instead it uses SIFT (Scale Invariant Feature Transform) for extracting the local features in images that have sudden changes in intensities, by filtering images at various scales and patches of interest.

The disease detection part uses Haar-like features and an AdaBoost (Adaptive Boosting) classifier to locate the disease-affected portion of the paddy plant. The disease recognition part uses SIFT feature extraction and two classifiers, namely k-NN (k-Nearest Neighbours) and SVM (Support Vector Machine), to recognize the various categories of diseases like brown spot, leaf blast and bacterial blight.

(Block diagram: Input Image -> Disease Detection [Haar-like Feature Extraction, AdaBoost Classifier] -> Disease Recognition [SIFT Feature Extraction, k-NN Classifier / SVM Classifier] -> Recognized Diseases)

Fig 1. Block diagram for the proposed system

5. DISEASE DETECTION USING HAAR-LIKE FEATURE AND ADABOOST CLASSIFIER
5.1 Haar-Like Feature Extraction
Haar-like features are digital image features used for detection purposes. They owe their name to their intuitive similarity with Haar wavelets. Historically, working with only image intensities (i.e., the RGB pixel values at each and every pixel of an image) made the task of feature calculation computationally expensive. Viola and Jones adapted the idea of using Haar wavelets and developed the so-called Haar-like features.

A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region and calculates the difference between these sums. This difference is then compared with a learned threshold that separates the infected region from the non-infected region. The main advantage of a Haar-like feature is its high computation speed: a Haar-like feature of any size can be calculated in constant time.

A simple rectangular Haar-like feature can be defined as the difference of the sums of the pixels of areas inside the rectangle, which can be at any position and scale within the original image. This modified feature set is called a 2-rectangle feature. The value indicates certain characteristics of a particular area of the image. The rectangular Haar-like feature types are shown in Fig. 2, which indicates (a) horizontal variations, (b) vertical variations, (c) horizontal changes and (d) diagonal variations of pixels in images respectively.

(Fig. 2 masks, weight per rectangle: (a) [-1 | +1], (b) [-1 / +1], (c) [+1 | -2 | +1], (d) [+1 -1 / -1 +1])

Fig. 2 Haar-like rectangular feature types

The feature value is given as F(x) = w1.Sum(r1) + w2.Sum(r2), where r1 and r2 represent the darker and the lighter rectangular regions respectively, and w1 and w2 are the corresponding weights, which can be negative (-1) and positive (+1) respectively.

5.2 AdaBoost Classifier
The AdaBoost classifier, short for "Adaptive Boosting", is a machine learning meta-algorithm. It is often referred to as the best out-of-the-box classifier. The AdaBoost training process selects only those features known to improve the predictive power of the model, reducing dimensionality and potentially improving execution time, as irrelevant features need not be computed. The AdaBoost classifier is shown in Fig. 3.

Fig. 3 AdaBoost Classifier

The input image is subdivided into a number of sub-windows. Rectangle prototypes are measured independently in the vertical and horizontal directions. The rectangle prototypes are applied to all sub-windows of the image to obtain the complete set of rectangle features. The rectangle features are clustered into N sets (N is normally 20). A classifier is trained using the AdaBoost learning algorithm for each set of rectangular features, and an N-stage classifier is created from the N sets of rectangle features.
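The constant-time evaluation claimed in Section 5.1 is normally achieved with an integral image (summed-area table); the following is a minimal sketch under that assumption, not the authors' code:

```python
import numpy as np

# Sketch: a 2-rectangle Haar-like feature evaluated via an integral
# image, so any rectangle sum costs O(1) regardless of its size.

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in img[r0:r1, c0:c1] from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_2rect_horizontal(ii, r0, c0, h, w):
    """F = (+1)*sum(right half) + (-1)*sum(left half), as in Fig. 2(a)."""
    half = w // 2
    left = rect_sum(ii, r0, c0, r0 + h, c0 + half)
    right = rect_sum(ii, r0, c0 + half, r0 + h, c0 + w)
    return right - left

img = np.array([[1, 1, 5, 5],
                [1, 1, 5, 5]])
ii = integral_image(img)
print(haar_2rect_horizontal(ii, 0, 0, 2, 4))  # 20 - 4 = 16
```

Only four lookups are needed per rectangle, which is why the feature value can be computed in constant time at any scale.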

5.3 Testing and Training of the AdaBoost Classifier
For paddy plant disease detection, a total of 60 disease-affected sample images are taken. Each image undergoes training and testing using the AdaBoost classifier algorithm. Depending on the number of iterations, the number of training stages and the false alarm rate vary for every input image given to the classifier. The AdaBoost classifier is mainly used to combine weak classifiers into a strong classifier. Various paddy plant disease images are taken, such as brown spot, leaf blast and bacterial blight. Among the 60 diseased images, only 50 images are detected accurately, so the detection rate for disease-affected images is 50/60 = 83.33%.

Fig. 6 Screen shot for disease detection of bacterial blight
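The weak-to-strong combination described above can be illustrated with a minimal AdaBoost using decision stumps on 1-D data; this is an illustrative sketch with toy numbers, not the authors' Haar-feature training setup:

```python
import math

# Minimal AdaBoost sketch: weighted decision stumps on 1-D samples.
# (The paper instead boosts over Haar-like features of image windows.)

def stump_predict(x, thr, polarity):
    """Weak classifier: +1 on one side of thr, -1 on the other."""
    return polarity if x > thr else -polarity

def train_adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n                       # per-sample weights
    model = []                              # list of (alpha, thr, polarity)
    for _ in range(rounds):
        best = None                         # stump with lowest weighted error
        for thr in sorted(set(xs)):
            for polarity in (+1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if stump_predict(xi, thr, polarity) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, polarity)
        err, thr, polarity = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # this stump's vote weight
        model.append((alpha, thr, polarity))
        # re-weight: misclassified samples gain weight for the next round
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, thr, polarity))
             for xi, yi, wi in zip(xs, ys, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def strong_predict(model, x):
    score = sum(a * stump_predict(x, t, p) for a, t, p in model)
    return 1 if score >= 0 else -1

xs, ys = [0.0, 1.0, 2.0, 3.0], [-1, -1, 1, 1]
model = train_adaboost(xs, ys)
print([strong_predict(model, x) for x in xs])  # [-1, -1, 1, 1]
```

The final strong classifier is a weighted vote of the selected weak classifiers, which is the conversion from weak to strong mentioned above.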
The cropped views of the diseased images after detection are shown in the figures below.

Fig. 4 Screen shot for disease detection of brown spot

Fig. 5 Screen shot for disease detection of leaf blast

Fig. 7 Cropped view of brown spot disease

Fig. 8 Cropped view of leaf blast disease

Fig. 9 Cropped view of bacterial blight disease
6. FEATURE EXTRACTION IN DISEASE
RECOGNITION
6.1 Scale Invariant Feature Transform (SIFT)
SIFT is a method to extract distinctive features from gray-level images by filtering the images at multiple scales and selecting patches of interest that have sharp changes in local image intensities. The SIFT algorithm consists of four major stages: scale-space extrema detection, keypoint localization, orientation assignment and representation of a keypoint descriptor.

The features are located at maxima and minima of a Difference of Gaussian (DoG) function applied in scale space. The descriptors are then computed as a set of orientation histograms on 4x4 pixel neighbourhoods, each histogram containing 8 bins. This leads to a SIFT feature vector with 4x4x8 (i.e., 128) dimensions for each patch.

6.1.1 Detection of Scale-Space Extrema
The first stage is to construct a Gaussian "scale space" function from the input image. This is formed by convolution of the original image with Gaussian functions of varying widths. The scale space of an image is defined as a function L(x, y, σ) that is produced from the convolution of a variable-scale Gaussian, G(x, y, σ), with an input image, I(x, y):

L(x, y, σ) = G(x, y, σ) ∗ I(x, y)

where ∗ is the convolution operation in x and y, and

G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))

To efficiently detect stable keypoint locations in scale space, David Lowe proposed using scale-space extrema of the difference-of-Gaussian function convolved with the image, D(x, y, σ), which can be computed from the difference of two adjacent scales separated by a constant multiplicative factor k:

DoG(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y)

There are several reasons for choosing this function. To begin with, it is a particularly efficient function to compute: the smoothed images L must be computed in any case for the scale-space feature description, and DoG can then be computed by simple image subtraction, as shown in Fig. 10. To detect the local maxima and minima of DoG(x, y, σ), every point is compared with its 8 neighbours at the same scale and its 9 neighbours one scale above and below, as shown in Fig. 11. If this value is the minimum or maximum of all of these points, then the point is an extremum.

Fig. 10 Difference of Gaussian

Fig. 11 Detection of scale space extrema

6.1.2 Key Point Localization
This stage attempts to eliminate some points from the candidate list of keypoints by finding those that have low contrast or are poorly localized on an edge. The value of the keypoints in the DoG pyramid at the extrema is given by:

D(z) = D + (1/2) (∂D/∂x)ᵀ z

If the function value at z is below a threshold value, this point is excluded. To eliminate poorly localized extrema, we use the fact that in these cases there is a large principal curvature across the edge but a small curvature in the perpendicular direction of the difference-of-Gaussian function. A 2x2 Hessian matrix, H, computed at the location and scale of the keypoint, is used to find the curvature:

H = [ Dxx  Dxy ]
    [ Dxy  Dyy ]

With these formulas, the ratio of principal curvatures can be checked efficiently.

6.1.3 Orientation Assignment
This step aims to assign a consistent orientation to the keypoints based on local image properties. An orientation histogram is formed from the gradient orientations of sample points within a region around the keypoint. A 16x16 window is chosen in this implementation. The orientation histogram has 36 bins covering the 360-degree range of orientations. The gradient magnitude, m(x, y), and orientation, θ(x, y), are precomputed using pixel differences:

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )

θ(x, y) = tan⁻¹( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )

Each sample is weighted by its gradient magnitude and by a Gaussian-weighted circular window with a σ that is 1.5 times the scale of the keypoint. Peaks in the orientation histogram correspond to dominant directions of the local gradients. We find the highest peak in the histogram and use this peak, along with any other local peak within 80% of its height, to create keypoints with that orientation. Some points will therefore be assigned multiple orientations if there are multiple peaks of similar magnitude. A Gaussian distribution is fit to the 3 histogram values nearest to each peak to interpolate the peak position for better accuracy. This computes the location, orientation and scale of the SIFT features that have been found in the image. These features respond strongly to corners and intensity gradients. The length of each arrow indicates the magnitude of the difference at the keypoint, and the arrow points from the dark to the brighter side.
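The magnitude and orientation formulas above translate directly to code; a small NumPy sketch with a hypothetical 3x3 patch (the final bin index uses the 10-degree bins of the 36-bin histogram):

```python
import numpy as np

# Sketch of the gradient formulas with central pixel differences.
# The 3x3 patch values below are hypothetical.

def grad_mag_ori(L, x, y):
    dx = L[y, x + 1] - L[y, x - 1]           # L(x+1, y) - L(x-1, y)
    dy = L[y + 1, x] - L[y - 1, x]           # L(x, y+1) - L(x, y-1)
    m = np.hypot(dx, dy)                     # gradient magnitude
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0   # orientation in [0, 360)
    return m, theta

L = np.array([[0, 0, 0],
              [1, 2, 5],
              [0, 8, 0]], dtype=float)
m, theta = grad_mag_ori(L, 1, 1)
print(round(m, 3), round(theta, 2), int(theta // 10))  # 8.944 63.43 6
```

Here dx = 4 and dy = 8, so m = sqrt(80) and θ = atan2(8, 4), which falls into bin 6 of the 36-bin orientation histogram.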


6.1.4 Keypoint Descriptor
In this stage, a descriptor is computed for the local image region that is as distinctive as possible at each candidate keypoint. The image gradient magnitudes and orientations are sampled around the keypoint location. A Gaussian weighting function with σ related to the scale of the keypoint is used to assign a weight to each magnitude; we use a σ equal to one half of the width of the descriptor window in this implementation. To achieve orientation invariance, the coordinates of the descriptor and the gradient orientations are rotated relative to the keypoint orientation. This procedure is shown in Fig. 12: a 16x16 sample array is computed, and a histogram with 8 bins is used for each 4x4 sub-region, so a descriptor contains 4x4x8 = 128 components in total.

The samples are weighted by a Gaussian window, indicated by the overlaid circle, and the image gradients are accumulated into orientation histograms. Each histogram covers 8 directions, indicated by the arrows, and is computed from a 4x4 sub-region. The length of each arrow corresponds to the sum of the gradient magnitudes near that direction within the region.

Fig. 12 Building the keypoint descriptor

7. CLASSIFICATION IN DISEASE RECOGNITION
7.1 SVM Classifier
The Support Vector Machine (SVM) is based on Structural Risk Minimization (SRM). Like an RBFNN, support vector machines can be used for pattern classification and nonlinear regression. SVM builds a linear model to estimate the decision function using nonlinear class boundaries based on support vectors. If the data are linearly separable, SVM trains linear machines to find an optimal hyperplane that separates the data without error and with the maximum separation between the hyperplane and the closest training points. The training points nearest to the optimal separating hyperplane are called support vectors. Fig. 13 shows the architecture of the SVM. SVM maps the input image into a higher-dimensional feature space through some nonlinear mapping chosen a priori. A linear decision surface is then constructed in this high-dimensional feature space. Thus, SVM is a linear classifier in the parameter space, but it becomes a nonlinear classifier as a result of the nonlinear mapping of the space of the input patterns into the high-dimensional feature space.

Fig. 13 Architecture of SVM

7.1.1 SVM Principle
The support vector machine (SVM) can be used for classifying the obtained data (Burges, 1998). SVMs are a set of related supervised learning methods used for classification and regression. They belong to a family of generalized linear classifiers. Let us denote an input vector (termed an example) by x = (x1, x2, ..., xn) and its class label by y such that y ∈ {+1, −1}. We then consider the problem of separating a set of n training patterns belonging to the two classes.

7.1.2 SVM for Linearly Separable Data
A linear SVM is used to separate data sets which are linearly separable. The SVM linear classifier maximizes the margin around the separating hyperplane. The samples lying on the maximal margins are called support vectors, and such a hyperplane with maximum margin is called the maximum-margin hyperplane. In the case of a linear SVM, the discriminant function is of the form:

g(x) = ωᵀx + b

such that g(xi) ≥ 0 for yi = +1 and g(xi) < 0 for yi = −1. In other words, training samples from the two different classes are separated by the hyperplane g(x) = ωᵀx + b = 0. SVM finds the hyperplane that results in the largest separation between the decision function values of the two classes. Theoretically, this hyperplane can be found by minimizing the following cost function:

J(ω) = ωᵀω

For the linearly separable case, the decision rule given by an optimal hyperplane separating the two decision classes can be written in terms of the support vectors:

Y = sign( Σ_{i=1}^{Ns} αi yi ⟨x, xi⟩ + b )

where Y is the outcome, yi is the class label of the training sample xi, and ⟨x, xi⟩ represents the inner product. The vector x is an input, and the vectors xi, i = 1, ..., Ns, are the support vectors.

7.1.3 SVM for Linearly Non-Separable Data
For non-linearly separable data, the data in the input space are mapped into a high-dimensional space with a kernel function K(x, xi) in order to find the separating hyperplane:

Y = sign( Σ_{i=1}^{Ns} αi yi K(x, xi) + b )

7.1.4 Determining Support Vectors
The support vectors are the (transformed) training patterns. The support vectors are (equally) close to the hyperplane. They are the training samples that define the optimal separating hyperplane and are the most difficult patterns to classify.
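The linear decision rule Y = sign(Σ αi yi ⟨x, xi⟩ + b) can be sketched with hypothetical support vectors and multipliers (toy numbers, not values learned from the paper's data):

```python
import numpy as np

# Toy linear-SVM decision function from stored support vectors.
# All numbers here are hypothetical; training would supply them.
sv = np.array([[1.0, 1.0], [3.0, 3.0]])   # support vectors x_i
sv_y = np.array([-1.0, 1.0])              # their class labels y_i
alpha = np.array([0.5, 0.5])              # Lagrange multipliers alpha_i
b = -4.0                                  # bias: boundary is x1 + x2 = 4

def svm_decide(x):
    # g(x) = sum_i alpha_i * y_i * <x, x_i> + b
    g = float(np.sum(alpha * sv_y * (sv @ x)) + b)
    return 1 if g >= 0 else -1

print(svm_decide(np.array([0.0, 0.0])))   # -1 (below the boundary)
print(svm_decide(np.array([3.0, 2.0])))   # 1 (above the boundary)
```

For the non-separable case of Section 7.1.3, the inner product sv @ x would simply be replaced by a kernel evaluation K(x, xi).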


Informally speaking, the support vectors are the patterns most informative for the classification task.

7.2 k-NN Classifier
In pattern recognition, the k-Nearest Neighbours algorithm (k-NN for short) is a non-parametric method used for classification and regression. In both cases, the input consists of the k nearest training samples in the feature space. The output depends on whether k-NN is used for classification or regression.

In k-NN classification, the output is a class membership. An object is classified by a majority vote of its neighbours, with the object being assigned to the class most common among its k nearest neighbours (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of that single nearest neighbour.

In k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbours.

k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The k-NN algorithm is among the simplest of all machine learning algorithms. The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.

In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point. A commonly used distance metric for continuous variables is the Euclidean distance. For discrete variables, such as in text classification, another metric can be used, such as the overlap metric (or Hamming distance). In the context of gene expression microarray data, for instance, k-NN has also been used with correlation coefficients such as Pearson and Spearman. Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as Large Margin Nearest Neighbor or Neighbourhood Components Analysis.

A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new sample, since they tend to be common among the k nearest neighbours because of their large number. One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbours. Another way to overcome skew is by abstraction in the data representation. For example, in a Self-Organizing Map (SOM), each node is a representative (a centre) of a cluster of similar points, regardless of their density in the original training data. k-NN can then be applied to the SOM.

A k-NN algorithm can also involve selecting a predefined number of days similar in characteristics to the day of interest; one of these days is randomly resampled to represent the weather of the following day in the simulation period. Despite their inherent simplicity, nearest neighbour algorithms are considered versatile and robust. These methods have been studied intensively in the fields of statistics and pattern recognition, which aim at identifying distinct patterns. The nearest neighbour approach involves simultaneous sampling of weather variables such as precipitation and temperature, with the sampling carried out from the observed data, with replacement.

8. EXPERIMENTAL RESULTS
8.1 SIFT Features with SVM
The Support Vector Machine is used to construct the optimal separating hyperplane for the various paddy plant disease features. For identifying a disease, the paddy plant disease features are extracted using SIFT from the input images for the three disease categories. In the training phase, a seven-dimensional feature vector is extracted from each diseased image and is given as input to the SVM model. The seven features are x position, y position, scale (sub-level), size of the feature on the image, edge flag, edge orientation, and curvature of response through scale space. For training, seven features per disease are extracted, but the number of keypoints varies and depends on the image complexity.

For recognition, the seven disease features are fed into the SVM model and the distance between each of the feature vectors and the SVM hyperplane is derived. The average distance is calculated for each model; the average distance gives a better result than using the distance for each feature vector. The recognition of the disease is decided based on the maximum distance.

8.2 SIFT Features with k-NN
In the training phase, SIFT is applied to all paddy plant disease categories. In our work, seven SIFT features are extracted from each keypoint. The number of keypoints extracted from an input image differs from image to image and depends on the complexity of the image. The k-Nearest Neighbours algorithm is a non-parametric method used for classification and regression; here the input consists of the k = 3 closest training examples in the feature space, and the training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.

In the classification phase, k is a user-defined constant (in this work k = 3), and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point. The k-NN recognizes the disease by comparing the test features with the training features of the different disease-affected images. The recognition of the disease is based on the minimum distance value.

Paddy plant disease affected images for 3 classes, namely Brown Spot, Leaf Blast and Bacterial Blight, have been taken. Using SIFT, seven features, namely x position, y position, scale (sub-level), size of the feature on the image, edge flag, edge orientation, and curvature of response through scale space, are extracted from each keypoint. In the training phase, SIFT is applied to all training image categories.

The images are then classified through a framework that includes the identification of local features, the representation of those features as Scale Invariant Feature Transform (SIFT) descriptors, the construction of codebooks, which provide a way to map descriptors into a fixed-length vector in histogram space, and the multi-class classification of the feature histograms using support vector machines (SVM) and k-Nearest Neighbours.

A linear SVM is used to classify data sets which are linearly separable; the SVM linear classifier tries to maximize the margin around the separating hyperplane, and the samples lying on the maximal margins are called support vectors. In machine learning, support vector machines are supervised learning models with associated learning algorithms that analyse data and recognize patterns, used for classification. The k-Nearest Neighbours (k-NN) classification divides the data into a test set and a training set. For each row of the test set, the k nearest training set objects are found based on Euclidean distance, and the classification is determined by majority vote, with ties broken at random.
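The k = 3 majority vote described above can be sketched in a few lines (toy 2-D points standing in for the seven-dimensional SIFT features; class names are illustrative):

```python
from collections import Counter
import math

# Sketch: k-NN majority vote over Euclidean distance (k = 3, as in
# this work). The 2-D points and labels below are toy data.

def knn_predict(train, labels, query, k=3):
    """Return the most frequent label among the k nearest samples."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], query))[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

train = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = ["brown_spot"] * 3 + ["leaf_blast"] * 3
print(knn_predict(train, labels, (0.5, 0.5)))  # brown_spot
print(knn_predict(train, labels, (5.5, 5.5)))  # leaf_blast
```

The training "phase" is indeed just storing `train` and `labels`; all distance computation is deferred to query time, which is the lazy-learning property noted in Section 7.2.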


Total number of disease affected images (3 classes) = 120 images
Total number of training samples = 90 images (each class contains 30 images)
Total number of testing samples = 30 images (each class contains 10 images)
Table 1 Confusion Matrix for SIFT using SVM

Paddy Plant Diseases   TP   FN   FP   TN
Brown Spot              9    1    1   19
Leaf Blast              8    2    2   18
Bacterial Blight        9    1    1   19

Fig. 15 Screen shot for detection and recognition of brown spot disease

Table 2 Confusion Matrix for SIFT using k-NN

Paddy Plant Diseases   TP   FN   FP   TN
Brown Spot              9    1    0   20
Leaf Blast              9    1    2   18
Bacterial Blight        9    1    1   19

Fig. 16 Screen shot for detection and recognition of leaf blast disease

Table 3 Performance table for paddy plant disease recognition with SIFT using SVM and k-NN

Classifier   Precision (%)   Recall (%)   Accuracy (%)   F-score (%)
SIFT+SVM     86.66           86.66        91.10          86.66
SIFT+k-NN    90.60           90.00        93.33          90.14
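Table 3 can be reproduced from the per-class confusion matrices in Tables 1 and 2 by macro-averaging the per-class metrics (a quick consistency check; standard rounding gives 86.67/91.11/90.61/90.15 where the table truncates to 86.66/91.10/90.60/90.14):

```python
# Per-class (TP, FN, FP, TN) entries copied from Tables 1 and 2.
svm = {"brown_spot": (9, 1, 1, 19),
       "leaf_blast": (8, 2, 2, 18),
       "bact_blight": (9, 1, 1, 19)}
knn = {"brown_spot": (9, 1, 0, 20),
       "leaf_blast": (9, 1, 2, 18),
       "bact_blight": (9, 1, 1, 19)}

def macro_metrics(cm):
    """Macro-averaged (precision, recall, accuracy, F-score) in percent."""
    precisions, recalls, accs, f1s = [], [], [], []
    for tp, fn, fp, tn in cm.values():
        p = tp / (tp + fp)
        r = tp / (tp + fn)
        precisions.append(p)
        recalls.append(r)
        accs.append((tp + tn) / (tp + fn + fp + tn))
        f1s.append(2 * p * r / (p + r))
    n = len(cm)
    return tuple(round(100 * sum(v) / n, 2)
                 for v in (precisions, recalls, accs, f1s))

print(macro_metrics(svm))  # (86.67, 86.67, 91.11, 86.67)
print(macro_metrics(knn))  # (90.61, 90.0, 93.33, 90.15)
```

The 91.10% (SVM) and 93.33% (k-NN) recognition accuracies quoted in the abstract are exactly these macro-averaged accuracy figures.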

Fig. 14 Performance chart for disease recognition using SIFT with SVM and k-NN

Fig. 17 Screen shot for detection and recognition of bacterial blight disease

9. CONCLUSION AND FUTURE SCOPE
Image processing techniques were used to deploy the classification system. In this work, the Scale Invariant Feature Transform (SIFT) is used to extract features from the disease-affected images. These features are then used to recognize the disease using Support Vector Machine (SVM) and k-Nearest Neighbours (k-NN) classifiers. The work mainly concentrates on three main diseases of the paddy plant, namely brown spot, leaf blast and bacterial blight, and is useful to farmers and agriculture-related researchers. Experimental results showed that the model is capable of predicting the disease with an accuracy of 91.10% using SVM and 93.33% using k-NN.

For future work, alternative feature extraction methods and other classifiers can be explored to improve the recognition accuracy.

10. REFERENCES
[1] P. R. Rothe, "Cotton Leaf Disease Identification Using Pattern Recognition Techniques", International Conference on Pervasive Computing, 2015.
[2] Viraj A. Gulhane, Maheshkumar H. Kolekar, "Diagnosis of Diseases on Cotton Leaves Using Principal Component Analysis Classifier", Annual IEEE India Conference, 2014.
[3] Rong Zhou, Shun'ichi Kaneko, Fumio Tanaka, Miyuki Kayamori, Motoshige Shimizu, "Early Detection and Continuous Quantization of Plant Disease Using Template Matching and Support Vector Machine Algorithms", First International Symposium on Computing and Networking, 2013.
[4] John William Orillo, Jennifer Dela Cruz, Leobelle Agapito, Paul Jensen Satimbre, Ira Valenzuela, "Identification of Diseases in Rice Plant (Oryza Sativa) Using Back Propagation Artificial Neural Network", 7th IEEE International Conference, 2013.
[5] Auzi Asfarian, Yeni Herdiyeni, Aunu Rauf, Kikin Hamzah Mutaqin, "Paddy Diseases Identification with Texture Analysis Using Fractal Descriptors Based on Fourier Spectrum", International Conference on Computer, Control, Informatics and Its Applications, 2013.
[6] Kholis Majid, Yeni Herdiyeni, Aunu Rauf, "I-Pedia: Mobile Application for Paddy Disease Identification Using Fuzzy Entropy and Probabilistic Neural Network", ICACSIS, 2013.
[7] Nunik Noviana Kurniawati, Siti Norul Huda Sheikh Abdullah, Salwani Abdullah, Saad Abdullah, "Investigation on Image Processing Techniques for Diagnosing Paddy Diseases", International Conference of Soft Computing and Pattern Recognition, 2009.
[8] Nunik Noviana Kurniawati, Siti Norul Huda Sheikh Abdullah, Salwani Abdullah, Saad Abdullah, "Texture Analysis for Diagnosing Paddy Disease", International Conference on Electrical Engineering and Informatics, 2009.
[9] G. Anthonys, N. Wickramarachchi, "An Image Recognition System for Crop Disease Identification of Paddy Fields in Sri Lanka", Fourth International Conference on Industrial and Information Systems (ICIIS), 28-31 December 2009.
[10] Santanu Phadikar and Jaya Sil, "Rice Disease Identification Using Pattern Recognition Techniques", Proceedings of the 11th International Conference on Computer and Information Technology (ICCIT), 25-27 December 2008.
[11] Qin Z. and Zhang, "Detection of Rice Sheath Blight for In-Season Disease Management Using Multispectral Remote Sensing", International Journal of Applied Earth Observation and Geoinformation, 2005.
[12] J. B. Cunha, "Application of Image Processing Techniques in the Characterization of Plant Leafs", Proc. IEEE Int'l Symposium on Industrial Electronics, 2003.
[13] L. Lucchese and S. K. Mitra, "Color Image Segmentation: A State-of-the-Art Survey", Proceedings of the Indian National Science Academy, Vol. 67A, No. 2, 2001, pp. 207-221.
