SEMINAR PRESENTATION
ON
Pattern Recognition

PRESENTED BY

SUBMITTED TO

June, 2021
TABLE OF CONTENTS
Introduction
Conclusion
References
INTRODUCTION
special transform, and the new features have much lower dimensionality. Furthermore, in a robust pattern recognition system, features should be independent of the size, orientation, and location of the pattern. This independence can be achieved by extracting features that are translation-, rotation-, and scale-invariant. Choosing discriminative and independent features is the key to the success of any pattern recognition method in classification. Examples of features that can be used are colour, shape, size, texture, position, etc. Many methods exist for this step, such as nonlinear principal component analysis (NLPCA), the Radon transform, the dual-tree complex wavelet transform, the Fourier transform, Gabor wavelets, the curvelet transform, skeletonisation, rough set theory (RST), fuzzy invariant vectors, etc. Some of these methods are reviewed in this paper.

After the feature extraction step, classification can be performed. This step enables us to recognize an object or a pattern by using the characteristics (features) derived from the previous steps. It attempts to assign each input feature vector to one of a given set of classes. For example, when determining whether a given image contains a face or not, the problem is a face/non-face classification problem. Classes, or categories, are groups of patterns whose feature values are similar according to a given metric.

Pattern recognition is generally categorized according to the type of learning used to generate the output value in this step. Supervised learning assumes that a set of training data (the training set) has been provided, consisting of instances that have been properly labelled by hand with the correct output. It generates a model that attempts to meet two sometimes conflicting objectives: perform as well as possible on the training data, and generalize as well as possible to new data. Unsupervised learning, on the other hand, assumes training data that has not been hand-labelled, and attempts to find inherent patterns in the data that can then be used to determine the correct output value for new data instances. Classification learning techniques include the support vector machine (SVM), multi-class SVM, neural networks, the logical combinatorial approach, Markov random field (MRF) models, Fuzzy ART, CLAss-Featuring Information Compressing (CLAFIC), k-nearest neighbour, rule generation, etc.
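As a concrete illustration of the supervised case, the following minimal Python sketch (assuming scikit-learn and NumPy are available; the feature matrix X, the labels y and the new pattern x_new are hypothetical placeholders) trains a support vector machine on hand-labelled feature vectors and then assigns a new feature vector to one of the classes.

import numpy as np
from sklearn.svm import SVC

# Each row is a feature vector (e.g. colour, shape and size measurements);
# each label marks the class of the pattern (e.g. 0 = non-face, 1 = face).
X = np.array([[0.2, 1.5, 3.0],
              [0.3, 1.4, 2.8],
              [2.1, 0.2, 0.5],
              [2.0, 0.3, 0.4]])
y = np.array([1, 1, 0, 0])

clf = SVC(kernel="rbf")    # support vector machine classifier
clf.fit(X, y)              # learn from the hand-labelled training set

x_new = np.array([[0.25, 1.45, 2.9]])
print(clf.predict(x_new))  # predicted class of the new pattern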
A pattern is everything around us in this digital world. A pattern can either be observed physically or it can be observed mathematically by applying algorithms.
Examples: the colours on clothes, speech patterns, etc. In computer science, a pattern is represented using a vector of feature values.
A set of features taken together forms the feature vector.
Example: in the face example above, if all the features (eyes, ears, nose, etc.) are taken together, the resulting sequence is the feature vector [eyes, ears, nose]. A feature vector is a sequence of features represented as a d-dimensional column vector. In the case of speech, the MFCCs (Mel-frequency cepstral coefficients) are spectral features of the speech signal; the sequence of the first 13 coefficients forms a feature vector.
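The sketch below illustrates the idea of a feature vector in Python (a minimal sketch, assuming NumPy and the librosa library are installed; "speech.wav" is a hypothetical example recording):

import numpy as np
import librosa

# A d-dimensional column vector of hand-picked features (d = 3 here).
face_features = np.array([[0.8],   # eyes descriptor
                          [0.4],   # ears descriptor
                          [0.6]])  # nose descriptor

# For speech: take the first 13 MFCCs and average them over time,
# giving a single 13-dimensional feature vector for the utterance.
signal, sr = librosa.load("speech.wav", sr=None)
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # shape (13, n_frames)
feature_vector = mfcc.mean(axis=1)                       # shape (13,)
print(feature_vector.shape)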
Training set:
The training set is used to build the model. It consists of the set of images which are used to train the system. The training rules and algorithms used give relevant information on how to associate input data with output decisions. The system is trained by applying these algorithms to the dataset; all the relevant information is extracted from the data and results are obtained. Generally, 80% of the dataset is taken as training data.
Testing set:
The testing data is used to test the system. It is the set of data used to verify whether the system produces the correct output after being trained. Generally, 20% of the dataset is used for testing. The testing data is used to measure the accuracy of the system. Example: if a system that identifies which category a particular flower belongs to classifies seven out of ten flowers correctly and the rest wrong, then its accuracy is 70%.
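The 80/20 split and the accuracy measurement can be sketched as follows (a minimal sketch, assuming scikit-learn; the feature matrix X and the flower labels y are hypothetical random placeholders):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X = np.random.rand(100, 4)        # 100 patterns, 4 features each
y = np.random.randint(0, 7, 100)  # 7 flower categories (hypothetical)

# 80% of the dataset for training, 20% for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# e.g. 7 correct answers out of 10 test patterns would give an accuracy of 0.70
print("accuracy:", accuracy_score(y_test, y_pred))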
Example: when representing spherical objects, (25, 1) may represent a spherical object with 25 units of weight and 1 unit of diameter. The class label can form part of the vector. If spherical objects belong to class 1, the vector would be (25, 1, 1), where the first element represents the weight of the object, the second element the diameter of the object, and the third element the class of the object.
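A minimal Python sketch of this representation (NumPy assumed; the numbers follow the spherical-object example above):

import numpy as np

labelled = np.array([25.0, 1.0, 1.0])  # [weight, diameter, class label]

features = labelled[:-1]   # (25.0, 1.0): weight and diameter
label = int(labelled[-1])  # 1: the class the object belongs to

print(features, label)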
APPLICATIONS:
Image processing, segmentation and analysis
Pattern recognition is used to give human recognition intelligence to machines, which is required in image processing.
Computer vision
Pattern recognition is used to extract meaningful features from given image/video samples and is applied in computer vision to various applications such as biological and biomedical imaging.
Seismic analysis
A pattern recognition approach is used for the discovery, imaging and interpretation of temporal patterns in seismic array recordings. Statistical pattern recognition is implemented and used in different types of seismic analysis models.
Speech recognition
The greatest success in speech recognition has been obtained using pattern recognition paradigms. It is used in various speech recognition algorithms which try to avoid the problems of using a phoneme-level description and instead treat larger units, such as words, as patterns.
CONCLUSION
In this seminar we have examined pattern recognition broadly, including its definition, its composition, the methods of pattern recognition, a comparison of the classification methods, and the applications of pattern recognition. Pattern recognition is developing very fast, and its related fields and applications have become wider and wider. It aims to classify a pattern into one of a number of classes, and it appears in various fields such as computer vision, agriculture, robotics, biometrics, etc. In this context, a key challenge is finding suitable description features, since the pattern to be classified must commonly be represented by a set of features characterizing it. These features must have discriminative properties: efficient features must be insensitive to affine transformations, and they must be robust against noise and against elastic deformations.
REFERENCES
A. Camargo and J. S. Smith. Image pattern classification for the identification of disease causing agents in plants. Computers and Electronics in Agriculture, 66(2):121–125, 2009.
D. Liu, Yukihiko Yamashita, and Hidemitsu Ogawa. Pattern recognition in the presence
of noise. Pattern Recognition, 28(7):989–995, 1995.
Guangyi Chen and W. F. Xie. Pattern recognition with SVM and dual-tree complex wavelets. Image and Vision Computing, 25(6):960–966, 2007.
Guangyi Chen, Tien D. Bui, and Adam Krzyzak. Invariant pattern recognition using Radon, dual-tree complex wavelet and Fourier transforms. Pattern Recognition, 42(9):2013–2019, 2009.
Jinhai Cai and Zhi-Qiang Liu. Pattern recognition using Markov random field models. Pattern Recognition, 35(3):725–733, 2002.
José Francisco Martínez Trinidad and Adolfo Guzmán-Arenas. The logical combinatorial approach to pattern recognition, an overview through selected works. Pattern Recognition, 34(4):741–751, 2001.
Kun Feng, Zhinong Jiang, Wei He, and Bo Ma. A recognition and novelty detection approach based on curvelet transform, nonlinear PCA and SVM with application to indicator diagram diagnosis. Expert Systems with Applications, 38(10):12721–12729, 2011.
Lin-Lin Shen and Zhen Ji. Gabor wavelet selection and SVM classification for object recognition. Acta Automatica Sinica, 35(4):350–355, 2009.
Santanu Phadikar, Jaya Sil, and Asit Kumar Das. Rice diseases classification using feature selection and rule generation techniques. Computers and Electronics in Agriculture, 90:76–85, 2013.
Yuan Tian, Chunjiang Zhao, Shenglian Lu, and Xinyu Guo. Multiple classifier combination for recognition of wheat leaf diseases. Intelligent Automation & Soft Computing, 17(5):519–529, 2011.
Sung-Bae Cho. Pattern recognition with neural networks combined by genetic algorithm. Fuzzy Sets and Systems, 103(2):339–347, 1999.