
Volume 8, Issue 6, June 2023 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Applying AI to Biometric Identification for Recognizing Text using One-Hot Encoding and CNN

Abhishek Jha1, Dr. Hitesh Singh2, Dr. Vivek Kumar3, Dr. Kumud Saxena4
2Associate Professor, 3Professor, 4HOD,
1,2,3,4Computer Science and Engineering, NIET, AKTU, India

Abstract:- Text on an image often contains important information and directly carries high-level semantics in academic institutions and financial institutions. This makes it an important source of information and a popular research topic. Many studies have shown that CNN-based neural networks are very good at classifying images, which is the foundation of text recognition. By combining AI with the process of biometric identification, a technique for text recognition in academic institutions and financial institutions is performed using a Convolutional Neural Network (CNN). Initially, preprocessing is done to make the document image suitable for feature extraction. One-hot encoding-based feature extraction is performed. A two-dimensional CNN is used to classify the final features. Finally, RMSprop is used to optimize the results and improve the accuracy. Results of the proposed method show that the accuracy is 99%, which is higher than that of the existing methods.

Keywords:- Adam optimizer, AI, CNN, RMSprop, One hot encoding.

I. INTRODUCTION

In today's world, optical character recognition (OCR) technologies are utilised for a wide variety of tasks, some of which include scanner-based data entry, bank checks, business cards, automatic mail sorting, handheld price label scanners, and a variety of document recognition applications [1]. These days, commercial character recognition systems are also available for purchase. As a direct result of this innovation, researchers are now able to work on the issue of online handwriting recognition. This was made possible by the development of electronic tablets that collect the x, y coordinate data as well as the movement of the pen tip [2].

The quality of the source document that is supplied to any OCR system has a noteworthy influence on the performance of that system. It is possible that the original document is rather old and has undergone some physical wear and tear. It is possible that the original document was of poor quality because of changes in the toner density that were present in it [3]. There is a possibility that the scanning process will miss some of the fainter sections that were present in the original document, which might result in the corruption of many characters in the original text. Gradations in the text picture can be brought about by scanning on low-quality paper or by printing of low-quality copies. The performance of optical character recognition is also significantly impacted by other factors, including accurate line and word segmentation [4].

The method of selecting and extracting the most unique characteristics from an image is known as feature extraction and selection. This peculiar feature has the significant ability to accentuate or expand the difference between different class patterns while maintaining its invariance to patterns of the same class. This is one of its important characteristics [5].

The class of methods that are used for character recognition in practice is, in principle, not dissimilar to the class of methods that are used for any broad pattern recognition issue. Based on the characteristics that are employed, however, the various methodologies for character recognition may be roughly categorized as follows: approaches for matching templates and performing correlations, as well as techniques for analyzing and matching features [6].

Template matching and correlation techniques: These approaches are high-level machine vision algorithms that identify the entirety or a portion of an image that matches a standard prototype template. These techniques may be applied to either a single picture or several images simultaneously. In this step, the pixels of an input character are compared, one by one, with character prototypes that have been saved before [7].

Feature analysis and matching techniques: At the present time, the methods of character recognition that are utilized most frequently include feature analysis and matching algorithms. This strategy is also sometimes referred to as the structural analysis method. These approaches replicate human thinking better than template matching, which was the previous standard. Using this methodology, relevant characteristics are first retrieved from the input character, and then the extracted features are compared with the feature descriptions of the trained characters. Recognition is achieved by using the description that is the closest match [8].

Many of today's OCR systems are modelled on mathematical models in order to reduce the number of categorization errors that occur. A method for recognizing characters might make use of either structural or pixel-based information. The following list includes some of them. Using hyper surfaces in multi-dimensional feature spaces, discriminant function classifiers attempt to reduce the mean-squared classification error by separating the feature descriptions of characters that belong to various semantic classes [9]. Bayesian classifiers make use of probability theory in order to minimize a loss function that is connected with character misclassification. The theories of human and animal vision are the basis for Artificial

IJISRT23JUN2333 www.ijisrt.com 2954


Neural Networks (ANN), which use error back-propagation (BP) methods to decrease the possibility of incorrect categorization. The Support Vector Machine (SVM) is an important innovation in the field of machine learning algorithms. It is a technique that is built on a kernel. SVMs form a collection of supervised learning techniques that may be used for classification or regression [10][11].

Most recent articles have incorporated a machine learning method in some way. Particularly, machine learning is utilized widely for the identification of visual characters, largely owing to the availability of massive datasets. Researchers frequently utilize a machine learning strategy for a language with a big dataset in order to study a meaningful method. This improvement has come with more computational complexity, as was indicated above, even if the frameworks depending on machine learning approaches achieve greater accuracy during classification [12]. Only a handful of studies conducted in recent years have used a traditional method of feature extraction in conjunction with algorithmic approaches to feature selection and have succeeded in producing state-of-the-art results. An additional research difficulty that requires the research community's attention is the construction of systems that are able to recognize on-screen text and characters in various circumstances in everyday life [13].

To overcome these issues, we applied AI to biometric identification for recognizing text using one-hot encoding and CNN in the case of documents related to academic institutions and financial institutions, to significantly improve performance and reduce data storage needs. The novelty of the suggested work is that it combines one-hot encoding feature extraction, a CNN classifier, and the RMSprop optimizer to produce the highest recognition accuracy for text recognition among peer researchers. When compared to other machine learning models used by peer researchers, the recognition accuracy obtained in this work utilising the CNN + RMSprop model is superior.

The rest of the paper is structured as follows: the related work in the field of text recognition is described in Section 2, the proposed approach with the pseudocode is described in Section 3, the results are discussed and a comparative analysis is presented in Section 4, and the conclusion and recommendations for the future are presented in Section 5.

II. LITERATURE REVIEW

A deep learning-based model for ACR has been proposed by Guptha et al. [14]. The data is gathered and then pre-processed with Gaussian filtering and skew detection. As an added step, projection profile and thresholding methods are used to separate the lines and characters from the denoised images. Elephant Herding Optimization (EHO) is used to pick discriminative features from the extracted features of the segmented images, while Enhanced Local Binary Pattern and Inverse Difference Moment Normalized descriptors are used to extract features from the images.

When it comes to saving space and maximising performance, the idea of applying partial least square (PLS) based feature reduction for optical character recognition was introduced by Akhtar et al. [15]. They offer a novel approach to automated OCR based on the fusion and selection of several attributes from a given set of features. PLS-based selection is used to combine the features serially. An entropy fitness function is used for the selection process. An ensemble classifier is then used to label the completed features. The paper presents a method that can be used to make printed text machine-readable, which has applications in areas such as licence plate recognition.

Nadeem and Rizvi [16] proposed using template matching to identify both typed and handwritten characters. Rather than manually recognising each input pattern, one of the goals is to create a system that can automatically categorise patterns into the class to which they belong based on the information they contain. The system is responsible for character recognition. When compared to the recognition rate of typewritten standard English alphabet fonts (94.30%), the rate at which handwritten English alphabets are recognised is significantly lower at 75.42%.

Adhvaryu [17] presented an algorithm for matching templates for alphabets. The letters of the alphabet, from A to Z, were utilised in tests, and photos were grayscale with the Times New Roman font type so that the prototype system could recognise the letters by comparing them. The prototype's limitations are that it can only be tested with the alphabet, that it can only use grayscale photographs with that font type, and that it can only use the template matching method to recognise the letters.

Ziaratban et al. [18] suggested a method for character recognition they called "template matching." In order to do feature extraction, this technique looks for distinct templates within the input photos. Each template's success rate is recorded as a feature, and the optimal matching area in an image is pinpointed and archived.

Rajib et al. [19] offer an HMM-based method for recognising English handwritten characters. In their research, they have used both global and local techniques for extracting features. The HMM is trained with these features, and trials are run. They have compiled a database of 13,000 examples, with five examples produced for each character by each of 100 writers. After 2,600 samples were used to train the HMM, the remaining samples were used to validate the recognition architecture. A recognition rate of 98.26% is achievable with the proposed system.

Jamtsho et al. [20] presented a model for a system that can read aloud the characters/text in photos. A model for OCR and a model for speech synthesis make up the system's two major subsystems. The characters and text in the photos are recognised by the OCR model, which then turns them into editable text by a variety of techniques including preprocessing, segmentation, and

classification. Deep learning (DL) and machine learning (ML) are used to train both models.

A segmentation-free OCR system that incorporates DL techniques, the creation of synthetic training data, and data augmentation strategies was presented by Namysl and Konya [21]. With the use of enormous text corpora and more than 2000 fonts, they render synthetic training data. They add geometric distortions to the collected samples, as well as a proposed data augmentation method called alpha-compositing with background textures, to emulate text appearing in complex natural situations. The CNN encoder in their models is used to extract features from text images.

The main focus of Surana et al. [22] was on various ML techniques that may be used to extract text from handwritten documents and photos, recognise it in digital format, and then translate it in accordance with the user's needs.

An innovative concept and implementation of an OCR-based application for Automated NGO Connect using ML was offered by Sharma et al. [23]. Image de-noising, binarization, data extraction, and data conversion are among the phases put into practise. Tesseract OCR based on DL is included in the framework along with image processing and data visualisation modules.

III. PROPOSED METHODOLOGY

A. Overview
In this paper, optical character recognition is performed on images of documents. Initially, median filtering is applied to eliminate the noise in the image. After that, an RGB to HSV transformation is done. The HSV representation of the image is then changed to binary using a binarization method. One-hot encoding-based feature extraction is performed. A CNN is used to classify the final features. Finally, RMSprop is used to optimize the results and improve the accuracy. The entire block diagram of the proposed algorithm is illustrated in Figure 1.

Document image → Median filter → RGB to HSV transformation → Binarization → One hot encoding feature extraction → CNN classification → RMSprop optimization → Character recognition

Fig. 1: Block Diagram

In this section, we discuss the proposed OCR algorithm using geometric and texture features. The algorithm has four main phases: (a) preprocessing, (b) feature extraction, (c) classification, and (d) optimization.

B. Preprocessing
The OCR system can acquire images of the document either by scanning the text or by taking a photograph of it. Before carrying out any processes, photos are scaled down to 256 by 256 pixels for normalization purposes. After that, the proposed method applies a median filter iteratively (four passes), which suppresses noise, maintains the image's quality, and preserves edges. It removes salt-and-pepper noise without introducing a significant level of blurriness. After being filtered, the RGB images are converted to the HSV color system. The HSV color space is used because it separates chromatic information, so that color visibility can be differentiated. The median filter serves as the foundation for noise elimination: the adjacent pixels are ranked based on their intensity, and the median value is used to determine the new value for the central pixel. The final product of the filtering method can still contain a background that is faintly textured or coloured, which could disrupt the operation of the subsequent phases. To remedy this situation, a method known as binarization is applied to the filtered image to produce a binary representation of it. In this method, a threshold value is determined; pixels whose intensity values are greater than the threshold are set to white, while those below the threshold are set to black. The value of the threshold is determined by finding the document's overall average pixel intensity and then subtracting that amount from 1.

C. Feature Extraction Using One-hot encoding
Many significant properties in real-world datasets are categorical rather than numerical. These categorical features must be converted into a numeric format in order to be used in training and fitting machine learning algorithms, because they are crucial for increasing the accuracy of ML models. While there are many ways to convert these qualities into a numerical format, one-hot encoding is the most popular and widely used method.

In the one-hot encoding method, the representation of categorical data is changed into numeric data by splitting the column into multiple columns. The numeric data can be fed into algorithms for deep learning and machine learning. It is a binary vector representation of a categorical variable, with all values in the vector being 0 except for the ith value,

which will be 1 to represent the ith category of the variable. The length of the vector is equal to the number of unique categories in the variable.

The encoding of categorical variables requires a collection of finite and gathered categories with mutually exclusive elements. A vector of numerical values is used by the CNN to represent the input images for every subclass. The one-hot encoding method used in this work denotes the input length as l and the total number of input values as 8×l. The benefit of utilising this method is that it needs only a small amount of time even for a large dataset, and it also requires fewer computations and less memory. It is possible to transform a one-dimensional vector (1×l) into a two-dimensional matrix (m×n).

D. CNN
CNN uses neural networks as part of its architecture, which is widely utilised for image-based classification. CNNs include a variety of trainable parameter-based filters that are convolved with the image to recognise characteristics like edges and shapes. The spatial qualities of the image are captured by these filters, which are constructed using learnt weights from the spatial attributes at each subsequent level. Hyperparameters are parameters that characterise this architecture; they control how the network is trained and how it is structured, and they are set prior to training.

The dataset is used to train and test a CNN with three hidden layers. Three convolutional layers, max-pooling layers, and a fully connected layer make up the network. A Convolution2D layer serves as the first hidden layer, with 256 filters, each with a kernel size of 3×3 and zero padding, followed by a max-pooling layer with a pool size of 2×2. Padding is used in the architecture, preserving the initial input size. The second convolution layer, which has 256 filters with 3×3 kernel sizes apiece, is the following phase. This is followed by a max-pooling layer with a 2×2 pool size. A third convolution layer with 256 filters, each with a kernel size of 3×3 and padding and stride equal to 1, is followed by a max-pooling layer with a pool size of 2×2. Before creating the fully connected layers, a flatten layer is utilised to transform the 2D matrix data into a 1D vector. A fully connected layer with the ReLU activation function is then used. Then, to reduce overfitting, a regularisation layer called Dropout is set up to arbitrarily remove 20% of the layer's neurons. Finally, a 16-neuron output layer with a sigmoid activation function is put into practise. A vital component of a CNN, the activation function determines a neuron's output based on its inputs. The activation function's role is to add nonlinearity to the model. An effective activation function can enhance a CNN model's performance.

The Rectified Linear Unit (ReLU) activation function is used in the suggested model. ReLU has substantial biological and mathematical backing. For every negative input, ReLU returns 0, and for every positive input, it returns the value itself. ReLU's max operation allows it to compute faster than other activation functions. It is frequently the default activation function for a wide variety of neural networks.

f(x) = max(0, x) (1)

This is typically applied element-by-element to the result of another function, like the matrix-vector product. Since each layer of the network applies a nonlinearity, a CNN needs this function.

E. RMSprop Optimization
Optimizers must be used to increase accuracy and reduce losses. Optimizers modify the weight settings in order to minimise the loss function. The objective of optimisation is to produce the optimal design with regard to a set of prioritised constraints or criteria. Root Mean Square Propagation (RMSprop) is similar to gradient descent with momentum in that it restricts oscillations in the vertical direction. As a result, the learning rate can be increased and the algorithm progresses significantly in the horizontal direction, converging quickly. It uses the magnitude of recent gradients to normalise the current gradient. The learning rate is automatically modified by choosing a different learning rate for each parameter. The parameters are updated using the learning rate, which has a value of 0.001, the exponential average of the squares of the gradients, and the gradient at time t.
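A single RMSprop parameter update, as described above, can be sketched as follows. The learning rate of 0.001 comes from the text; the decay factor of 0.9 and the small epsilon are standard defaults assumed here, not values given in the paper.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSprop update: keep an exponential average of squared
    gradients (cache) and divide the step by its square root."""
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# one update of a scalar weight with gradient 1.0
w, cache = rmsprop_step(1.0, 1.0, 0.0)
```

Because each parameter keeps its own average of squared gradients, the effective step size differs per parameter, which is the per-parameter learning-rate adaptation the section refers to.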


F. Pseudocode for CNN
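As a sketch of the network described in Section III-D (not the authors' original pseudocode), the layer sequence can be walked through to check the tensor sizes. The 28×28 input size is assumed from the A-Z dataset described in Section IV; the 3×3/256-filter convolutions, 2×2 max-pooling, and three conv + pool stages follow the text.

```python
def conv_out(size, kernel=3, pad=1, stride=1):
    """Spatial size after a square convolution with zero padding."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, pool=2):
    """Spatial size after non-overlapping max-pooling."""
    return size // pool

size, channels = 28, 1
for _ in range(3):              # three conv + max-pool stages
    size = conv_out(size)       # 3x3 conv, zero padding keeps the size
    channels = 256
    size = pool_out(size)       # 2x2 max-pooling halves the size
flat = size * size * channels   # flatten layer: 3 * 3 * 256
print(flat)                     # 2304
```

The walk ends at a 3×3×256 tensor, so the flatten layer feeds 2304 values into the dense layer, the 20% dropout, and the 16-neuron sigmoid output described above.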

IV. RESULTS AND DISCUSSION

A. Experimental Setup
The dataset used in the proposed experiment is available at https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format. The dataset comprises 26 files (A-Z); each handwritten image has a size of 28×28 pixels and a box size of 20×20 pixels. Figure 2 illustrates a sample dataset image.

Fig. 2: Sample dataset

The use of a convolutional neural network model to identify text in academic and financial institution papers is discussed in this research. The model has proved successful in recognising characters in real-time, expanding its use. Preprocessing plays a key role in ensuring that the model performs well. Preprocessing approaches for images improve the features of the image, improving recognition precision. Figure 3 illustrates the pre-processing process.


Fig. 3: Pre-processing
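A minimal sketch of this preprocessing stage might look like the following. The 3×3 median window and the use of the plain image mean as the binarization threshold are assumptions where Section III-B does not pin down the details.

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel by the median of its k x k neighbourhood,
    which removes salt-and-pepper noise while preserving edges."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def binarize(img):
    """Global threshold at the mean intensity: brighter pixels become
    white (1), darker pixels become black (0)."""
    return (img > img.mean()).astype(np.uint8)

noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0          # a single salt-noise pixel
clean = median_filter(noisy)  # the outlier is replaced by the local median
```

In practice the median filter runs before binarization, exactly as in the block diagram, so that isolated noise pixels do not survive into the binary image.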

Figure 4 illustrates the feature extraction process done using the one-hot encoding technique. The preprocessed features are converted to a format suitable for the CNN classifier.

Fig. 4: Feature extraction using one hot encoding technique
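The conversion shown in Fig. 4 can be sketched as follows for the 26 A-Z classes. The exact layout the authors use (they mention an 8×l representation of the input) is not fully specified, so this shows only the standard one-hot label encoding.

```python
import numpy as np

def one_hot(labels, num_classes=26):
    """Turn integer class labels (0..25 for A-Z) into one-hot row
    vectors: all zeros except a 1 at the label's index."""
    labels = np.asarray(labels)
    vec = np.zeros((len(labels), num_classes), dtype=np.float32)
    vec[np.arange(len(labels)), labels] = 1.0
    return vec

# 'A', 'C', 'Z' -> indices 0, 2, 25
encoded = one_hot([0, 2, 25])
```

Each row is a binary vector whose length equals the number of categories, with a single 1 at the index of the class, exactly as described in Section III-C.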

The data is normalized and split into training and testing sets in the ratio of 70:30.

Both the CNN model with RMSprop and the CNN model with the Adam optimizer are run to compare the results.
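A sketch of the normalization and 70:30 split, with placeholder data standing in for the A-Z images; the shuffling seed is an assumption, as the paper does not state one.

```python
import numpy as np

def split_70_30(X, y, seed=0):
    """Shuffle the samples and split them 70% train / 30% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.7 * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

# placeholder pixel data scaled to [0, 1], labels cycling over 26 classes
X = np.arange(100, dtype=float).reshape(100, 1) / 255.0
y = np.arange(100) % 26
X_train, y_train, X_test, y_test = split_70_30(X, y)
```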

 CNN model with Adam Optimizer
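For comparison with the RMSprop step shown earlier, a single Adam update can be sketched as below. The hyperparameter values (β1 = 0.9, β2 = 0.999, ε = 1e-8) are the usual defaults, assumed here rather than taken from the paper, which reports only the resulting accuracies.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected running means of the gradient
    (m) and of its square (v) drive the step size."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)   # bias correction, t is the step count
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# first update (t=1) of a scalar weight with gradient 1.0
w, m, v = adam_step(1.0, 1.0, 0.0, 0.0, 1)
```

Adam adds a bias-corrected first-moment (momentum) term on top of the squared-gradient average that RMSprop already keeps.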


 CNN model with RMSprop Optimizer

B. Performance Analysis
The parameters used in the implementation are given below:
 Training Loss: a term utilized for assessing how well a deep learning method matches the training set of information.
 Test Loss: a statistic that is used to assess how well a deep learning algorithm performed on the validation set.
 Test Accuracy: the precision of the validation dataset.
 Training Accuracy: the precision of the training dataset.

Accuracy is estimated as follows:

Accuracy = (T1 + T2) / (T1 + F1 + T2 + F2) (7)

Loss is estimated as follows:

Loss = (T1 + T2) − (F1 + F2) (8)

where T1 is the number of true positives (samples correctly classified as a given class), F1 the number of false positives (samples misclassified as that class), T2 the number of true negatives (samples correctly classified as not belonging to the class), and F2 the number of false negatives (class samples misclassified as not belonging to it). In the majority of deep learning applications, the training loss and validation loss are often plotted together on a graph. This is done to assess the model's performance and identify the components that need to be adjusted.

This paper discusses the implementation of a Convolutional Neural Network model to recognize English handwritten characters along with text images. The model has been successfully able to recognize characters in real time, which further increases its scope. Preprocessing is an important factor in ensuring high performance of the model. Image preprocessing techniques enhance the features of the image, thereby increasing the accuracy of recognition. The recognition is shown in Fig. 7.

The proposed model was tested with the database given above. The accuracy and loss results are reported in Figures 5 and 6. A clear improvement in recognition accuracy was recorded by the CNN with RMSprop model. Both testing and training accuracy are 99% in the proposed method. The results were obtained when 70% of the samples were used for training and the rest for testing. It is also shown that the loss of the proposed model is considerably low.
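Equations (7) and (8) translate directly into code; note that Eq. (8) as printed is a count-based quantity (correct minus incorrect predictions) rather than a standard loss function.

```python
def accuracy(t1, f1, t2, f2):
    """Eq. (7): correct predictions over all predictions."""
    return (t1 + t2) / (t1 + f1 + t2 + f2)

def loss(t1, f1, t2, f2):
    """Eq. (8) as printed: correct counts minus incorrect counts."""
    return (t1 + t2) - (f1 + f2)

# e.g. 50 true positives, 1 false positive, 48 true negatives, 1 false negative
acc = accuracy(50, 1, 48, 1)   # 98 / 100 = 0.98
```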

Fig. 5: Accuracy Results


Fig. 6: Loss Results

The prediction results are given in Figure 7. From the figure, it is clear that the proposed model accurately identifies the text from the document image.

Fig. 7: Output Classified image

The CNN architecture with the RMSprop optimizer produced the highest level of recognition accuracy. This network might, however, function more effectively with alternative optimizers. We used the Adam optimizer to conduct experiments to further our investigation. When compared to the Adam optimizer, the RMSprop optimizer has higher accuracy. The training and testing accuracy of the proposed model is also compared with CNN+LSTM [24], SVM [25], the VGG-16 model [26], LSTM and Adaptive Classifier [27], Bi-LSTM [28], and Hybrid PSO-SVM [29]. Compared to all the aforementioned models, the proposed CNN with RMSprop model has the maximum accuracy. From Table 1, it is clear that training and testing accuracy with the 'adam' optimizer is 97% and 98%, respectively. Training and testing accuracy with the 'rmsprop' optimizer is 99%. The two models are evaluated using two metrics, loss and accuracy. It is observed that the CNN with RMSprop outperformed in terms of accuracy.

V. COMPARATIVE ANALYSIS

Table 1: Comparison of Accuracy


Techniques Training accuracy Testing accuracy
CNN + Adam optimizer 97% 98%
CNN + RMSprop optimizer 99% 99%
CNN+LSTM [20] 92.72% 91%
SVM [21] 96.29% -
VGG-16 model [22] 98.43% 98.16%
LSTM And Adaptive Classifier [23] 98% 98%
Bi-LSTM [24] 96.06% 87.94%
Hybrid PSO-SVM [25] 93% 92.2%



Fig. 8: Accuracy Comparison graph (training accuracy of the techniques in Table 1, shown in percent on an 86-100 scale)

VI. CONCLUSION

A method for optical character recognition is developed in this article using one-hot encoding-based feature extraction and CNN. At first, the document picture is given a preprocessing treatment in order to get it ready for the feature extraction process. One-hot encoding-based feature extraction is then carried out. A two-dimensional CNN is utilised to make the final classification of the features. RMSprop is applied in order to refine the results and make them more accurate. The performance metrics used are training and validation accuracy and loss. Results show that the text in the input image is accurately extracted. Since the number of photos that this algorithm was trained and tested on in the local dataset is comparatively low, and everyone's handwriting is unique, the suggested algorithm may perform poorly when tested on other examples of people's writing. In the future, work will be performed to improve the performance on characters that are embedded in environments with complex backgrounds and on images that have been distorted.

REFERENCES

[1.] Om Prakash Sharma, M. K. Ghose, Krishna Bikram Shah, "An Improved Zone Based Hybrid Feature Extraction Model for Handwritten Alphabets Recognition Using Euler Number", International Journal of Soft Computing and Engineering, Vol. 2, Issue 2, pp. 504-58, May 2012.
[2.] C. C. Tappert, C. Y. Suen, and T. Wakahara, "The state of the art in online handwriting recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 12, no. 8, pp. 787-808, Aug. 1990, doi: 10.1109/34.57669.
[3.] M. Kumar, S. R. Jindal, M. K. Jindal, and G. S. Lehal, "Improved recognition results of medieval handwritten Gurmukhi manuscripts using boosting and bagging methodologies," Neural Process. Lett., vol. 50, pp. 43-56, Sep. 2018.
[4.] C. Wolf, J.-M. Jolion, and F. Chassaing, "Text localization, enhancement and binarization in multimedia documents," in Proc. Object Recognit. Supported User Interact. Service Robots, vol. 2, 2002, pp. 1037-1040.
[5.] J. Pradeep, E. Srinivasan, and S. Himavathi, "Diagonal based feature extraction for handwritten character recognition system using neural network," in Proc. 3rd Int. Conf. Electron. Comput. Technol. (ICECT), vol. 4, Apr. 2011, pp. 364-368.
[6.] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber, "Convolutional neural network committees for handwritten character classification," in Proc. Int. Conf. Document Anal. Recognit., Sep. 2011, pp. 1135-1139.
[7.] Pritpal Singh and Sumit Budhiraja, "Feature Extraction and Classification Techniques in O.C.R. Systems for Handwritten Gurmukhi Script", International Journal of Engineering Research and Applications (IJERA), Vol. 1, Issue 4, pp. 1736-1739.
[8.] N. Suguna and K. Thanushkodi, "An Improved k-Nearest Neighbour Classification Using Genetic Algorithm", IJCSI, Vol. 7, Issue 4, No. 2, July 2010.
[9.] Emanuel Indermuhle, Marcus Liwicki and Horst Bunke, "Recognition of Handwritten Historical Documents: HMM-Adaptation vs. Writer Specific Training".
[10.] B. V. Dhandra, Gururaj Mukarambi and Mallikarjun Hangarge, "Handwritten Kannada Vowels and English Character Recognition System", International Journal of Image Processing and Vision Sciences, Vol. 1, Issue 1, 2012.
[11.] S. Antani, L. Agnihotri, "Gujarati character recognition", Proceedings of the Fifth International

Conference on Document Analysis and Recognition, 1999, pp. 418-421.
[12.] Noha A. Yousri, Mohamed S. Kamel and Mohamed A. Ismail, "Finding Arbitrary Shaped Clusters for Character Recognition".
[13.] Faisal Mohammad, Jyoti Anarase, Milan Shingote and Pratik Ghanwat, "Optical Character Recognition Implementation Using Pattern Matching", International Journal of Computer Science and Information Technologies, Vol. 5(2), 2014.
[14.] Nirmala S Guptha, V. Balamurugan, Geetha Megharaj, Khalid Nazim Abdul Sattar, J. Dhiviya Rose, "Cross lingual handwritten character recognition using long short term memory network with aid of elephant herding optimization algorithm", Pattern Recognition Letters, Volume 159, Issue C, Jul 2022, pp. 16-22.
[15.] Akhtar, Z., Lee, J. W., Attique Khan, M., Sharif, M., Ali Khan, S., & Riaz, N., "Optical character recognition (OCR) using partial least square (PLS) based feature reduction: an application to artificial intelligence for biometric identification", Journal of Enterprise Information Management. https://doi.org/10.1108/JEIM-02-2020-0076
[16.] Danish Nadeem and Saleha Rizvi, "Character Recognition using Template Matching", Department of Computer Science, JMI.
[17.] Rachit Virendra Adhvaryu, "Optical Character Recognition using Template Matching", International Journal of Computer Science Engineering and Information Technology Research, Vol. 3, Issue 4, Oct 2013.
[18.] M. Ziaratban, K. Faez, F. Faradji, "Language-based feature extraction using template-matching in Farsi/Arabic handwritten numeral recognition", Ninth International Conference on Document Analysis and Recognition, pp. 297-301, 2007.
[19.] Rajib Lochan Das, Binod Kumar Prasad, Goutam Sanyal, "HMM based Offline Handwritten Writer Independent English Character Recognition using Global and Local Feature Extraction", International Journal of Computer Applications (0975-8887), Volume 46, No. 10, pp. 45-50, May 2012.

