Online Handwriting Recognition by Using Microcontroller
50219118216
JANUARY 2021
DECLARATION
I declare that this report is my original work and all references have been cited
adequately as required by the University.
APPROVAL PAGE
We have supervised and examined this report and verify that it meets the program's and the University's requirements for the Bachelor in Electromechanical System.
ACKNOWLEDGEMENT
Secondly, I would also like to thank my parents, family and friends, who helped me greatly in finalizing this project within the limited time frame. All the support and inspiration I received is something valuable that I cannot forget; it truly warms my heart. Without the patience and love of those around me, all of this would have been hardly possible. Thank you for the support, love, concern, and strength I have received all these years. Finally, I hope this report will benefit me and its readers in terms of knowledge, and that this project will be a success and leave good memories for me and for us.
TABLE OF CONTENTS

DECLARATION
APPROVAL PAGE
ACKNOWLEDGEMENT
LIST OF FIGURES
LIST OF TABLES
LIST OF ABBREVIATIONS
ABSTRACT
1. CHAPTER 1 - INTRODUCTION
1.1 Introduction
1.2 Project Overview
1.3 Problem Statement
1.4 Project Objectives
1.5 Project Scope
1.5.1 System
1.5.2 User Scope
1.5.3 Time Frame
1.5.4 Approach
2. CHAPTER 2 - LITERATURE REVIEW
2.1 Introduction
2.2 Handwriting Recognition
2.3 Offline Handwriting Recognition
2.4 Online Handwriting Recognition
2.5 Dataset for Recognition
2.6 Analysis On Literature Review
3. CHAPTER 3 - METHODOLOGY
3.1 Introduction
3.2 Project Planning Flowchart
3.3 Method of Handwriting Character Recognition
3.3.1 Convolutional Neural Network (CNN)
3.3.2 EMNIST dataset
3.4 Methodology Structure
3.4.1 Training Of Convolutional Neural Network Model
3.4.2 Deployment of Neural Network
3.5 Software Features
3.5.1 Arduino IDE
3.5.3 STM32CUBE MX
3.5.4 SolidWorks
3.5.5 Microsoft Visio
3.6 Hardware Features
3.6.1 Arduino ATMega 2560
3.6.2 Arduino TFT Touch-shield
3.6.3 STM32-L496G DISCOVERY
3.6.4 Hard Casing
3.7 Development of Prototype
3.7.1 Implementation On Hardware
3.7.2 Implementation On Software
3.8 Design of Experiment
3.8.1 Procedure Of Operation
3.8.2 Sample Test & Analysis
3.9 Project Estimate of Cost
3.10 Gantt Chart
3.11 Project Milestone
4. CHAPTER 4: RESULT AND DISCUSSION
4.1 Introduction
4.2 Data Collection
4.3 Character Recognition
4.3.1 First Trial of Character Recognition
4.3.2 Second Trial of Character Recognition
4.3.3 Third Trial
4.4 Digit Recognition
4.4.1 First Trial of Digit Recognition
4.4.2 Second Trial of Digit Recognition
4.4.3 Third Trial of Digit Recognition
4.5 Overall Accuracy of Recognition System
4.6 Result Summary
5. CHAPTER 5 – CONCLUSION AND RECOMMENDATION
5.1 Conclusion
5.2 Future Recommendation
6. REFERENCES
7. APPENDICES
LIST OF FIGURES

Figure 2-1: Classification of Character Recognition (Kaur & Rani, 2017)
Figure 2-2: Accuracy of Various Character Recognition Methods (Lamba et al., 2019)
Figure 2-3: Offline handwriting recognition processes (Tappert & Charles, 2007)
Figure 2-4: Block diagram of training part of ANN (Nain & Panwar, 2016)
Figure 2-5: Block diagram of testing part of ANN (Nain & Panwar, 2016)
Figure 2-6: An Arabic word with different diacritics (Iqbal & Zafar, 2019)
Figure 2-7: Phases of Quran text recognition system (Iqbal & Zafar, 2019)
Figure 2-9: Block diagram of the trajectory recognition algorithm (Wang & Chuang, 2012)
Figure 2-10: Average recognition rates versus the feature dimensions of the PNN classifier by using LDA (Wang & Chuang, 2012)
Figure 2-11: Block Diagram of System (Priyanka et al., 2013)
Figure 2-12: Flow Chart of System (Priyanka et al., 2013)
Figure 2-13: Basic Block Diagram of the System (Syafeeza et al., 2017)
Figure 2-14: Flowchart of the Whole System (Syafeeza et al., 2017)
Figure 2-15: Handwritten input (red, left) XORed with the preset character ‘B’ (blue, middle), giving the overlapping pixels (black, right) (Syafeeza et al., 2017)
Figure 2-16: Sample images of the dataset created using the touchscreen (Debnath et al., 2019)
Figure 2-17: Accuracy of testing new inputs (Debnath et al., 2019)
Figure 2-18: Flow chart of the working process of the system (Debnath et al., 2019)
Figure 2-19: The whole process of character recognition (Soselia et al., 2019)
Figure 2-20: Correlation of accuracy on the number of classes for a fixed number of training samples for each class (Soselia et al., 2019)
Figure 2-21: Sample of Unprocessed Image & Output of Pre-processing
Figure 2-22: Sample of Digits
Figure 2-23: Interface of the recording software (Nagy et al., 2012)
Figure 2-24: Typical characteristics of OCR on scanned documents and natural image text recognition (Sharang, 2013)
Figure 2-25: Samples of all letters and digits in the EMNIST dataset (Baldominos et al., 2019)
Figure 3-1: Project Planning Flowchart
Figure 3-2: Handwriting Recognition Methods
Figure 3-3: Flowchart of the training of the CNN model
Figure 3-4: The Confusion matrix of the trained CNN model with the EMNIST dataset
Figure 3-5: Deployment Of Neural Network on STM32 L496G microcontroller
Figure 3-6: Arduino ATMega 2560 (Microcontroller)
Figure 3-7: Arduino TFT Touch-shield
Figure 3-8: STM32-L496G DISCOVERY
Figure 3-9: Hard Casing
Figure 3-10: Design Of Hard Casing
Figure 3-11: Basic Block Diagram of the System
Figure 3-12: Wiring diagram of the System
Figure 3-13: Flowchart Of The Whole System
Figure 3-14: Flowchart Of Software System
Figure 3-15: The Gantt chart of the FYP 1
Figure 3-16: The Gantt chart of the FYP 2
Figure 4-1: Accuracy of recognition for each character for first trial (%)
Figure 4-2: Accuracy of recognition by each volunteer for first trial (%)
Figure 4-3: Accuracy of recognition for each character for second trial (%)
Figure 4-4: Accuracy of recognition by each volunteer for second trial (%)
Figure 4-5: Accuracy of recognition for each character for third trial (%)
Figure 4-6: Accuracy of recognition by each volunteer for third trial (%)
Figure 4-7: Accuracy of recognition for each digit for first trial (%)
Figure 4-8: Accuracy of recognition by each volunteer for first trial (%)
Figure 4-9: Accuracy of recognition for each digit for second trial (%)
Figure 4-10: Accuracy of recognition by each volunteer for second trial (%)
Figure 4-11: Accuracy of recognition for each digit for third trial (%)
Figure 4-12: Accuracy of recognition by each volunteer for third trial (%)
Figure 4-13: Accuracy Of Character Recognition For Each Trial (%)
Figure 4-14: Accuracy Of Digit Recognition For Each Trial (%)
LIST OF TABLES

LIST OF ABBREVIATIONS
ABSTRACT
Handwriting recognition has been studied for over four decades, and many methods have been developed. Even now, there are complicated issues in the recognition process, and no single method tackles them properly and thoroughly. As we know, handwriting recognition plays a very significant part in today's world: there are many places where words, characters and digits need to be recognized. Two types of recognition systems may be employed, offline and online recognition. An offline recognition system involves the automated translation of a text into letter codes for text computing applications. In contrast, an online handwriting recognition system performs automatic text translation as the text is written on a specific digitizer whose sensors capture pen-tip motions. Therefore, the purpose of this project is to develop an online handwriting recognition system able to convert handwritten input from analog form into a digital font using a Convolutional Neural Network (CNN) model trained on the EMNIST dataset. To achieve this, we utilized the STM32 L496G microcontroller to deploy the CNN model trained on the EMNIST dataset. A Google Colaboratory (Colab) notebook is used to train and evaluate the NN model. This electronic system recognizes characters and digits in natural handwriting written with a stylus, with the intent of moving from a hefty keyboard system to a portable and comfortable input method. To make this work, the system uses the STM32CubeMX software with its Artificial Intelligence (AI) extension, and the programming language is C. Several tests were conducted upon completion of the prototype, and the system achieved an overall accuracy of 84.44%.
1. CHAPTER 1 – INTRODUCTION
This chapter includes the introduction, the project overview, the problem statement, and the objectives and scope of the project. It presents the primary purpose of the project and how it will be carried out.
1.1 Introduction
Handwriting recognition has been studied for over four decades, and many methods have been developed. There are complicated issues in the recognition process, and even now no single method tackles them properly and thoroughly. In the process of handwriting recognition, a text image must be provided and pre-processed adequately. Next, the text is segmented or its features extracted; the result is a part of the text that the machine must understand. Finally, contextual information is added to the identified symbols to confirm the result (Kacalak & Majewski, 2012). Artificial Neural Networks (ANN), used to recognize written languages, have a high capacity to generalize and require no deep contextual information or formalization to tackle the challenges of handwriting recognition.
There are two types of recognition systems that may be employed: offline and online recognition (Arica & Yarman-Vural, 2001). The offline recognition system involves the automated translation of a text into letter codes for text computing applications. The data in this form is considered a changeless representation of handwriting. It is quite hard for offline handwriting recognition to reach high accuracy, as people have distinct sorts of handwriting. On the other hand, an online handwriting recognition system performs automatic text translation as the text is written on a specific digitizer whose sensors capture pen-tip motions (Chakravarthy et al., 2011). This sort of data is called digital ink and may be regarded as a digital representation of handwriting. In computer and text processing applications, the acquired signals are subsequently translated into letter codes.
1.2 Project Overview
For this project, the CNN will be the classifier and, because of the clean data it provides, we will utilize the common EMNIST dataset (Shawon et al., 2018). The algorithm employed in this project therefore starts with feature extraction, in which pixels are utilized as the label for the picture, as the CNN handles both feature extraction and classification. Finally, the performance of the system is determined by its accuracy rate.
1.3 Problem Statement
1.4 Project Objectives
1.5 Project Scope

1.5.1 System
This system can be used by all groups of people regardless of the field they work in. For example, it can help in education, where students or lecturers might use it during lessons. It can also be implemented in the medical field, where it can help medical personnel with medical charts, and in banking, where it may help finance personnel with bank cheques or other documents. Users will find this system easy to use.
1.5.3 Time Frame
The time frame given to me and the other students who registered for this subject in semester 6 is 17 weeks for FYP 1. In the next semester, we are likewise given 17 weeks for FYP 2 to continue the progress made in FYP 1. In FYP 1, I need to gather enough information related to my project to be ready for the presentation in week 18, where I need to explain my project.

For FYP 2, I need to start developing and building my project idea following the FYP 1 proposal. I will finish the project, the final report, the data analysis, and all related work within the time frame. I will source the hardware needed for this project at the beginning of semester 7. As time goes by, I will build the hardware, develop the software and its electrical wiring, troubleshoot any problems that occur, and complete the data analysis during semester 7. I will ensure all preparations for the final presentation are completed before week 18, and I will give the presentation based on the outcome of my project.
1.5.4 Approach
At first, I met with my supervisor, Mr. Mahadzir bin Abdul Ghani, who guided me last semester (the sixth semester) and this semester (the seventh semester) on the Final Year Project (FYP). I arranged a meeting with him to discuss the FYP idea, and we settled on the title, Development of Online Handwriting Recognition By Using STM32 Microcontroller, from the ideas that came out of our brainstorming. I explained the agreed project to him roughly. Week after week, I meet with my supervisor to show progress in designing parts of the project's structure and to gather any new information about the project.
2. CHAPTER 2 - LITERATURE REVIEW
2.1 Introduction
(Surya & Afseena, 2015) described OCR as the computer-based recognition of handwritten and typed material. In the OCR approach, a digital camera or a scanner is used to capture various forms of documents, such as paper, PDFs, or character imagery, and translate them into machine-editable forms such as ASCII code. By text type, Optical Character Recognition (OCR) splits into two branches: Handwriting Character Recognition (HCR), where the recognition targets legible handwritten source material such as paper documents, and Printed Character Recognition (PCR), the recognition of printed documents. The fundamental reason for the great difficulty of HCR is the diversity of handwriting styles across people; even for the same person, the writing style and format frequently differ. The author also describes the processes of OCR systems, roughly split into pre-processing, segmentation, feature extraction, and classification. Pre-processing is a crucial stage in character identification, and includes:
Noise removal – removing noise in an image that carries no meaning in the output (the undesirable intensity values).
Binarization – the process of converting a colour or grayscale picture to a bi-level picture; the approaches used to binarize a picture are local and global thresholding.
Skeletonization – a morphological procedure that thins the image to a pixel-wide skeleton without compromising its connectivity.
Normalization – transforming the image into some standard form.
Segmentation is the process through which the input image is divided into individual characters; it includes line segmentation, word segmentation, and character segmentation. Next, the main phase of character recognition is feature extraction, where the most relevant features of the characters are extracted; the accuracy of recognition depends mainly on the features collected. Lastly, classification is the decision-making step of character recognition.
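As an illustration, the four pre-processing steps above can be sketched in a few lines of Python. OpenCV and scikit-image are assumptions here, since the survey does not prescribe any particular library:

import cv2
import numpy as np
from skimage.morphology import skeletonize

def preprocess(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Noise removal: a median filter suppresses undesirable intensity values.
    img = cv2.medianBlur(img, 3)
    # Binarization: global (Otsu) thresholding converts the grayscale picture
    # to a bi-level picture (ink = 255, background = 0).
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Skeletonization: thin strokes to one pixel wide without breaking connectivity.
    skeleton = skeletonize(bw > 0).astype(np.uint8) * 255
    # Normalization: bring the image into a standard form (28x28 here).
    return cv2.resize(skeleton, (28, 28), interpolation=cv2.INTER_AREA)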
Figure 2-1: Classification of Character Recognition (Kaur & Rani, 2017)
(Kaur & Rani, 2017) described character recognition as a method that connects elements in an image with a symbolic context (letters, symbols, and numbers); that is, character recognition methods associate a symbolic identity with the character's image. In particular, a character recognition system takes raw data through the pre-processing phase common to every recognition system. On the basis of how the data is collected, character recognition systems may be separated into online recognition and offline recognition. Offline handwriting recognition works on words scanned from a surface (for example, a sheet of paper) and digitally processed in grayscale format; further processing after saving is generally done so that higher recognition is achieved. On the other hand, the handwritten characters identified by an online handwriting system can be stored and digitally processed by several methods. Commonly, a special device is used in tandem with electronic paper: the coordinates of successive points are recorded in time order as the pen moves over the surface.
Figure 2-2: Accuracy of Various Character Recognition Methods (Lamba et al., 2019)
(Lamba et al., 2019) described that, for many years, a variety of handwritten papers, forms and manuscripts have been maintained by numerous organizations. The annotations on these materials may become distorted over the years and no longer usable. Because of this, if these materials can be converted to a digital format, they can be preserved, and managing those files becomes more manageable and safer. An OCR's main task is to detect and recognize written text efficiently. HCR systems, in turn, are divided into online and offline groups for the recognition of handwriting features. In an online character recognition system, consecutive points are represented as 2-dimensional coordinates, produced in time order following the order of the strokes made by the writers. The offline handwriting technique essentially translates text to an image, and then to letter codes usable by text and computer applications; the retrieved information is a static representation of the written material. It is more challenging to recognize offline than online handwriting, as people have distinct kinds of handwriting. Deep-learning approaches to improving character accuracy, which might replace traditional HCR, are currently being implemented with neural networks. By using diverse extraction methods, recognition accuracy can be improved, and a larger body of collected data also tends to improve the accuracy of the results.
2.3 Offline Handwriting Recognition
Figure 2-3: Offline handwriting recognition processes. (Tappert & Charles, 2007)
Figure 2-4: Block diagram of training part of ANN (Nain & Panwar, 2016)
Figure 2-5: Block diagram of testing part of ANN (Nain & Panwar, 2016)
(Nain & Panwar, 2016) proposed the development of a system that acquires handwritten English characters as input, processes the entries, extracts the best features, trains a neural network with either Resilient Back-Propagation or a Scaled Conjugate Gradient, recognizes the input text class, and finally produces the computerized text output. Resilient back-propagation is a supervised learning algorithm for feed-forward Artificial Neural Networks (ANN), proposed in 1992. It considers only the sign of a partial derivative, regardless of magnitude, as stated in the Manhattan update learning rule, and operates independently on each weight. The other supervised learning algorithm is the Scaled Conjugate Gradient (SCG). It is built on an older directed search method based on the concept of a descent gradient algorithm. The benefit of SCG is that it needs no user-defined ANN parameters during training; it results in faster convergence and better accuracy, as the step size is adjusted automatically with each epoch. The method is split into two main sections: ANN training on an image library and ANN testing with test images. The training portion of the proposed work includes development of the data set, pre-processing of the data set, feature extraction from the pre-processed data set, and generation of the ANN vector.
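A minimal sketch of the sign-based weight update at the heart of resilient back-propagation may make the idea concrete. This is the simplified variant without backtracking; NumPy and the step-size constants are assumptions (the usual textbook values, not values from the cited paper):

import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    sign_change = grad * prev_grad
    # Same gradient sign: the direction is stable, so grow the per-weight step.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    # Opposite sign: we overshot a minimum, so shrink the per-weight step.
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Move each weight against the gradient's SIGN, ignoring its magnitude,
    # as in the Manhattan update rule mentioned above.
    return w - np.sign(grad) * step, step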
Figure 2-6: An Arabic word with different diacritics (Iqbal & Zafar, 2019)
Figure 2-7: Phases of Quran text recognition system (Iqbal & Zafar, 2019)
(Iqbal & Zafar, 2019) described how the close resemblance of many distinct letters makes it challenging to recognize the Quran's handwriting. Many inquiries in the field of offline Qur'an handwriting recognition have been published in recent years, even though offline Qur'an handwriting remains a significant obstacle owing to the different forms of printing, ligatures, duplication and cursive writing of the Qur'an. The Mushaf is the written form of the Quran in the Kufic script, in which the early copies of the Quran were written; Kufic is the oldest kind of standard Arabic script. Arabic became the Holy Quran's language when it was first revealed to the prophet Muhammad by Allah and became the Muslim sacred scripture. The Arabic alphabet is familiar to Muslims all over the world, and every Muslim is bound to the Qur'an's letters. The critical problems of Quranic manuscript identification have been summarized as follows. First, the text of the Quran is composed cursively, which makes segmentation difficult. Second, diacritics are inherent to Arabic, and the diacritics applied lead to multiple interpretations of the same root word. Third, the styling of the Qur'an is distinctive, and even same-style letters can vary in scale. Next, owing to variation in the font, two different authors write the same glyph differently; and when the same person writes differently, two instances of the same glyph add further identification difficulty. Lastly, a few factors cause the Qur'anic images to have low quality: the manuscripts degrade due to age and frequent use.
2.4 Online Handwriting Recognition
Figure 2-9: Block diagram of the trajectory recognition algorithm (Wang & Chuang, 2012)
Figure 2-10: Average recognition rates versus the feature dimensions of the PNN classifier by using LDA (Wang & Chuang, 2012)
Figure 2-11: Block Diagram of System (Priyanka et al., 2013)
Figure 2-12: Flow Chart of System (Priyanka et al., 2013)
Figure 2-13: Basic Block Diagram of the System (Syafeeza et al., 2017)
Figure 2-16: Sample images of the dataset created using the touchscreen (Debnath et al., 2019)
(Debnath et al., 2019) suggested a system that converts handwritten inputs into an electronic document instantly. An Arduino microcontroller and a TFT touchscreen form part of the system, and the conversion is completed with the assistance of MATLAB. The handwritten inputs on the touchscreen are saved as images, which are later changed into letters by a handwriting recognition algorithm. The acquired input is extracted once serial communication between the PC and the Arduino is established. The screen position is specified in (x, y) coordinates; these coordinates are read by the MATLAB code and transformed into an image of the letters inscribed on the screen. The acquired images are refined into a clear shape before letter and digit recognition, then adapted to the size of the images used during training. An Artificial Neural Network (ANN) method is then utilized to identify the letters. An extensive data set based on the ImageNet classification has been used to train multiple models, such as ResNet, AlexNet and VGGNet. For this purpose, a pre-trained AlexNet model is employed; MATLAB has a support kit for the AlexNet network model that may be utilized to tackle various issues. The model is trained on over one million images and can classify images into 1000 item categories.
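The transfer-learning idea described here, keeping AlexNet's pre-trained ImageNet features and replacing only its 1000-way classification head, can be sketched as below. PyTorch and the class count are assumptions made for illustration; the study itself used MATLAB's AlexNet support package:

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 36  # hypothetical: letters plus digits

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                     # freeze the ImageNet features
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # replace the 1000-way head

# AlexNet expects 224x224 RGB input, so touchscreen images would be resized
# and replicated to three channels before fine-tuning the new head.
x = torch.randn(1, 3, 224, 224)
print(model(x).shape)  # torch.Size([1, 36])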
Figure 2-19: The whole process of character recognition (Soselia et al., 2019)
(Soselia et al., 2019) describe how recent progress in the field of deep learning has helped to identify handwritten digits and characters quickly. In offline recognition, manuscript and handwritten-character datasets such as EMNIST, together with deep convolutional neural network designs, have helped to achieve a fresh state of the art. On the other hand, online recognition of handwriting using input from a pointing device is still generally carried out with standard statistical learning approaches and shallow neural network designs. To obtain more precision, their project used RNNs without depending on particular writing surfaces or motions. The system consists of a three-axis accelerometer/gyroscope (MPU-6050), a pushbutton, an STM32-F1 microcontroller and a CSR-BC417 Bluetooth transmitter. A dataset of Latin and Georgian characters was adopted for this system: the collection includes 1500 gyroscope and accelerometer sequences for chosen characters from the Latin and Georgian alphabets. The RNN is helpful at finding the distinct patterns of a character across the different handwriting of several people. This solution can drastically increase productivity and cut costs.
2.5 Dataset for Recognition
(Liwicki & Bunke, 2005) proposed a system able to identify handwritten numbers and alphabets from the English Chars74K dataset. The images may be classified into 62 classes ([0-9], [A-Z] and [a-z]), with 55 instances in each class. Each image consists of an alphanumeric character printed in black on a white backdrop. Each image contains 900 × 1200 pixels; however, the images differ in alignment and location. In addition, both within a class and across classes, the thickness and size of the characters vary greatly. The author also stated that many early handwriting approaches used the Support Vector Machine (SVM), the Artificial Neural Network (ANN), and Nearest Neighbour (NN) classification; neural networks have recently become popular. Moreover, deep but simple Multi-layer Perceptrons (MLPs) have been used to reach great accuracy on the MNIST digit dataset (0.35 per cent error). A further classification strategy involves utilizing a combination of more than one such method, one of which is used to enhance the accuracy of the other.
Figure 2-23: Interface of the recording software (Nagy et al., 2012)
(Nagy et al., 2012) described the design of the database called IAM-OnDB, inspired by the IAM-Database. Although the IAM-Database is an offline database, the IAM-OnDB consists of online data collected from a whiteboard. The Lancaster-Oslo/Bergen Corpus (LOB) is a large electronic text corpus comprising all the texts contained in the IAM-OnDB. There are 500 texts in the LOB Corpus, each consisting of around 2000 words; they range over 15 categories, from the press and popular literature to scholarly and scientific publications. To obtain a database of handwritten phrases, the corpus texts were divided into fragments of around 50 words each. These pieces were copied onto paper, and each writer was asked to write eight forms of the text on the whiteboard. In addition, the eBeam2 interface was used to record each user's handwriting: it allows the writer to write on a whiteboard in a conventional style with a special pen case that sends infrared impulses to a triangular receptor installed at a corner of the whiteboard. The acquisition interface provides a sequence of (x, y) coordinates reflecting the position of the pen tip, with a time stamp at each position. The handwriting recognition system was then trained on this database with the Hidden Markov Model (HMM) as the classifier. Tests were conducted with two dictionaries: a small dictionary (2.3K words) and a large dictionary (11K words). The large dictionary has a larger effect on the accuracy rate since it contains more words: the performance of the system trained on the database increases by 5.4% with the 2.3K dictionary and by 8.3% with the 11K dictionary.
Figure 2-24: Typical characteristics of OCR on scanned documents and natural image text recognition (Sharang, 2013)
Figure 2-25: Samples of all letters and digits in the EMNIST dataset (Baldominos et al., 2019)
2.6 Analysis On Literature Review
Based on all the journal papers studied in this literature review chapter, we know that handwriting recognition systems are divided into two categories, offline handwriting recognition and online handwriting recognition, and that various methods can be used within each category. Both kinds of system undergo four phases during recognition, regardless of the method used: pre-processing, segmentation, feature extraction, and classification. However, offline recognition systems are less accurate than online recognition systems. Therefore, in line with this project's title, "Development of Online Handwriting Recognition System Using STM32 Microcontroller", there are many online recognition methods that could be used. The preferred method is the one that trains a Convolutional Neural Network (CNN) on the EMNIST (Extended Modified National Institute of Standards and Technology) dataset. This method is preferred because, based on previous studies, neural network training with an image dataset achieves high recognition accuracy, and it is suitable for the electronic hardware used in this project: the STM32 Microcontroller (L496G-Disco) and the TFT Touch-shield screen work well with it. The challenge of this project is therefore to increase the accuracy rate along with some improvements. In conclusion, all the journal papers studied and analysed in this literature review chapter will help greatly in the methodology chapter.
3. CHAPTER 3 - METHODOLOGY
3.1 Introduction
The Handwriting Recognition System is an electronic system that can read handwritten input from paper, photographs or touch screens and convert it into a digital font. Owing to the benefits of digitization, such systems are often used to convert paper documents into digital documents to save space and cost; in practice, they can also reduce miscommunication at work.

This section also explains why the chosen hardware is used and why the specific methods were selected for this project. The methodology is essential before the product can be tested: we should know the chronology of the project until it is fully completed.
3.2 Project Planning Flowchart
Figure 3-1 shows the project planning flowchart, an overview of the development of this online handwriting recognition system. It starts with setting the objectives and scope, followed by the analysis of the literature review, before proceeding to hardware selection. After the hardware required for the system has been finalized, the process continues with the development of handwriting acquisition, followed by the development of feature extraction and classification, processes that involve several software features. Upon completion of the prototype, it is tested, the results are analysed, and finally all the processes are documented in this report.
3.3 Method of Handwriting Character Recognition
There are eight handwriting recognition methods (Rosyda & Purboyo, 2018): Convolutional Neural Network (CNN), Semi-Incremental Segment, Incremental, Lines and Words, Part, Slope and Correction Slant, Ensemble, and Zoning. All eight methods can be utilized for handwriting recognition, but the CNN is the most commonly utilized among them for this purpose.
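As a rough illustration of the kind of model meant here, a small Keras CNN for EMNIST could look like the following sketch. The layer sizes are illustrative assumptions, not the exact architecture deployed in this project:

from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 47  # the EMNIST "balanced" split has 47 character classes

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),          # EMNIST images are 28x28 grayscale
    layers.Conv2D(32, 3, activation="relu"),  # feature extraction...
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),     # ...and classification in one model
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])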
3.3.2 EMNIST dataset
3.4 Methodology Structure
The dataset used for training our network is EMNIST, and the model is a simple Convolutional Neural Network (CNN). We will use a Google Colaboratory (Colab) notebook to train and evaluate the NN model. There are four steps to be taken in this section.
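Training and evaluation in the Colab notebook can be sketched as follows, reusing the model defined in the sketch in Section 3.3. Loading EMNIST through tensorflow_datasets and the choice of the "balanced" split are assumptions; the report does not state the exact split used:

import tensorflow as tf
import tensorflow_datasets as tfds

(ds_train, ds_test), info = tfds.load("emnist/balanced", split=["train", "test"],
                                      as_supervised=True, with_info=True)

def norm(image, label):
    # Scale pixels to [0, 1], the same range the deployed model will expect.
    # (EMNIST images are stored transposed relative to MNIST; training and
    # on-device inference only need to agree on the orientation.)
    return tf.cast(image, tf.float32) / 255.0, label

ds_train = ds_train.map(norm).shuffle(10_000).batch(128)
ds_test = ds_test.map(norm).batch(128)

model.fit(ds_train, epochs=5, validation_data=ds_test)
model.save("emnist_cnn.h5")  # the saved model can then be imported into Cube.AI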
Figure 3-4: The Confusion matrix of the trained CNN model with the EMNIST
dataset
3.4.2 Deployment of Neural Network
3.5 Software Features
3.5.1 Arduino IDE

Arduino IDE is an open-source tool used mainly with Arduino modules to develop and build code. The main code produces a hex file, which is then transferred and uploaded to the controller on the board. The Arduino Mega was used as an interface to power up the STM32 L496G.
3.5.3 STM32CUBE MX
3.5.4 SolidWorks
3.6 Hardware Features
3.6.1 Arduino ATMega 2560

This project uses an Arduino-module-based microcontroller, the Arduino Mega 2560. It has 54 digital pins, 16 analog pins and 4 UARTs (hardware serial ports), and accepts an input voltage of 7-12 V. This microcontroller is programmed to work with the TFT Touch-shield for the interface display, and it is also used to power up the STM32 L496G microcontroller.
3.6.3 STM32-L496G DISCOVERY
3.6.4 Hard Casing

The hard casing was designed to hold all the hardware (the STM32-L496G Discovery, the Arduino Mega 2560 and the TFT Touch-shield) in place while the system is operating. It was made by 3D printing from poly-lactic acid (PLA) plastic, and its design was produced in the SOLIDWORKS software.
Figure 3-10: Design Of Hard Casing
3.7 Development of Prototype
3.7.1 Implementation On Hardware
3.7.2 Implementation On Software
The software for this system consists of the drivers and firmware that allow communication between the hardware devices. However, since STM32 has integrated those drivers and firmware into its libraries, they need not be written by ourselves: they are already provided in STM32CubeMX with the Cube.AI extension. The recognition algorithm used here is the Convolutional Neural Network (CNN).
Figure 3-13 shows the flowchart of the whole system. It presents the main code of the program: after starting, the system initializes the hardware and saves any available input if it exists; otherwise, it keeps reading the touch screen for input. If an input touch is detected, the touched point is saved and the timer interrupt is reset. This process repeats until no more touched points are read. The timer interrupt only fires when no handwriting has been detected for a period predetermined in the code. When the timer interrupt fires, the system scales the available input, performs the adaptation, which modifies each pixel's representation to the range set by the trained NN model, recognizes the scaled input, and displays the recognized value on the LCD.
Based on the flowchart in Figure 3-14, the software mainly consists of four parts: input, scaling, the NN output process, and recognition. The input touch screen used is a 1.5-inch TFT Touch-Shield LCD. It is compatible with the STM32-L496G DISCOVERY board and, being a shield, requires no soldering beyond mounting it on the board. Every point touched on the screen is saved in an array on the STM32-L496G. The touch screen keeps detecting the written input until detection stops for a short amount of time; after that, the code scales the input, runs the neural network, and recognizes the handwritten input. The STM32-L496G detects the end of input with the timer interrupt function, which interrupts the code after no input has been received for some time.
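The scaling and adaptation steps can be sketched in Python as below; on the board these steps run in C on the STM32, and the screen resolution and stroke width used here are assumptions for illustration:

import numpy as np
import cv2

def prepare_input(points, screen_w=240, screen_h=240):
    if not points:
        raise ValueError("no touch points were recorded")
    canvas = np.zeros((screen_h, screen_w), dtype=np.uint8)
    for (x, y) in points:                       # the saved (x, y) touch points
        cv2.circle(canvas, (x, y), 3, 255, -1)  # rasterize the stroke
    # Scaling: crop to the stroke's bounding box, then resize to the 28x28
    # input expected by the trained CNN.
    ys, xs = np.nonzero(canvas)
    glyph = canvas[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    glyph = cv2.resize(glyph, (28, 28), interpolation=cv2.INTER_AREA)
    # Adaptation: map each pixel to the range set by the trained NN model.
    return glyph.astype(np.float32) / 255.0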
3.8 Design of Experiment
1. When power is supplied, the TFT Touch-Shield will initialize and display the interface to operate the system.

2. ... + button on the Arduino touch-shield, the STM32 ...

3. The user can choose to write a character or digit by pressing the blue joystick, and can reset the screen by pressing the black button, which is the reset button.

4. When the user starts to write on the Touch-Shield of the STM32 L496G, it will start to collect the input by saving the user's touch points.

5. If the recognition is a success, the screen shows the percentage weightage from the CNN model together with the time taken for the timer interrupt; otherwise, it just shows a question mark, meaning the system is unable to recognize the handwriting.
3.8.2 Sample Test & Analysis
                     Characters Recognition   Numbers Recognition   Total
Number Of Attempts            390                     150            540
Number Of Success             311                     121            432

Accuracy = (432 / 540) × 100% = 80%
For this project, the test conducted has three trials, instead of a single trial, for both character and digit recognition, and it involves 10 volunteers. The purpose of having three trials is to see the recognition trend for both characters and digits. However, we still use a formula resembling the previous study's to find the average accuracy over all the trials.
3.9 Project Estimate of Cost
3.10 Gantt Chart
The figure above shows the Gantt chart for the FYP 1 project. The work starts with the first task, FYP 1 registration, and ends with the report and video submission. This phase runs in semester six, beginning in July 2020: the project started on Monday, 20th July 2020 and ended on 13th November 2020.
Figure 3-16: The Gantt chart of the FYP 2
The figure above shows the Gantt chart for the FYP 2 project. The work starts with the first task, FYP 2 registration, and ends with the report and video submission. This phase runs in semester seven, beginning in January 2021: the project started on 12th January 2021 and will end on 4th June 2021.
3.11 Project Milestone
4. CHAPTER 4: RESULT AND DISCUSSION
4.1 Introduction
This chapter presents the results obtained from the tests that were conducted. The data analysis consists of charts, graphs and tables, mainly used to compare the trials attempted by the volunteers.

4.2 Data Collection

The results discussed in this chapter consist of the recognition accuracy for every character and digit written by 10 volunteers. The data was collected from 10 volunteer students and analysed to estimate the accuracy of the product. Each student was requested to write all 26 uppercase letters and all 10 digits on the touch screen.

4.3 Character Recognition

The first test conducted is character recognition, in which the volunteers were required to write the uppercase characters from A to Z; each character written was recorded and analysed. The test has three trials for each volunteer, to reveal the trend of the recognition.
4.3.1 First Trial of Character Recognition
Table 4-2: Sample character recognition for high accuracy vs low accuracy of first trial
Figure 4-1: Accuracy of recognition for each character for first trial (%)
Figure 4-2: Accuracy of recognition by each volunteer for first trial (%)
Table 4-1 shows the data collected from the first trial. Looking at the data for each character, the system recognized the characters ‘F’, ‘K’, ‘M’, ‘N’, ‘P’, ‘S’, ‘T’, ‘U’ and ‘Y’ with 100% accuracy. This is followed by the characters recognized 90% correctly, which are ‘A’, ‘L’, ‘W’ and ‘Z’, and the characters recognized 80% correctly, which are ‘C’, ‘E’, ‘G’ and ‘R’. The characters ‘J’, ‘Q’ and ‘X’, and ‘B’, ‘O’ and ‘V’, have 70% and 60% accuracy respectively. On the other hand, the character with the least accuracy is ‘D’: it was correctly recognized by only 3 volunteers out of 10, that is, only 30%, less than half. The same applies to the characters ‘H’ and ‘I’, which have only 50% and 40% accuracy respectively. The character ‘D’ was frequently recognized as ‘P’ because some of the volunteers write ‘D’ in a distinctive way, as we can see from Table 4-2; this suggests the errors are caused mainly by similar shapes. Likewise, the system recognized the characters ‘H’ and ‘I’ written by the volunteers as ‘N’ and ‘T’ respectively, owing to the similar shapes of these characters, which left them at 50% accuracy or less. Figure 4-1 shows the accuracy of recognition for each character across the 10 volunteers, and Figure 4-2 shows the accuracy of character recognition achieved by each volunteer.
4.3.2 Second Trial of Character Recognition
Table 4-4: Sample characters recognition for high accuracy vs low accuracy of second trial
Figure 4-3: Accuracy of recognition for each character for second trial (%)
Figure 4-4: Accuracy of recognition by each volunteer for second trial (%)
Table 4-3 shows the data collected from the second trial. Looking at the data for each character, the system recognized the characters ‘M’, ‘P’, ‘S’, ‘T’, ‘U’ and ‘W’ with 100% accuracy. This is followed by the characters recognized 90% correctly, which are ‘A’, ‘I’, ‘K’, ‘R’, ‘Y’ and ‘Z’, and the characters recognized 80% correctly, which are ‘H’, ‘L’, ‘N’, ‘O’, ‘Q’ and ‘Z’. The characters ‘C’, ‘E’ and ‘F’, and ‘B’, ‘G’, ‘J’ and ‘V’, have 70% and 60% accuracy respectively. Meanwhile, the character with the least accuracy is still ‘D’: this time it was correctly recognized by only 4 volunteers out of 10, that is, only 40%, still less than half. Some characters were wrongly recognized, such as ‘V’ recognized as ‘Y’ and ‘J’ recognized as ‘T’, as we can see from Table 4-4; this is due to their similar shapes. Figure 4-3 shows the accuracy of recognition for each character across the 10 volunteers, and Figure 4-4 shows the accuracy of character recognition achieved by each volunteer.
4.3.3 Third Trial
Table 4-6: Sample character recognition for high accuracy vs low accuracy of third trial
Figure 4-5: Accuracy of recognition for each character for third trial (%)
Figure 4-6: Accuracy of recognition by each volunteer for third trial (%)
Table 4-5 shows the data collected from the third trial. Looking at the data for each character, the system recognized the characters ‘K’, ‘M’, ‘N’, ‘P’, ‘Q’, ‘R’, ‘S’, ‘T’, ‘Y’ and ‘Z’ with 100% accuracy. This is followed by the characters recognized 90% correctly, which are ‘A’, ‘C’, ‘E’, ‘J’ and ‘W’, and the characters recognized 80% correctly, which are ‘B’, ‘H’, ‘L’, ‘U’, ‘V’ and ‘X’. The characters ‘F’, ‘G’ and ‘Y’ have 70% accuracy. On the other hand, the characters with the least accuracy are still ‘D’, and now also ‘O’. This time ‘D’ was correctly recognized by 5 volunteers, a 10% increase in accuracy over the second trial. Meanwhile, the character ‘O’ was wrongly recognized as ‘Q’, as we can see from Table 4-6; this is due to their similar shapes. Figure 4-5 shows the accuracy of recognition for each character across the 10 volunteers, and Figure 4-6 shows the accuracy of character recognition achieved by each volunteer.
4.4 Digit Recognition
The second test conducted is digit recognition, in which the volunteers were required to write the digits 0-9; each digit written was recorded and analysed. The test has three trials for each volunteer, to reveal the trend of the recognition.

4.4.1 First Trial of Digit Recognition
Table 4-8: Sample digits recognition for high accuracy vs low accuracy of first trial
Figure 4-7: Accuracy of recognition for each digit for first trial (%)
Figure 4-8: Accuracy of recognition by each volunteer for first trial (%)
Table 4-7 shows the data collected from the first trial of digit recognition. Looking at the data for each digit, the system recognized the digits ‘0’, ‘5’, ‘7’ and ‘9’ with 100% accuracy. This is followed by the digits recognized 90% correctly, which are ‘1’, ‘2’, ‘3’ and ‘4’, and the digit recognized 70% correctly, which is ‘6’; the digit ‘6’ was wrongly recognized as ‘0’. On the other hand, the digit with the least accuracy is ‘8’: it was correctly recognized by only 6 volunteers out of 10 because it was wrongly recognized as ‘5’, as we can see from Table 4-8. This is due to their similar characteristics. Figure 4-7 shows the accuracy of recognition for each digit across the 10 volunteers, and Figure 4-8 shows the accuracy of digit recognition achieved by each volunteer.
4.4.2 Second Trial of Digit Recognition
Table 4-10: Sample digits recognition for high accuracy vs low accuracy of second trial
Figure 4-9: Accuracy of recognition for each digit for second trial (%)
Figure 4-10: Accuracy of recognition by each volunteer for second trial (%)
Table 4-9 shows the data collected from the second trial of digit recognition. Looking at the data for each digit, the system recognized the digits ‘0’, ‘5’, ‘7’ and ‘9’ with 100% accuracy. This is followed by the digits recognized 90% correctly, which are ‘2’ and ‘4’, and the digits recognized 80% correctly, which are ‘3’ and ‘8’. The digit ‘1’, with an accuracy of 70%, was often wrongly recognized as ‘2’. On the other hand, the digit with the least accuracy is ‘6’: it was correctly recognized by only 6 volunteers out of 10 because it was wrongly recognized as ‘0’; some volunteers' handwriting of ‘6’ looks like ‘0’, as we can see from Table 4-10. Figure 4-9 shows the accuracy of recognition for each digit across the 10 volunteers, and Figure 4-10 shows the accuracy of digit recognition achieved by each volunteer.
4.4.3 Third Trial of Digit Recognition
Table 4-12: Sample digits recognition for high accuracy vs low accuracy of third trial
Figure 4-11: Accuracy of recognition for each digit for third trial (%)
Figure 4-12: Accuracy of recognition by each volunteer for third trial (%)
Table 4-11 shows the data collected from the third trial of digit recognition. Looking at the data for each digit, the system recognized the digits ‘0’, ‘3’, ‘5’, ‘7’ and ‘9’ with 100% accuracy. This is followed by the digits recognized 90% correctly, which are ‘1’, ‘4’ and ‘8’, and the digit recognized 80% correctly, which is ‘2’; the digit ‘2’ was often wrongly recognized as ‘7’. On the other hand, the digit with the least accuracy is ‘6’: it was correctly recognized by only 7 volunteers out of 10 because it was wrongly recognized as ‘9’. However, the accuracy of the digit ‘6’ in this trial is the highest of the three trials. We can see how the digits were wrongly recognized in Table 4-12. Figure 4-11 shows the accuracy of recognition for each digit across the 10 volunteers, and Figure 4-12 shows the accuracy of digit recognition achieved by each volunteer.
4.5 Overall Accuracy of Recognition System
Overall Accuracy = (Total Number of Successes for All Trials / Total Number of Attempts) × 100%    (1)

                 = ((89 + 87 + 92 + 209 + 211 + 224) / 1080) × 100%

                 = 84.44%
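For reference, the arithmetic behind formula (1) can be reproduced with a few lines of Python using the success counts reported in this chapter:

digit_successes = [89, 87, 92]    # out of 100 attempts per trial
char_successes = [209, 211, 224]  # out of 260 attempts per trial

total_success = sum(digit_successes) + sum(char_successes)  # 912
total_attempts = 3 * 100 + 3 * 260                          # 1080
print(f"Overall accuracy: {100 * total_success / total_attempts:.2f}%")  # 84.44%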
From Figure 4-13 we can see the total number of successfully recognized characters for every trial of the test, from which we can also obtain the character recognition accuracy for every trial. For the first trial, the number of successfully recognized characters is 209 out of 260, an accuracy of 80%, which is quite a high rate for a first trial. For the second trial, the number of successfully recognized characters is 211 out of 260, an accuracy of 81%, slightly higher than the first trial. Last but not least, the number of successfully recognized characters for the third trial is 224 out of 260, an accuracy of 86%, making the third trial the most accurate of all.

From Figure 4-14 we can see the total number of successfully recognized digits for every trial, along with the digit recognition accuracy for every trial. For the first trial, the number of successfully recognized digits is 89 out of 100, an accuracy of 89%, which is higher than 85%. For the second trial, the number of successfully recognized digits is 87 out of 100, an accuracy of 87%, a slight decrease from the first trial. Lastly, the number of successfully recognized digits for the third trial is 92 out of 100, an accuracy of 92%, the highest of all the trials. On the whole, the overall accuracy for both character and digit recognition in this system is 84.44%, which is satisfactory.
4.6 Result Summary
Table 4-15 shows the recognition method used for this project alongside the methods used in previous studies. (Syafeeza et al., 2017) used the XOR bit-wise operation method with preset 8x8-pixel characters and digits as the dataset, and the system managed an accuracy of 80%. Next, (Debnath et al., 2019) used the CNN method implemented in MATLAB. The method is similar to the one used in this project; only the dataset differed, and the system had an accuracy of 82%. On the other hand, (Soselia et al., 2019) used the RNN method with a dataset of Latin and Georgian characters, and the system managed an accuracy of 88%.
5. CHAPTER 5 – CONCLUSION AND RECOMMENDATION
5.1 Conclusion
This project aimed to develop an online handwriting recognition system in which the STM32 microcontroller plays a key role in the recognition process, using a CNN model trained on the EMNIST dataset. Users will find the system portable and usable in many fields, and it offers an improvement over a hefty keyboard system. The most challenging part of implementing this system is achieving high recognition accuracy for various types of handwriting. The success rate of any recognition system depends not only on the recognition method but also on several other factors, such as the feature extraction phase, the pre-processing stage, and the segmentation step. Because every person writes characters differently, there are times when the preset character in the code differs from the way other people write certain characters.
Despite the challenges, the experimental results prove that this Online Handwriting Recognition System has been developed successfully. The system was tested by 10 volunteer students over several trials. The results collected consist of the recognition accuracy of each character and digit, along with the recognition accuracy achieved by each volunteer. In addition, the accuracy for characters and digits in every trial was recorded:

First trial - character and digit recognition achieved 80% and 89% accuracy.
Second trial - character and digit recognition achieved 81% and 87% accuracy.
Third trial - character and digit recognition achieved 86% and 92% accuracy.

To conclude this project, the overall accuracy of the system has been calculated, and the accuracy achieved overall is 84.44%, which is satisfactory.
5.2 Future Recommendation
Although the main objective of this project was accomplished, there are a few recommendations to enhance the project in the future:

a. The system could be extended with a feature that saves the recognized characters or digits into a text file.
6. REFERENCES
recognition system using Arduino platform. Jurnal Teknologi, 79(7), 51–59. https://doi.org/10.11113/jt.v79.10229

14. Debnath, B., Anika, A., Abrar, M. A., Chowdhury, T., Chakraborty, R., Khan, A. I., Anowarul Fattah, S., & Shahnaz, C. (2019). Automatic handwritten words on touchscreen to text file converter. IEEE Region 10 Annual International Conference, Proceedings/TENCON, 2018-October, 219–223. https://doi.org/10.1109/TENCON.2018.8650269

15. Soselia, D., Amashukeli, S., Koberidze, I., & Shugliashvili, L. (2019). RNN-based online handwritten character recognition using accelerometer and gyroscope data. arXiv.

16. Liwicki, M., & Bunke, H. (2005). IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard. Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, 2005, 956–961. https://doi.org/10.1109/ICDAR.2005.132

17. Nagy, R., Dicker, A., & Meyer-Wegener, K. (2012). NEOCR: A configurable dataset for natural image text recognition. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7139 LNCS, 150–163. https://doi.org/10.1007/978-3-642-29364-1_12

18. Sharang, A. (2013). Machine learning techniques mid term report. 10007.

19. Baldominos, A., Saez, Y., & Isasi, P. (2019). A survey of handwritten character recognition with MNIST and EMNIST. Applied Sciences (Switzerland), 9(15). https://doi.org/10.3390/app9153169

20. Rosyda, S. S., & Purboyo, T. W. (2018). A review of various handwriting recognition methods. International Journal of Applied Engineering Research, 13(2), 1155–1164. http://www.ripublication.com

21. Cohen, G., Afshar, S., Tapson, J., & van Schaik, A. (2017). EMNIST: an extension of MNIST to handwritten letters. http://arxiv.org/abs/1702.05373
7. APPENDICES
APPENDIX A
APPENDIX B
Sample of Character Recognition
Sample of Digit Recognition
APPENDIX C
Coding for Arduino Mega Microcontroller
APPENDIX D
Coding for STM32 L496G Microcontroller