
DEVELOPMENT OF ONLINE HANDWRITING RECOGNITION SYSTEM

BY USING STM32 MICROCONTROLLER

MUHAMMAD AIMAN NAJMUDIN BIN AZMEE

50219118216

UNIVERSITI KUALA LUMPUR


JANUARY 2021
DEVELOPMENT OF ONLINE HANDWRITING RECOGNITION SYSTEM
BY USING STM32 MICROCONTROLLER

MUHAMMAD AIMAN NAJMUDIN BIN AZMEE


50219118216

Report Submitted to Fulfil the Partial Requirements for the Bachelor of
Electromechanical System (Hons.) in Engineering Technology at Universiti
Kuala Lumpur

JANUARY 2021
DECLARATION

I declare that this report is my original work and all references have been cited
adequately as required by the University.

Date: 5/6/2021 Signature……………………………

Full Name: MUHAMMAD AIMAN


NAJMUDIN BIN AZMEE
ID Number: 502191182

APPROVAL PAGE

We have supervised and examined this report and verify that it meets the
program's and the University's requirements for the Bachelor in
Electromechanical System.

Date: 5/6/2021 Signature. ……………………….

Supervisor: MAHADZIR BIN ABD GHANI


Official Stamp

ACKNOWLEDGEMENT

Alhamdulillah. First and foremost, I would like to express my special thanks
and gratitude to my supervisor, Mr. Mahadzir Bin Abd Ghani, who gave me the
golden opportunity to do this wonderful project on the topic “DEVELOPMENT
OF ONLINE HANDWRITING RECOGNITION SYSTEM BY USING STM32
MICROCONTROLLER”, and who also led me to do a great deal of research
through which I came to know many new things. I am truly thankful to him. Not
forgotten, my friends Iqbal Hanafi, Muammar Johari, Mohammad Suhairee
and Faiz Azman: I truly appreciate their support and the time they made to
guide me and to discuss my project with me. Again, thank you for treating me
with kindness and respect throughout the journey.

Secondly, I would also like to thank my parents, family and friends who helped
me a lot in finalizing this project within the limited time frame. All the support
and inspiration I received is something valuable that I cannot forget; it truly
warms my heart. Without the patience and love of those around me, all of this
would hardly have been possible. Thank you for the support, love, concern,
and strength I received all these years. Finally, I hope this report will benefit
me and its readers in terms of knowledge. Hopefully, this project will be a
success and leave good memories for me and for us all.

TABLE OF CONTENTS

DECLARATION....................................................................................................... iii
APPROVAL PAGE .................................................................................................. iv
ACKNOWLEDGEMENT ........................................................................................... 5
LIST OF FIGURES ................................................................................................... 8
LIST OF TABLE ..................................................................................................... 10
LIST OF ABBREVIATION ...................................................................................... 11
ABSTRACT ............................................................................................................ 12
1. CHAPTER 1 - INTRODUCTION ...................................................................... 13
1.1 Introduction.............................................................................................................. 13
1.2 Project Overview .................................................................................... 15
1.3 Problem Statement ................................................................................. 16
1.4 Project Objectives .................................................................................. 17
1.5 Project Scope .......................................................................................................... 17
1.5.1 System ............................................................................................................... 17
1.5.2 User Scope........................................................................................................ 17
1.5.3 Time Frame ....................................................................................................... 18
1.5.4 Approach ........................................................................................................... 18
2. CHAPTER 2 - LITERATURE REVIEW ............................................................ 19
2.1 Introduction.............................................................................................................. 19
2.2 Handwriting Recognition...................................................................................... 19
2.3 Offline Handwriting Recognition ........................................................................ 23
2.4 Online Handwriting Recognition ........................................................................ 26
2.5 Dataset for Recognition ........................................................................................ 31
2.6 Analysis On Literature Review ........................................................................... 35
3. CHAPTER 3 - METHODOLOGY ..................................................................... 36
3.1 Introduction.............................................................................................................. 36
3.2 Project Planning Flowchart ................................................................................. 37
3.3 Method of Handwriting Character Recognition .............................................. 38
3.3.1 Convolutional Neural Network (CNN) ........................................................ 38
3.3.2 EMNIST dataset ............................................................................................... 39
3.4 Methodology Structure ..................................................................................... 40
3.4.1 Training Of Convolutional Neural Network Model .................................. 40
3.4.2 Deployment of Neural Network ................................................................... 42
3.5.1 Arduino IDE ...................................................................................................... 43

3.5.3 STM32CUBE MX .............................................................................................. 43
3.5.4 Solidwork .......................................................................................................... 43
3.5.5 Microsoft Visio ................................................................................................. 43
3.6 Hardware Features ................................................................................................. 44
3.6.1 Arduino ATMega 2560 .................................................................................... 44
3.6.2 Arduino TFT Touch-shield ............................................................................ 44
3.6.3 STM32-L496G DISCOVERY........................................................................... 45
3.6.4 Hard Casing ...................................................................................................... 45
3.7.1 Implementation On Hardware ...................................................................... 48
3.7.2 Implementation On Software ....................................................................... 49
3.8 Design of Experiment ............................................................................................ 52
3.8.1 Procedure Of Operation ................................................................................ 52
3.8.2 Sample Test & Analysis................................................................................. 54
3.9 Project Estimate of Cost ....................................................................................... 56
3.10 Gantt Chart ............................................................................................................. 57
3.11 Project Milestone.................................................................................................. 59
4. CHAPTER 4: RESULT AND DISCUSSION .................................................... 60
4.1 Introduction.............................................................................................................. 60
4.2 Data Collection ........................................................................................................ 60
4.3 Character Recognition .......................................................................................... 60
4.3.1 First Trial of Character Recognition ........................................................... 61
4.3.2 Second Trial of Character Recognition ..................................................... 64
4.3.3 Third Trial .......................................................................................................... 67
4.4.1 First Trial of Digit Recognition .................................................................... 70
4.4.2 Second Trial of Digit Recognition............................................................... 72
4.4.3 Third Trial of Digit Recognition ................................................................... 74
4.5 Overall Accuracy of Recognition System ........................................................ 76
4.6 Result Summary ..................................................................................................... 78
5. CHAPTER 5 – CONCLUSION AND RECOMMENDATION ............................ 79
5.1 Conclusion ............................................................................................................... 79
5.2 Future Recommendation ...................................................................................... 80
6. REFERENCES ................................................................................................ 81
7. APPENDICES ................................................................................................. 83

LIST OF FIGURES

Figure 2-1: Classification of Character Recognition (Kaur & Rani, 2017) ... 21
Figure 2-2: Accuracy Of Various Character Recognition Methods (Lamba et
al., 2019) ...................................................................................................... 22
Figure 2-3: Offline handwriting recognition processes. (Tappert & Charles,
2007)............................................................................................................ 23
Figure 2-4: Block diagram of training part of ANN (Nain & Panwar, 2016) . 24
Figure 2-5: Block diagram of testing part of ANN (Nain & Panwar, 2016) ... 24
Figure 2-6: An Arabic word with different diacritics (Iqbal & Zafar, 2019) ... 25
Figure 2-7: Phases of Quran text recognition System (Iqbal & Zafar, 2019) 25
Figure 2-8: Block diagram of the trajectory recognition algorithm (Wang &
Chuang, 2012) .............................................................................. 26
Figure 2-9: Average recognition rates versus the feature dimensions of the
PNN classifier by using LDA (Wang & Chuang, 2012) ................ 26
Figure 2-10: Block Diagram of System (Priyanka et al., 2013) ................... 27
Figure 2-11: Flow Chart of System (Priyanka et al., 2013) ......................... 27
Figure 2-12: Basic Block Diagram of the System (Syafeeza et al., 2017)    28
Figure 2-13: Flowchart Of The Whole System (Syafeeza et al., 2017) ........ 28
Figure 2-14: Handwritten Input (colored in red on the left) XOR with Preset
Character ‘B’ (in blue in the middle), resulting in the overlap pixels (colored in
black on the right) (Syafeeza et al., 2017) ................................................... 28
Figure 2-15: Sample Images of the dataset that has been created using a
touchscreen (Debnath et al., 2019).............................................................. 29
Figure 2-16: Accuracy of testing new inputs (Debnath et al., 2019) ............ 29
Figure 2-17: Flow Chart of the working process of the system (Debnath et al.,
2019)............................................................................................ 29
Figure 2-18: The whole process of character recognition (Soselia et al., 2019)
..................................................................................................................... 30
Figure 2-19: Correlation of accuracy on the number of classes for a fixed
number of training samples for each class (Soselia et al., 2019) ................. 30
Figure 2-20: Sample of Unprocessed Image & Output of Pre-processing .. 31
Figure 2-21: Sample Of Digits ..................................................................... 31
Figure 2-22: Interface of the recording software (Nagy et al., 2012) ........... 32
Figure 2-23: Typical characteristics of OCR on scanned documents and
natural image text recognition (Sharang, 2013) ............................................ 33
Figure 2-24: Samples of all letters and digits in the EMNIST dataset
(Baldominos et al., 2019) .............................................................................. 34
Figure 3-1: Project Planning Flowchart ....................................................... 37
Figure 3-2: Handwriting Recognition Methods ............................................ 38
Figure 3-3: Flowchart of the training of the CNN model .............................. 40
Figure 3-4: The Confusion matrix of the trained CNN model with the EMNIST
dataset ......................................................................................................... 41
Figure 3-5: Deployment Of Neural Network on STM32 L496G microcontroller
..................................................................................................................... 42
Figure 3-6: Arduino ATMega 2560 (Microcontroller) ................................... 44

Figure 3-7: Arduino TFT Touch-shield ........................................................ 44
Figure 3-8: STM32-L496G DISCOVERY .................................................... 45
Figure 3-9: Hard Casing .............................................................................. 45
Figure 3-10: Design Of Hard Casing ........................................................... 46
Figure 3-11: Basic Block Diagram of the System ........................................ 47
Figure 3-12: Wiring diagram of the System ................................................. 48
Figure 3-13: Flowchart Of The Whole System ............................................ 49
Figure 3-14: Flowchart Of Software System ............................................... 50
Figure 3-15: The Gantt chart of the FYP 1 .................................................. 57
Figure 3-16: The Gantt chart of the FYP 2 .................................................. 58
Figure 4-1: Accuracy of recognition for each character for first trial (%) ..... 62
Figure 4-2: Accuracy of recognition by each volunteer for first trial (%) ...... 62
Figure 4-3: Accuracy of recognition for each character for second trial (%) 65
Figure 4-4: Accuracy of recognition by each volunteer for second trial (%) 65
Figure 4-5: Accuracy of recognition for each character for third trial (%) .... 68
Figure 4-6: Accuracy of recognition by each volunteer for third trial (%) ..... 68
Figure 4-7: Accuracy of recognition for each digit for first trial (%) .............. 71
Figure 4-8: Accuracy of recognition by each volunteer for first trial (%) ...... 71
Figure 4-9: Accuracy of recognition for each digit for second trial (%) ........ 73
Figure 4-10: Accuracy of recognition by each volunteer for second trial (%)
..................................................................................................................... 73
Figure 4-11: Accuracy of recognition for each digit for third trial (%)........... 75
Figure 4-12: Accuracy of recognition by each volunteer for third trial (%) ... 75
Figure 4-13: Accuracy Of Character Recognition For Each Trial (%).......... 76
Figure 4-14: Accuracy Of Digit Recognition For Each Trial (%) .................. 76

LIST OF TABLE

Table 3-1: EMNIST dataset ....................................................................................39


Table 3-2: The Result Of Characters Recognition (A R Syafeeza et al. 2016) ........54
Table 3-3: The Result Of Numbers Recognition (A R Syafeeza et al. 2016) ...........54
Table 3-4: Analysis Of Recognition(A R Syafeeza et al 2016) ................................55
Table 3-5: Characters Recognition For Each Trial .................................................55
Table 3-6: Digits Recognition For Each Trial ..........................................................55
Table 3-7: Project Estimation of Cost .....................................................................56
Table 3-8: Project Milestone ...................................................................................59
Table 4-1: Character recognition for first Trial ........................................................61
Table 4-2: Sample character recognition for high accuracy vs low accuracy of first
trial .........................................................................................................................61
Table 4-3: Characters recognition for second trial ..................................................64
Table 4-4: Sample characters recognition for high accuracy vs low accuracy of
second trial .............................................................................................................64
Table 4-5: Characters recognition for third trial .......................................................67
Table 4-6: Sample character recognition for high accuracy vs low accuracy of third
trial .........................................................................................................................67
Table 4-7: Digits recognition for first trial ................................................................70
Table 4-8: Sample digits recognition for high accuracy vs low accuracy of first trial70
Table 4-9: Digits recognition for second trial...........................................................72
Table 4-10: Sample digits recognition for high accuracy vs low accuracy of second
trial .........................................................................................................................72
Table 4-11: Digits recognition for third trial .............................................................74
Table 4-12: Sample digits recognition for high accuracy vs low accuracy of third trial
...............................................................................................................................74
Table 4-13: Characters Recognition For Each Trial ................................................76
Table 4-14: Digits Recognition For Each Trial ........................................................76
Table 4-15: Result Comparison With previous Studies ...........................................78

LIST OF ABBREVIATION

ANN: Artificial Neural Networks ..............................................................................13


CNN: Convolutional Neural Network .......................................................................12
EMNIST: Extended Modified National Institute of Standards and Technology .......12
FYP: Final Year Project ..........................................................................................18
HCR: Handwriting Character Recognition ...............................................................19
KNN: K-Nearest Neighbor .......................................................................................15
LCD: Liquid-Crystal Display ....................................................................................27
LOB: The Lancaster-Oslo/Bergen ...........................................................................32
MATLAB: MATrix LABoratory .................................................................................29
MEMS: Micro Electro Mechanical System...............................................................27
MLP: Multi-layer Perceptrons ..................................................................................31
MPU: Motion Processing Unit .................................................................................30
NEOCR: Natural Environment Optical Character Recognition ................................33
OCR: Optical Character Recognition ......................................................................13
PCR: Printed Character Recognition.......................................................................19
PNN: Probabilistic Neural Network .........................................................................26
RNN: Recurrent Neural Network .............................................................................30
SCG: Scaled Conjugate Gradient ...........................................................................24
SVM: Support Vector Machine ................................................................................15
TFT: Thin Film Transistor ........................................................................................15
VGG: Visual Geometry Group.................................................................................29
XML: Extensible Markup Language ........................................................................33
XOR: eXclusive OR ................................................................................................28

ABSTRACT

Handwriting recognition has been studied for over four decades, and many
methods have been developed so far. Even now, there are complicated
issues in the recognition process, and no single method tackles them properly
and thoroughly in all circumstances. As we know, handwriting recognition
plays a very significant part in today's world: there are many settings in which
words, characters and digits need to be recognized. Two types of recognition
systems may be employed for this, offline and online recognition. An offline
recognition system involves the automated translation of a text image into
letter codes for text computing applications. In contrast, an online handwriting
recognition system performs automatic text translation as the text is written
on a specific digitizer, where sensors capture the pen-tip motions. The
purpose of this project is therefore to develop an online handwriting
recognition system able to convert handwriting input from analog form to a
digital font using a Convolutional Neural Network (CNN) model trained on the
EMNIST dataset. To achieve this purpose, the STM32 L496G microcontroller
is used to deploy the CNN model trained with the EMNIST dataset, and a
Google Colaboratory notebook is used to train and evaluate the neural
network model. This electronic system recognizes characters and digits
written naturally with a stylus, with the intent of moving from a hefty keyboard
system to a portable and comfortable input method. To make this work, the
system uses the STM32CubeMX software with its Artificial Intelligence (AI)
extension, and the programming language is C. Several tests were conducted
upon completion of the prototype, and the system achieved an overall
accuracy of 84.44%.

1. CHAPTER 1 – INTRODUCTION

This chapter includes the introduction, the overview of the project, the problem
statement, and the objectives and scope of the project. It presents the primary
purpose of the project and how it will be carried out.

1.1 Introduction

Handwriting recognition is defined as the capability of a device to read
handwriting as real text. Manual input with a pen or finger directly on a
touchscreen is the most popular usage in today's mobile environment. This
method is helpful since the user can readily enter numbers and names for
contacts instead of entering the same information on the on-screen keyboard,
because most individuals are comfortable with writing and can do it fast (Kaur
& Rani, 2017). Optical Character Recognition (OCR) is the most conventional
approach used for handwriting recognition. It scans a manuscript, captures a
picture of the handwritten text, and then transfers it into an essential text
document. OCR is primarily an image-recognition method aimed at
recognizing handwriting.

Handwriting recognition has been studied for over four decades, and many
methods have been developed so far. There are complicated issues in the
recognition process, and even now no single method tackles them properly
and thoroughly in all circumstances. In the process of handwriting recognition,
a text image must be provided and pre-processed adequately. Next, the text
is segmented or its features are extracted. The result is some part of the text,
which the machine must understand. Finally, contextual information is added
to the identified symbols to confirm the result (Kacalak & Majewski, 2012).
Artificial Neural Networks (ANN), used to recognize written languages, have
a high capacity to generalize and require no deep contextual information or
formalization to tackle the challenges of handwriting recognition.

Two types of recognition systems may be employed: offline and online
recognition (Arica & Yarman-Vural, 2001). The offline recognition system
involves the automated translation of text into letter codes for text computing
applications. The data in this form is considered a static representation of
handwriting. It is quite hard for offline handwriting recognition to reach high
accuracy because people have distinct styles of handwriting. On the other
hand, an online handwriting recognition system performs automatic text
translation as the text is written on a specific digitizer, where sensors capture
the pen-tip motions (Chakravarthy et al., 2011). This sort of data is called
digital ink and may be regarded as a digital representation of handwriting. In
computer and text processing applications, the acquired signals are
subsequently translated into letter codes.

1.2 Project Overview

Online handwriting recognition is a system in which a pen or stylus writes on
a touch-sensitive surface integrated with an output display. A software
application is needed to decode the movements of the stylus across the
writing area and convert the resulting strokes into digital text. Hence,
constructing this handwriting recognition system requires implementation in
both hardware and software. The software components include STM32CubeMX
and the Arduino IDE. The hardware components include the
microcontrollers, a TFT touch screen, and LCD screens.

The C language will be used in the application framework. There are many
options, including adding specific algorithms or even different forms of
handwriting detection devices for the handwriting recognition process. The
classification step is the most essential element of the recognition procedure
(Surya & Afseena, 2015). There are several classification methods for
identifying various pictures, such as the K-Nearest Neighbor (KNN),
Convolutional Neural Network (CNN), and Support Vector Machine (SVM)
classifiers. Device performance is typically measured by most researchers
based on classification accuracy.

For this project, the CNN will be the classifier (Shawon et al., 2018) and,
because of the clean data it provides, the common EMNIST dataset will be
utilized. The algorithm employed in this research therefore starts with feature
extraction, in which pixels are used as the representation of the picture, as
the CNN handles both feature extraction and classification. Finally, the
performance of this system is determined by its accuracy rate.
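To make the classification approach concrete, the following is a minimal
sketch of a small CNN for 28×28 EMNIST images, written with Keras. It is
illustrative only: the layer sizes are common defaults rather than this project's
exact architecture, and the 47-class output assumes the EMNIST "balanced"
split, which may differ from the split used here.

    from tensorflow.keras import layers, models

    NUM_CLASSES = 47  # assumes the EMNIST "balanced" split; this project's split may differ

    # A small CNN over 28x28 grayscale character images: two conv/pool stages
    # for feature extraction, then dense layers for classification.
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=10, validation_split=0.1)

This mirrors the division of labour described above: the convolutional layers
learn the pixel-level features, and the final dense layer performs the
classification.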

1.3 Problem Statement

People and technology are practically inseparable nowadays. Because of the
complicated scientific nature of the problem and its industrial significance,
much research is motivated to refine the handwriting recognition system in
this modern day. As time passes, handwriting recognition systems are
becoming more general. The wide range of handwriting styles across people
and the low quality of handwritten text compared to printed text pose
considerable challenges in making it machine-readable. This project aims to
create a system able to recognize handwritten characters and numbers from
users by using the concepts of deep learning, namely the CNN, which can
automatically detect the crucial features of a given input without any human
supervision, running on a microcontroller.

The primary goal of the proposed system is to understand the deployment of
a CNN trained on the EMNIST dataset to a low-power microcontroller and to
apply it to a handwriting recognition system. The system is intended to
improve the technology from a hefty keyboard system to a portable and
convenient input method more suitable for smaller electronic devices. It could
also deliver benefits such as ensuring accurate medical charts, lowering
storage costs, and keeping a critical area of study available for future
students. Improving recognition accuracy across different styles of
handwriting is the most challenging aspect of implementing this method.

1.4 Project Objectives

1. To develop a handwriting recognition system that utilizes the STM32
microcontroller in the process.
2. To perform feature extraction of digitized input written in natural
handwriting with a stylus.
3. To convert handwriting input from analogue form to a digital font using the
CNN model trained with the EMNIST dataset.

1.5 Project Scope

1.5.1 System

In developing the Online Handwriting Recognition system using the STM32
microcontroller, we need to innovate and improve on existing products, and
the project can benefit all users. This project is designed to build an online
handwriting recognition system using the CNN method, with EMNIST as the
dataset to be trained, and the STM32 as the microcontroller. A TFT Touch-shield
screen serves as both the input and the output hardware component.
For the software part, this system uses the C language and the STM32CubeMX
software.

1.5.2 User Scope

This system can be used by all groups of people regardless of the field they
work in. For example, it can help in education, where students or lecturers
might use it during lessons. It can also be implemented in the medical field,
helping medical personnel with medical charts, and in banking, where it may
help finance personnel with bank cheques or other documents. Users should
find this system easy to use.

1.5.3 Time Frame

The time frame given to me and the other students who registered for this
subject in semester 6 is 17 weeks for FYP 1. In the next semester, we are
also given 17 weeks for FYP 2 to continue the progress made in FYP 1. In
FYP 1, I need to gather enough information related to my project to be ready
for the presentation at week 18, where I need to explain my project.

For FYP 2, I need to start developing and building my project idea following
the FYP 1 proposal. I will finish the project, the final report, the data analysis,
and all related work within the time frame. I will source the hardware needed
for this project at the beginning of semester 7. As time goes by, I will build the
hardware, develop the software and its electrical wiring, troubleshoot any
problems that occur, and complete the data analysis during semester 7. I will
ensure all preparations for the final presentation are completed before week
18, and I will present based on the outcomes of my project.

1.5.4 Approach

At first, I met with my supervisor, Mr. Mahadzir Bin Abd Ghani, who agreed to
guide me last semester (sixth semester) and this semester (seventh
semester) on the Final Year Project (FYP). I held a meeting with him to
discuss the FYP idea. We arrived at the title, Development of Online
Handwriting Recognition System By Using STM32 Microcontroller, from the
ideas that resulted from our brainstorming, and I explained the agreed project
to him in rough terms. Week after week, I meet with my supervisor to show
progress in designing parts of the project's structure and to gather any new
information about the project.

2. CHAPTER 2 - LITERATURE REVIEW

2.1 Introduction

The literature review acts as guidance and contains supporting information
for this research. It includes references containing facts that can help improve
understanding of the overall research aim and purpose. As the scope of this
research may bring something new to applied engineering knowledge,
sufficient information and explanation are required to give a much better
understanding of the handwriting recognition system. Developing such a
system requires both software and hardware implementation. A good deal of
information is considered and provided in this chapter, and all the journal
papers reviewed here are organized by sub-topics related to this project.

2.2 Handwriting Recognition

(Surya & Afseena, 2015) described OCR as the computer-based recognition
of handwritten and typed material. In the OCR approach, a digital camera or
a scanner is utilized to capture various forms of documents, such as paper,
PDF or character imagery, and translate them into machine-editable forms
like ASCII code. Text-type Optical Character Recognition (OCR) covers two
types: Handwriting Character Recognition (HCR), the recognition of legible
handwritten source material such as paper documents, and Printed Character
Recognition (PCR), the recognition of printed documents. The fundamental
reason for the great difficulty of HCR is the diversity of handwriting styles
among people; even for the same person, the writing style and format
frequently differ. The author also describes the roughly split processes of
OCR systems, which are pre-processing, segmentation, feature extraction,
and classification. Pre-processing is a crucial stage in character identification,
including:

 Noise removal – removal of intensity values in the image that carry no
meaning in the output (the undesirable noise).
 Binarization – the process of converting a colour or grayscale picture to
a bi-level picture; the approaches used to binarize a picture are local and
global thresholding (see the sketch after this list).
 Skeletonization – a morphological procedure that thins the image to a
one-pixel-wide skeleton without compromising its connectivity.
 Normalization – processing the image into some standard form.
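As a small illustration of the binarization step, here is a minimal global-thresholding
sketch in Python with NumPy; the cited work does not prescribe
an implementation, and the image mean is used here as the simplest possible
global threshold.

    import numpy as np

    def binarize(gray):
        # Global thresholding: pixels brighter than the threshold map to 1,
        # the rest to 0. Invert afterwards if the ink is darker than the paper.
        threshold = gray.mean()  # simplest choice; Otsu's method is a common refinement
        return (gray > threshold).astype(np.uint8)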

Segmentation is the process through which the input image is divided into
characters. This includes segmentation of lines, segmentation of words and
segmentation of characters. Next, the main phase of character recognition is
feature extraction, where the most relevant character features are extracted;
the accuracy of recognition depends mainly on the features collected. Lastly,
classification is the decision-making step of character recognition.

Figure 2-1: Classification of Character Recognition (Kaur & Rani, 2017)

(Kaur & Rani, 2017) described character recognition as a method that
connects elements in an image with a symbolic context (letters, symbols, and
numbers); i.e., character recognition methods associate a symbolic identity
with the character's image. In particular, the character recognition system
uses raw data in the pre-processing phase, as in every recognition system.
On the basis of the method of data collection, character recognition systems
may be separated into online recognition and offline recognition. Offline
handwriting recognition is a technique in which words scanned from a surface
(for example, a sheet of paper) are digitally processed in a grayscale format;
further processing after saving is generally done so that higher recognition is
achieved. On the other hand, the handwritten characters identified by an
online handwriting system can be stored and digitally processed by several
methods. Commonly, a special device is used in tandem with electronic
paper: the coordinates of successive points are recorded as a function of time
as the pen moves over the surface, and their order is kept.
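To make the online data representation concrete, the sketch below shows
one minimal way to store such time-ordered pen samples; the types and
names are hypothetical, not taken from the cited paper.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Stroke:
        # One pen-down..pen-up trace: ordered (x, y, t) samples from the digitizer.
        points: List[Tuple[int, int, float]] = field(default_factory=list)

    def on_pen_sample(stroke: Stroke, x: int, y: int, t: float) -> None:
        # Called for every digitizer sample while the pen touches the surface;
        # appending preserves the temporal order that online recognition relies on.
        stroke.points.append((x, y, t))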

Figure 2-2: Accuracy Of Various Character Recognition Methods (Lamba et al., 2019)

(Lamba et al., 2019) described that for many years, a variety of handwritten
papers, forms and manuscripts have been maintained by numerous
organizations. The annotations in these materials may become distorted over
the years and no longer usable. Because of this, if these materials can be
converted to a digital format, they can be preserved, making the management
of those files more convenient and safer. An OCR's main task is to detect and
recognize written text efficiently. Handwriting character recognition systems
are divided into two groups, online and offline, by the handwriting features
they recognize. An online character recognition system carries out a
representation of consistent points in 2-dimensional coordinates, produced
as a function of time and of the order of the strokes made by the writers. The
offline handwriting technique essentially translates text into an image, and
then into letter codes usable by text and computer applications; the retrieved
information is a static representation of the written material. It is more
challenging to recognize offline than online handwriting because people have
distinct kinds of handwriting. Deep learning applications that improve
character accuracy and might replace traditional HCR are currently being
implemented with neural networks. By using diverse extraction methods, the
accuracy of recognition can be improved; a larger collection of data also tends
to improve the accuracy of the results.

2.3 Offline Handwriting Recognition

Figure 2-3: Offline handwriting recognition processes. (Tappert & Charles, 2007)

(Tappert & Charles, 2007) noted that offline recognition is advantageous
since it can be done at any time after the document is written, even years
later. The downside is that it does not happen in real time as the individual
writes; hence, it is not suitable for direct text input. There are several offline
applications for handwriting recognition: mail address interpretation, bank
check processing, and official forms. OCR also plays an essential part in
digital libraries, enabling the digitization, image reconstruction and
recognition methods needed to enter textual image information into
computers. Offline handwriting recognition consists of four processes:
acquisition, segmentation, recognition and post-processing. First, scanners
or cameras are used to digitize the handwriting to be recognized. Secondly,
the document's picture is divided into lines, words and individual characters.
Thirdly, each character is recognized with OCR methods. Finally, lexicons
or orthography checkers are used to fix errors.
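As an illustration of the segmentation step (dividing the page image into
lines), here is a minimal horizontal-projection sketch in Python; this is a
generic textbook technique, not the specific method of the cited work.

    import numpy as np

    def segment_lines(page):
        # Split a binary page image (ink = 1) into text-line slices using the
        # horizontal projection profile: rows containing no ink separate lines.
        profile = page.sum(axis=1)                 # amount of ink in each row
        lines, start = [], None
        for row, ink in enumerate(profile):
            if ink > 0 and start is None:          # entering a text line
                start = row
            elif ink == 0 and start is not None:   # leaving a text line
                lines.append(page[start:row])
                start = None
        if start is not None:                      # a line touching the bottom edge
            lines.append(page[start:])
        return lines

The same profile idea, applied column-wise inside a single line, yields word
and character segmentation.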

Figure 2-4: Block diagram of training part of ANN (Nain & Panwar, 2016)
Figure 2-5: Block diagram of testing part of ANN (Nain & Panwar, 2016)

(Nain & Panwar, 2016) proposed the development of a system that acquires
handwritten English characters as input, processes the entries, extracts the
best features, trains a neural network with either Resilient Back-Propagation
or the Scaled Conjugate Gradient, then recognizes the class of the input text
and finally produces computerized text output. Resilient back-propagation is
a supervised learning algorithm for feed-forward Artificial Neural Networks
(ANN), proposed in 1992. It considers only the sign of the partial derivative,
regardless of magnitude, as stated in the Manhattan update learning rule, and
operates independently for each weight. The other supervised learning
algorithm is the Scaled Conjugate Gradient (SCG). It is built on a directed
search method based on the concept of a descent gradient algorithm. The
benefit of SCG is that it does not need a user-defined ANN parameter during
training; because the step size is adjusted automatically at each epoch, it
results in faster convergence and better accuracy. This method is split into
two main sections: ANN training on an image library and ANN testing with
test images. The training portion of the proposed work includes development
of the data set, pre-processing of the data set, feature extraction from the
pre-processed data set and generation of the ANN vector.
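For intuition about the resilient back-propagation rule described above, here
is a minimal per-weight update sketch (a simplified Rprop variant; NumPy
assumed, and the constants are commonly cited defaults rather than values
from the paper).

    import numpy as np

    def rprop_step(w, grad, prev_grad, step,
                   eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
        # Only the SIGN of the gradient is used, as in the Manhattan update rule.
        # Each weight keeps its own step size: grown while the gradient sign
        # repeats, shrunk when it flips (overshoot of a minimum).
        same = grad * prev_grad
        step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
        w = w - np.sign(grad) * step   # move each weight against its gradient sign
        return w, step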

Figure 2-6: An Arabic word with different diacritics (Iqbal & Zafar, 2019)
Figure 2-7: Phases of Quran text recognition System (Iqbal & Zafar, 2019)

(Iqbal & Zafar, 2019) described that the close resemblance of many distinct
letters makes it challenging to recognize Quranic handwriting. Many inquiries
in the field of offline Qur'an handwriting recognition have been published in
recent years, although offline Qur'an handwriting remains a significant
obstacle owing to the different forms of printing, ligatures, duplication and
cursive writing of the Qur'an. The Mushaf is the written form of the Quran, and
the early copies of the Quran were written in the Kufic script, a standard
Arabic script of the oldest kind. Arabic became the language of the Holy
Quran when it was first revealed to the prophet Muhammad by Allah and
became the language of the Muslim sacred scripture; the Arabic alphabet is
familiar to Muslims all over the world, and every Muslim is bound to the
Qur'anic letters. The critical problems of Quranic manuscript identification can
be summarized as follows. First, the text of the Quran is cursively composed,
which makes segmentation difficult. Second, diacritics are inherent to Arabic,
and the diacritics applied lead to multiple interpretations of the same root
word. Third, the styling of the Qur'an is distinctive, and even same-style letters
can vary in scale. Next, owing to variation in the script, two different authors
write the same glyph differently; and when the same person writes differently,
two instances of the same glyph add further identification difficulty. Lastly, a
few factors cause Qur'anic images to have low quality: the manuscripts
degrade due to age and frequent use.

2.4 Online Handwriting Recognition

Figure 2-8: Block diagram of the trajectory recognition algorithm (Wang & Chuang, 2012)
Figure 2-9: Average recognition rates versus the feature dimensions of the PNN classifier by using LDA (Wang & Chuang, 2012)

(Wang & Chuang, 2012) suggested a pen-type portable device made up of a
microcontroller, an accelerometer, and a wireless transmission module. Via
the wireless module, the tri-axial accelerometer's acceleration signals are
transmitted to a computer. Users can use the digital pen to write at a constant
speed, and the acquired acceleration signals of these motions are recognized
by a trajectory recognition algorithm. This process consists of acceleration
acquisition, signal pre-processing, feature generation, feature selection, and
feature extraction. The signal pre-processing stage involves calibration, a
moving average filter, a high-pass filter, and normalization. First, the
acceleration is calibrated to eliminate flaws in the raw signals; the two filters
are then used to remove gravitational acceleration and high-frequency noise
from the raw data. In addition, the purpose of the feature selection and feature
extraction methods is not only to ease the computational burden but also to
improve classification accuracy; the reduced features are used as classifier
inputs. A Probabilistic Neural Network (PNN) was adopted as the classifier
for handwritten digit and hand gesture recognition in this system.
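A minimal sketch of the two pre-processing filters mentioned above follows:
a moving average for smoothing, and a crude high-pass built by subtracting
a long moving average to strip the slowly varying gravity component. The
window lengths are illustrative guesses, not the paper's values.

    import numpy as np

    def moving_average(signal, window=8):
        # Smooth a 1-D acceleration signal with a simple moving-average filter.
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="same")

    def remove_gravity(signal, window=64):
        # Crude high-pass: the long moving average approximates the slowly
        # varying gravity component, so subtracting it keeps only the motion.
        return signal - moving_average(signal, window)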

Figure 2-10: Block Diagram of System (Priyanka et al., 2013)
Figure 2-11: Flow Chart of System (Priyanka et al., 2013)

(Priyanka et al., 2013) proposed an online handwriting recognition method for
characters in which the identification is performed with a MEMS
accelerometer. This accelerometer has a 3-axis digital output and responds
to any minor deflection or rotation of the device. A Micro Electro Mechanical
System (MEMS) combines the sensing element on one chip and the signal
conditioning electronics on another as a multi-chip solution; the MEMS
process defines both the form and the type of device or sensor. MEMS motion
sensors, such as accelerometers, gyroscopes and magnetometers, are
commonly used in the automotive industry. The system uses the
accelerometer's motion-axis values to track slight differences in handwriting
tilt and feeds them to the microcontroller, an Arduino UNO board. The
microcontroller sends the tilt angle values to Visual Basic or an LCD monitor
for display. First, a simple character's tilt values are stored and processed;
the system then periodically compares them with the original values and
shows any matching reference in Visual Basic. There are advanced
innovations in password authentication protection frameworks: using
handwriting in the recognition process frees the user from a keyboard and
specific codes, and the use of a simple sensor makes the validation faster
and more effective.
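For reference, tilt can be estimated from a static 3-axis accelerometer reading
using gravity as the reference vector; the sketch below uses the standard
pitch/roll formulas, not code from the cited paper.

    import math

    def tilt_angles(ax, ay, az):
        # Pitch and roll (in degrees) from one static accelerometer sample;
        # gravity supplies the reference direction.
        pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
        roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
        return pitch, roll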

Figure 2-12: Basic Block Diagram of the System (Syafeeza et al., 2017)
Figure 2-13: Flowchart Of The Whole System (Syafeeza et al., 2017)
Figure 2-14: Handwritten Input (colored in red on the left) XOR with Preset Character ‘B’ (in blue in the middle), resulting in the overlap pixels (colored in black on the right) (Syafeeza et al., 2017)

(Syafeeza et al., 2017) proposed an online recognition method using a
microcontroller for the handwriting recognition system. The method of
handwriting character recognition used here is an XOR bitwise operation
based on a pixel approach: the characteristics of the character are found and
the handwritten input is then compared, pixel by pixel, against a
template/preset character. The XOR bitwise operation uses XOR logic to
equate the handwritten input, pixel by pixel, to the preset letter or prototype.
In the project, an Arduino Mega processes the recognition while the touch
screen and the LCD display the input and output. While the recognition
system faces many obstacles, the achieved recognition rate is 80.0%.
However, there are occasions where the preset character in the code differs
from the way certain people write such characters. In addition, the Arduino
Mega has a small amount of memory, so bringing in a more significant
number of sample characters is not feasible. It is possible to increase the
scale to 10×10 pixels to further improve the recognition process: when more
pixels are used, the character becomes easier to recognize, which can
improve the recognition rate. An Artificial Neural Network (ANN) as the basis
of the recognition mechanism has proved to be very effective.
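A minimal sketch of this XOR pixel-comparison idea is shown below
(illustrative helper names; the paper's Arduino code is not reproduced here).
XOR marks the disagreeing pixels, so fewer set bits mean a closer match to
the preset character.

    import numpy as np

    def xor_score(input_px, template_px):
        # Compare a binary handwritten input against a binary preset character:
        # XOR flags disagreeing pixels; return the fraction of agreeing pixels.
        diff = np.bitwise_xor(input_px, template_px)
        return 1.0 - diff.sum() / diff.size

    def recognize(input_px, templates):
        # templates: dict mapping each preset character to its binary pixel grid
        return max(templates, key=lambda ch: xor_score(input_px, templates[ch]))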

Figure 2-15: Sample Images of the dataset that has been created using a touchscreen (Debnath et al., 2019)
Figure 2-16: Accuracy of testing new inputs (Debnath et al., 2019)
Figure 2-17: Flow Chart of the working process of the system (Debnath et al., 2019)

(Debnath et al., 2019) suggested a system that converts handwritten inputs
into an electronic document instantly. An Arduino microcontroller and a TFT
touchscreen form the front end of the system, and the conversion is
completed with the assistance of MATLAB. The handwritten inputs on the
touchscreen are saved as images, which are later converted into letters by a
handwriting recognition algorithm. The acquired input is extracted once serial
communication between the PC and the Arduino is established: the screen
position is specified in (x, y) coordinates, and these coordinates are read by
the MATLAB code and transformed into an image of the letters inscribed on
the screen. The acquired images are refined into a clear shape prior to letter
and digit recognition, and are then adapted to the size of the images used
during training. An Artificial Neural Network (ANN) method is then utilized to
identify the letters. Extensive datasets based on the ImageNet classification
task have been used to train multiple models such as ResNet, AlexNet and
VGGNet; for this purpose, a pre-trained AlexNet model is employed. MATLAB
has a support package for the AlexNet network that may be utilized to tackle
various issues; the model is trained on over one million images and can
classify images into 1000 object categories.
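The capture step can be illustrated with a minimal Python sketch using
pyserial; the port name, baud rate and the "x,y" line protocol here are
hypothetical stand-ins for the paper's Arduino-to-MATLAB link.

    import numpy as np
    import serial  # pyserial

    def capture_to_image(port="COM3", width=240, height=320):
        # Read "x,y" coordinate lines streamed over serial and rasterize them
        # into a binary image of the written strokes.
        img = np.zeros((height, width), dtype=np.uint8)
        with serial.Serial(port, 9600, timeout=5) as link:
            while True:
                line = link.readline().decode(errors="ignore").strip()
                if not line:            # read timed out: assume writing ended
                    break
                try:
                    x, y = map(int, line.split(","))
                except ValueError:
                    continue            # skip malformed lines
                if 0 <= x < width and 0 <= y < height:
                    img[y, x] = 1       # mark the touched pixel
        return img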

Figure 2-18: The whole process of character recognition (Soselia et al., 2019)
Figure 2-19: Correlation of accuracy on the number of classes for a fixed number of training samples for each class (Soselia et al., 2019)

(Soselia et al., 2019) describe how recent progress in the field of deep
learning has helped the rapid identification of handwritten digits and
characters. In offline recognition, manuscript and general handwritten
character datasets such as EMNIST, together with deep convolutional neural
network designs, have helped achieve fresh state-of-the-art results. On the
other hand, online recognition of handwriting using input from a pointing
device is still generally carried out with standard statistical learning
approaches and shallow neural network designs. To obtain more precision
without depending on particular writing surfaces or motions, this project uses
RNNs. The system consists of a three-axis accelerometer/gyroscope
(MPU-6050), a pushbutton, an STM32-F1 microcontroller and a CSR-BC417
Bluetooth transmitter. A dataset of Latin and Georgian characters was
adopted: the collection includes 1500 gyroscope and accelerometer
sequences for chosen characters from the Latin and Georgian alphabets. The
RNN is helpful for finding the distinct patterns of a character in the different
handwriting of several people. This solution can drastically increase
productivity and cut costs.
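A minimal Keras sketch of an RNN classifier over inertial sequences of the
kind described follows; the sequence length, layer width and class count are
illustrative assumptions, not the paper's configuration.

    from tensorflow.keras import layers, models

    def build_rnn(seq_len=200, num_classes=33):
        # LSTM classifier over fixed-length inertial sequences: seq_len time
        # steps of 6 channels (3-axis accelerometer + 3-axis gyroscope).
        model = models.Sequential([
            layers.Input(shape=(seq_len, 6)),
            layers.LSTM(64),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model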

2.5 Dataset for Recognition

Figure 2-20: Sample of Unprocessed Image & Output of Pre-processing (Liwicki & Bunke, 2005)
Figure 2-21: Sample Of Digits (Liwicki & Bunke, 2005)

(Liwicki & Bunke, 2005) proposed a system able to identify handwritten
numbers and letters from the English Chars74k dataset. The images may be
classified into 62 classes ([0-9], [A-Z], and [a-z]), with 55 instances in each
class. Each image consists of an alphanumeric character printed in black on
a white backdrop. Each image contains 900 × 1200 pixels; however, the
images differ in alignment and location, and both within and across classes
the thickness and size of the characters vary greatly. The authors also state
that many early handwriting approaches used the Support Vector Machine
(SVM), the Artificial Neural Network (ANN), and Nearest Neighbour (NN)
classification. Neural networks have recently become popular; moreover,
deep but simple Multi-Layer Perceptrons (MLPs) have been used to reach
high accuracy on the MNIST digit dataset (0.35 per cent error). A further
classification strategy involves combining more than one such method, where
one is used to enhance the accuracy of the other.

Figure 2-22: Interface of the recording software
(Nagy et al., 2012)

(Nagy et al., 2012) described the design of the database called IAM-OnDB,
which is inspired by the IAM-Database. Although the IAM-Database is an
offline database, the IAM-OnDB consists of data collected online from a
whiteboard. The Lancaster-Oslo/Bergen Corpus (LOB) is a large electronic
text corpus comprising all the texts contained in the IAM-OnDB. There are
500 texts in the LOB Corpus, each consisting of around 2000 words; they
range over 15 categories, from press and popular literature to scholarly and
scientific publications. To build a database of handwritten phrases, the corpus
texts were divided into fragments of around 50 words each. These pieces
were copied onto paper, and each writer was asked to write eight forms of
the text on a whiteboard. In addition, the eBeam2 interface was used to record
a user's handwriting: it allows the writer to write on a whiteboard with a
conventional pen housed in a special case, which sends infrared impulses to
a triangular receiver installed at a corner of the whiteboard. The acquisition
interface provides a sequence of (x, y) coordinates reflecting the position of
the pen tip, with a time stamp at each position. The handwriting recognition
system was then trained on this database with the Hidden Markov Model
(HMM) as the classifier. Tests were conducted with two types of dictionaries:
a small dictionary (2.3K words) and a large dictionary (11K words). The large
dictionary has a larger effect on the accuracy rate since it contains more
words: the performance of the system trained on the database increases by
5.4% with the 2.3K dictionary and by 8.3% with the 11K dictionary.

Figure 2-23: Typical characteristics of OCR on scanned documents and natural
image text recognition (Sharang, 2013)

(Sharang, 2013) proposed a dataset (NEOCR) comprising images enhanced
with extra metadata. The dataset consists of real-world images, so a number
of sub-datasets may be derived from it that address shortcomings in
natural-image OCR techniques. The author also states that the main benefits
of the proposed dataset compared to other related datasets are the
annotation of all visible text in an image and extra distortion quadrangles for
an accurate depiction of text regions. In addition, it has rich metadata,
enabling the simple construction of sub-datasets with additional features to
further identify deficiencies in OCR techniques. The dataset contains 659
images with a total of 5238 bounding boxes (text fields). The NEOCR dataset
may be expanded continually thanks to the easy web interface of LabelMe.
An XML file provides annotations for each image, detailing the global image
features and each text box with its specific features separately; the LabelMe
XML format has been extended with tags providing more metadata, namely
Global Image Metadata and Textfield Metadata. In conclusion, the dataset is
enhanced with extra information such as rotations, occlusions and inversions
as well as bounding box annotations, and the distortion quadrangles are
noted for a precise portrayal of the ground truth. Thanks to its extensive
annotation, several subsets of the NEOCR data may be produced for testing
methodologies in different contexts.

Figure 2-24: Samples of all letters and digits in the EMNIST dataset.
(Baldominos et al., 2019)

(Baldominos et al., 2019) surveyed a dataset that comprises both handwritten
digits and handwritten letters, making it more challenging than the MNIST
dataset. Many systems have been tested on MNIST and report test error rates
below 1%. The fact that several of these studies approach a 0.2% error rate
suggests that the remaining error for this problem may be irreducible. The
Extended MNIST database (EMNIST) was therefore established in April 2017.
It consists of both handwritten digits and letters and shares the same structure
as the MNIST database. EMNIST is based on NIST Special Database 19,
which includes the whole range of handwritten forms, comprising more than
800,000 manually verified and labelled characters from nearly 3700 writers
who completed a form. The EMNIST authors deliberately made the database
structurally consistent with MNIST and applied a comparable conversion
procedure. First, the original images are stored as 128 x 128 black-and-white
images in NIST SD 19. Second, a Gaussian blur is applied to soften the edges.
Third, empty padding is removed to reduce the image to its region of interest.
Next, the character is centred in a square box that preserves its aspect ratio.
After that, a 2-pixel blank border is added on each side. Lastly, the image is
down-sampled to 28 x 28 pixels using bi-cubic interpolation. Although far fewer
experiments have been run on EMNIST than on MNIST, because EMNIST is
newer, some of them already achieve very competitive results. Notably, a
combination of Markov random fields and convolutional neural networks
attains the highest accuracy, 95.34%, on the Letters split, and the best
accuracy on the Digits split is also attained using convolutional neural
networks.
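
To make the conversion procedure concrete, the steps above can be sketched
in a few lines of Python. This is a minimal illustration only: it assumes NumPy
and OpenCV are available, and the blur kernel size and sigma are illustrative,
not the exact values used by the EMNIST authors.

import cv2  # assumption: OpenCV is available; any image library would do
import numpy as np

def mnistify(img128):
    # Convert a 128x128 black-and-white NIST-style character image
    # into a 28x28 MNIST-style image, following the steps described above.
    img = cv2.GaussianBlur(img128.astype(np.float32), (5, 5), 1.0)   # soften edges
    ys, xs = np.nonzero(img > 0)                                     # region of interest
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]          # crop empty padding
    h, w = img.shape
    side = max(h, w)                                                 # square box keeps aspect ratio
    box = np.zeros((side, side), np.float32)
    box[(side - h) // 2:(side - h) // 2 + h,
        (side - w) // 2:(side - w) // 2 + w] = img                   # centre the character
    box = np.pad(box, 2)                                             # 2-pixel blank border
    return cv2.resize(box, (28, 28), interpolation=cv2.INTER_CUBIC)  # bi-cubic down-sampling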

2.6 Analysis On Literature Review

Based on all the papers studied in this literature review chapter, we know that
handwriting recognition systems are divided into two categories, offline
handwriting recognition and online handwriting recognition, and that various
methods can be used within each category. Regardless of the method chosen,
both kinds of system go through four phases during recognition: pre-
processing, segmentation, feature extraction, and classification. However,
offline recognition systems are generally less accurate than online recognition
systems. In line with this project's title, "Development of Online Handwriting
Recognition System Using STM32 Microcontroller", many online recognition
methods could be used. The preferred method here is a Convolutional Neural
Network (CNN) trained on the EMNIST (Extended Modified National Institute
of Standards and Technology) dataset. This method is preferred because,
based on previous studies, neural network training with an image dataset
achieves high recognition accuracy, and it suits the electronic hardware used
in this project: the STM32 microcontroller (L496G-Disco) and the TFT Touch-
Shield screen work well with this method. The challenge of this project is
therefore to raise the recognition accuracy while making some improvements.
In conclusion, the papers studied and analysed in this literature review chapter
will inform the methodology chapter.

3. CHAPTER 3 - METHODOLOGY

3.1 Introduction

Chapter 3 presents a complete description of the implementation methodology
from start to finish. Following the research and literature review, many previous
projects relate to this one in terms of handwriting recognition. Although they
are related, several factors distinguish this project from the previous ones.
There are several steps to be followed in the handwriting recognition process.

The handwriting recognition system is an electronic system that can read
handwritten input from paper, photographs, or touch screens and convert it
into digital text. Because of the benefits of digital form, such systems are often
used to convert paper documents into digital documents to save space and
cost. In practice, they can also reduce miscommunication at work.

This section also explains why the chosen hardware was selected and why the
specific methods are used in this project. The methodology must be
established before the product can be tested, so the chronology of the project
up to its completion should be clear.

3.2 Project Planning Flowchart

Figure 3-1: Project Planning Flowchart

Figure 3-1 shows the project planning flowchart, an overview of the
development of this online handwriting recognition system. It starts with setting
the objectives and scope, followed by the analysis of the literature review,
before proceeding to the selection of hardware. Once the hardware required
for the system has been finalized, the process continues with the development
of handwriting acquisition, followed by the development of feature extraction
and classification, processes that rely on several software features. Upon
completion of the prototype, the system is tested, the results are analysed,
and finally all the processes are documented in the report.

3.3 Method of Handwriting Character Recognition

3.3.1 Convolutional Neural Network (CNN)

Figure 3-2: Handwriting Recognition Methods

(Rosyda & Purboyo, 2018) identify eight handwriting recognition methods:
Convolutional Neural Network (CNN), Semi-Incremental Segment,
Incremental, Lines and Words, Part, Slope and Slant Correction, Ensemble,
and Zoning. All eight methods can be used for handwriting recognition, but
CNN is the most commonly used among them for this purpose.

In nearly every situation, the accuracy of a CNN is high, and a procedure that
includes a training stage with more samples performs better at handwriting
recognition. The CNN procedure was chosen for this research because,
although CNN data training takes a long time, recognition quality remains
good: more CNN training produces more accurate handwriting recognition.
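
As a concrete illustration of the method, a compact CNN for 28 x 28 character
images can be defined in a few lines of Keras. This is only a sketch of the kind
of network used for such a task; the exact architecture trained in this project
is not reproduced here.

from tensorflow.keras import layers, models

def build_cnn(num_classes):
    # A small convolutional network for 28x28 grayscale character images.
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])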

3.3.2 EMNIST dataset

The EMNIST dataset contains handwritten characters and digits derived from
the NIST Database and converted to the pixel-image format and dataset
structure of the MNIST dataset. It provides six distinct splits, summarized
below:

Table 3-1: EMNIST dataset

The By_Class and By_Merge splits contain the complete NIST Database
(Cohen et al., 2017). The Balanced split has an equal number of characters
per class. The EMNIST Letters split merges uppercase and lowercase letters
into a single balanced 26-class task. The EMNIST Digits and EMNIST MNIST
splits are balanced and directly compatible with the original MNIST dataset.
This project uses EMNIST Letters and EMNIST Digits to identify the 26
uppercase letters and the ten digits. Example training images from this dataset
are in Appendix A.
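
For illustration, both splits can be loaded in a Python environment such as
Google Colaboratory. This is a sketch assuming the tensorflow-datasets
package; any EMNIST loader would serve equally well.

import tensorflow_datasets as tfds

# Load the two EMNIST splits used in this project.
letters = tfds.load("emnist/letters", split="train", as_supervised=True)
digits = tfds.load("emnist/digits", split="train", as_supervised=True)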

3.4 Methodology Structure

3.4.1 Training Of Convolutional Neural Network Model

The dataset used for training our network is EMNIST, and the model is a
simple Convolutional Neural Network (CNN). Google's Colaboratory notebook
is used to train and evaluate the NN model. Four steps are taken in this
section:

Figure 3-3: Flowchart of the training of the CNN model
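
While the steps themselves are shown in Figure 3-3, a representative Colab
training cell is sketched below. It assumes the build_cnn model and the
letters dataset from the previous sections; the batch size, epoch count, and
label offset are illustrative and depend on the loader used.

import tensorflow as tf

def preprocess(image, label):
    # Scale pixels to [0, 1]; EMNIST Letters labels start at 1, so shift to 0-based.
    return tf.cast(image, tf.float32) / 255.0, label - 1

train = letters.map(preprocess).shuffle(10_000).batch(128)

model = build_cnn(num_classes=26)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=10)  # epoch count is illustrative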

Figure 3-4: The Confusion matrix of the trained CNN model with the EMNIST
dataset

A confusion matrix is a table used to evaluate a classification model by
comparing its outputs on a test dataset with the actual values. Figure 3-4
displays the confusion matrix of the CNN model trained on the EMNIST
dataset. This confusion matrix was created in the Google Colaboratory
notebook to verify how accurately the model classifies the dataset before it is
applied to the handwriting recognition task.
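
A confusion matrix like the one in Figure 3-4 can be produced with a few lines
of scikit-learn. This is a sketch only; x_test and y_test stand for the held-out
EMNIST images and labels, and model is the trained network from the
previous section.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Predict classes for the test set and compare them with the true labels.
y_pred = np.argmax(model.predict(x_test), axis=1)
cm = confusion_matrix(y_test, y_pred)
ConfusionMatrixDisplay(cm).plot()
plt.show()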

3.4.2 Deployment of Neural Network

During neural network training, we used a deep learning library running on the
desktop or in the cloud. To translate the trained NN into STM32 code, we
leverage a tool made by ST: STM32CubeMX with the Cube.AI extension. The
deployment steps using STM32CubeMX are as follows:

Figure 3-5: Deployment Of Neural Network on STM32 L496G microcontroller
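
On the training side, the only artefact the Cube.AI extension needs is the
saved model file. A trained Keras model can be exported as follows (a sketch;
the filename is arbitrary):

# Save the trained network; STM32CubeMX with the X-CUBE-AI extension
# can import a Keras .h5 file and generate the equivalent C code.
model.save("emnist_cnn.h5")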

3.5 Software Features

3.5.1 Arduino IDE

Arduino IDE is an open-source tool used to write and build code for Arduino
modules. The compiled code produces a hex file that is then transferred and
uploaded to the controller on the board. Here, the Arduino Mega serves as the
interface that powers up the STM32 L496G.

3.5.2 STM32CUBE MX

STM32CubeMX is a graphical tool for easily configuring STM32
microcontrollers and generating the corresponding C initialization code. It was
used to initialize all the microcontroller peripherals and to generate the C code
for this project.

3.5.3 SolidWorks

SolidWorks is a solid-modelling computer-aided design and computer-aided
engineering application that runs on Microsoft Windows. In this project, it was
used to design the hard casing that supports the other hardware.

3.5.4 Microsoft Visio

Microsoft Visio is a diagramming application widely used to draw flow charts,
construction plans, floor plans, business process models, 3D maps, and more.
In this project, it was used to draw the wiring diagram and the flowcharts.

3.6 Hardware Features

3.6.1 Arduino ATMega 2560

Figure 3-6: Arduino ATMega 2560 (Microcontroller)

This project uses an Arduino Mega 2560 microcontroller board. It has 54
digital pins, 16 analog pins, and 4 UARTs (hardware serial ports), with an input
voltage of 7-12 V. The board is programmed to drive the TFT Touch-Shield
interface display, and it also powers up the STM32 L496G microcontroller.

3.6.2 Arduino TFT Touch-shield

Figure 3-7: Arduino TFT Touch-shield

The Arduino TFT Touch-Shield is a touchscreen display shield with an
integrated microSD card interface. It offers 240 x 320 pixels (2.4" diagonal)
with independent pixel control, and a touch layer over the display lets it detect
finger touches anywhere on the screen. In this project, it displays the interface
for operating the handwriting recognition system and acts as the key that
initializes the system.

3.6.3 STM32-L496G DISCOVERY

Figure 3-8: STM32-L496G DISCOVERY

The STM32L496G is a Cortex-M4 core-based microcontroller with 1 MB of
flash memory and 320 KB of RAM. The Discovery board carries a 1.54-inch,
240 x 240 pixel TFT touch colour LCD driven over a parallel interface. It also
has a reset push button and a 4-direction joystick with a selection press. This
STM32 board is Arduino-compatible, so an Arduino microcontroller can power
it up. It plays the central role in this handwriting recognition system: writers
write on its touch screen, and it recognizes the handwriting.

3.6.4 Hard Casing

Figure 3-9: Hard Casing

The hard casing was designed to hold all the hardware, the STM32-L496G
Discovery, the Arduino Mega 2560, and the TFT Touch-Shield, in place while
the system is operating. It was made by 3D printing from poly-lactic acid (PLA)
plastic, and its design was produced with the SolidWorks software.

Figure 3-10: Design Of Hard Casing

3.7 Development of Prototype

Figure 3-11: Basic Block Diagram of the System

In this project, the STM32-L496G is the microcontroller that processes the
recognition, while the input and output pass through TFT Touch-Shield LCD 2,
which is attached to it. The Arduino Mega 2560 powers up the system, and
TFT Touch-Shield LCD 1, attached to the Arduino, displays the interface for
operating the system. Figure 3-11 shows the basic block diagram of the
system. As the block diagram shows, the touch screen serves as the input:
every point touched by a finger or a stylus on the screen is saved in the
STM32, which then processes the saved points and recognizes the pattern.
The recognized letter or digit is then displayed on TFT Touch-Shield LCD 2.

3.7.1 Implementation On Hardware

Figure 3-12: Wiring diagram of the System

This section discusses the circuit connections made to establish
communication between the microcontroller and the input/output peripherals,
and how the hardware works. The components involved are the Arduino
Mega, the STM32, and two TFT Touch-Shield LCDs. Figure 3-12 shows the
wiring diagram of the system. Touch-Shield 1 has a resolution of 240 x 320,
while Touch-Shield 2 has a resolution of 240 x 240. TFT Touch-Shield 1 is
mounted on the Arduino Mega board, while TFT Touch-Shield 2 is connected
directly to the pins of the STM32.

3.7.2 Implementation On Software

The software for this system includes the drivers, or firmware, that allow the
hardware devices to communicate. Since STM32 has integrated these drivers
into its libraries, they do not need to be written from scratch: they are provided
through STM32CubeMX with the Cube.AI extension. The recognition algorithm
used here is a Convolutional Neural Network (CNN).

Figure 3-13: Flowchart Of The Whole System

Figure 3-13 shows the flowchart of the whole system. In the main program,
the system first initializes the hardware and then keeps reading the touch
screen for input. Whenever a touch is detected, the touched point is saved
and the timer interrupt is reset; this repeats until no further touch points are
read. The timer interrupt only fires when no handwriting has been detected for
a period predetermined in the code. When the timer fires, the system scales
the available input, applies the adaptation step, which modifies each pixel's
representation to the range set by the trained NN model, recognizes the
input, and displays the recognized value on LCD 2.
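
Before porting to C, the scaling and adaptation steps are easiest to reason
about as a host-side prototype. The Python sketch below is illustrative only:
the function name is hypothetical, the 28 x 28 target follows the EMNIST input
format, and the STM32 firmware implements the same idea in C.

import numpy as np

def to_nn_input(points, size=28):
    # Rasterize the saved (x, y) touch points into a size x size grid
    # and remap pixels to the range expected by the trained model.
    pts = np.asarray(points, dtype=np.float32)
    pts -= pts.min(axis=0)               # shift the strokes to the origin
    span = max(pts.max(), 1.0)           # one scale factor keeps the aspect ratio
    pts = pts / span * (size - 1)        # fit the strokes into the grid
    img = np.zeros((size, size), dtype=np.float32)
    for x, y in pts.astype(int):
        img[y, x] = 1.0                  # mark touched pixels
    return img[None, ..., None]          # add batch and channel dimensions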

Figure 3-14: Flowchart Of Software System

Based on the flowchart in Figure 3-14, the software consists of four main parts:
input, scaling, the NN output process, and recognition. The input touch screen
is the 1.54" TFT Touch-Shield LCD. It is compatible with the STM32-L496G
DISCOVERY board, and since it is a shield, no soldering is required beyond
mounting it on the board. Every point touched on the screen is saved into an
array in the STM32-L496G. The touch screen keeps capturing the written input
until it detects nothing for a short time; the system then scales the input, runs
the neural network, and recognizes the handwritten input. The STM32-L496G
uses the timer interrupt function to interrupt the code after it has received no
input for some time.

3.8 Design of Experiment

3.8.1 Procedure Of Operation

To complete this project, the online handwriting recognition system must be
tested to determine its ability to recognize a user's handwriting. As discussed
earlier, the system performs recognition on the STM32-L496G Discovery
microcontroller when it receives input from the TFT Touch-Shield, and
translates it into output on the display. The operating steps are as follows:

STEP 1: When power is supplied, the TFT Touch-Shield initializes and displays
the interface for operating the system.

STEP 2: When the user presses the start button on the Arduino touch-shield,
the STM32 L496G initializes; the system is now operating.

STEP 3: The user can choose to write a character or a digit by pressing the
blue joystick, and can reset the screen by pressing the black button, which is
the reset button.

STEP 4: When the user starts to write on the Touch-Shield of the STM32
L496G, it starts to collect the input by saving the user's touch points.

STEP 5: If the recognition is a success, the right corner of the screen shows
the weightage percentage from the CNN model and the time taken by the timer
interrupt for the recognition; otherwise, it shows a question mark, which means
the system was unable to recognize the handwriting.

3.8.2 Sample Test & Analysis

To measure the recognition accuracy of this method, a test will be conducted
using our handwriting recognition system. The test is similar to the previous
study by (Syafeeza et al., 2017), which used the same testing method. That
study was conducted at UTeM: data was collected from 15 volunteer students
and analysed for the probability of the system recognizing correctly. Each
student was asked to write all 26 uppercase letters and 10 digit characters on
the touch screen. The results were recorded in the tables below:

Table 3-2: The Result of Characters Recognition (Syafeeza et al., 2017)

Table 3-3: The Result of Numbers Recognition (Syafeeza et al., 2017)

                      Characters      Numbers       Total
                      Recognition     Recognition
Number of Attempts        390             150         540
Number of Successes       311             121         432

Table 3-4: Analysis of Recognition (Syafeeza et al., 2017)

From the table, we can deduce that:

Accuracy = (Total Number of Successful Recognitions / Total Number of Attempts) x 100%
         = (432 / 540) x 100%
         = 80% accuracy

For this project, the test will have three trials instead of a single trial for both
character and digit recognition, and will involve 10 volunteers. The purpose of
having three trials is to observe the recognition trend for both characters and
digits. The formula, which follows the previous study, will still be used to find
the average accuracy over all trials of the test.

CHARACTER RECOGNITION                  DIGIT RECOGNITION
Trials    Success    Accuracy          Trials    Success    Accuracy
First                                  First
Second                                 Second
Third                                  Third

Table 3-5: Characters Recognition for Each Trial
Table 3-6: Digits Recognition for Each Trial

Overall Accuracy = (Total Number of Successes for All Trials / Total Number of Attempts) x 100%   (1)
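
Equation (1) is simple enough to check with a short helper; as a usage
example, the previous study's figures reproduce the 80% result (a sketch
only):

def overall_accuracy(successes, attempts):
    # Equation (1): overall accuracy across all trials, in percent.
    return sum(successes) / attempts * 100

print(overall_accuracy([311, 121], 540))  # characters + numbers -> 80.0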

3.9 Project Estimate of Cost

Parts                        Quantity     Price
Arduino TFT Touch-Shield         1        RM37.00
Arduino ATMega                   1        RM49.99
STM32 L496G-Discovery            1        RM409.99
Custom-made Plastic Body         1        RM120.00
Wire Cable                       4        RM3.00
TOTAL                            8        RM619.98

Table 3-7: Project Estimation of Cost

3.10 Gantt Chart

The Gantt chart below was produced using Microsoft Project.

Figure 3-15: The Gantt chart of the FYP 1

The figure above shows the Gantt chart for the FYP1 project process. The
project begins with the first task, FYP1 registration, and ends with the report
and video submission. It took place during semester six, starting on Monday,
20th July 2020 and ending on 13th November 2020.

Figure 3-16: The Gantt chart of the FYP 2

The figure above shows the Gantt chart for the FYP2 project process. The
project begins with the first task, FYP2 registration, and ends with the report
and video submission. It took place during semester seven, starting on
Monday, 12th January 2021 and ending on 4th June 2021.

3.11 Project Milestone

Table 3-8: Project Milestone

4. CHAPTER 4: RESULT AND DISCUSSION

4.1 Introduction

This chapter presents the results obtained from the tests that were conducted.
The results and discussion include data analysis in the form of charts, graphs,
and tables, used mainly to compare the trials attempted by the volunteers.

4.2 Data Collection

The results discussed in this chapter consist of the recognition accuracy for
every character and digit written by 10 volunteers. The data was collected from
10 volunteer students and analysed for the recognition accuracy of the
product. Each student was asked to write all 26 uppercase letters and 10 digit
characters on the touch screen.

4.3 Character Recognition

The first test conducted was character recognition, in which the volunteers
were required to write the uppercase characters from A to Z; each character
written was recorded and analysed. The test comprised three trials per
volunteer to reveal the recognition trend.

4.3.1 First Trial of Character Recognition

Table 4-1: Character recognition for first Trial

Table 4-2: Sample character recognition for high accuracy vs low accuracy of first trial

Figure 4-1: Accuracy of recognition for each character for first trial (%)

Figure 4-2: Accuracy of recognition by each volunteer for first trial (%)

Table 4-1 shows the data collected from the first trial. Looking at the data
character by character, the system recognized characters 'F', 'K', 'M', 'N', 'P',
'S', 'T', 'U' and 'Y' with 100% accuracy. This is followed by 90% accuracy for
'A', 'L', 'W' and 'Z', and 80% for 'C', 'E', 'G' and 'R'. Characters 'J', 'Q' and 'X'
reached 70%, while 'B', 'O' and 'V' reached 60%. On the other hand, the
character with the least accuracy was 'D': it was correctly recognized by only
3 volunteers out of 10, only 30%, which is less than half. Characters 'H' and 'I'
also scored low, at only 50% and 40% respectively. The 'D' character was
frequently recognized as 'P' because some volunteers write 'D' in a distinctive
way, as can be seen in Table 4-2; this suggests the errors are caused mainly
by similarity of shape. Likewise, the system recognized the characters 'H' and
'I' written by volunteers as 'N' and 'T' respectively, due to the similar shapes of
these characters, leaving both below 50% accuracy. Figure 4-1 shows the
recognition accuracy for each character across the 10 volunteers, and Figure
4-2 shows the recognition accuracy achieved by each volunteer.

4.3.2 Second Trial of Character Recognition

Table 4-3: Characters recognition for second trial

Table 4-4: Sample characters recognition for high accuracy vs low accuracy of second trial

Figure 4-3: Accuracy of recognition for each character for second trial (%)

Figure 4-4: Accuracy of recognition by each volunteer for second trial (%)

Table 4-3 shows the data collected from the second trial. Looking at the data
character by character, the system recognized characters 'M', 'P', 'S', 'T', 'U'
and 'W' with 100% accuracy. This is followed by 90% accuracy for 'A', 'I', 'K',
'R', 'Y' and 'Z', and 80% for 'H', 'L', 'N', 'O', 'Q' and 'Z'. Characters 'C', 'E' and
'F' reached 70%, while 'B', 'G', 'J' and 'V' reached 60%. Meanwhile, the
character with the least accuracy was still 'D': this time it was correctly
recognized by only 4 volunteers out of 10, only 40%, which is still less than
half. Some characters were wrongly recognized, such as 'V' recognized as 'Y'
and 'J' recognized as 'T', as can be seen in Table 4-4; this is due to their similar
shapes. Figure 4-3 shows the recognition accuracy for each character across
the 10 volunteers, and Figure 4-4 shows the recognition accuracy achieved
by each volunteer.

4.3.3 Third Trial of Character Recognition

Table 4-5: Characters recognition for third trial

Table 4-6: Sample character recognition for high accuracy vs low accuracy of third trial

Figure 4-5: Accuracy of recognition for each character for third trial (%)

Figure 4-6: Accuracy of recognition by each volunteer for third trial (%)

Table 4-5 shows the data collected from the third trial. Looking at the data
character by character, the system recognized characters 'K', 'M', 'N', 'P', 'Q',
'R', 'S', 'T', 'Y' and 'Z' with 100% accuracy. This is followed by 90% accuracy
for 'A', 'C', 'E', 'J' and 'W', and 80% for 'B', 'H', 'L', 'U', 'V' and 'X'. Characters
'F', 'G' and 'Y' reached 70%. On the other hand, the character with the least
accuracy was still 'D', now joined by 'O'. This time 'D' was correctly recognized
by 5 volunteers, a 10% increase in accuracy over the second trial. Character
'O' was wrongly recognized as 'Q', as can be seen in Table 4-6, due to their
similar shapes. Figure 4-5 shows the recognition accuracy for each character
across the 10 volunteers, and Figure 4-6 shows the recognition accuracy
achieved by each volunteer.

4.4 Digit Recognition

The second test conducted was digit recognition, in which the volunteers were
required to write the digits from 0 to 9; each digit written was recorded and
analysed. The test comprised three trials per volunteer to reveal the
recognition trend.

4.4.1 First Trial of Digit Recognition

Table 4-7: Digits recognition for first trial

Table 4-8: Sample digits recognition for high accuracy vs low accuracy of first trial

Figure 4-7: Accuracy of recognition for each digit for first trial (%)

Figure 4-8: Accuracy of recognition by each volunteer for first trial (%)

Table 4-7 shows the data collected from the first trial of digit recognition.
Looking at the data digit by digit, the system recognized digits '0', '5', '7' and
'9' with 100% accuracy. This is followed by 90% accuracy for '1', '2', '3' and
'4', and 70% for '6', which was wrongly recognized as '0'. On the other hand,
the digit with the least accuracy was '8': it was correctly recognized by only 6
volunteers out of 10 because it was often recognized as '5', as can be seen in
Table 4-8; this is due to their similar characteristics. Figure 4-7 shows the
recognition accuracy for each digit across the 10 volunteers, and Figure 4-8
shows the recognition accuracy achieved by each volunteer.

4.4.2 Second Trial of Digit Recognition

Table 4-9: Digits recognition for second trial

Table 4-10: Sample digits recognition for high accuracy vs low accuracy of second trial

Figure 4-9: Accuracy of recognition for each digit for second trial (%)

Figure 4-10: Accuracy of recognition by each volunteer for second trial (%)

Table 4-9 shows the data collected from the second trial of digit recognition.
Looking at the data digit by digit, the system recognized digits '0', '5', '7' and
'9' with 100% accuracy. This is followed by 90% accuracy for '2' and '4', and
80% for '3' and '8'. The digit '1' had 70% accuracy and was often wrongly
recognized as '2'. On the other hand, the digit with the least accuracy was '6':
it was correctly recognized by only 6 volunteers out of 10 because it was
wrongly recognized as '0'; some volunteers' handwriting of '6' resembles '0',
as can be seen in Table 4-10. Figure 4-9 shows the recognition accuracy for
each digit across the 10 volunteers, and Figure 4-10 shows the recognition
accuracy achieved by each volunteer.

4.4.3 Third Trial of Digit Recognition

Table 4-11: Digits recognition for third trial

Table 4-12: Sample digits recognition for high accuracy vs low accuracy of third trial

Figure 4-11: Accuracy of recognition for each digit for third trial (%)

Figure 4-12: Accuracy of recognition by each volunteer for third trial (%)

Table 4-11 shows the data collected from the third trial of digit recognition.
Looking at the data digit by digit, the system recognized digits '0', '3', '5', '7'
and '9' with 100% accuracy. This is followed by 90% accuracy for '1', '4' and
'8', and 80% for '2', which was often wrongly recognized as '7'. On the other
hand, the digit with the least accuracy was '6': it was correctly recognized by
only 7 volunteers out of 10 because it was wrongly recognized as '9'. Even so,
the accuracy of digit '6' in this trial was the highest of all the trials. Table 4-12
shows how the digits were wrongly recognized. Figure 4-11 shows the
recognition accuracy for each digit across the 10 volunteers, and Figure 4-12
shows the recognition accuracy achieved by each volunteer.

4.5 Overall Accuracy of Recognition System

Figure 4-13: Accuracy Of Character Recognition For Each Trial (%)

Figure 4-14: Accuracy Of Digit Recognition For Each Trial (%)

Table 4-13: Characters Recognition for Each Trial
Table 4-14: Digits Recognition for Each Trial

Overall Accuracy = (Total Number of Successes for All Trials / Total Number of Attempts) x 100%   (1)
                 = ((89 + 87 + 92 + 209 + 211 + 224) / 1080) x 100%
                 = 84.44%

Figure 4-13 shows the total number of successfully recognized characters for
every trial of the test, from which the character recognition accuracy for each
trial can be derived. In the first trial, 209 of 260 characters were successfully
recognized, an accuracy of 80%, which is quite high for a first trial. In the
second trial, 211 of 260 characters were recognized, an accuracy of 81%,
slightly higher than the first. Finally, 224 of 260 characters were recognized in
the third trial, an accuracy of 86%, the highest of all the trials.

Figure 4-14 shows the total number of successfully recognized digits for every
trial, along with the digit recognition accuracy. In the first trial, 89 of 100 digits
were recognized, an accuracy of 89%. In the second trial, 87 of 100 digits
were recognized, an accuracy of 87%, a slight decrease from the first trial.
Lastly, 92 of 100 digits were recognized in the third trial, an accuracy of 92%,
the highest of all the trials. On the whole, the overall accuracy for both
character and digit recognition in this system is 84.44%, which is satisfactory.

4.6 Result Summary

Table 4-15: Result Comparison With previous Studies

Table 4-15 shows the recognition method used in this project alongside the
methods used in previous studies. (Syafeeza et al., 2017) used an XOR bit-
wise operation method with preset 8x8 pixel character and digit data as the
dataset, and the system achieved an accuracy of 80%. (Debnath et al., 2019)
used a CNN method with a MATLAB dataset; the method is similar to the one
used in this project, only the dataset differs, and that system achieved an
accuracy of 82%. (Soselia et al., 2019) used an RNN method with a Latin and
Georgian characters dataset, and the system achieved an accuracy of 88%.

5. CHAPTER 5 – CONCLUSION AND RECOMMENDATION

5.1 Conclusion

This project aimed to develop an online handwriting recognition system in
which the STM32 microcontroller plays the key role in the recognition process,
using a CNN model trained on the EMNIST dataset. Users will find the system
portable and usable in many fields, and it improves on the use of a hefty
keyboard system. The most challenging part of implementing this system is
achieving high recognition accuracy across many types of handwriting. The
success rate of any recognition system depends not only on the recognition
method but also on several other factors, such as the feature extraction
phase, the pre-processing stage, and the segmentation step. Because every
person writes characters differently, the preset character shapes in the code
will sometimes differ from the way other people write certain characters.

Despite the challenges, the experimental results prove that this Online
Handwriting Recognition System has been developed successfully. The
system was tested by 10 volunteer students over several trials. The results
collected consist of the recognition accuracy of each character and digit, along
with the recognition accuracy achieved by each volunteer. The accuracy of
characters and digits for every trial was also recorded:

 First trial: character and digit recognition reached 80% and 89% accuracy.
 Second trial: character and digit recognition reached 81% and 87% accuracy.
 Third trial: character and digit recognition reached 86% and 92% accuracy.

To conclude this project, the overall accuracy of the system was calculated,
and the system achieved 84.44% overall, which is satisfactory.

5.2 Future Recommendation

Although the main objective was accomplished by this project, there are a few
recommendations to enhance it in the future:

a. The system could be extended with a feature for saving the recognized
characters or digits into a text file.

b. The system could be strengthened with an advanced deep learning method
offering auto-correction features.

6. REFERENCES

1. Kaur, H., & Rani, S. (2017). Handwritten Gurumukhi Character


Recognition Using Convolution Neural Network. 13(5), 933–943.
2. Kacalak, W., & Majewski, M. (2012). Using Geometrical Character
Analysis. November, 248–255. https://doi.org/10.1007/978-3-642-34478-
7
3. Arica, N., & Yarman-Vural, F. T. (2001). An overview of character
recognition focused on off-line handwriting. IEEE Transactions on
Systems, Man and Cybernetics Part C: Applications and Reviews, 31(2),
216–233. https://doi.org/10.1109/5326.941845
4. Chakravarthy, A., Krishna Raja, P. V, & Avadhani. (2011). Handwritten
Text Image Authentication Using Back Propagation. International Journal
of Network Security & Its Applications, 3(5), 121–130.
https://doi.org/10.5121/ijnsa.2011.3510
5. S, S. N. R., & Afseena, S. (2015). Handwritten Character Recognition – A
Review. 5(3), 1–6.
6. Shawon, A., Jamil-Ur Rahman, M., Mahmud, F., & Arefin Zaman, M. M.
(2018). Bangla Handwritten Digit Recognition Using Deep CNN for Large
and Unbiased Dataset. 2018 International Conference on Bangla Speech
and Language Processing, ICBSLP 2018, September, 1–6.
https://doi.org/10.1109/ICBSLP.2018.8554900
7. Lamba, S., Gupta, S., & Soni, N. (2019). Handwriting Recognition
System- A Review. Proceedings - 2019 International Conference on
Computing, Communication, and Intelligent Systems, ICCCIS 2019,
2019-Janua, 46–50. https://doi.org/10.1109/ICCCIS48478.2019.8974547
8. Tappert, C. C., & Cha, S. H. (2007). English Language Handwriting
Recognition Interfaces. In Text Entry Systems. Elsevier Inc.
https://doi.org/10.1016/B978-012373591-1/50006-1
9. Nain, N., & Panwar, S. (2016). Handwritten Text Recognition System
Based on Neural Network. Journal of Computer and Information
Technology, 2(2), 95–103.
10. Iqbal, A., & Zafar, A. (2019). Offline Handwritten Quranic Text
Recognition: A Research Perspective. Proceedings - 2019 Amity
International Conference on Artificial Intelligence, AICAI 2019, 125–128.
https://doi.org/10.1109/AICAI.2019.8701404
11. Wang, J. S., & Chuang, F. C. (2012). An accelerometer-based digital pen
with a trajectory recognition algorithm for handwritten digit and gesture
recognition. IEEE Transactions on Industrial Electronics, 59(7), 2998–
3007. https://doi.org/10.1109/TIE.2011.2167895
12. Priyanka, M., Ramakrishanan, S., & Raajan, N. R. (2013). Electronic
Handwriting. 5(3), 2019–2023.
13. Syafeeza, A. R., Chew, A. N. B., Sadari, M. M. H., Kamaruddin, M. N., Li,
W. J., & Zauber, Z. M. (2017). Electronic online handwriting character

recognition system using arduino platform. Jurnal Teknologi, 79(7), 51–
59. https://doi.org/10.11113/jt.v79.10229
14. Debnath, B., Anika, A., Abrar, M. A., Chowdhury, T., Chakraborty, R.,
Khan, A. I., Anowarul Fattah, S., & Shahnaz, C. (2019). Automatic
Handwritten Words on Touchscreen to Text File Converter. IEEE Region
10 Annual International Conference, Proceedings/TENCON, 2018-
October, 219–223.
https://doi.org/10.1109/TENCON.2018.8650269
15. Soselia, D., Amashukeli, S., Koberidze, I., & Shugliashvili, L. (2019).
RNN-based online handwritten character recognition using accelerometer
and gyroscope data. ArXiv.
16. Liwicki, M., & Bunke, H. (2005). IAM-OnDB - An on-line English sentence
database acquired from handwritten text on a whiteboard. Proceedings of
the International Conference on Document Analysis and Recognition,
ICDAR, 2005(March), 956–961. https://doi.org/10.1109/ICDAR.2005.132
17. Nagy, R., Dicker, A., & Meyer-Wegener, K. (2012). NEOCR: A
configurable dataset for natural image text recognition. Lecture Notes in
Computer Science (Including Subseries Lecture Notes in Artificial
Intelligence and Lecture Notes in Bioinformatics), 7139 LNCS(February),
150–163. https://doi.org/10.1007/978-3-642-29364-1_12
18. Sharang, A. (2013). Machine Learning Techniques Mid Term Report.
10007.
19. Baldominos, A., Saez, Y., & Isasi, P. (2019). A survey of handwritten
character recognition with MNIST and EMNIST. Applied Sciences
(Switzerland), 9(15). https://doi.org/10.3390/app9153169
20. Rosyda, S. S., & Purboyo, T. W. (2018). A Review of Various Handwriting
Recognition Methods. International Journal of Applied Engineering
Research, 13(2), 1155–1164. http://www.ripublication.com
21. Cohen, G., Afshar, S., Tapson, J., & van Schaik, A. (2017). EMNIST: an
extension of MNIST to handwritten letters.
http://arxiv.org/abs/1702.05373

7. APPENDICES

APPENDIX A

Sample of EMNIST dataset

APPENDIX B

Sample of Character Recognition

Sample of Digit Recognition

APPENDIX C

Coding for Arduino Mega Microcontroller

APPENDIX D

Coding for STM32 L496G Microcontroller

