People's Democratic Republic of Algeria
Ministry of Higher Education and Scientific Research
Presented by:
HASSANI Rafia
First of all, I wish to express my sincere gratitude to my teacher and advisor, Mr. BOUMEHRAZ Mohamed, Professor at the University Mohammed Khider of Biskra. I am highly indebted to him and deeply appreciative of the opportunity, advice, support, and encouragement he gave me throughout the research period.
I would like to thank my jury members, namely Mr. BAHRI Mebarik, Professor at the University Mohammed Khider of Biskra, Mrs. TERKI Nadjiba, Professor at the University Mohammed Khider of Biskra, and Mr. HAFAIFA Ahmed, Professor at the University Ziane Achour of Djelfa. Their time and energy are gratefully acknowledged, and I very much appreciate their detailed reviews and insightful comments.
Moreover, I would especially like to thank all the Electrical Engineering department
staff, for their professional treatment of me as a colleague.
I would like to thank my family for their patience and encouragement during my studies.
I would like to thank all my colleagues and friends whose company and encouragement helped me a lot to work hard.
Last but not least, I thank ALLAH for giving me the strength during the course of this research work.
Dedication
To my Family,
to my dear husband
An intelligent wheelchair is a standard electric wheelchair to which a computer and a set of sensors have been added, giving a user who is unable to operate the conventional joystick the ability to control it without touching any physical device. In recent years, several research projects around the world have focused on developing intelligent wheelchair prototypes, but adapting their user interface to the patient's abilities is often a neglected research topic. The majority of interfaces are adapted to a single user or to a specific group of users.
An intelligent wheelchair is an electric wheelchair to which a computer and a set of sensors have been added, allowing a user who cannot operate the standard joystick to control the wheelchair without touching any device. Many research projects around the world have focused in recent years on developing intelligent wheelchair prototypes, but adapting the user interface of these wheelchairs to the patient's abilities is often a neglected research topic; most of these interfaces are adapted to a single user or to a specific group of users.
This thesis presents the design and implementation of a multimodal control system for an intelligent wheelchair to assist people with different levels of disability. The system consists of an electric wheelchair equipped with the necessary electronics (microcontrollers, sensors, a laptop, a microphone, and a camera) to ensure safe mobility and ease of use.
To control the wheelchair movement, the user first selects one of the available interface control methods according to his or her abilities (manual, vision based, sensor based, or speech based). Then, according to the command to be executed (forward, backward, right, left, or stop), the microcontroller generates signals similar to those produced by the joystick in order to drive the wheelchair control system. The main advantage of the proposed approach is that switching between control modes is simple, clear, and transparent to the user. Experimental tests of the wheelchair in a real environment demonstrated the feasibility and efficiency of our multimodal system.
Keywords: intelligent wheelchair, multimodal control, microcontroller, sensor.
SCIENTIFIC PRODUCTIONS
Journal Papers:
Hassani, R., Boumehraz, M., Hamzi, M., & Habba, Z. (2018). “Gyro-Accelerometer based Control of an Intelligent Wheelchair”, Journal of Applied Engineering Science & Technology, 4(1), 101-107.
Communications:
Hassani, R., Boumehraz, M., & Habba, Z. (2015). “An Intelligent Wheelchair Control System based on Head Gesture Recognition”, International Electrical and Computer Engineering Conference IECEC2015, Setif, Algeria.
Hassani, R., Boumehraz, M., & Hamzi, M. (2018). “Head Gesture Based
Wheelchair Control”, Second International Conference on Electrical
Engineering ICEEB’18, Biskra, Algeria.
Hamzi, M., Boumehraz, M., & Hassani, R. (2018). “Hand Gesture Based
Wheelchair Control”, Second International Conference on Electrical
Engineering ICEEB’18, Biskra, Algeria.
ABBREVIATIONS
EEG ElectroEncephaloGraphy
EMG ElectroMyoGraphy
EOG ElectroOculoGraphy
IW Intelligent Wheelchair
VAHM French acronym (Véhicule Autonome pour Handicapé Moteur) for Autonomous Vehicle for people with Motor disabilities
CONTENTS
Introduction………………….…………………………………..………………….… 1
Chapter 1 :
STATE OF THE ART ABOUT INTELLIGENT
WHEELCHAIRS
1.1. Introduction ………………………………………………………………………… 4
1.2. Overview of intelligent wheelchair projects………..………………………...….... 4
1.3. Sensors …....................................…………………………………………………… 12
1.4. Operating modes……………………………………………...……………..……… 13
1.4.1. Semi-autonomous mode…………………………………………………..…. 13
1.4.2. Autonomous mode…………………………………………………….….….. 13
1.5. The human machine interface …………………………………………………………. 13
1.5.1. Vision Based Methods………………………………………………..………. 13
1.5.1.1. Facial expression analysis……………………………………..….…. 14
1.5.1.2. Mouth gesture recognition……………………………………….…... 14
1.5.1.3. Eye gaze tracking………………..…………………………………… 14
1.5.1.4. Hand gesture recognition………………………………………….…. 14
1.5.1.5. Head movement………………………………….……………….….. 15
1.5.2. Voice Based Methods……………………………….……………..…….…… 15
1.5.3. Motion Based Methods………………………………..……………………… 15
1.5.3.1. Accelerometer…………………………….………………………….. 15
1.5.3.2. Sip and Puff………………………………………….………….…… 15
1.5.3.3. Tongue drive system………………………………….……………... 16
1.5.4. Bio – signals Based Methods……………………………………….………... 16
1.5.4.1. Electromyography (EMG)………….…………………….…….….… 17
1.5.4.2. Electrooculography (EOG)……………………………………….…. 17
1.5.4.3. The Electroencephalography (EEG)………………………………… 18
1.5.5. Smartphone and Tablet Based Methods……………………………...……… 18
1.6. Conclusion………………………………………………………………...………..... 19
Chapter 2:
SPEECH-BASED HUMAN MACHINE
INTERFACE SYSTEM
2.1. Introduction……………………………………………..………………….…...….. 20
Chapter 4:
SENSOR-BASED HUMAN MACHINE
INTERFACE SYSTEM
4.1. Introduction………………………………………………………………………. 53
4.2. Head motion control system…………………………………………………... 53
4.2.1. Head gesture estimation………………………………………………… 54
4.2.2. Sensor fusion……………………………………………………………. 56
4.2.3. Action selection………………………………………………………… 57
4.3. Results……………………………………………………………………………. 58
4.4. Conclusion………………………………………………………………………... 63
Chapter 5:
SYSTEM IMPLEMENTATION
5.1. Introduction……………………………………………………………………… 64
5.2. Wheelchair platform……………………………………………………………. 64
5.2.1. Synoptic of our electric wheelchair…………………………………….. 64
5.2.2. Hardware Description…………………………………………………... 65
5.2.2.1. Laptop………………………………………………………… 67
5.2.2.2. MPU6050 IMU board……………….....……………………... 67
5.2.2.3. Ultrasonic sensors……………………………………….…….. 67
5.2.2.4. Control and data acquisition boards…………………………... 68
5.2.2.5. Bluetooth modules (Transceiver modules)………………...….. 69
5.2.2.6. Control Box………………………………………………..….. 69
5.2.3. Software used…………………………………………………………… 74
5.2.4. Obstacle detection………………………………………………………. 75
5.3. System implementation………………………………………………………….. 75
5.3.1. Multimodal control system……………………………………………... 75
5.3.2. User Interface…………………………………………………………… 77
5.3.3. Experimental Results and analysis……………………………………… 77
5.4. Conclusion……………………………………………………………………….. 83
Conclusion................................................................................................................... 84
LIST OF FIGURES
Figure Title Page
1.1 Prototype developed by Madarasz……………………………………….. 3
1.2 Smart Wheelchair Prototype……………………………………………… 5
1.3 RobChair Prototype……………………………………………………… 6
1.4 NavChair Project………………………………………………………….. 7
1.5 FRIEND wheelchair project………………………………………………. 7
1.6 MAid prototype…………………………………………………………… 8
1.7 SmartChair prototype……………………………………………………... 8
1.8 VAHM2 (to the left) VAHM3 (to the right)……………………………… 9
1.9 Rolland II (to the left) Rolland III (to the right)…………………………. 10
1.10 The Walking Wheelchair prototype………………………………………. 10
1.11 Sharioto prototype………………………………………………………… 11
1.12 COALAS prototype……………………………………………………… 11
1.13 Examples of sensors used in IW applications: (a) sonar sensor, (b) infrared sensor, (c) laser range finders, (d) stereo camera, (e) inside Microsoft Kinect………… 12
1.14 Eye and gaze drive system………………………………………………… 14
1.15 Sip and puff powered wheelchair…………………………………………. 16
1.16 Prototype Tongue drive system…………………………………………… 16
1.17 Placement of electrodes for EOG measurement………………………….. 17
1.18 EEG control IW………………………………………………………….. 18
2.1 Speech recognition process………………………………………………. 22
2.2 The flowchart of the system……………………………………………… 26
2.3 User interface……………………………………………………………... 27
2.4 The results of commands in user interface……………………………….. 30
2.5 The result of tested command in both environments……………………… 31
2.6 Screenshots of the Voice_controlled_wheelchair…………………………. 32
2.7 The results of commands in android application………………………….. 34
3.1 HMI system based on head gesture recognition structure………………… 37
3.2 Haar features………………………………………….…………………… 38
3.3 Image is scanned from top left corner to the bottom right corner………… 38
LIST OF TABLES
2.4 The results of the system for both users in noisy environment…………………29
2.5 The results of the system for both users in silent environment……………..…..32
2.6 The results of the system for both users in noisy environment…………….…..33
3.1 The results of the system for both users in different lighting conditions …...48
3.2 The results of the system for both users in different lighting conditions …...52
INTRODUCTION
According to data provided by the World Health Organization, over a billion people are estimated to live with some form of disability. This corresponds to about 15% of the world population. Between 110 million (2.2%) and 190 million (3.8%) adults have significant difficulties in functioning. Furthermore, rates of disability are increasing due to ageing populations and an increase in chronic health conditions such as diabetes, cardiovascular disease, cancer and mental health disorders. Traffic and work accidents, war and land mines are other factors which contribute to the increase of people with mobility difficulties [78].
Because of this high proportion, there is a growing demand for technologies that can aid this population group, coming from international health care organizations, universities and companies interested in developing and adapting new products. Wheelchairs are important locomotion devices for these individuals. However, traditional wheelchairs, which are controlled by users via joysticks, buttons and levers, cannot satisfy the needs of disabled users who have lost the ability to use their arms, such as people with quadriplegia, amputated hands or paralysis. This segment of people needs special control systems adapted to their abilities. In order to solve this problem, different methods and techniques have been developed over recent years to create intelligent wheelchairs (IWs) [2, 79].
Due to the low cost and widespread availability of commercial voice recognition hardware
and software, voice recognition has often been used for IW [4, 6, 14, 70, and 74].
Vision based methods are another technique which uses the recognition of head gesture [33,
38], hand gesture [83], head tilt and mouth shape [40], facial expressions [68] and eye gaze
tracking [62] as the main input to guide the wheelchair.
Recently, several approaches of operating an IW using advanced sensors as an input method
have been proposed in many projects, such as, Micro-Electro-Mechanical Systems (MEMS)
sensors [31,34, 35], sensors that capture the user’s bio-signal (Electromyography [15, 32, 67,
and 79], Electro-oculography [17, 80]. and Electro-encephalography [16, 43] ), Tongue
Operated Magnetic sensor [42], and pressure sensors [39, 81].
However, the majority of the interfaces were adaptable to a single user or to a specific user
group.
This thesis presents a multimodal control system for an IW. Multimodal control is not a new research area, but in this work we try to cover the needs of most disabled people, whatever their level of disability, by designing and developing an IW prototype with a multimodal interface which combines all the possible input methods and uses the necessary electronics (microcontrollers and sensors) to ensure safe mobility and ease of operation.
Chapter 1: Introduces the state of the art of the IW projects, their operating modes, some low
cost input devices that can be used in IW system, and the most popular mode of human
machine interface (HMI).
Chapter 2: In this chapter, an HMI system based on speech recognition is presented in two different ways. The first way uses Microsoft Windows Speech Recognition and Microsoft Visual Studio C#. The other way uses an Android application on a Smartphone which relies on Google Speech to Text to recognize and process human speech.
Chapter 3: Two vision based interface systems for controlling the wheelchair are proposed.
The first interface uses the head gesture and the second uses the mouth gesture as input
methods to control the movement of the wheelchair.
Chapter 4: This chapter presents an interface based on an IMU sensor. The system uses the head movement and orientation to control the intelligent wheelchair. The user's head angles around the X and Y axes are interpreted as wheelchair movement commands in the forward, backward, left and right directions.
Throughout the last three chapters, the system is tested many times by two persons. The experimental results show that the proposed interfaces can achieve the purpose of controlling the intelligent wheelchair.
Chapter 5: This chapter is divided into two parts. The first part shows the concept, design and implementation of the platform, as well as the different hardware and software used for the development of our IW. In the second part, most of the proposed interfaces are combined together into a multimodal interface, implemented and tested in a real environment, and compared with the manual mode to evaluate their performance.
We end this manuscript with a general conclusion summarizing what has been done and the
prospects for this work.
CHAPTER 1
STATE OF THE ART ABOUT INTELLIGENT WHEELCHAIRS
1.1. Introduction
Nowadays, there is a significant increase in the number of older persons in all countries of the world. This segment of society, as well as patients with quadriplegia or amputated hands, has lost the ability to use their arms to control a traditional wheelchair, which is driven by a joystick. They need special control systems to operate electric wheelchairs.
In order to provide a better quality of life for people with this kind of disability,
various alternative control methods and techniques have been developed to create
intelligent wheelchairs. An Intelligent Wheelchair (IW) is a standard electric
wheelchair with an embedded computer and sensors, giving it certain intelligence.
This chapter presents an overview of the most popular IW projects, a short
description of the used sensors, each research focus, as well as different approaches
used by researchers to solve common problems.
The outline of this chapter is as follows: first, we briefly introduce some IW projects and the necessary requirements for such a project. The second part contains an overview of some low cost input devices that can be used in IW systems. Then we present the operating modes of IW projects. Finally, the most popular modes of human machine interface are covered at the end of the chapter.
NavChair was developed for people suffering from different sorts of impairments such as poor vision (Figure 1.4). It was equipped with ultrasonic sensors and was able to avoid obstacles, follow walls and pass through doorways. A voice control module was also adapted to this wheelchair [45].
Figure 1.8: VAHM2 (to the left) VAHM3 (to the right)
Figure 1.9: Rolland II (to the left) Rolland III (to the right)
gyroscope for rotational velocity estimation. The main work focuses on the intention
estimation (Figure 1.11) [22, 75].
1.3. Sensors
Figure 1.13: Examples of sensors used in IW applications: (a) sonar sensor, (b) infrared sensor, (c) laser range finders, (d) stereo camera, (e) inside Microsoft Kinect.
Using a stereo camera and spherical vision system (Figure 1.13d) [55], 3D scanners like Microsoft's Kinect (Figure 1.13e), and most recently the Structure 3D scanner, it has become possible to use point cloud data to detect hazards like holes, stairs, or obstacles [19]. These sensors have until recently been relatively expensive, bulky and power-hungry.
1.4. Operating modes
There are three modes of operation: manual, semi-autonomous, and autonomous. The manual mode simply uses the joystick to control the motors of the wheelchair. The semi-autonomous and autonomous modes are detailed in the following sections.
In autonomous mode, the user has an interface in which he selects a destination and the wheelchair computes the path to get there. This method requires precise localization of the chair in the environment but demands little attention from the user. However, it requires either that the wheelchair has complete knowledge of the environment through a map, or that the environment has been modified by adding objects (landmarks) to it. In this case the operating mode does not allow navigation in an unknown environment [18].
1.5. The human machine interface
In the past decades, many intelligent wheelchair control systems have been developed using different methods and techniques. Human Computer Interfaces (HCI) and Human Machine Interfaces (HMI) are the latest and most effective techniques [15]. In user interface systems, images, voice, motion, bio-signals, and tactile devices are used as control media. In this section an overview of the most common methods used in human machine interfaces is given.
1.5.1. Vision Based Methods
Vision based methods can be used to recognize different types of human body
motion. Some IW projects use these methods to capture the facial expressions, mouth
gesture, gaze and position, hand gesture, and orientation of the patients’ head to
control the wheelchair.
used for hand gesture detection. The results of detection combined with centroid are
used to determine control commands.
1.5.1.5. Head movement
Some vision based systems capture the orientation and position of the user’s head,
which can be used as an input to represent a specific desired output. This technique
has been used in some wheelchair projects (e.g. RoboChair) [38]. A combination of
Adaboost face detection and camshift object tracking algorithm is used to achieve
accurate face detection, tracking and gesture recognition in real time. By detecting
frontal face and nose position, head gesture is estimated and used to control the
wheelchair.
1.5.2. Voice Based Methods
Voice is also used as an input to control wheelchairs [4, 6, and 74]. In such systems a voice recognition algorithm is used. When the system recognizes an utterance, it is classified as one of the pre-stored commands and a signal is generated to control the movement of the wheels.
1.5.3. Motion Based Methods
Motion recognition is the interpretation of a human gesture by a computing device.
In this context, motions are expressional movements of human body parts, such as:
fingers, hands, arms, head, tongue and legs. The goal of motion recognition research is to develop systems which can identify specific human gestures and use them to convey information or to control a device.
1.5.3.1. Accelerometer
In the accelerometer based method, the IW is controlled using head tilt. First, the user tilts his head in the front, back, left or right direction. Then, the accelerometer senses the change in head direction and the corresponding signal is given to the microcontroller. Depending on the direction of the acceleration, the microcontroller controls the wheelchair [33].
1.5.3.2. Sip and Puff
In this method, air pressure is used to generate control signals by sipping (inhaling) or puffing (exhaling) into a tube (Figure 1.15). This technology generates four control signals for the motorized wheelchair: initial hard puff, hard sip, initial hard sip, and hard puff. It is mostly used by quadriplegics with a spinal cord injury or people with ALS, but it is not suitable for individuals with weak breathing [79].
1.6. Conclusion
In this chapter, after a brief introduction to the most popular IW projects and the necessary requirements for such a project, we presented an overview of some low cost input devices that can be used in IW systems, as well as the operating modes of IW projects. Then, the most popular modes of human machine interface were given at the end of the chapter. Within this framework, some of these methods are described and used in this project, while others were briefly described to serve as examples. The following chapters present the human machine interfaces which are used in our project.
CHAPTER 2
SPEECH-BASED HUMAN MACHINE INTERFACE SYSTEM
2.1. Introduction
Speech is the primary and most important way of communication between humans. With the development of communication technologies in recent decades, speech has become an important interface for many systems. Instead of using different complex interfaces, speech is an easier way to communicate with computers.
In this chapter, an HMI system based on speech recognition is implemented in two different ways. The first way uses Microsoft Windows Speech Recognition, which comes built into Windows 7, and Microsoft Visual Studio C#. The second way uses an Android application on a Smartphone which relies on Google Speech to Text to recognize and process human speech. In both methods, the chosen commands were tested by two users, ten times in a silent environment and ten times in a noisy environment. The results show that the overall accuracy is quite satisfactory.
This chapter is organized as follows: in Section 2.2, we briefly define speech recognition. Section 2.3 presents the classification of speech recognition systems. Section 2.4 shows the principal components of a speech recognition system. A few speech recognition software packages are presented in Section 2.5. Section 2.6 describes our HMI system based on speech recognition and shows experimental results. Finally, Section 2.7 concludes the chapter.
The quality of a speech recognition system is assessed according to two factors: its accuracy (the error rate in converting spoken words to digital data) and its speed (how well the software can keep up with a human speaker). Speech recognition technology has countless applications; common examples are voice-mail systems in telephony, hands-free machine operation, communication interfaces for people with special needs, dictation systems, and translation devices.
possible ways of pronouncing words, and yet specific enough to discriminate between
the various words of the vocabulary.
For a speaker dependent system the training is usually carried out by the user, but for
applications such as large vocabulary dictation systems this is too time consuming for
an individual user. In such cases an intermediate technique known as speaker
adaptation is used. Here, the system is bootstrapped with speaker-independent
models, and then gradually adapts to the specific aspects of the user.
Small vocabulary
Medium vocabulary
Large vocabulary
Small vocabulary systems for command and control applications normally have an active vocabulary of up to a few tens of words. Large vocabulary systems, on the other hand, recognize a thousand words or more.
2.4.2. Training
Training is the process of estimating the speech model parameters from actual speech
data. In preparation for training, what are needed are the text of the training speech and
a lexicon of all the words in the training, along with their pronunciations, written down
as phonetic spellings. Thus, a transcription of the training speech is made by listening to
the speech and writing down the sequence of words. All the distinct words are then
placed in a lexicon and someone has to provide a phonetic spelling of each word [49].
Acoustic model contains a statistical representation of the distinct sounds that make
up each word in the language Model or Grammar. Each distinct sound corresponds to a
phoneme.
Language Models contain a very large list of words and their probability of occurrence
in a given sequence. They are used in dictation applications. Grammars are a much
smaller file containing sets of predefined combinations of words. Grammars are used in
command and control applications. Each word in a Language Model or Grammar has
an associated list of phonemes (which correspond to the distinct sounds that make up a
word).
The speech recognition engine uses a piece of software called the decoder, which takes the sounds spoken by a user and finds the acoustic model for the same sounds. When a match is made, the decoder determines the phoneme corresponding to the sound. It keeps track of the matching phonemes until it reaches a pause in the user's speech. It then searches the Language Model or Grammar file for the same series of phonemes. If a match is made it returns the text of the corresponding word or phrase to the calling program [73].
The HMI system based on speech recognition is implemented in two different ways. The first way uses Microsoft Windows Speech Recognition (MWSR), which comes built into Windows 7, and Microsoft Visual Studio C#. The second way uses an Android application on a Smartphone which relies on Google Speech to Text to recognize and process human speech.
the direction of the IW. The grammars must also be added into the “SpeechRecognitionEngine”; otherwise, the speech recognizer will not be able to recognize the targeted phrases. The command words included in both grammar lists are presented in Table 2.2.

Table 2.2 (excerpt), operating interface grammar:
Command « Démarrer le système » / « بدء النظام » (bad' alnizam): start controlling the IW using the speech recognition interface.
The execution of the program starts by reading the voice captured by the microphone and converting it into words. These words are first compared only with the operating-interface database. Recognition of the command words which control the movement of the IW will not begin unless the phrase « Démarrer le système » or « بدء النظام » is spoken by the user. Once one of these phrases is spoken, the system waits for an utterance to analyze and compare with the IW control database. In the case of a match with any control word of the second grammar, the appropriate instruction is sent to the microcontroller via the serial port. The flowchart of our system is shown in Figure 2.2.
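The same two-stage logic can be sketched compactly. The following is a minimal Python analogue of the flow described above, not the thesis implementation (which uses Microsoft Windows Speech Recognition and C#); the serial port name, the baud rate, the single-character codes sent to the microcontroller, and the use of the `speech_recognition` package are assumptions for illustration.

```python
import serial                      # pyserial: link to the control microcontroller (assumed port/baud)
import speech_recognition as sr    # generic recognizer, standing in for MWSR in this sketch

# Hypothetical one-byte protocol; the real command set is defined by the thesis grammars.
START_PHRASES = {"démarrer le système"}
STOP_PHRASES = {"arrêter le système"}
MOVE_COMMANDS = {"avance": b"F", "arrière": b"B", "gauche": b"L", "droite": b"R", "arrête": b"S"}
# The Arabic phrases of Table 2.2 would be handled the same way with the matching language setting.

def run(port="COM3", baud=9600):
    link = serial.Serial(port, baud)
    recognizer, mic = sr.Recognizer(), sr.Microphone()
    active = False                           # movement grammar enabled only after the start phrase
    while True:
        with mic as source:
            audio = recognizer.listen(source)
        try:
            text = recognizer.recognize_google(audio, language="fr-FR").lower()
        except sr.UnknownValueError:
            continue                         # nothing intelligible was heard
        if text in START_PHRASES:
            active = True                    # start matching the movement commands
        elif text in STOP_PHRASES:
            active = False
            link.write(b"S")                 # stop the chair when control is disabled
        elif active and text in MOVE_COMMANDS:
            link.write(MOVE_COMMANDS[text])  # forward the matched command over the serial port
```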
Figure 2.2: Flowchart of the speech control system: the recognized text is first matched against the operating-interface commands (« Démarrer le système » / « arrêter le système »); once the system is started, each detected utterance is matched against the movement commands.
The design of the user interface provides all the information about the recognized commands and the control buttons of the system (Figure 2.3). The controls on the user interface include a Start/Stop (commencer/arrêter) button that tells the system to start or to temporarily stop recognizing the words which control the IW movement. The user can also perform these steps by vocal command, saying « Démarrer le système » or « بدء النظام » to start the control of the IW and « arrêter le système » or « أوقف النظام » to stop the control of the IW, or by clicking on the Start/Stop button.
The speech control system works in two phases: the training phase and the recognition phase. First, we trained our speech recognition engine through Windows Speech Recognition, because it is impossible to train it from code. Then we moved to the recognition phase; the commands presented in Table 2.2 were tested with the current user, who trained the system, and with another user. To assess the accuracy of the system we
asked each user to repeat each tested command 10 times in a silent environment and 10 times in a noisy environment.
2.6.1.3. Results
Table 2.3 presents the results of the system for both users in the silent environment.
Table 2.3: The results of the system for both users in silent environment
From Table 2.3, 139 of the 140 commands were recognized by our system for the current user and 136 of the 140 commands for the other user. It is clear that the success rate of the system for the other user is lower than for the current user.
The accuracy of the system in the silent environment is therefore:
\[ \text{Accuracy} = \frac{139 + 136}{280} \times 100\% = 98.21\% \]
Table 2.4 presents the results of the system for both users in the noisy environment.
Table 2.4: The results of the system for both users in the noisy environment
From Table 2.4, 238 of the 280 commands were recognized by the system for both users.
The accuracy of the system in the noisy environment is 85%:
\[ \text{Accuracy} = \frac{122 + 116}{280} \times 100\% = 85\% \]
Figure 2.4 presents the commands which control the user interface and the IW (« démarrer le système » / « بدء النظام », « droite », « gauche », « arrête », « avance », « arrière », « يمين », « يسار », « قف », « أمام », « وراء », and « arrêter le système » / « أوقف النظام »).
Figure 2.5: The recognition results of the tested commands in the silent and noisy environments.
From the graph, we can see that the system accuracy is lower when the commands are issued in the noisy environment, which means that the speech-based HMI offers less reliable control in noisy conditions.
In this interface, the user can operate the wheelchair wirelessly using an Android application on a Smartphone which relies on Google Speech to Text to recognize and process human speech.
Android applications are generally developed using the JAVA language. The proposed application used to control the wheelchair, however, can be built without knowledge of Java: it is called “Voice_controlled_wheelchair” and was developed using MIT App Inventor. Figure 2.6 shows the interface of the application.
To be active, this application requires an internet connection and a Bluetooth connection to send the appropriate commands wirelessly.
From Table 2.5, 94 of the 100 commands were recognized by the system for both users.
The accuracy of the system in the silent environment is therefore 94%:
\[ \text{Accuracy} = \frac{47 + 47}{100} \times 100\% = 94\% \]
Table 2.6 presents the results of the system for both users in the noisy environment.
From Table 2.6, 83 of the 100 commands were recognized by the system for both users.
The accuracy of the system in the noisy environment is therefore 83%:
\[ \text{Accuracy} = \frac{83}{100} \times 100\% = 83\% \]
Figure 2.7 presents the commands « avance », « arrière », « gauche », « droite », and « arrête ».
2.7. Conclusion
In this chapter, we implemented an HMI system based on speech recognition in two different ways. The first way uses Microsoft Windows Speech Recognition and Microsoft Visual Studio C#. The second way uses an Android application on a Smartphone which relies on Google Speech to Text to recognize and process human speech. In both methods, the system was tested many times by two persons (the current user and another user), once in a silent environment and once in a noisy environment. Experimental results show that the accuracy of the system in the silent environment is better than in the noisy environment.
CHAPTER 3
VISION-BASED HUMAN MACHINE INTERFACE SYSTEM
3.1. Introduction
Vision based methods can be used to recognize different types of human body motion. Some IW projects have used these methods to capture the facial expressions, mouth gesture, gaze and position, hand gesture, or the orientation of the patient's head to control the wheelchair.
In this chapter, two vision based interface systems for hands free control of an intelligent wheelchair are presented. The first interface uses the head gesture to control the movement of the wheelchair. First, the head is detected using a haar cascade. Once the initial tracking window is determined, the new head location is tracked using the Camshift algorithm. Then, the head gesture command is determined according to the position of the rectangle containing the patient's head in the image. The second interface uses the mouth gesture. This system consists of two main parts: mouth detection using template matching, and command extraction, where the command is determined according to the detected gesture (open mouth, tongue on the left, tongue on the right).
The remainder of this chapter is organized as follows: Section 2 and Section 3 describe our HMI systems based on head gesture and mouth gesture respectively and show experimental results. A brief conclusion is proposed in Section 4.
3.2. HMI system based on head gesture recognition
Figure 3.1: Structure of the HMI system based on head gesture recognition (frame acquisition, then face detection repeated until a face is found).
The main function of this step is to determine whether the user's face appears in a given image, and where it is located. For our project, the Open Source Computer Vision Library (OpenCV) [7, 11, and 12] is used to implement the haar cascade classifier. Object detection using a haar cascade classifier is an effective object detection method proposed by Paul Viola and Michael Jones [76]. It is a machine learning based approach where a cascade function is trained from many positive images (which contain the object to be detected) and negative images (which do not contain it). The trained cascade is then used to detect objects in other images.
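For illustration, face detection with a pre-trained Haar cascade takes only a few OpenCV calls. The Python sketch below is not the thesis code; the webcam index and the cascade file (the stock frontal-face model bundled with the opencv-python package) are assumptions.

```python
import cv2

# Load the pre-trained frontal-face Haar cascade shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)                      # webcam (assumed to be device 0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale detection: returns one (x, y, w, h) rectangle per detected face.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # Esc key quits
        break
cap.release()
cv2.destroyAllWindows()
```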
Haar features are the main part of the haar cascade classifier. They are used to detect the presence of a feature in a given image. Each feature results in a single value which is calculated by subtracting the sum of pixel intensities in the white region from that in the black region, as shown in equation (3.1). Figure 3.2 shows some examples of haar-like features.
\[ f = \sum_{(x,y)\,\in\,\text{black}} I(x,y) \;-\; \sum_{(x,y)\,\in\,\text{white}} I(x,y) \qquad (3.1) \]
The Haar feature scanning starts at the top left corner of the image and ends at the bottom right corner, as shown in Figure 3.3. The image is scanned several times with the haar-like features in order to detect the face.
Figure 3.3: Image is scanned from top left corner to the bottom right corner.
To compute the rectangle features rapidly, the integral image concept is used [27]. It needs only four values at the corners of a rectangle to calculate the sum of all pixels inside any given rectangle. In an integral image, the value at pixel (x, y) is the sum of the pixels above and to the left of (x, y). The sum of all pixel values in rectangle D (Figure 3.4) is obtained from the integral image values at its corner points 1, 2, 3 and 4:
\[ ii(1) = A, \quad ii(2) = A + B, \quad ii(3) = A + C, \quad ii(4) = A + B + C + D \]
\[ \sum_{(x,y)\,\in\, D} I(x,y) = ii(4) + ii(1) - ii(2) - ii(3) \]
Figure 3.4: The regions A, B, C, D and corner points 1 to 4 used in the integral-image rectangle sum.
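The four-corner rectangle sum can be checked with a short NumPy sketch (a minimal illustration under the definitions above, not the thesis code):

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of img over all pixels above and to the left of (x, y), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of the image over rows top..bottom and columns left..right using four lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(36, dtype=np.int64).reshape(6, 6)    # toy image
ii = integral_image(img)
assert rect_sum(ii, 2, 1, 4, 3) == img[2:5, 1:4].sum()   # matches the direct sum
```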
The Viola-Jones algorithm uses a 24×24 window as the base window size to start evaluating these features in any given image. If we consider all the possible parameters of the haar features (position, type and scale), then about 160,000 features would have to be calculated in this window, which is practically impossible. The solution to this problem is the AdaBoost algorithm. AdaBoost is a machine learning algorithm which helps us to find only the best features among the 160,000. These features are the weak classifiers. AdaBoost constructs a strong classifier as a linear combination of weighted simple weak classifiers, as shown in equation (3.2):
\[ F(x) = \alpha_1 f_1(x) + \alpha_2 f_2(x) + \dots + \alpha_n f_n(x) \qquad (3.2) \]
where \(x\) is an image sub-window, \(f_i\) are the weak classifiers and \(\alpha_i\) are their weights.
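The resulting strong classifier is just a weighted vote of the selected weak classifiers; a minimal sketch of the decision rule in equation (3.2), with the weak classifiers and weights left abstract, is:

```python
def strong_classify(x, weak_classifiers, alphas):
    """Weighted vote of weak classifiers h(x) in {0, 1}; accept when the score reaches
    half of the total weight (the usual AdaBoost decision threshold)."""
    score = sum(alpha * h(x) for h, alpha in zip(weak_classifiers, alphas))
    return score >= 0.5 * sum(alphas)
```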
Face detection can be accomplished by a cascade using haar-like features, as shown in Figure 3.5. In that cascade, an image sub-window is reported as a face only if it passes all the stages; if it fails any one of the stages, the sub-window is rejected as not being a face.
Figure 3.5: The cascade of classifiers: each input sub-window is passed from classifier 1 to classifier n; a positive response forwards it to the next stage for further processing, a false response rejects it, and sub-windows that pass all stages are reported as faces.
4: Iterate Mean Shift algorithm to find the centroid of the probability image. Store the
0th moment (distribution area) and centroid location.
5: For the following frame, center the search window at the mean location found in
step 4 and set the window size to a function of the 0th moment. Then go to Step 3.
The creation of the probability distribution function corresponds to steps 1 to 3. For
initialization, we use the face region detected on the last frame in detection step as the initial
location of the search window. Then, we need to calculate the color histogram
corresponding to this region in HSV color space. Since human’s skin colors have little
difference in S and V channels, only the H (hue) channel is used to compute the color
distribution, which consumes the lowest number of CPU cycles possible. The histogram is
quantized into bins to reduce the computational and space complexity and allow similar
color values to be clustered together. Then a histogram back-projection is applied.
Histogram Back-Projection: It is a primitive operation that associates the pixel values in the
image with the value of the corresponding histogram bin. In all cases the histogram bin
values are scaled to be within the discrete pixel range of the 2D probability distribution
image using equation (3.3).
\[ \hat{p}_u = \min\!\left( \frac{255}{\max(q)}\, q_u,\; 255 \right), \qquad u = 1, \dots, m \qquad (3.3) \]
where \(q_u\) is the unweighted histogram value corresponding to the \(u\)-th bin and \(m\) is the number of bins. That is, the histogram bin values are rescaled from \([0, \max(q)]\) to the range \([0, 255]\), so that pixels with the highest probability of belonging to the sample histogram map to visible intensities in the 2D histogram back-projection image.
Mass Center Calculation: The mean location (centroid) within the search window of the
discrete probability image computed in Step 3 is found using moments. Given the intensity
of the discrete probability image I(x; y) at (x; y) within the search window, the mass center
is computed from:
\[ M_{00} = \sum_{x}\sum_{y} I(x, y) \qquad (3.4) \]
\[ M_{10} = \sum_{x}\sum_{y} x\, I(x, y) \qquad (3.5) \]
\[ M_{01} = \sum_{x}\sum_{y} y\, I(x, y) \qquad (3.6) \]
\[ x_c = \frac{M_{10}}{M_{00}}, \qquad y_c = \frac{M_{01}}{M_{00}} \qquad (3.7) \]
where \(M_{00}\) is the zeroth moment, \(M_{10}\) and \(M_{01}\) are the first moments for \(x\) and \(y\) respectively, and \((x_c, y_c)\) is the next center position of the tracking window.
The Mean Shift component of the algorithm is implemented by continually recomputing new values of \((x_c, y_c)\) for the window position computed in the previous frame until there is no significant shift in position, i.e., convergence. Figure 3.6 shows one iteration of the Mean Shift algorithm.
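OpenCV exposes both the hue back-projection and the CamShift iteration directly. The Python sketch below (not the thesis code; the 16-bin hue histogram and the Esc key to quit are assumptions) follows the steps above: build the hue histogram of the detected face region once, then back-project and run CamShift on every new frame.

```python
import cv2
import numpy as np

def track(cap, face_rect):
    """face_rect = (x, y, w, h) from the Haar detection step; displays the tracked head."""
    x, y, w, h = face_rect
    ok, frame = cap.read()
    roi = frame[y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Hue-only histogram of the face region, quantized into 16 bins and scaled to [0, 255].
    hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    window = (x, y, w, h)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)  # 10 iterations or eps = 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)    # probability image
        rot_box, window = cv2.CamShift(backproj, window, criteria)       # adapts location and size
        pts = cv2.boxPoints(rot_box).astype(np.int32)
        cv2.polylines(frame, [pts], True, (0, 0, 255), 2)
        cv2.imshow("camshift", frame)
        if cv2.waitKey(1) & 0xFF == 27:
            break
```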
Figure: Head positions mapped to commands: Up (Forward); Down (Backward) or (Stop).
3.3. HMI system based on mouth gesture recognition
In this section, a novel IW interface using simple mouth gesture recognition is presented. As shown in Figure 3.8, the proposed system consists of two main parts. The first part is mouth detection: after image acquisition from the camera, the face area is detected using the haar cascade classifier, and once the face is detected, template matching is performed to detect the mouth in the lower face [48, 58]. The second part is command extraction: the control command is determined according to the detected mouth gesture.
Figure 3.8: Flowchart of the mouth-gesture control system: each frame is processed by face detection; once a face is found, the mouth is detected in the lower face using the template images.
The main function of this step is to determine whether the user's face appears in a given image and where it is located, in order to extract the approximate mouth region. For our project, the Open Source Computer Vision Library (OpenCV) is used to implement the haar cascade classifier.
After the system determines the approximate mouth region, template matching is performed to detect the mouth. Template matching is a technique for finding areas of an image that match a template image (a small part of an image) [13]. Before performing template matching, we load three template images into the system (Figure 3.8). These images present the gestures that can be used by the mouth for controlling the wheelchair movement.
The matching score between the template \(T\) and the image \(I\) is the normalized correlation coefficient
\[ R(x, y) = \frac{\displaystyle\sum_{x', y'} \big(T(x', y') - \bar{T}\big)\big(I(x + x', y + y') - \bar{I}\big)}{\sqrt{\displaystyle\sum_{x', y'} \big(T(x', y') - \bar{T}\big)^{2} \sum_{x', y'} \big(I(x + x', y + y') - \bar{I}\big)^{2}}} \]
where \(\bar{T}\) is the average value of the template \(T\), \(\bar{I}\) is the average value of \(I\) in the region coinciding with \(T\), and \(x' = 0, \dots, w - 1\), \(y' = 0, \dots, h - 1\), with \(w\) and \(h\) the width and height of the template.
Once the best match is found in the source image, we draw a rectangle around the area corresponding to the highest match.
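In OpenCV this normalized correlation is the TM_CCOEFF_NORMED mode of matchTemplate. The sketch below is a Python illustration of matching the three mouth templates inside the lower-face region; the function name, the dictionary structure and the template file names are assumptions, not the thesis code.

```python
import cv2

def match_gesture(lower_face_gray, templates):
    """Return (best_label, score, top_left) for the template that correlates best with the region.

    `templates` maps a gesture label to a grayscale template image.
    """
    best = (None, -1.0, None)
    for label, tmpl in templates.items():
        result = cv2.matchTemplate(lower_face_gray, tmpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)     # highest correlation and its location
        if max_val > best[1]:
            best = (label, max_val, max_loc)
    return best

# Usage sketch (file names are assumptions):
# templates = {"open": cv2.imread("open.png", 0),
#              "tongue_left": cv2.imread("left.png", 0),
#              "tongue_right": cv2.imread("right.png", 0)}
# label, score, loc = match_gesture(lower_face_gray, templates)
```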
used to stop and start the mouth gesture recognition controller, which means seven gestures must be recognized by the system (Figure 3.9).
If the user sticks his/her tongue out to the left, the gesture is recognized as the left-turn command.
If the user sticks his/her tongue out to the right, the gesture is recognized as the right-turn command.
If the user opens his/her mouth, the system triggers a timer to calculate the time taken before the mouth is closed again (t).
3.4. Results
In order to test the accuracy and effectiveness of both interfaces, two persons (one male and one female) were asked to apply the commands twenty times in scenes with a cluttered stationary background and varying illumination.
3.4.1. HMI system based on head gesture recognition
Figure 3.10 shows some head tracking results for both users. Figure 3.11 shows the head gesture recognition results in a bright lighting condition, Figure 3.12 in a normal lighting condition, and Figure 3.13 in a dark lighting condition.
Figure 3.10: Head tracking results for both users: (a) subject 1, (b) subject 2.
Command   | Bright light | Normal light | Dark light
Forward   | 19 / 18      | 20 / 19      | 18 / 18
Backward  | 18 / 17      | 18 / 17      | 17 / 16
Left      | 20 / 19      | 20 / 20      | 17 / 18
Right     | 19 / 19      | 20 / 19      | 18 / 18
Stop      | 18 / 18      | 20 / 19      | 17 / 17
Total     | 185          | 192          | 174
Accuracy  | 92.5%        | 96%          | 87%
(each cell gives the two users' counts out of 20 trials)
Table 3.1: The results of the system for both users in different lighting conditions.
48
CHAPTER 3 Vision-Based Human Machine Interface system
From the results, we can see that the system accuracy is quite satisfactory. However, we found that the system performance is lower in both the bright and the dark lighting environments than in a normal lighting environment.
3.4.2. HMI system based on mouth gesture recognition
Figure 3.14 shows some mouth gesture recognition results for both users. Figure 3.15 shows the mouth gesture recognition results in a bright lighting condition, Figure 3.16 in a normal lighting condition, and Figure 3.17 in a dark lighting condition.
Figure 3.14: Mouth gesture recognition results for both users: (a) subject 1, (b) subject 2.
As indicated in the figures above, the detected face is drawn in green, and the recognized mouth gesture is drawn in red and written in the command label located in the right corner of the interface. The system accurately detected the face and mouth, confirming its robustness to time-varying illumination and its low sensitivity to a cluttered environment.
Table 3.2 presents the results of the system for both users in the bright, normal, and dark lighting conditions.
Command            | Bright light | Normal light | Dark light
Forward            | 20 / 19      | 20 / 20      | 19 / 19
Backward           | 19 / 19      | 19 / 20      | 19 / 19
Left               | 19 / 18      | 18 / 19      | 19 / 18
Right              | 19 / 19      | 19 / 19      | 19 / 18
Stop               | 20 / 20      | 20 / 20      | 20 / 20
Start controlling  | 20 / 20      | 20 / 20      | 20 / 20
Stop controlling   | 20 / 20      | 20 / 20      | 20 / 20
Total              | 272          | 274          | 270
Accuracy           | 97.14%       | 97.85%       | 96.42%
(each cell gives the two users' counts out of 20 trials)
Table 3.2: The results of the system for the both user in different lighting conditions
From the results, we can find out that the system accuracy is satisfactory in the three lighting
conditions.
3.5. Conclusion
In this chapter, we presented two vision based interface systems for hands free control of an intelligent wheelchair. The first interface uses the head gesture and the second uses the mouth gesture to control the movement of the wheelchair. In both interfaces, the system was tested many times by two persons in scenes with a cluttered stationary background and varying illumination. Experimental results show that the accuracy of both systems is quite satisfactory in the three lighting conditions. However, we found that the head gesture recognition system is less accurate in bright and dark lighting conditions than in a normal lighting condition.
CHAPTER 4
SENSOR-BASED HUMAN MACHINE INTERFACE SYSTEM
4.1. Introduction
In this chapter, a sensor based HMI system for hands free control of an IW is presented. The developed system is based on the patient's head gestures (head tilt movements). The head gesture is detected using accelerometer and gyroscope sensors embedded on a single board; the outputs of both IMU (Inertial Measurement Unit) sensors are combined using a Kalman filter for sensor fusion, to build a highly accurate orientation sensor. The system was tested many times, and the experimental results show that its accuracy is quite satisfactory.
4.2. Head motion control system
Figure 4.1: Flowchart of the head motion control system: calculate the head angles, filter them, and select the direction.
As shown in Figure 4.1, the algorithm is implemented in several steps. First, the sensor board is placed horizontally on the user's head; when the user moves his/her head in different directions around the X and Y axes, the acceleration and rotation rate of the movement are detected as raw values. Head movement around the Z axis is not used, in order to give the user the ability to turn his/her head around without affecting the control of the system (Figure 4.2). Then, the main board estimates the corresponding head angles using geometrical calculation and sensor fusion with a Kalman filter. The estimated head angles are compared with angle thresholds to select the direction of the wheelchair.
Figure 4.2: Placement of the IMU board on the user's head.
The gyroscope output can be integrated over time to estimate the orientation, but if there is a small error in the integration, this error accumulates over time and causes drift in the resulting orientation estimate.
On the other hand, while the gyroscope causes drift when the orientation is estimated, the accelerometer is not affected. Combining the outputs of the two sensors can therefore provide a better estimate of the orientation. The accelerometer measures linear acceleration based on the acceleration of gravity; it is more accurate in static conditions, when the system is close to its fixed reference point. The problem with the accelerometer is that external forces acting during motion distort the measured acceleration, which appears as noise and erroneous spikes in the output. By combining the long-term accuracy of the accelerometer with the short-term accuracy of the gyroscope, a more accurate orientation reading can be obtained, exploiting the benefits of each sensor.
A few methods of applying sensor fusion are available, with varying degrees of complexity. The Kalman filter is the most popular sensor fusion algorithm, as it does not require a lot of processing power for a well-behaving system.
In order to analyze data provided by the IMU sensor, the raw data needed to be converted into
angular units.
The roll angle is obtained from the accelerometer values \(a_x\) and \(a_z\) along the x and z axes, while the pitch angle is found from the accelerations \(a_y\) and \(a_z\) along the y and z axes:
\[ \phi_{acc} = \operatorname{atan2}(a_x, a_z) \qquad (4.1) \]
\[ \theta_{acc} = \operatorname{atan2}(a_y, a_z) \qquad (4.2) \]
The angles from the gyroscope are found similarly, except that the gyroscope output represents angular rates \(\dot{\phi}\) and \(\dot{\theta}\) in deg/s, which must be integrated:
\[ \phi_{gyro} = \phi_{gyro,\,prev} + \dot{\phi}_{gyro} \cdot \Delta t \qquad (4.3) \]
\[ \dot{\phi}_{gyro} = \frac{G_x - G_{x,\,offset}}{S} \qquad (4.4) \]
\[ \theta_{gyro} = \theta_{gyro,\,prev} + \dot{\theta}_{gyro} \cdot \Delta t \qquad (4.5) \]
\[ \dot{\theta}_{gyro} = \frac{G_y - G_{y,\,offset}}{S} \qquad (4.6) \]
where \(\dot{\phi}\) is the roll rate and \(\dot{\theta}\) the pitch rate of the gyroscope, \(\Delta t\) is the time step, \(G_{x,\,offset}\) and \(G_{y,\,offset}\) are the offset calibrations found at initialization, and \(S\) is the sensitivity of the sensor found in the specification sheet of the IMU.
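Expressed in code, the raw-to-angle conversion of equations (4.1) to (4.6) is only a few lines. The Python sketch below follows the axis convention of the equations above; the sensitivity of 131 LSB per deg/s corresponds to the MPU6050's ±250 deg/s gyroscope range and is an assumed setting, not necessarily the thesis configuration.

```python
import math

GYRO_SENSITIVITY = 131.0   # LSB per deg/s at the ±250 deg/s range (assumed setting)

def accel_angles(ax, ay, az):
    """Roll and pitch (degrees) from raw accelerometer readings, eqs. (4.1)-(4.2)."""
    roll = math.degrees(math.atan2(ax, az))
    pitch = math.degrees(math.atan2(ay, az))
    return roll, pitch

def gyro_angle(prev_angle, raw_rate, offset, dt):
    """Integrate one gyroscope axis over dt seconds, eqs. (4.3)-(4.6); returns (angle, rate)."""
    rate = (raw_rate - offset) / GYRO_SENSITIVITY    # deg/s
    return prev_angle + rate * dt, rate
```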
The Kalman filter uses the linear state-space model
\[ x_k = A x_{k-1} + B u_k + w_k, \qquad z_k = H x_k + v_k \qquad (4.7) \]
where \(x_k\) is the state vector, \(u_k\) the control input, \(z_k\) the measurement, and \(w_k\) and \(v_k\) the process and measurement noise. Choosing the angle \(\theta\) and the gyroscope bias \(b\) as the state, and using equation (4.1) together with equations (4.3) to (4.6), equation (4.7) can be expanded to (4.8):
\[ \begin{bmatrix} \theta_k \\ b_k \end{bmatrix} = \begin{bmatrix} 1 & -\Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \theta_{k-1} \\ b_{k-1} \end{bmatrix} + \begin{bmatrix} \Delta t \\ 0 \end{bmatrix} \dot{\theta}_{gyro,k}, \qquad z_k = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \theta_k \\ b_k \end{bmatrix} \qquad (4.8) \]
The filter alternates a prediction step
\[ \hat{x}_k^{-} = A \hat{x}_{k-1} + B u_k \qquad (4.9) \]
and a correction step that uses the accelerometer angle as the measurement
\[ \hat{x}_k = \hat{x}_k^{-} + K_k \left( z_k - H \hat{x}_k^{-} \right) \qquad (4.10) \]
where the superscript \(-\) denotes the prior (predicted) estimate and \(K_k\) is the Kalman gain.
By inserting equation (4.8) into (4.9) and (4.10) we obtain the Kalman equations for the IMU sensor:
\[ \theta_k = (1 - K_{k,1})\, \theta_k^{-} + K_{k,1}\, z_k \qquad (4.12) \]
\[ b_k = b_{k-1} + K_{k,2} \left( z_k - \theta_k^{-} \right) \qquad (4.13) \]
\[ P_k^{-} = A P_{k-1} A^{T} + Q \qquad (4.14) \]
\[ K_k = P_k^{-} H^{T} \left( H P_k^{-} H^{T} + R \right)^{-1} \qquad (4.15) \]
\[ K_{k,1} = \frac{P_{11,k}^{-}}{P_{11,k}^{-} + R} \qquad (4.16) \]
where \(R\) is the measurement noise covariance from the accelerometer and \(Q\) the process noise covariance. After each correction the error covariance is updated with
\[ P_k = \left( I - K_k H \right) P_k^{-} \]
The roll angle is estimated using the same process, by choosing the roll angle as the state and computing the prediction and correction with the Kalman filter.
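A compact implementation of this angle-and-bias Kalman filter, written as a Python sketch of equations (4.7) to (4.16); the noise variances are placeholder tuning values, not the thesis settings.

```python
import numpy as np

class AngleKalman:
    """Kalman filter with state [angle, gyro_bias], driven by the gyro rate and corrected
    by the accelerometer angle, following equations (4.7)-(4.16)."""

    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.x = np.zeros(2)                 # state: [angle, bias]
        self.P = np.zeros((2, 2))            # error covariance
        self.Q = np.diag([q_angle, q_bias])  # process noise (placeholder values)
        self.R = r_measure                   # accelerometer measurement noise (placeholder)

    def update(self, accel_angle, gyro_rate, dt):
        A = np.array([[1.0, -dt], [0.0, 1.0]])
        B = np.array([dt, 0.0])
        # Prediction, eqs. (4.8)-(4.9) and (4.14)
        self.x = A @ self.x + B * gyro_rate
        self.P = A @ self.P @ A.T + self.Q
        # Correction with the accelerometer angle, eqs. (4.10), (4.12)-(4.13), (4.15)-(4.16)
        H = np.array([1.0, 0.0])
        S = H @ self.P @ H + self.R          # innovation variance (scalar)
        K = (self.P @ H) / S                 # Kalman gain, shape (2,)
        self.x = self.x + K * (accel_angle - H @ self.x)
        self.P = self.P - np.outer(K, H) @ self.P
        return self.x[0]                     # filtered angle
```

In the processing loop, `accel_angle` would come from the accelerometer formulas above and `gyro_rate` from the calibrated gyroscope reading, with `dt` the sampling period.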
Figure 4.3: The position of the IMU board relative to head motion in different directions: (a) Left-Right, (b) Forward-Backward.
4.3. Results
In order to test the accuracy and effectiveness of this interface, two experiments were carried out. The first evaluates the performance of the Kalman filter used to estimate the orientation of the IMU board. The second evaluates the performance of our proposed system.
For the first experiment, back-and-forth movements around the X axis were applied to the IMU board. The roll angle was obtained by integrating the gyroscope output and was then compared with the Kalman filter result. Figure 4.4 shows the comparison between the Kalman filter's output (blue line) and the gyroscope integration result (red line).
Figure 4.4: Comparison between the Kalman filter's output and the gyroscope integration result (roll angle in degrees versus time).
In this figure we can clearly see the large drift in the result of the gyroscope integration, while this drift is eliminated in the Kalman filter result.
The evaluation of the Kalman filter's performance was continued by comparing the rotation angle obtained from the accelerometer, using equations (4.1) and (4.2), with the Kalman filter's output. Figure 4.5 shows the comparison between the Kalman filter's output (blue line) and the accelerometer result (red line).
Figure 4.5: Comparison between Kalman filter output and the accelerometer result.
From the results, it is clearly observed that all fluctuation, which is seen in the accelerometer
output, is eliminated successfully by the filter.
For the second experiment, firstly, the IMU board was placed horizontally on the front of a
cap, worn by the user. Then, the five commands (Forward, Backward, Left, Right and Stop) were
applied by one user thirty times for each command (fifteen times in experiment 1 and another
fifteen in experiment 2). An example of issuing a series of commands is shown in Figure 4.6.
The red lines indicate the thresholds chosen to select each action.
Figure 4.6: Example of issuing a series of commands: the pitch trace (deg) with the Forward and Backward threshold lines, and the roll trace (deg) with the Right and Left threshold lines.
Figure 4.7 represents calculated angles (pitch and roll) when the user applies the five commands
separately.
Figure 4.7: The calculated angles when the user applies the five commands separately.
In Figure 4.7 (a), when the user tilts his head down by an angle of 20° or more, the gesture is recognized as the forward movement: the pitch angle must be equal to or greater than 20° and the roll angle must be between -20° and 20°.
In Figure 4.7 (b), when the user tilts his head up by an angle of 20° or more, the gesture is recognized as the backward movement: the pitch angle must be equal to or less than -20° and the roll angle must be between -20° and 20°.
In Figure 4.7 (c), when the user inclines his head to the right by an angle of 20° or more, the gesture is recognized as the right turn: the pitch angle must be between -20° and 20° and the roll angle must be equal to or greater than 20°.
In Figure 4.7 (d), when the user inclines his head to the left by an angle of 20° or more, the gesture is recognized as the left turn: the pitch angle must be between -20° and 20° and the roll angle must be equal to or less than -20°.
In Figure 4.7 (e), when the user keeps his head at the center or returns it there, the gesture is recognized as stop: both the pitch and roll angles must be between -20° and 20°.
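Command selection then reduces to comparing the filtered angles with the ±20° thresholds; a direct transcription of these rules as a Python sketch (the handling of ambiguous angle combinations is an assumption) is:

```python
def select_command(pitch, roll, threshold=20.0):
    """Map the filtered head angles (degrees) to a wheelchair command, following the rules above."""
    if abs(roll) < threshold:
        if pitch >= threshold:
            return "FORWARD"
        if pitch <= -threshold:
            return "BACKWARD"
    if abs(pitch) < threshold:
        if roll >= threshold:
            return "RIGHT"
        if roll <= -threshold:
            return "LEFT"
    return "STOP"    # head centered, or an ambiguous combination (defaulting to stop is assumed)
```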
Table 4.1 presents the results of the system of the two experiments.
From the results, we can find out that the proposed system is reliable for controlling the
wheelchair.
4.4. Conclusion
In this chapter, a head gesture recognition system based on IMU sensors was designed to control an intelligent wheelchair. The sensor board, fixed to the front of a cap worn by the user, detects the head orientation and sends the data to the control board, which calculates the corresponding head inclination angles using geometrical rules and sensor fusion with a Kalman filter to build a highly accurate orientation sensor. Experimental results show that the proposed system is reliable for controlling the wheelchair.
CHAPTER 5
SYSTEM IMPLEMENTATION
5.1. Introduction
This chapter presents the architecture of the proposed multimodal IW, the most relevant implementation details, and the results achieved by the implemented prototype.
The outline of this chapter is as follows: first, we present the concept, design and implementation of the platform and the different hardware and software used for the development of our IW. The second part contains the concept of the proposed multimodal interface, a description of its basic input methods, how the multimodal system works, and the different tests applied to the wheelchair platform and their results.
[Main hardware of the electric wheelchair: 24 V battery, 5 V power supply, control box (joystick on/off button, battery charge indicator, control board, power board), and the left and right motors and wheels.]
The transmitter side contains a laptop with a webcam and a microphone, connected via a serial port to an Arduino Uno board with an IMU sensor (MPU6050) and a Bluetooth module (HC-05), whereas the receiver unit consists of a control board and an Arduino Mega board with a second Bluetooth module (HC-06) and a belt of ultrasonic sensors.
[Block diagram of the system: on the transmitter side, the webcam, microphone and laptop are connected to the Arduino with the MPU6050 and the HC-05 Bluetooth transceiver module; the ultrasonic sensors belong to the receiver side.]
Figure 5.5 shows an image of the IW platform. It contains the following components:
5.2.2.1. Laptop
To run the platform software, a laptop equipped with a headset microphone and an HD webcam is used. However, other computers with equivalent or superior performance may be used.
The MPU-6050, shown in Figure 5.6, is the world's first motion tracking device designed for the low-power, low-cost, and high-performance requirements of smartphones, tablets and wearable sensors. The MPU-6050 combines a 3-axis gyroscope and a 3-axis accelerometer on the same silicon die, together with an onboard Digital Motion Processor (DMP) for motion fusion and a peripheral controller.
The main features of the MPU-6050 are [37]:
Programmable accelerometer full-scale range: ±2g, ±4g, ±8g, and ±16g,
Programmable gyroscope full-scale range: ±250, ±500, ±1000, and ±2000 °/s,
Programmable output data rate: up to 1 kHz,
I2C and SPI communication interface,
Internal Digital Motion Processing unit.
The MPU6050 is used to measure the wheelchair patient's head gesture and send data to the microcontroller for processing. The estimated head gesture, obtained using geometry rules and sensor fusion, is used to control the wheelchair movement.
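To illustrate how such angles can be obtained, the sketch below computes a tilt angle from the accelerometer with basic trigonometry and fuses it with the gyroscope rate through a one-dimensional Kalman filter. This is a generic textbook formulation under assumed noise parameters and sign conventions, not the exact filter or tuning used in the thesis.

```cpp
#include <math.h>

// One-axis Kalman filter fusing an accelerometer angle (measurement) with a
// gyroscope rate (prediction). Q_angle, Q_bias and R_measure are assumed,
// typical values, not the thesis tuning.
struct KalmanAngle {
  float angle = 0.0f, bias = 0.0f;
  float P[2][2] = {{0, 0}, {0, 0}};
  float Q_angle = 0.001f, Q_bias = 0.003f, R_measure = 0.03f;

  float update(float accAngle, float gyroRate, float dt) {
    // Predict: integrate the bias-corrected gyro rate.
    angle += dt * (gyroRate - bias);
    P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + Q_angle);
    P[0][1] -= dt * P[1][1];
    P[1][0] -= dt * P[1][1];
    P[1][1] += Q_bias * dt;
    // Correct with the accelerometer angle.
    float S  = P[0][0] + R_measure;
    float K0 = P[0][0] / S, K1 = P[1][0] / S;
    float y  = accAngle - angle;
    angle += K0 * y;
    bias  += K1 * y;
    float P00 = P[0][0], P01 = P[0][1];
    P[0][0] -= K0 * P00;  P[0][1] -= K0 * P01;
    P[1][0] -= K1 * P00;  P[1][1] -= K1 * P01;
    return angle;
  }
};

// Geometry rule: pitch from the accelerometer components, in degrees
// (roll is obtained analogously from ay and az; signs depend on mounting).
float accelPitchDeg(float ax, float ay, float az) {
  return atan2f(ax, sqrtf(ay * ay + az * az)) * 57.29578f;
}
```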
This sensor is compact and Arduino-compatible; it provides precise and stable non-contact distance measurement from about 2 cm to 400 cm with very high accuracy. The module includes an ultrasonic transmitter, a receiver and a control circuit.
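For reference, a distance reading from this kind of trigger/echo ultrasonic module is usually obtained by sending a 10 µs pulse and timing the echo. The Arduino sketch below illustrates the principle only; the pin numbers are example values and do not reflect the actual wiring of the platform.

```cpp
// Illustrative Arduino sketch for an HC-SR04-style trigger/echo module.
// Pin numbers are example values, not the platform's actual wiring.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

long readDistanceCm() {
  // A 10 µs trigger pulse starts one measurement.
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  // The echo pulse width is proportional to distance (about 58 µs per cm).
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL); // 30 ms timeout
  return duration / 58;
}

void loop() {
  Serial.println(readDistanceCm());
  delay(100);
}
```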
The boards are used to collect data from the sensors and send instructions to the wheelchair control board. The developed system is based on two boards (Arduino Mega and Arduino Uno). The Arduino Uno is connected to the computer platform via USB; its role is to send the command to the Arduino Mega via the Bluetooth module. The Arduino Mega is connected to the control board; its role is to receive the commands from the Arduino Uno and send the corresponding instruction to the wheelchair control board.
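The transmitter role of the Arduino Uno can be pictured as a simple relay: a command character received from the laptop over the USB serial link is forwarded to the HC-05 module. The sketch below is only a schematic illustration; the pin choice and the one-byte command protocol are assumptions made for this example.

```cpp
#include <SoftwareSerial.h>

// Illustrative transmitter sketch: relay command characters from the laptop
// (USB serial) to the HC-05 Bluetooth module. Pins 10/11 and the one-byte
// commands ('F','B','L','R','S') are assumptions for this example.
SoftwareSerial bluetooth(10, 11); // RX, TX wired to the HC-05 TX, RX

void setup() {
  Serial.begin(9600);     // link with the laptop
  bluetooth.begin(9600);  // link with the HC-05 (default data baud rate)
}

void loop() {
  if (Serial.available()) {
    char cmd = Serial.read();
    bluetooth.write(cmd);  // forward the command to the receiver side
  }
}
```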
The Arduino Mega 2560 [71] is a microcontroller board based on the ATmega2560. It has 54 digital input/output pins (of which 15 can be used as PWM outputs), 16 analog inputs, and 4 UARTs (hardware serial ports). The Arduino Uno [72] is a microcontroller board based on the ATmega328. It has 14 digital input/output pins (of which 6 can be used as PWM outputs) and 6 analog inputs. Both boards have a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. They contain everything needed to support the microcontroller; simply connect the board to a computer with a USB cable, or power it with an AC-to-DC adapter or battery, to get started.
The HC serial Bluetooth products consist of a Bluetooth serial interface module and a Bluetooth adapter. The Bluetooth serial module is used for converting a serial port to Bluetooth; it does not require a driver and can communicate with other Bluetooth devices. However, communication between two Bluetooth modules requires two conditions: i) the communication must be between a master and a slave, and ii) the password must be correct. In this project we used two Bluetooth modules: the first one is the HC-05, configured as master [28], and the second is the HC-06, used as slave [23].
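Pairing is normally done once by putting the HC-05 into AT mode and setting its role and password so that they match the HC-06. The sketch below only illustrates the idea; the exact AT command syntax and baud rate depend on the module firmware, and the slave address shown is a placeholder, not a real HC-06 address.

```cpp
#include <SoftwareSerial.h>

// Illustrative one-time configuration of an HC-05 as master (AT mode).
// Command syntax varies with firmware; the bind address is a placeholder.
SoftwareSerial hc05(10, 11); // RX, TX (module in AT mode, often 38400 baud)

void setup() {
  hc05.begin(38400);
  hc05.println("AT+ROLE=1");              // act as master
  hc05.println("AT+PSWD=1234");           // pairing password, must match the HC-06
  hc05.println("AT+CMODE=0");             // connect only to a fixed address
  hc05.println("AT+BIND=0000,00,000000"); // placeholder slave address
}

void loop() {}
```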
The receiver-side Arduino emulates joystick signals according to the command instructions coming from the transmitter part; if the joystick is inactive, one of the other modes is used.
a. Working principles of the joystick
Figure 5.10: The joystick and the voltages generated along its axes.
The voltages generated by the axis movements are used to control the wheelchair movement via the VR2 controller. If the joystick moves along the X axis, the wheelchair turns right or left; if it moves along the Y axis, the wheelchair goes forward or backward. Table 5.1 shows the joystick voltage ranges.
Table 5.1: Joystick output voltage ranges.
Command       Output 1          Output 2
Stop          2.5 V             2.5 V
Forward       2.5 V             2.5 V ~ 3.9 V
Backward      2.5 V             1.1 V ~ 2.5 V
Turn right    2.5 V ~ 3.9 V     2.5 V
Turn left     1.1 V ~ 2.5 V     2.5 V
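One way to reproduce the voltage levels of Table 5.1 from a microcontroller is to generate two analog outputs, for instance by PWM smoothed with an RC filter or by a small DAC. The sketch below is only a conceptual illustration of the mapping from commands to the two output levels; the actual signal-generation hardware of the platform is not assumed here, the extreme values of each range are used for simplicity, and the duty cycles are scaled for a 5 V supply.

```cpp
// Conceptual mapping from a command to the two joystick-like voltages of
// Table 5.1, generated here as PWM (assumed to be smoothed by an RC filter).
// Pin numbers and the use of PWM are assumptions for this example.
const int OUT1_PIN = 5;  // X axis (left/right)
const int OUT2_PIN = 6;  // Y axis (forward/backward)

int toPwm(float volts) {            // scale a target voltage to an 8-bit duty cycle
  return (int)(volts / 5.0f * 255.0f + 0.5f);
}

void emulateJoystick(char cmd) {
  float out1 = 2.5f, out2 = 2.5f;   // neutral (stop) position
  switch (cmd) {
    case 'F': out2 = 3.9f; break;   // forward
    case 'B': out2 = 1.1f; break;   // backward
    case 'R': out1 = 3.9f; break;   // turn right
    case 'L': out1 = 1.1f; break;   // turn left
    default:  break;                // 'S' or unknown -> stop
  }
  analogWrite(OUT1_PIN, toPwm(out1));
  analogWrite(OUT2_PIN, toPwm(out2));
}

void setup() {
  pinMode(OUT1_PIN, OUTPUT);
  pinMode(OUT2_PIN, OUTPUT);
  emulateJoystick('S');             // start in the stop position
}

void loop() {}
```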
Figure 5.11 shows the continuous voltages delivered by the joystick when it is pushed forward, returned to the center, and then pulled backward.
Figure 5.11: Voltages delivered by the joystick: (a) forward, (b) center, (c) backward.
Figure 5.13: Voltages delivered by the control board: (a) stop, (b) backward, (c) forward.
[Flowchart of the obstacle-detection routine: if an obstacle is detected, the wheelchair is stopped; otherwise the current movement continues.]
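In code, this safety check reduces to comparing each ultrasonic reading with a threshold before any command is forwarded. The snippet below is a hedged sketch of that idea, reusing the helpers from the earlier example sketches; the 50 cm threshold is an assumption for illustration, not a value taken from the thesis.

```cpp
// Sketch of the obstacle check in the flowchart above. The threshold is an
// assumption; the helpers come from the earlier example sketches.
long readDistanceCm();        // from the ultrasonic example above
void emulateJoystick(char);   // from the joystick emulation example above

const long OBSTACLE_THRESHOLD_CM = 50;   // assumed safety distance

bool obstacleDetected() {
  long d = readDistanceCm();                   // one sensor shown here; a belt of
  return (d > 0 && d < OBSTACLE_THRESHOLD_CM); // sensors would be polled in turn
}

void applyCommand(char cmd) {
  if (obstacleDetected()) {
    emulateJoystick('S');   // stop the wheelchair
  } else {
    emulateJoystick(cmd);   // continue with the requested movement
  }
}
```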
The user can switch from the first mode to the head gesture mode, which uses the IMU sensor, or activate the speech recognition mode (and stop the vision mode) by saying "demarrer le système" (start the system). In order to avoid any conflict in the multimodal control of the wheelchair, the joystick has a higher priority than the other modes so that it can be used in an emergency.
Once the control mode is selected, the controlling command is determined by the joystick, mouth gesture, head motion, or speech. Then, according to these commands, the microcontroller generates signals emulating the joystick to control the power circuits of the wheelchair. Figure 5.15 shows the structure of the multimodal control system.
[Figure 5.15: flowchart of the multimodal control system. For example, if the user says "demarrer le système", the speech control mode is selected; once a command is recognized, the corresponding instruction is sent to the control box; otherwise the system stops controlling.]
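The arbitration just described (joystick first, then the currently selected hands-free mode) can be sketched as follows. The mode and command types and the input-reading functions are placeholders invented for this example and stand in for the real recognition modules, so this is not the thesis code.

```cpp
// Sketch of the mode arbitration: the joystick always has priority; otherwise
// the command comes from the currently selected hands-free mode.
// All names below are placeholders for the example, not the thesis code.
enum Mode { MANUAL, MOUTH_GESTURE, HEAD_MOTION, SPEECH };

// Stubs standing in for the real input readers of each modality.
bool joystickActive()          { return false; }
char readJoystickCommand()     { return 'S'; }
char readMouthGestureCommand() { return 'S'; }
char readHeadMotionCommand()   { return 'S'; }
char readSpeechCommand()       { return 'S'; }

char arbitrate(Mode selectedMode) {
  if (joystickActive()) return readJoystickCommand();   // emergency priority
  switch (selectedMode) {
    case MOUTH_GESTURE: return readMouthGestureCommand();
    case HEAD_MOTION:   return readHeadMotionCommand();
    case SPEECH:        return readSpeechCommand();
    default:            return 'S';                      // stop if nothing valid
  }
}
```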
The experiment consists of two steps. In the first step, two healthy students were asked to learn how the system works by first selecting one of the three proposed control modes, three times for each mode. Each selection was followed directly by repeating the five commands (forward, backward, left, right and stop) ten times; thus, 150 commands were tested for each mode.
In the second step of the experiment, they performed the four control modes of the wheelchair (mouth gesture, head motion, speech and manual mode) in a laboratory environment.
In order to apply the five commands, we chose to follow the route presented in Figure 5.17. As can be seen, the width of the wheelchair is about 60 cm, the distance between desks is 2 m, and the total distance is about 9 m.
Figure 5.21, Figure 5.22 and Figure 5.23 present sequences of images of our IW navigating using the mouth gesture mode, head motion mode, and speech mode, respectively.
Figure 5.18: Time in seconds for user 1 following the route two times using the four different control modes.
Figure 5.19: Time in seconds for user 2 following the route two times using the four different control modes.
[Figure 5.21: image sequence (frames 1-6) of the mouth gesture mode experiment.]
[Figure 5.22: image sequence (frames 1-6) of the head motion mode experiment.]
Figure 5.23: Experiments of speech mode (image sequence, frames 1-6, annotated with the spoken commands "Avance" (forward), "Arrête" (stop), "Droite" (right) and "Gauche" (left)).
As can be observed in Table 5.2 and Table 5.3, most of the errors that appear in the first part of the experiment for the mouth gesture control mode decrease from test 1 to test 2. Likewise, the errors that appear in the second part of the experiment for the mouth gesture and speech control modes decrease in test 2. The errors of test 1 are mainly explained by the fact that the user had not yet been well trained in how to perform the commands.
Figure 5.18 shows the time taken by user 1 to travel the route two times per control mode. As can be seen, the fastest control mode was the manual mode, which took 67 seconds and 65 seconds to finish the route. The second fastest control mode was head gesture, slightly slower than the manual mode with similar times of 73 seconds and 70 seconds. The third fastest control mode was speech control, which finished the route in 86 seconds and 82 seconds. Finally, the mouth gesture control mode was the slowest; it took more than 90 seconds to finish the route in both tests.
Figure 5.19 shows the times achieved by user 2 to travel the route two times per control mode. As in the case of user 1, the manual mode is the fastest, followed by the head gesture, speech and mouth gesture control modes, respectively.
The slowness of the mouth gesture and speech control modes is explained as follows: some mouth gesture commands take a long time to issue; for example, to execute the backward command, the user must keep his mouth open for more than 4 seconds. The small delay in executing speech commands is caused by the pronunciation time of each command.
5.4. Conclusion
This chapter was divided into two parts. In the first part, the concept, design, and implementation of the platform, as well as the different hardware and software used for the development of our IW, were presented. The second part was dedicated to describing how the multimodal system works and to presenting the experiments that were made during the implementation stage of the IW system. The results were very satisfactory. However, a number of improvements still need to be made to reach the desired level of quality.
CONCLUSION
The goal of this thesis was to design and develop an IW with a multimodal interface that combines several possible input methods and uses the necessary electronics to ensure safe mobility and ease of operation for different kinds of disabled persons. In order to achieve this goal, we divided our work into four main steps.
In the first step, a study of some IW projects and their requirements was carried out to understand their concept. Then, the most popular modes of HMI were reviewed to identify the existing input devices that can be used in IW systems and the recognition methods suitable for any kind of disabled person; this step is summarized in the first chapter.
In the second step, several HMIs were applied to the electric wheelchair and presented in three different chapters according to their recognition methods.
Chapter 2 presents an HMI based on speech recognition implemented in two different ways: the first uses Microsoft Windows Speech Recognition, while the other uses an Android smartphone application based on Google Speech-to-Text to recognize and process human speech. In both interfaces, the system was tested many times by two persons in silent and noisy environments. Experimental results show that the system accuracies are quite satisfactory in both environments.
In Chapter 3, two vision-based interface systems are presented. The first interface uses head gestures as the input method to control the movement of the wheelchair. First, the head is detected using a Haar cascade. Once the initial tracking window is determined, the new head location is tracked using the CamShift algorithm. Then, by checking the location of the center of the rectangle containing the user's head against reference rectangles, the corresponding head gesture command is determined. The second interface is based on mouth gesture recognition. This system consists of two main parts: the first part is mouth detection, which uses a Haar cascade to find the area of the face and template matching to detect the mouth gesture from the lower face area; the second part is command extraction, where the controlling command is determined according to the detected gesture and the corresponding reference template. In both interfaces, the system was tested many times by two persons. The experimental results show that both interfaces can achieve the purpose of controlling the intelligent wheelchair.
In Chapter 4, a sensor-based HMI system for hands-free control of an IW is presented. The developed system relies on the patient's head gestures. The head gesture is detected using accelerometer and gyroscope sensors embedded on a single board; both IMU sensor outputs are combined using a Kalman filter for sensor fusion to build a highly accurate orientation sensor.
In the next step, the characteristics of our wheelchair and the hardware chosen to implement the IW platform were identified and summarized in Chapter 5, followed by a presentation of the concept of the proposed multimodal interface, a description of its basic input methods and how the multimodal system works, as well as the different tests applied to the wheelchair platform and their results.
The experimental tests show the success of the proposed multimodal control system. However, in order to take full advantage of the system, a training session is advised.
Despite having a good system specification and a functional prototype that implements most of the proposed input methods, there are still further steps that should be followed to obtain a more complete system for assisting people with different levels of disabilities:
Develop a robust voice-based HMI that could be embedded in our IW system.
Extend the list of input methods embedded in the multimodal interface.
Enhance the obstacle detection and avoidance algorithm.
Study the recent developments in brain-computer interfaces; although this field is still under research, it is foreseen that one day it might break any physical disability barrier.
REFERENCES
[25] D. Exner, E. Bruns, D. Kurz, A. Grundhofer, and O. Bimber, "Fast and robust
CAMShift tracking", IEEE Computer Society Conference on Computer Vision and
Pattern Recognition Workshops (CVPRW), pp.9-16, 2010.
[26] A. Freeman, Introducing Visual C# 2010, Apress, Berkeley, CA, 2010.
[27] G. Facciolo, N. Limare, E. Meinhardt, “Integral Images for Block Matching”, IPOL Journal, vol. 4, pp. 344–369, 2014.
[28] Gotronic, HC-05 Bluetooth module, available at: https://www.gotronic.fr/pj2-guide-de-mise-en-marche-du-module-bluetooth-hc-1546.pdf, accessed 20 June 2018.
[29] Gotronic, HC-SR04 ultrasonic sensor, available at: https://www.gotronic.fr/pj2-hc-sr04-utilisation-avec-picaxe-1343.pdf, accessed 20 June 2018.
[30] R. Grasse, “Aide à la navigation pour les personnes handicapées : reconnaissance de trajets”, PhD thesis, Lorraine University, France, 2007.
[31] A. Gupta, N. Joshi, N. Chaturvedi, S. Sharma, V. Pandar, “Wheelchair Control by Head Motion Using Accelerometer”, International Journal of Electrical and Electronics Research, vol. 4, no. 1, pp. 158–161, 2016.
[38] P. Jia, H. Hu, T. Lu, and K. Yuan, “Head Gesture Recognition for Hands-free Control of an Intelligent Wheelchair”, Industrial Robot: An International Journal, vol. 34, no. 1, pp. 60–68, 2007.
[39] S. Jia, J. Yan, J. Fan, X. Li, L. Gao, “Multimodal Intelligent Wheelchair Control
Based on Fuzzy Algorithm”, Proceeding of the IEEE International Conference on
Information and Automation, Shenyang, China, June 2012
[40] S. Ju, Y. Shin, and Y. Kim. “Intelligent wheelchair (IW) interface using face and
mouth recognition”. In Proceedings of the 13th international conference on
Intelligent user interfaces, pp.307–314, 2009.
[41] K. Kannan, J. Selvakumar, “Arduino Based Voice Controlled Robot”, International Research Journal of Engineering and Technology, vol. 2, no. 1, ISSN 2395-0072, 2015.
[42] G. Krishnamurthy, M. Ghovanloo, “Tongue drive: A tongue operated magnetic
sensor based wireless assistive technology for people with severe disabilities”. In:
Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS),
Island of Kos, Greece, pp. 5551–5554,2006.
[43] M. Krishnan, M. Mariappan,” EEG-Based Brain-Machine Interface (BMI) for
Controlling Mobile Robots: The Trend of Prior Studies”, IJCSEE, vol. 3, no.2,
pp.159-165, 2015.
[44] A. Lankenau and T. Röfer, “Mobile robot self-localization in large-scale environments”, in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1359–1364, 2002.
[45] S.P. Levine, D.A. Bell, L.A. Jaros, R.C. Simpson, Y. Koren, and J. Borenstein.
“The navchair assistive wheelchair navigation system”. IEEE Transactions on
Rehabilitation Engineering, vol.7, pp.452–463, 1999.
[46] G. Li, Y. He, Y. Wei, S. Zhu, Y. Cao, “The MEMS gyro stabilized platform design based on Kalman Filter”, International Conference on Optoelectronics and Microelectronics (ICOM), 2013.
[47] V. R. Madarasz, L. C. Heiny, R. F. Cromp, and N.M. Mazur. “The design of an
autonomous wheelchair for the disabled”. IEEE Robotics and Automation, vol.2,
no.3, pp.117–126, 1986.
[48] T. Mahalakshmi, P. Swaminathan and R. Muthaiah, “An Overview of Template Matching Technique in Image Processing”, Research Journal of Applied Sciences, Engineering and Technology, vol. 4, no. 24, pp. 5469–5473, 2012.