Abstract
Recent research has shown that brain-controlled systems and devices are a breakthrough assistive technology. Such devices can give disabled people the ability to control the movement of a wheelchair using different signals (e.g. EEG signals, head movements, and facial expressions). With this technology, disabled people can remotely steer a wheelchair, a computer, or a tablet. This paper introduces a simple, low-cost human-machine interface system that helps chaired people control their wheelchair using several control sources. To achieve this aim, a laptop was installed on a wheelchair in front of the sitting person, and the 14-electrode Emotiv EPOC headset was used to collect the person’s head signals from the skull surface. The superficially picked-up signals, containing brain thoughts, head gestures, and facial expressions, were electrically encoded and then wirelessly sent to a personal computer, where they were interpreted and translated into useful control instructions. Using these signals, two wheelchair control modes were proposed: automatic (using single-modal and multimodal approaches) and manual. The automatic mode was realized with a software controller (implemented on the PC and the Arduino microcontroller), whereas a simple hardware controller was used for the manual mode. The proposed solution was built from a wheelchair, the Emotiv EPOC EEG headset, an Arduino microcontroller, and the Processing language. It was then tested by chaired volunteers on trajectories with different difficulty levels. The results showed that a person’s thoughts can be used to seamlessly control his/her wheelchair and that the proposed system can be configured to suit many levels and degrees of disability.
1 Introduction
People can be injured unexpectedly in many events, including falls, accidents, violence, or even sports practice. These events may lead to devastating neuromuscular disorders, causing severe disabilities due to spinal cord injuries. Because of the loss of nerve supply, the affected parts of the body become paralyzed, leaving many victims wheelchair-bound (i.e. chaired people) [18], [19]. Thus, there is an emerging need to help these people regain the ability to carry out their daily life routines by themselves, without others’ assistance. The ability to move is considered of highest priority for chaired people, and this ability can be provided by an electric-powered wheelchair (EPW).
The wheelchair can be controlled either manually (using joysticks, a mouse, or a keypad) or automatically (using head movements, eye tracking, or the tongue). Most manual designs require the seated person to be able to precisely control the wheelchair movements by hand or fingers [11], [15], [17]. Many disabled persons cannot fulfill the requirements of traditional joystick control, for instance when the nerve supply from the brain to the motor muscles and limbs is absent [16]. For example, in the case of a spinal cord injury in the neck region, there is no nerve supply from the brain down to any of the limbs. In such cases, traditional joysticks cannot be used, and signals generated automatically from the victim’s head itself must be exploited to provide self-mobility [1], [40].
Automatic models rely on a source of information that the disabled person can generate and repeat, for example, head gestures [22], [56], tongue motion [26], and eye iris position tracking and detection [55]. Gajwani and Chhabria implemented an eye tracking system to control the wheelchair movement using a USB camera mounted in front of the sitting user’s eye [20]. Their model tracks the eye movements toward the left or right direction to control the wheelchair; in addition, the eye blinking feature was used to start and stop the wheelchair. In another study [35], a video charge-coupled device (CCD) camera and a frame grabber were used to analyze a series of human pupil images and track the user’s eye movements to control the wheelchair. The eye movement was detected by recording the standing cornea-retina potential that results from hyperpolarizations and depolarizations between the cornea and the retina. Jia et al. used a camera to track the nose on the user’s face as a measure of the head position and orientation (i.e. head movement) to control the wheelchair movement [22]. Bergasa et al. proposed a model that uses head movements to control the wheelchair for handicapped people with severe disabilities [7]. In another study, Bergasa et al. used a 2D color face tracker and a fuzzy detector to detect the face movements of the user and then generate commands to drive the wheelchair [8]. Kuno et al. proposed a robotic wheelchair that uses the face direction to convey the user’s attention [34]; their system moves the wheelchair in the direction the user is facing, which is a natural action. Ju et al. used the face inclination and mouth shape information to determine the direction of the wheelchair [30]. Facial expressions were also used to generate commands to control the movements of the wheelchair: Faria et al. captured the facial expressions using a digital camera, and the collected images were preprocessed and interpreted by an application running on a laptop computer on the wheelchair [14]. Recently, Bastos-Filho et al. used eye blinks, eye movements, and head movements to generate commands, which are sent to a robotic wheelchair [5]. However, the performance of these camera-based systems is likely to be affected by environmental factors, such as illumination, brightness, and the camera position. Furthermore, eye tracking may lead to user fatigue and may affect the vision of the user in the long run [20], [42]. Also, Njah and Jallouli proposed a wheelchair control method that provides automatic indoor navigation [39]. This method uses two fuzzy controllers to ensure obstacle avoidance and an extended Kalman filter to achieve precise measurements.
Electromyography (EMG) signals from certain body movements (e.g. of the shoulder) can also be used to automatically control a wheelchair [29]. In Ref. [23], four EMG electrodes were attached to the sternocleidomastoid muscle, and the sensed signals were used to detect shoulder movements. In that work, the wheelchair moves forward when both shoulders move up, and it turns right or left when the right or left shoulder moves up, respectively. Also, in Ref. [38], only two superficial EMG electrodes were used to detect shoulder movements and, in turn, control a wheelchair. In addition, Xu et al. suggested a system to control a wheelchair using EMG signals generated by facial movements [57]. Nonetheless, the facial and shoulder muscles used in Refs. [23], [38], [57] are weak and cannot be used for a long period [26].
Electroencephalography (EEG) signals were also used to control a wheelchair [47], [54]. Rebsamen et al. proposed a brain-controlled wheelchair that can navigate inside a typical hospital or office environment [41]. In another study, Lin et al. proposed a system that uses a simple unipolar electrode to collect EEG signals and eye blinking to build a brain-computer interface (BCI) electric wheelchair [36]. A novel paradigm was proposed by Huang et al. [25], based on the multiclass discrimination of the spatiotemporally distinguishable phenomenon of event-related desynchronization/synchronization (ERD/ERS) in EEG signals generated by right/left hand movements. However, due to their low signal-to-noise ratio (SNR), the EEG signals, which are generated by the synchronous activity of millions of cortical neurons, are difficult to decode accurately [25].
Automatic wheelchair control can also be achieved by exploiting the tongue and a set of Hall-effect sensors. Huo and Ghovanloo tried to alleviate the suffering of disabled people using a tongue drive system [26]. With a small magnetic piercing installed at the tongue tip, the chaired person can control the switching state of four to six magnetic sensors arranged externally on both sides of the face. Moreover, as the nerve supply of the tongue is directly connected to the brain and does not pass through the spinal cord, the proposed tongue control system has the main advantage of remaining completely voluntary even for severe quadriplegic disability cases. However, as the user must receive a tongue piercing embedded with the magnetic tracer, this technique is considered somewhat invasive for long-term usage [13], [27], [31], [48].
Data/information fusion is the process of combining multiple data and knowledge sources representing the same real-world object into an accurate, consistent, and useful representation. The aim of data fusion is to combine relevant information from two or more data sources into a single representation that describes the object more precisely than any of the individual sources [33], [51]. This technique has been used in many machine learning applications such as classifier fusion [51], feature fusion [44], rank fusion/aggregation [46], and score fusion [50]. Moreover, data fusion has been used to combine two or more biometrics, which is called multimodal biometrics, such as face+fingerprint [43], hand vein+hand geometry+fingerprint [45], or ear+finger knuckle [49]. Multimodal biometrics is used (1) to achieve performance that may not be possible with a single biometric, (2) to mitigate the problem of noisy sensor data, and (3) to address the nonuniversality of a single biometric trait [43]. In addition, head pose and eye location have been combined for gaze estimation [53].
In this paper, both mild and severe disability scenarios for chaired people were considered. These people may or may not be able to (1) manually control joysticks, (2) move their shoulders and bodies for a long time, (3) use models that degrade with background speaker noise, such as voice recognition-based models, (4) use models that are affected by the involuntary component of eye movements and blinking, as in eye tracking systems, or (5) use models that are affected by environmental factors such as illumination, brightness, and the camera position. At the same time, they need a relatively accurate, controllable wheelchair. Hence, alternative criteria such as EEG signals and facial expressions must be used to meet their needs.
In this work, a control model was proposed to allow chaired people to seamlessly control their wheelchairs. This model has two control modes: automatic and manual. In the automatic mode, the wheelchair is controlled using automatically generated signals, namely facial expressions, head movements, and superficial EEG signals. The wheelchair was steered by a software controlling module via one of the acquired signals or a combination of them (i.e. single-modal or multimodal control). In the manual mode, the chair was controlled using a simple hardware controller with a keypad built from four push-button switches.
The rest of the paper is organized as follows. Section 2 introduces some theoretical background for the proposed model. Section 3 presents the realization of the proposed prototype, including a brief description of the proposed model (i.e. control of a battery-powered wheelchair) and of the two wheelchair control modes, automatic and manual. The experimental results, evaluation, and discussion are given in Section 4 to show the feasibility and performance of the two control modes of the proposed human-machine interface (HMI) system in real operating settings. Finally, Section 5 concludes the paper.
2 Preliminary
This section gives an overview of the devices, tools, and techniques used in the proposed model.
2.1 Emotiv EPOC EEG Headset
EEG is an electrophysiological monitoring method for detecting and recording the electrical activity of the brain. This activity appears as voltage fluctuations resulting from ionic currents within the neurons of the brain. The Emotiv EPOC EEG headset was used to collect the EEG signals. This headset consists of 14 electrodes located on the surface of the scalp (see Figure 1). The EPOC is produced by Emotiv®, a company specializing in BCI systems and their associated software [52]. The EPOC has three main software modules: (1) the Cognitive suite, which detects trained thoughts; (2) the Expressive suite, for facial expression recognition; and (3) the Affective suite, which interprets emotions [25], [52]. Moreover, the EPOC headset encapsulates a gyroscope to detect head movements and orientation. The main advantages of the EPOC sensor are (1) the high reliability of its output signals, (2) its light weight, (3) its ease of installation, (4) its ability to transmit wirelessly to any personal computer (PC) via Bluetooth connectivity, and (5) its long-life rechargeable battery. In this research, the EPOC was used to read, analyze, and process the user’s brainwaves, which were then used to control the wheelchair through the Arduino microcontroller.

Emotiv EPOC Headset that was used to Control the Intelligent Wheelchair Setup for Assisting the Locked-In Subjects.
The Emotiv EPOC headset has been used in many systems to implement smart human-computer interaction [10], [21], [42]. For example, Gomez-Gil et al. used the EPOC headset to recognize four types of muscular events: (1) eyes looking to the right while the jaw is opened, (2) eyes looking to the right while the jaw is closed, (3) eyes looking to the left while the jaw is opened, and (4) eyes looking to the left while the jaw is closed; these muscular events were used to steer an agricultural tractor [21]. In Refs. [9], [10], the EPOC was used to develop a system called Virtual Move to navigate through Google Street View (GSV), exploiting the EPOC’s capabilities to detect head movements, facial expressions, thoughts, and emotional states. Using two control modes, one with a single head movement and the other with four head movements, Rechy-Ramirez et al. implemented an HMI for hands-free control of an EPW [42].
In our model, the EPOC was used to collect facial expressions, head movements, and EEG signals. The signals were sent wirelessly to the PC, and the Emotiv EPOC Software Development Kit (SDK) was used to interpret the received signals so that suitable actions could be taken.
2.2 Arduino Microcontroller
The Arduino is an open-source microcontroller platform based on easy-to-use hardware and software [4], [12]. Arduino boards are designed and manufactured by several vendors and have serial communication interfaces, including USB on some models, for loading programs from a PC.
In our model, the Arduino serves as a bridge between the PC and the interfacing circuit, which connects the Arduino’s digital pins to the geared DC motors that drive the wheelchair. When the Arduino receives a command/instruction from the computer, it drives the circuit to operate the motors and hence control the wheelchair. These commands change according to the Emotiv input, as discussed earlier.
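A minimal Arduino sketch of this bridging role is shown below. It is illustrative only: the pin assignment (one driver/relay channel per motor and rotation direction) and the single-character command alphabet ('F', 'B', 'L', 'R', stop otherwise) are assumptions for this sketch, not the exact values used in the prototype.

```cpp
// Illustrative Arduino sketch: receive single-character commands from the PC
// over the USB serial link and drive the two motor channels accordingly.
// Pin numbers and the command alphabet are assumptions, not the authors' exact values.

const int LEFT_FWD  = 2;   // relay/driver input: left motor, forward rotation
const int LEFT_REV  = 3;   // relay/driver input: left motor, reverse rotation
const int RIGHT_FWD = 4;   // relay/driver input: right motor, forward rotation
const int RIGHT_REV = 5;   // relay/driver input: right motor, reverse rotation

void allStop() {
  digitalWrite(LEFT_FWD, LOW);
  digitalWrite(LEFT_REV, LOW);
  digitalWrite(RIGHT_FWD, LOW);
  digitalWrite(RIGHT_REV, LOW);
}

void setup() {
  pinMode(LEFT_FWD, OUTPUT);
  pinMode(LEFT_REV, OUTPUT);
  pinMode(RIGHT_FWD, OUTPUT);
  pinMode(RIGHT_REV, OUTPUT);
  allStop();
  Serial.begin(9600);      // must match the baud rate used by the PC-side software
}

void loop() {
  if (Serial.available() > 0) {
    char cmd = Serial.read();
    allStop();             // never energize opposing directions at the same time
    switch (cmd) {
      case 'F':            // forward: both wheels forward
        digitalWrite(LEFT_FWD, HIGH);
        digitalWrite(RIGHT_FWD, HIGH);
        break;
      case 'B':            // backward: both wheels backward
        digitalWrite(LEFT_REV, HIGH);
        digitalWrite(RIGHT_REV, HIGH);
        break;
      case 'L':            // turn left: right wheel forward, left wheel backward
        digitalWrite(RIGHT_FWD, HIGH);
        digitalWrite(LEFT_REV, HIGH);
        break;
      case 'R':            // turn right: left wheel forward, right wheel backward
        digitalWrite(LEFT_FWD, HIGH);
        digitalWrite(RIGHT_REV, HIGH);
        break;
      default:             // stop command or any unknown byte: keep motors off
        break;
    }
  }
}
```

The differential mapping in the 'L' and 'R' cases mirrors the turning behavior described in the experimental setup, where one wheel is driven forward while the other is driven backward.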
Software is needed to read and interpret the instructions and commands sent from the computer to the Arduino through the USB serial interface, and two options were considered. The first is Arduino-OSC communication through an OSC library called ArdOSC. OSC (Open Sound Control) is a network protocol developed at the Center for New Music and Audio Technologies (CNMAT) at the University of California, Berkeley. It is one of the most common protocols for communication among computers, sound synthesizers, and other multimedia devices; it is optimized for modern networking technology and has been used in many application areas [59]. The “Mind your OSCs” application sends the data collected by the EPOC headset to any program that can receive and read OSC messages [52]. However, ArdOSC requires adding the Arduino Ethernet Shield hardware module to the system, which may be difficult to deal with. The second option, adopted here, is to use the Processing software to receive the OSC messages [6]. Processing is an easy-to-use, open-source programming environment and language used by novices, artists, and designers [6]; programs written in Processing are called sketches. Processing was chosen over the ArdOSC library for the following reasons:
The oscP5 library is a free, open-source, and reliable OSC library for Processing that allows OSC data to be sent and received by Processing sketches [59].
Processing minimizes the power consumption, cost, and generated noise because no Arduino Ethernet Shield module needs to be added.
3 Proposed System
This section describes the proposed model in detail. Generally, the main goal of an electric wheelchair controller is to interpret and process the driving commands and adapt the motor power accordingly; these commands flow among the user, the PC, and the digital controller. The proposed model aims to control the wheelchair either manually or automatically, as shown in Figures 2 and 3. In the automatic mode, the standard wheelchair was controlled using either a single-modal or a multimodal approach. In the single-modal approach, the wheelchair was controlled using the electrical representation, picked up from the head, of facial expressions, head movements, or EEG signals. In the multimodal approach, two or more of these signals were fused to control the wheelchair. In the manual mode, the wheelchair was controlled using a manual keypad, as shown in Figures 2 and 3. In our model, the friction effect was neglected.

Block Diagram of the Proposed Wheelchair Control System.

Detailed Flowchart Diagram of the Proposed Wheelchair Control System.
Generally, the proposed brain-controlled wheelchair (BCW) system uses brain thoughts to control the wheelchair. When the user produces a certain thought (push, pull, right, or left), the wheelchair moves in one of four predefined directions (forward, backward, turning right, or turning left, respectively). This control was implemented using a program installed on the Arduino microcontroller. Figure 4 shows the design of the wheelchair, whereas Figure 3 illustrates how the chaired person can control the wheelchair automatically or manually.

Modifications of the Standard Wheelchair to Suit the Proposed Brain-Controlled Battery-Powered Wheelchair Control Criteria.
3.1 Wheelchair Design
To control an EPW, a number of mandatory modifications to the design of the standard wheelchair were implemented. As shown in Figure 4, a special wooden shelf was added to hold the digital controller, switching control circuits, motors, gears, and high-capacity batteries with their charging circuitry. A dedicated high-torque DC motor was attached to each of the back wheels, together with two 12 V batteries and an interfacing circuit between the motors and the Arduino microcontroller.
The DC motors pose two main problems. First, they sink a level of current that normally cannot be supplied directly from the output pin of a microcontroller chip or an Arduino; therefore, some form of driving and current boosting is needed before they can be controlled. Second, DC motors are a strong source of interference that can make the rest of the electronic devices misbehave. This problem can be solved by optically isolating the motor power supply.
In the proposed model, the two problems were solved using the simple circuit shown in Figure 5. The controller output pin was connected to the anode terminal of an LED (D1) placed next to a phototransistor (Q1). D1 and Q1 form an optical interface that provides a high level of isolation and hence solves the second problem. In addition, the motor sits between the relay contact and the 24 V power supply, allowing the DC motor to sink its rated current and thus solving the first problem. Table 1 lists the specifications of the components used in the proposed model.

Design of the Circuit that was Used to Drive the Motors.
Components’ Specifications for the Proposed Wheelchair System.
| Component | Specifications |
|---|---|
| Head signals capturing device | Emotiv EPOC headset |
| | Bluetooth wireless transmission to PC |
| | USB wireless receiver dongle |
| | Hydrator with 16 saline sensors |
| | Number of channels: 14 (plus CMS/DRL references) |
| | Sampling method: sequential sampling, single ADC |
| | Sampling rate: 128 Hz (2048 Hz internal) |
| | ADC resolution: 14-bit, 1 LSB = 0.51 μV |
| | Bandwidth: 0.2–45 Hz |
| | Digital notch filters (50/60 Hz) |
| | Dynamic range (input referred): 256 mV (Vpp) |
| | Connectivity: proprietary wireless, 2.4 GHz band |
| | Battery type: Li-poly, 12 h lifetime |
| PC (laptop) | Processor: Intel Core i7 3630Q |
| | Main memory: 8 GB |
| | Video graphics: NVIDIA GeForce GTX675MX |
| | Operating system: Microsoft Windows 8.1 |
| Digital controller | Arduino UNO board |
| | Microcontroller board based on the ATmega328 |
| | 14 digital input/output pins |
| | 6 GPIO pins can be used as PWM outputs |
| | 6 analog inputs |
| | 16 MHz crystal oscillator |
| | USB bidirectional connection |
| | In-system programming (ISP) header |
| Manual steering controller | Low-cost keypad |
| | 4 ON/OFF switches |
| | Switch labels: forward, backward, right, and left |
| Power interface circuit | Optical isolation: |
| | Isolation test voltage: 5000 VRMS |
| | Interfaces with common logic families |
| | Input-output coupling capacitance: <0.5 pF |
| | Current boosting: |
| | Darlington pair power transistors |
| | Current rating: 10 A |
| | Maximum voltage: 80 V |
| | Power: 150 W |
| Electric wheelchair DC motors | Rated voltage: 24 VDC |
| | Rated speed: 4300 rpm |
| | No-load speed: 5000 rpm |
| | No-load current: 1.2 A |
| | Rated current: 14 A |
| | Rated torque: 550 mNm |
| | Rated power: 247.6 W |
| | Maximum efficiency: 84% |
| | Torque constant: 46 mNm/A |
| | Speed constant: 208 rpm/V |
| | Rotor inertia: 1000 |
3.2 Automatic BCW (ABCW)
The aim of the ABCW is to automatically control the movement of the wheelchair using different signals. As shown in Figure 3, the data flow of the proposed system starts by collecting the skull surface signals with the EPOC headset, which captures three different modalities: head movements, facial expressions, and EEG signals. The collected signals are transferred from the EPOC to the PC through a Universal Serial Bus (USB) wireless transmitter/receiver channel, and the commands are then interpreted on the PC.
3.2.1 Command Interpreter
Driving commands are usually issued by the acquisition facilities of the wheelchair, such as joysticks, voice recognizers, or head movement sensors [58]. These commands fall into two categories: jump type and step type. A jump-type command loads a fixed speed, such as the back, go, and stop commands. A step-type command changes the speed and direction of the wheelchair, such as the move left, right, forward, and backward commands. In this study, the command interpreter receives the signals sent from the EPOC, and the Emotiv SDK processes the data stream received through the USB inlet and converts it into accessible packets of data that can be processed and interpreted by generic software packages. Based on the interpretation of the instantly received data, the computer issues a predefined control instruction and sends a command message to another USB outlet, which transfers the command to the Arduino microcontroller; the Arduino was programmed to handle these commands and to move the wheelchair forward, backward, left, or right automatically. The controller drives two high-torque, geared, optically isolated, battery-powered DC motors, which can efficiently move and steer a loaded wheelchair. Figure 1 shows the setup of the Emotiv EPOC headset used to control the intelligent wheelchair for assisting locked-in subjects.
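The interpretation step can be sketched as a mapping from detected head events to the single-character commands consumed by the microcontroller. The sketch below is a simplified illustration: the event names, the confidence threshold, and the sendToArduino() transport function are placeholders standing in for the Emotiv SDK event stream and the USB serial write used in the actual system.

```cpp
// Illustrative command interpreter (PC side): map detected head events to
// single-character driving commands and forward them to the Arduino.
// Event names, the threshold value, and the serial transport are assumptions.
#include <iostream>

enum class HeadEvent { ThinkPush, ThinkPull, ThinkLeft, ThinkRight, None };

// Placeholder for the USB serial write to the Arduino's virtual COM port.
void sendToArduino(char command) {
  std::cout << "sending command: " << command << '\n';
}

// Step-type interpretation: each recognized event whose confidence exceeds
// the threshold is translated into one driving command.
void interpret(HeadEvent event, float confidence, float threshold = 0.6f) {
  if (confidence < threshold) {       // weak detection: issue a stop (jump-type command)
    sendToArduino('S');
    return;
  }
  switch (event) {
    case HeadEvent::ThinkPush:  sendToArduino('F'); break;  // forward
    case HeadEvent::ThinkPull:  sendToArduino('B'); break;  // backward
    case HeadEvent::ThinkLeft:  sendToArduino('L'); break;  // turn left
    case HeadEvent::ThinkRight: sendToArduino('R'); break;  // turn right
    default:                    sendToArduino('S'); break;  // stop
  }
}

int main() {
  // Example: a "push" thought detected with 80% confidence moves the chair forward.
  interpret(HeadEvent::ThinkPush, 0.8f);
}
```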
Interface buses labeled “A” to “E” act as block connectors in Figure 6; generally speaking, these buses do not necessarily carry only electric signals. Paths “A” and “E” represent the physical information flow associated with the user’s reactions, e.g. the commands derived from facial expressions, head movements, and EEG signals, with bus “E” carrying the feedback information about the position of the wheelchair. Bus “B” carries the signals collected from the Emotiv EPOC headset, bus “C” carries the interpreted commands, and bus “D” carries the energized PWM signals. Through these buses, the system forms a closed control loop.

Block Diagram of the Wheelchair Controller System.
3.2.2 Single-Modal vs. Multimodal BCW
This paper proposes an approach in which a single preprogrammed movement (single output) is produced from a combination of two or more input signals (multiple inputs), as an example of multimodal wheelchair control. Figure 7 illustrates the idea of fusing the input control signals to extract a single decision for the chair movement.

Design of the Controller Block Diagram.
In the single-modal approach, the wheelchair was controlled using only one signal type (facial expressions, head movements, or EEG signals), whereas, in the multimodal approach, a fusion scheme was adopted in which a group of summing circuits merges a number of single-modality signals into a unique control signal. Combining different signals allows an effective and functional interaction for people with limited mobility/coordination abilities or under specific conditions. In other words, the multimodal approach exploits all the head movements, facial expressions, and mind thoughts, allowing a large number of users with different degrees of disability to use the proposed application. Hence, the multimodal approach is flexible with respect to different users’ needs: for example, if a subject cannot perform a specific head movement, the approach can replace the head movement with a facial expression or a thought. In our implementation, it was assumed that all the incoming signals have the same weight, and the signals were combined using the simple sum-rule fusion technique [32], [33].
Figure 7 shows the block diagram of the proposed software controller. In the unimodal (single-modal) case, only one input signal amplitude needs to exceed the threshold value for the chaired person to steer the chair in a prescribed direction. With the multimodal alternative, a subset of the input signal amplitudes must overcome the threshold barrier to guarantee the controller’s final decision for a certain movement.
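A minimal sketch of this equal-weight, sum-rule fusion with a decision threshold is given below; the [0, 1] score normalization, the threshold value, and the per-direction score layout are illustrative assumptions rather than the exact implementation.

```cpp
// Illustrative equal-weight sum-rule fusion: merge per-direction scores from
// several modalities (head movement, facial expression, EEG) into one decision.
// Score scaling and the decision threshold are assumptions for this sketch.
#include <array>
#include <vector>
#include <string>
#include <iostream>

constexpr int kDirections = 4;                       // forward, backward, left, right
const std::array<std::string, kDirections> kLabels = {"forward", "backward", "left", "right"};

// Each modality reports one score per direction, normalized to [0, 1].
using ModalityScores = std::array<float, kDirections>;

// Sum-rule fusion with equal weights, followed by a threshold test.
// Returns the winning direction index, or -1 if no fused score passes the threshold.
int fuseAndDecide(const std::vector<ModalityScores>& modalities, float threshold) {
  if (modalities.empty()) return -1;
  int best = -1;
  float bestScore = threshold;                       // a decision must exceed the threshold
  for (int d = 0; d < kDirections; ++d) {
    float sum = 0.0f;
    for (const auto& m : modalities) sum += m[d];    // equal weights: plain sum
    float fused = sum / modalities.size();           // average of the modality scores
    if (fused > bestScore) { bestScore = fused; best = d; }
  }
  return best;
}

int main() {
  // Head movement alone stays below the threshold for "left", but fusing it
  // with the facial expression score pushes the fused "left" score above it.
  ModalityScores headMovement     = {0.10f, 0.05f, 0.45f, 0.20f};
  ModalityScores facialExpression = {0.15f, 0.10f, 0.65f, 0.10f};
  int decision = fuseAndDecide({headMovement, facialExpression}, 0.5f);
  std::cout << (decision >= 0 ? kLabels[decision] : "no movement") << '\n';  // prints "left"
}
```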
3.3 Manual Wheelchair Control Model (MWCM)
In the manual control mode, a keypad was designed using four ON/OFF switches to control the movement of the wheelchair to the left, right, forward, or backward. The keypad sends commands directly to the DC motors that control the wheelchair movements. It can be attached to either chair armrest (i.e. the right or left armrest), as the user prefers (see Figure 8).

Manual Control Mode Based on Keypad to Control the Intelligent Wheelchair Manually.
4 Experimental Results and Testing Scenarios
To test the proposed solution, a number of experiments were conducted to simulate how chaired individuals control the wheelchair through either the automatic (single-modal and multimodal) or the manual control mode. The next two subsections present the experimental environment, the conducted experiments, and their results.
4.1 Experimental Setup
In all conducted experiments, a laptop was installed on the wheelchair in front of the sitting person, and the 14-electrode Emotiv EPOC headset was used to collect the person’s head signals from the skull surface. The superficially picked-up signals, containing brain thoughts, head gestures, and facial expressions, were electrically encoded and wirelessly sent to the PC, where they were interpreted and translated into useful control instructions that were then forwarded to the Arduino microcontroller. The experiments were conducted indoors on flat terrain. All experiments have the following components:
Two trajectories with different lengths: The subjects were asked to follow two different trajectories, each with a starting point and an ending point. The first trajectory, shown in Figure 9, is short (length = 19.9 m), easy to track, and has one obstacle to move around. The second trajectory, shown in Figure 10, is more difficult to track; it is 58.3 m long and has three obstacles to move around, i.e. the subjects need to change their movement direction many times.
Two chaired volunteers/subjects: One male and one female were asked to test the proposed solution. The volunteers had no prior experience with the proposed wheelchair control systems, which makes the experiment close to a real scenario. They were asked to repeat each experiment three times. Hence, there were 12 different runs, 2 (volunteers) × 2 (trajectories) × 3 (trials), to evaluate the model.
Graphical user interface (GUI): To improve the usability of the proposed system, a GUI was implemented using the Processing programming language. This GUI guides the user in generating the desired commands, and the resulting decisions are displayed on the PC screen, as shown in Figure 11. The set of all possible commands used in the experiments is as follows:
Left: The two DC motors are powered to rotate the right wheel of the chair in the forward direction and the left wheel in the backward direction.
Right: The two DC motors are powered to rotate the left wheel of the chair in the forward direction and the right wheel in the backward direction.
Forward: The two DC motors are powered to rotate both wheels in the forward direction.
Backward: The two DC motors are powered to rotate both wheels in the backward direction.

First Trajectory (with 19.9 m Overall Length) for Testing the Wheelchair Control Performance.

Second Trajectory (with 58.3 m Length) for Evaluating the Proposed System Efficiency.

GUI of the Automatic Control Mode.
The Resultant Control Signal Strength While the User is Thinking About Left; This Control Signal Instructs the BCW Left and Right Back Wheel Motors to Rotate Clockwise 45°.
4.1.1 Assessment Methods
Three different assessment measures were used to evaluate the experiments. The first was the mean deviation from the real trajectory, i.e. the average distance between the path traversed by the wheelchair in a trial and the reference trajectory. The second was the mean and standard deviation of this measure over the three trials of each subject, and the third was the average error, i.e. the mean deviation averaged over both subjects expressed as a percentage of the trajectory length; these are the values reported in Tables 2 and 3.
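One way to express these measures, consistent with the values reported in Tables 2 and 3, is the following, where $p_1, \dots, p_N$ are the sampled positions of the wheelchair during a trial, $T$ is the reference trajectory, and $L$ is its length (the exact sampling of the path is not specified, so this is a sketch rather than the exact formula used):

$$\mathrm{MD} = \frac{1}{N} \sum_{i=1}^{N} \min_{q \in T} \lVert p_i - q \rVert, \qquad \mathrm{Error}\ (\%) = \frac{\overline{\mathrm{MD}}}{L} \times 100,$$

where $\overline{\mathrm{MD}}$ is the mean deviation averaged over both subjects and their trials.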
4.2 Conducted Experiments and Their Results
Three experiments were conducted to evaluate the performance of the proposed system. In the first experiment, the two subjects were asked to control the wheelchair manually. In the second and third experiments, the subjects were asked to control the wheelchair first using the single-modal approach and then using the multimodal approach. Each experiment is described in the following sections.
4.2.1 Control the Wheelchair Using Manual Keypad
In this experiment, the wheelchair was controlled manually using the keypad. The experiment was conducted to show how the wheelchair is controlled using the keypad commands generated by the two subjects while tracking the two trajectories. The results of this experiment are summarized in Table 2.
Mean and Standard Deviation of the Distance (in Meters) Between the Simulated Movements and the Two Trajectories Using Single-Modal Signals and Manual Control Mode.

| Subjects | Trials | Head movements (Tr. #1) | Head movements (Tr. #2) | Facial expressions (Tr. #1) | Facial expressions (Tr. #2) | EEG signals (Tr. #1) | EEG signals (Tr. #2) | Manual control (Tr. #1) | Manual control (Tr. #2) |
|---|---|---|---|---|---|---|---|---|---|
| Subject 1 | 1 | 2.4 | 7 | 2.56 | 7.54 | 3.45 | 9.85 | 1.45 | 4.1 |
| | 2 | 2.15 | 6.62 | 2.42 | 7.12 | 3.25 | 9.74 | 1.15 | 3.92 |
| | 3 | 2.1 | 6.13 | 2.31 | 6.84 | 3.1 | 9.74 | 1.13 | 3.82 |
| | Mean (standard deviation) | 2.22 (0.16) | 6.58 (0.44) | 2.43 (0.13) | 7.17 (0.35) | 3.27 (0.18) | 9.78 (0.06) | 1.24 (0.18) | 3.94 (0.15) |
| Subject 2 | 1 | 3.2 | 7.1 | 2.5 | 7.42 | 3.48 | 9.68 | 1.42 | 4.00 |
| | 2 | 2.05 | 6.5 | 2.45 | 7.13 | 3.41 | 9.5 | 1.21 | 3.94 |
| | 3 | 2.05 | 6.05 | 2.3 | 7.01 | 3.2 | 9.5 | 1.12 | 3.85 |
| | Mean (standard deviation) | 2.13 (0.14) | 6.55 (0.53) | 2.42 (0.1) | 7.19 (0.21) | 3.36 (0.15) | 9.56 (0.10) | 1.25 (0.15) | 3.93 (0.08) |
| Average error (%) | | 10.93 | 11.26 | 12.19 | 12.32 | 16.66 | 16.59 | 6.26 | 6.75 |
Tr. #1, first trajectory; Tr. #2, second trajectory.
4.2.2 Control the Wheelchair Using the Single-Modal Approach
In this experiment, single-modal signals were used to control the wheelchair. The experiment consists of three subexperiments, which were conducted to compare the different single-modal signals, i.e. using either head movements, facial expressions, or EEG signals to control the wheelchair. In all subexperiments, the two subjects were asked to control the movement of the wheelchair to follow the two trajectories in Figures 9 and 10, three times each. In the first subexperiment (head movements), left, right, up, and down head movements were used to move the wheelchair left, right, forward, and backward, respectively. In the second subexperiment (facial expressions), happy, anger, surprise, and fear expressions were used to move the wheelchair left, right, forward, and backward, respectively. In the third subexperiment (EEG signals), the subjects were asked to think about moving left, right, forward, and backward, which generated EEG signals that were used to control the movement direction of the wheelchair. Table 2 summarizes the results of this experiment.
4.2.3 Control the Wheelchair Using the Multimodal Approach
In this experiment, multimodal signals were used to control the wheelchair by combining (fusing) two signals, and three subexperiments were conducted. In each subexperiment, the two subjects were asked to control the wheelchair to track the two trajectories by combining (1) facial expressions and EEG signals (FE+EEG), (2) head movements and EEG signals (HM+EEG), or (3) head movements and facial expressions (HM+FE). The performance evaluation of these experiments is shown in Table 3.
Mean and Standard Deviation of the Distance (in Meters) Between the Simulated Movements and the Two Trajectories Using Multimodal Control Signals.

| Subjects | Trials | FE+EEG (Tr. #1) | FE+EEG (Tr. #2) | HM+EEG (Tr. #1) | HM+EEG (Tr. #2) | HM+FE (Tr. #1) | HM+FE (Tr. #2) |
|---|---|---|---|---|---|---|---|
| Subject 1 | 1 | 2.20 | 5.98 | 1.89 | 6.84 | 1.82 | 5.4 |
| | 2 | 2.10 | 5.94 | 1.84 | 6.91 | 1.75 | 5.24 |
| | 3 | 1.98 | 6.01 | 1.75 | 6.75 | 1.64 | 5.20 |
| | Mean (standard deviation) | 2.09 (0.11) | 5.98 (0.35) | 1.83 (0.07) | 6.83 (0.08) | 1.74 (0.09) | 5.28 (0.11) |
| Subject 2 | 1 | 2.05 | 6.12 | 1.95 | 6.67 | 1.74 | 5.64 |
| | 2 | 2.05 | 6.15 | 1.92 | 6.65 | 1.72 | 5.51 |
| | 3 | 1.84 | 6.02 | 1.92 | 6.65 | 1.68 | 5.27 |
| | Mean (standard deviation) | 1.98 (0.12) | 6.10 (0.07) | 1.93 (0.02) | 6.66 (0.01) | 1.71 (0.03) | 5.47 (0.19) |
| Average error (%) | | 10.23 | 10.36 | 9.45 | 11.57 | 8.67 | 9.22 |
FE, facial expression; HM, head movement.
4.3 Discussion
The results summarized in Tables 2 and 3 are discussed in this section. Table 2 shows the deviation between the simulated movements and the two trajectories using the manual mode and the single-modal approach, whereas Table 3 shows the results of the multimodal approach. From these two tables, the following remarks can be drawn.
A comparison between the results of the automatic single-modal signals and the manual mode shows that the average error rate of the manual control mode ranged from 6.26% to 6.75%, whereas the average error rate of the single-modal approach was above 10%. It is not surprising that the wheelchair under manual control is more accurate than under any single-modal signal. The important finding, however, is that the single-modal approach still achieved a relatively low average error rate. Hence, it addresses the needs of disabled people who may not be able to operate conventional joysticks or keypads manually.
A comparison between the single-modal signals shows that the mean deviations of the head movements, facial expressions, and EEG signals from the first trajectory were 2.18, 2.43, and 3.32 m, respectively, whereas, for the second trajectory, the deviations were 6.57, 7.18, and 9.67 m, respectively. Moreover, the average errors of the head movements ranged from 10.93% to 11.26%, the facial expression errors ranged from 12.19% to 12.32%, and the EEG signal errors ranged from 16.59% to 16.66%. From these results, we can conclude that controlling the wheelchair using head movements tracked the two trajectories better than the other two signals; therefore, head movement is more accurate and more suitable for controlling the wheelchair than the two other signals. This is because (1) the EEG signals are weak and may be affected by different sources, either physiological sources generated by body parts having an electric dipole, such as the heart, eyes, muscles, and tongue, or nonphysiological sources caused by body or electrode movements [58], and (2) facial expressions depend mainly on facial muscle movements, which vary across individuals; moreover, strong expressions (e.g. smiling) can be detected and decoded more easily than weak expressions such as anger [24], [37].
Comparing the multimodal signals in Table 3, it can be noticed that the mean deviations of FE+EEG, HM+EEG, and HM+FE from the first trajectory were 2.03, 1.88, and 1.725 m, respectively, whereas, for the second trajectory, the deviations were 6.04, 6.75, and 5.38 m, respectively. In other words, HM+FE gave more accurate results than the other two multimodal combinations. The reason is that head movements and facial expressions are more accurate than EEG signals.
Comparing the single-modal and multimodal approaches shows that the multimodal approach achieved a lower mean deviation than all single-modal signals. For example, the average error of HM+EEG ranged from 9.45% to 11.57%, whereas EEG alone ranged from 16.59% to 16.66% and HM alone ranged from 10.93% to 11.26%. Thus, combining two or more different signals generates a new signal that is more robust against artifacts and noise, as reported in Ref. [2]; the weakness or noisiness of individual signals is mitigated by combining them.
Comparing the results of the two trajectories, there is only a slight difference between their average errors, which indicates that the performance of the proposed approaches (single-modal and multimodal) is stable, suitable for short and long trajectories, and only slightly affected by the number of obstacles.
In short, from the above discussion, it can be concluded that (1) the manual mode was more accurate than all automatic modes; (2) the multimodal control achieved a lower average error than the single-modal one; (3) facial expressions combined with head movements (HM+FE) achieved the minimum error (this is summarized in Figure 12); (4) within the single-modal approach, head movements achieved the best results, whereas the EEG signal achieved the worst; and (5) the average error rates of the single-modal and multimodal approaches were only slightly affected by the length of the trajectories. Hence, the proposed model is suitable for chaired people.

Comparison Between the Accuracy of Manual and Automatic (Single-Modal and Multimodal) Control Modes.
5 Conclusions and Future Work
In this paper, a novel wheelchair control system was proposed. The system makes use of the user’s thoughts (through EEG signals), facial expressions, and head gestures to allow the chaired person to control his/her wheelchair manually or automatically. The proposed system was realized using hardware (a wheelchair, the Emotiv EPOC EEG headset, and an Arduino microcontroller) and software (the Processing language). Real experiments were conducted on trajectories of different difficulty levels and with the various wheelchair control modes, and comparisons between the manual and automatic modes were carried out. The obtained results showed that the manual control mode was more accurate than all automatic approaches and that the system was easy for the subjects to use through the GUI. Also, the multimodal control approach decreased the deviation from the real trajectories and achieved better results than all single-modal controls, with facial expressions combined with head movements achieving the minimum error. The main advantages of the proposed system are its low cost, ergonomics, and ease of control and, because it uses different signals (EEG, head movements, and facial expressions), its high degree of customization to suit many degrees of disability, all provided through a PC screen interface. In future work, it is planned to add two further features: safety (detection of and reaction to obstacles) and more control options for chaired people, including controlling the operation of nearby appliances and doors. Another direction for future work is to employ more subjects to investigate the performance of the proposed system before deploying it in real-life scenarios, which may help to avoid real-time problems. In addition, the different modes will be compared in terms of computational time to show whether the proposed model is fast enough for real applications. Moreover, different environments with different numbers of obstacles will be used to investigate the robustness of the proposed system in real environments. It is also worth mentioning that a joystick could be used instead of the four-button keypad for manual control.
Acknowledgments
The members of the Electronics and Communication Engineering Department, Suez Canal University, Ismailia, Egypt, are acknowledged for their assistance in prototype preparation and their helpful comments.
Bibliography
[1] G. Al-Hudhud, Affective command-based control system integrating brain signals in commands control systems, J. Comput. Hum. Behav. 30 (2014), 535–541. doi: 10.1016/j.chb.2013.06.038.
[2] J. B. Anderson, Digital transmission engineering, 12th ed., John Wiley & Sons, USA, 2006.
[3] R. Anderson and D. Cervo, Pro Arduino (Technology in Action), 1st ed., Springer, USA, 2013. doi: 10.1007/978-1-4302-3940-6.
[4] M. Banzi and M. Shiloh, Getting started with Arduino, 1st ed., O’Reilly Media, Inc., USA, 2009.
[5] T. F. Bastos-Filho, F. A. Cheein, S. M. Torres Muller, W. Cardoso Celeste, C. de la Cruz, D. Cruz Cavalieri, M. Sarcinelli-Filho, P. F. Santos Amaral, E. Perez, C. M. Soria and R. Carelli, Towards a new modality-independent interface for a robotic wheelchair, IEEE Trans. Neural Syst. Rehab. Eng. 22 (2014), 567–584. doi: 10.1109/TNSRE.2013.2265237.
[6] J. Bayle, C Programming for Arduino, 1st ed., Packt Publishing Ltd., UK, 2013.
[7] L. M. Bergasa, M. Mazo, A. Gardel, J. C. García, A. E. M. A. Ortuno and A. E. Mendez, Guidance of a wheelchair for handicapped people by face tracking, in: Proceedings of the 7th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA’99), vol. 1, pp. 105–111, IEEE, 1999. doi: 10.1109/ETFA.1999.815344.
[8] L. M. Bergasa, M. Mazo, A. Gardel, R. Barea and L. Boquete, Commands generation by face movements applied to the guidance of a wheelchair for handicapped people, in: Proceedings of the 15th International Conference on Pattern Recognition, vol. 4, pp. 660–663, IEEE, 2000. doi: 10.1109/ICPR.2000.903004.
[9] T. Carlson, R. Leeb, R. Chavarriaga and J. del R. Millan, The birth of the brain-controlled wheelchair, in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5444–5445, IEEE, 2012. doi: 10.1109/IROS.2012.6386299.
[10] F. Carrino, J. Tscherrig, E. Mugellini, O. Abou Khaled and R. Ingold, Head-computer interface: a multimodal approach to navigate through real and virtual worlds, in: Proceedings of the 14th International Conference on Human-Computer Interaction (HCI), Interaction Techniques and Environments, vol. 6762, pp. 222–230, Springer, Orlando, FL, USA, 2011. doi: 10.1007/978-3-642-21605-3_25.
[11] B. Dicianno, D. M. Spaeth, R. A. Cooper, S. G. Fitzgerald and M. Boninger, Advancements in power wheelchair joystick technology: effects of isometric joysticks and signal conditioning on driving performance, Am. J. Phys. Med. Rehab. 85 (2006), 250. doi: 10.1097/00002060-200603000-00020.
[12] A. D’Ausilio, Arduino: a low-cost multipurpose lab equipment, J. Behav. Res. Methods 44 (2012), 305–313. doi: 10.3758/s13428-011-0163-z.
[13] J. Fan, S. Jia, X. Li, W. Lu, J. Sheng, L. Gao and J. Yan, Motion control of intelligent wheelchair based on sitting postures, in: International Conference on Mechatronics and Automation (ICMA), pp. 301–306, IEEE, 2011. doi: 10.1109/ICMA.2011.5985674.
[14] P. M. Faria, R. A. M. Braga, E. Valgode and L. P. Reis, Interface framework to drive an intelligent wheelchair using facial expressions, in: IEEE International Symposium on Industrial Electronics (ISIE), pp. 1791–1796, IEEE, 2007. doi: 10.1109/ISIE.2007.4374877.
[15] B. M. Faria, L. Ferreira, L. P. Reis, N. Lau, M. Petry and J. Couto, Manual control for driving an intelligent wheelchair: a comparative study of joystick mapping methods, Environment 17 (2012), 18.
[16] B. M. Faria, S. Vasconcelos, L. P. Reis and N. Lau, Evaluation of distinct input methods of an intelligent wheelchair in simulated and real environments: a performance and usability study, J. Assist. Technol. 25 (2013), 88–98. doi: 10.1080/10400435.2012.723297.
[17] B. M. Faria, L. M. Ferreira, L. P. Reis, N. Lau and M. Petry, Intelligent wheelchair manual control methods, in: Portuguese Conference on Artificial Intelligence, pp. 271–282, Springer, 2013. doi: 10.1007/978-3-642-40669-0_24.
[18] B. M. Faria, L. P. Reis and N. Lau, A survey on intelligent wheelchair prototypes and simulators, in: New Perspectives in Information Systems and Technologies, vol. 1, pp. 545–557, Springer, 2014. doi: 10.1007/978-3-319-05951-8_52.
[19] B. M. Faria, L. P. Reis and N. Lau, Adapted control methods for cerebral palsy users of an intelligent wheelchair, J. Intell. Robot. Syst. 77 (2014), 299–312. doi: 10.1007/s10846-013-0010-9.
[20] P. S. Gajwani and S. A. Chhabria, Eye motion tracking for wheelchair control, Int. J. Inf. Technol. 2 (2010), 185–187.
[21] J. Gomez-Gil, I. San-Jose-Gonzalez, L. F. Nicolas-Alonso and S. Alonso-Garcia, Steering a tractor by means of an EMG-based human-machine interface, J. Sensors 11 (2011), 7110–7126. doi: 10.3390/s110707110.
[22] P. Jia, H. H. Hu, T. Lu and K. Yuan, Head gesture recognition for hands-free control of an intelligent wheelchair, Ind. Robot 34 (2007), 60–68. doi: 10.1108/01439910710718469.
[23] J.-S. Han, Z. Zenn Bien, D.-J. Kim, H.-E. Lee and J.-S. Kim, Human-machine interface for wheelchair control with EMG and its evaluation, in: Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2, pp. 1602–1605, IEEE, 2003.
[24] U. Hess, P. Philippot and S. Blairy, Facial reactions to emotional facial expressions: affect or cognition?, Cognit. Emotion 12 (1998), 509–531. doi: 10.1080/026999398379547.
[25] D. Huang, K. Qian, D.-Y. Fei, W. Jia, X. Chen and O. Bai, Electroencephalography (EEG)-based brain-computer interface (BCI): a 2-D virtual wheelchair control based on event-related desynchronization/synchronization and state control, IEEE Trans. Neural Syst. Rehab. Eng. 20 (2012), 379–388. doi: 10.1109/TNSRE.2012.2190299.
[26] X. Huo and M. Ghovanloo, Using unconstrained tongue motion as an alternative control mechanism for wheeled mobility, IEEE Trans. Biomed. Eng. 56 (2009), 1719–1726. doi: 10.1109/TBME.2009.2018632.
[27] X. Huo, H. Park, J. Kim and M. Ghovanloo, A dual-mode human computer interface combining speech and tongue motion for people with severe disabilities, IEEE Trans. Neural Syst. Rehab. Eng. 21 (2013), 979–991. doi: 10.1109/TNSRE.2013.2248748.
[28] A. Ismail and A. Vigneron, A new trajectory similarity measure for GPS data, in: Proceedings of the 6th ACM SIGSPATIAL International Workshop on GeoStreaming, pp. 19–22, ACM, 2015. doi: 10.1145/2833165.2833173.
[29] I. Iturrate, J. Antelis and J. Minguez, Synchronous EEG brain-actuated wheelchair with automated navigation, in: IEEE International Conference on Robotics and Automation (ICRA’09), pp. 2318–2325, IEEE, 2009. doi: 10.1109/ROBOT.2009.5152580.
[30] J. S. Ju, Y. Shin and E. Y. Kim, Intelligent wheelchair (IW) interface using face and mouth recognition, in: Proceedings of the 14th International Conference on Intelligent User Interfaces, pp. 307–314, ACM, 2009. doi: 10.1145/1502650.1502693.
[31] J. Kim, H. Park, J. Bruce, E. Sutton, D. Rowles, D. Pucci, J. Holbrook, J. Minocha, B. Nardone, D. West, A. Laumann, E. Roth, M. Jones, E. Veledar and M. Ghovanloo, The tongue enables computer and wheelchair control for people with spinal cord injury, J. Sci. Transl. Med. 5 (2013), 213ra166. doi: 10.1126/scitranslmed.3006296.
[32] J. Kittler and F. M. Alkoot, Sum versus vote fusion in multiple classifier systems, IEEE Trans. Pattern Anal. Mach. Intell. 25 (2003), 110–115. doi: 10.1109/TPAMI.2003.1159950.
[33] L. I. Kuncheva, A theoretical study on six classifier fusion strategies, IEEE Trans. Pattern Anal. Mach. Intell. 24 (2002), 281–286. doi: 10.1109/34.982906.
[34] Y. Kuno, N. Shimada and Y. Shirai, Look where you’re going [robotic wheelchair], IEEE Robot. Automat. Mag. 10 (2003), 26–34. doi: 10.1109/MRA.2003.1191708.
[35] C.-S. Lin, C. Ho, W. Chen, C. Chiu and M. Yeh, Powered wheelchair controlled by eye-tracking system, Opt. Appl. 36 (2006), 401.
[36] J.-S. Lin, K.-C. Chen and W.-C. Yang, EEG and eye-blinking signals through a brain-computer interface based control for electric wheelchairs with wireless scheme, in: 4th International Conference on New Trends in Information Science and Service Science (NISS), pp. 731–734, IEEE, 2010.
[37] B.-U. Meyer, K. Werhahn, J. C. Rothwell, S. Roericht and C. Fauth, Functional organisation of corticonuclear pathways to motoneurones of lower facial muscles in man, Exp. Brain Res. 101 (1994), 465–472. doi: 10.1007/BF00227339.
[38] I. Moon, M. Lee, J. Chu and M. Mun, Wearable EMG-based HCI for electric-powered wheelchair users with motor disabilities, in: Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), pp. 2649–2654, IEEE, 2005. doi: 10.1109/ROBOT.2005.1570513.
[39] M. Njah and M. Jallouli, Fuzzy-EKF controller for intelligent wheelchair navigation, J. Intell. Syst. 25 (2016), 107–121. doi: 10.1515/jisys-2014-0139.
[40] S. P. Parikh, V. Grassi, V. Kumar and J. Okamoto, Integrating human inputs with autonomous behaviors on an intelligent wheelchair platform, IEEE Intell. Syst. 22 (2007), 33–41. doi: 10.1109/MIS.2007.36.
[41] B. Rebsamen, E. Burdet, C. Guan, H. Zhang, C. L. Teo, Q. Zeng, C. Laugier and M. H. Ang Jr, Controlling a wheelchair indoors using thought, IEEE Intell. Syst. 22 (2007), 18–24. doi: 10.1109/MIS.2007.26.
[42] E.-J. Rechy-Ramirez, H. Hu and K. McDonald-Maier, Head movements based control of an intelligent wheelchair in an indoor environment, in: IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1464–1469, IEEE, 2012. doi: 10.1109/ROBIO.2012.6491175.
[43] A. Ross and A. Jain, Information fusion in biometrics, Pattern Recognit. Lett. 24 (2003), 2115–2125. doi: 10.1016/S0167-8655(03)00079-5.
[44] N. A. Semary, A. Tharwat, E. Elhariri and A. E. Hassanien, Fruit-based tomato grading system using features fusion and support vector machine, in: Intelligent Systems’ 2014, pp. 401–410, Springer, 2015. doi: 10.1007/978-3-319-11310-4_35.
[45] M. K. Shahin, A. M. Badawi and M. E. Rasmy, A multimodal hand vein, hand geometry, and fingerprint prototype design for high security biometrics, in: International Biomedical Engineering Conference (CIBEC), pp. 1–6, IEEE, 2008. doi: 10.1109/CIBEC.2008.4786038.
[46] M. M. Sharif, A. Tharwat, A. E. Hassanien, H. A. Hefny and G. Schaefer, Enzyme function classification based on Borda count ranking aggregation method, in: Machine Intelligence and Big Data in Industry, pp. 75–85, Springer, 2016. doi: 10.1007/978-3-319-30315-4_7.
[47] S. K. Swee and L. Z. You, Fast Fourier analysis and EEG classification brainwave controlled wheelchair, in: 2nd International Conference on Control Science and Systems Engineering (ICCSSE), pp. 20–23, IEEE, 2016. doi: 10.1109/CCSSE.2016.7784344.
[48] H. Tamura, T. Murata, Y. Yamashita, K. Tanno and Y. Fuse, Development of the electric wheelchair hands-free semi-automatic control system using the surface-electromyogram of facial muscles, J. Artif. Life Robot. 17 (2012), 300–305. doi: 10.1007/s10015-012-0060-2.
[49] A. Tharwat, A. F. Ibrahim and H. A. Ali, Multimodal biometric authentication algorithm using ear and finger knuckle images, in: Seventh International Conference on Computer Engineering & Systems (ICCES), pp. 176–179, IEEE, 2012. doi: 10.1109/ICCES.2012.6408507.
[50] A. Tharwat, M. M. Sharif, A. E. Hassanien and H. A. Hefeny, Improving enzyme function classification performance based on score fusion method, in: International Conference on Hybrid Artificial Intelligence Systems, pp. 530–542, Springer, 2015. doi: 10.1007/978-3-319-19644-2_44.
[51] A. Tharwat, T. Gaber and A. E. Hassanien, Two biometric approaches for cattle identification based on features and classifiers fusion, Int. J. Image Mining 1 (2015), 342–365. doi: 10.1504/IJIM.2015.073902.
[52] A. Thobbi, R. Kadam and W. Sheng, Achieving remote presence using a humanoid robot controlled by a non-invasive BCI device, Int. J. Artif. Intell. Mach. Learn. 10 (2010), 41–45.
[53] R. Valenti, N. Sebe and T. Gevers, Combining head pose and eye location information for gaze estimation, IEEE Trans. Image Process. 21 (2012), 802–815. doi: 10.1109/TIP.2011.2162740.
[54] F. Velasco-Álvarez, A. Fernández-Rodríguez and R. Ron-Angevin, Switch mode to control a wheelchair through EEG signals, in: Converging Clinical and Engineering Research on Neurorehabilitation II, pp. 801–805, Springer, 2017. doi: 10.1007/978-3-319-46669-9_131.
[55] P. Viswanathan, J. L. Bell, R. H. Wang, B. Adhikari, A. K. Mackworth, A. Mihailidis, W. C. Miller and I. M. Mitchell, A Wizard-of-Oz intelligent wheelchair study with cognitively-impaired older adults: attitudes toward user control, in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Workshop on Assistive Robotics for Individuals with Disabilities: HRI Issues and Beyond, pp. 1–4, 2014.
[56] L. Wei, H. Hu, T. Lu and K. Yuan, Evaluating the performance of a face movement based wheelchair control interface in an indoor environment, in: IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 387–392, IEEE, 2010. doi: 10.1109/ROBIO.2010.5723358.
[57] X. Xu, Y. Zhang, Y. Luo and D. Chen, Robust bio-signal based control of an intelligent wheelchair, J. Robot. 2 (2013), 187–197. doi: 10.3390/robotics2040187.
[58] Y. Yasui, A brainwave signal measurement and data processing technique for daily life applications, J. Physiol. Anthropol. 28 (2009), 145–150. doi: 10.2114/jpa2.28.145.
[59] G. Zimmermann and G. Vanderheiden, The universal control hub: an open platform for remote user interfaces in the digital home, in: 12th International Conference on Human-Computer Interaction (HCI), Interaction Platforms and Techniques, vol. 4551, pp. 1040–1049, Springer, Beijing, China, 2007. doi: 10.1007/978-3-540-73107-8_114.
©2019 Walter de Gruyter GmbH, Berlin/Boston
This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.