Article

A New Controller for a Smart Walker Based on Human-Robot Formation

by
Carlos Valadão
1,*,
Eliete Caldeira
2,
Teodiano Bastos-Filho
1,
Anselmo Frizera-Neto
1 and
Ricardo Carelli
3,4
1
Postgraduate Program in Electrical Engineering, Federal University of Espirito Santo (UFES), Fernando Ferrari Av., 514, 29075-910 Vitoria, Brazil
2
Electrical Engineering Department, Federal University of Espirito Santo (UFES), Fernando Ferrari Av., 514, 29075-910 Vitoria, Brazil
3
Institute of Automatics, National University of San Juan (UNSJ), San Martín Av. (Oeste), 1109, J5400ARL San Juan, Argentina
4
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), C1425FQB Buenos Aires, Argentina
*
Author to whom correspondence should be addressed.
Sensors 2016, 16(7), 1116; https://doi.org/10.3390/s16071116
Submission received: 12 May 2016 / Revised: 24 June 2016 / Accepted: 14 July 2016 / Published: 19 July 2016
(This article belongs to the Special Issue Advanced Robotics and Mechatronics Devices)

Abstract

This paper presents the development of a smart walker that uses a formation controller for its displacements. Encoders, a laser range finder and ultrasound sensors are used in the walker. The control actions are based on the location of the user (human), who is the actual formation leader. There is neither a sensor attached to the user's body nor force sensors attached to the arm supports of the walker; thus, the control algorithm projects the measurements taken from the laser sensor into the user reference and then calculates the walker's linear and angular velocities to keep the formation (distance and angle) in relation to the user. An algorithm was developed to detect the user's legs, whose distances from the laser sensor provide the information needed by the controller. The controller was theoretically analyzed regarding its stability, simulated and validated with real users, showing accurate performance in all experiments. In addition, safety rules check both the user and the device conditions, in order to guarantee that the user incurs no risks when using the smart walker. This device is intended to help people with lower-limb mobility impairments.

1. Introduction

Mobility can be defined as the “ability to move or be moved freely and easily” [1]. It is an important human skill that affects virtually all areas of a person's life, since it is used for working, entertainment, social relationships and exercising, among other daily tasks. Mobility also works as a form of primary exercise for the elderly [2]. People whose mobility has been impaired usually have to rely on other people to perform daily tasks. In addition, the lack of mobility inevitably decreases the quality of life of those affected [3].
Impaired people may totally or partially lack the force needed to perform their movements. In the first group, the affected limb is unable to exert any force to allow the movement, while in the second group, the force is not enough to perform the movement [4]. Having a place or device on which the person can partially support his/her weight helps with balance and makes walking easier.
Assistive technologies (AT) are used to help impaired people, including those with problems related to mobility. They are defined as a set of equipment, services, strategies and practices, whose concept and application are used to minimize the problems of people with impairments [5]. This kind of technology has gained importance and awareness, due to the increase of the population who need assistance in several forms, including mobility, especially for the elderly, who can suffer from fall-related problems [6].
Mobility-impaired people may resort to assistive devices to improve their quality of life, independence and self-esteem and to avoid the fear of falling [7,8]. Several situations may lead a person to require an assistive device, such as infirmities, accidents and natural aging, which brings age-related illnesses [9]. Studies from the United Nations show that the share of the elderly in the overall population tends to increase [10], which highlights the importance of studying technologies to assist them. In addition, there are also people with non-age-related illnesses that affect mobility.
Mobility-aid devices are divided into two major categories: alternative and augmentative [7]. In the first category, the device changes the way the person moves and does not require residual force of the affected limbs to allow the movement [7]. Examples of this category are wheelchairs, cars adapted to elderly people [11] and auto-guided vehicles (AGV), which allow the user to move without using the force of the affected limb [12]. These kinds of auxiliary devices benefit people who do not have, or are unable to use, remaining forces [13,14]. In contrast, in the second category, the augmentative devices are designed for those who can still use their residual forces to move. These devices only work if the user intentionally applies the remaining forces to aid the movement [15]. Examples of these devices are canes, crutches, walkers and exoskeletons [9].
Each group has its advantages and disadvantages. Remarkably, the alternative devices can be used for most kinds of impairments, i.e., for people both with and without residual forces; however, their disadvantage is exactly the lack of use of residual forces, because if a person still has remaining muscle forces and does not use them, his/her muscles may atrophy [16,17,18]. Augmentative devices, in turn, use these remaining forces, therefore keeping the muscle tone and avoiding atrophy. Their disadvantage lies in the fact that not everyone who suffers from an impairment can use them, exactly because they require residual forces, which some people may not have [7].
The choice of the mobility-aid device must be made by a physician, who will take into consideration all of the factors and issues regarding the user's health and rehabilitation needs [9]. If there are still usable remaining forces in the affected limbs, an augmentative device must be chosen [19,20]. Additionally, augmentative devices can be used as tools in physiotherapy sessions to enhance the muscular tone of the weakened limbs and improve the movement ability [21]. Some authors subdivide augmentative devices according to the application: for transportation and for rehabilitation (both transportation and physiotherapy) [7]. Figure 1 exemplifies this kind of device: a smart walker developed in our lab at UFES/Brazil to help people in gait rehabilitation [22].
Walkers are basically categorized into three types, according to their frame and how they transfer the ground reaction forces [23]: four-legged, front-wheeled and rollators (Figure 2). Each one has advantages and disadvantages, which are intrinsic to its construction.
Four-legged walkers are considered the most stable of the conventional walkers and can be used by people who do not have good body balance [23]. As a disadvantage, this kind of walker requires more force to use, since the user has to lift the whole device and put it back on the ground at each step; because of this lifting, it does not offer a natural gait [7,9,25]. Front-wheeled walkers, on the other hand, do not require as much force, since the user only has to lift the rear part, keeping the wheels on the ground and using them to move the walker. This walker provides a better gait than the four-legged one and requires less force, but it offers less stability and demands more balance and control, especially when the walker is not totally on the ground [7]. Finally, the third type is the rollator, which provides the most natural gait. It requires less force than the two previous walkers, since no lifting is necessary; however, it demands better control and good balance from the user, since the wheels can run freely [7,9,23,25].
The rollator is the kind of walker used in this work, although built from a four-legged frame (converted to free wheels) and attached to a mobile robot to become a smart walker, with electronics, sensors, a control system and actuators (motors) added to it.

1.1. Smart Walkers

As previously mentioned, smart walkers are walkers that contain, besides the mechanical structure to support the user, electronics, control systems and sensors, in order to provide a better user experience and minimize the risks of falling. In addition, they provide a more natural and smooth gait [7,26], as they are usually built on a rollator structure [25]. Some of them are designed to help users with other functions, such as guiding visually-impaired people or elderly people who have memory deficits [27,28,29,30]. There are also walkers that go beyond mobility support and expand the user experience, providing sensorial, cognitive and health monitoring support [7]. Smart walkers can be classified as passive or active, depending on whether the device aids the propulsion movement. A passive smart walker may contain actuators to help orient the device, but not to propel it, while an active smart walker provides support in propelling the movement [15]. Some notable smart walkers that contain features besides aiding the users with movement are:
  • RT Walker: a passive walker, which contains a laser sensor to monitor the surrounding area and two other laser sensors to find the user's legs. This walker also contains inclinometers to measure the walker inclination [31].
  • GUIDO Smart Walker: this walker had its first version (PAM-AID walker) developed to assist blind people [32]. Over time, it received new sensors and other control strategies, and it currently uses ultrasound or a laser sensor (depending on the version) to help users avoid obstacles [29]. It does not offer propulsion, being classified as a passive walker [33]. The current version can also perform SLAM (simultaneous localization and mapping), which allows the device to map the environment (while assisting the user) and use this information to detect obstacles [27].
  • JARoW: this walker uses omni-directional wheels to improve its maneuverability. Two infrared sensors are used to detect the user’s legs in order to control the walker speed [34].
  • Other devices: there are several other devices, with different techniques and purposes. The SIMBIOSIS Smart Walker, for example, was designed to study human gait [35]; the PAMM (Personal Assistant for Mobility and Monitoring) monitors the user's health while he/she uses the device [36]; the iWalker was developed to be used inside a structured environment, using local sensors and others placed in the environment [30]. These walkers are focused either on assisting elderly people or on studying human gait.
In this work, a new controller for a human-robot formation is introduced, in which the human is the leader of the formation and does not have any sensor on him/her, and the follower is the robot, which contains all of the sensing devices. All measurements necessary for the walker controller are obtained from the distance to the user’s legs (through a laser sensor), in addition to measurements from the ultrasound sensors and robot odometry. No force sensors are used, which is an advance in relation to the smart walker developed in [22].
Table 1 shows a comparison among smart walker controllers. As can be seen, the smart walker of this paper proposes a novel controller based on human-robot formation. Moreover, this work does not use any sensors attached to the user's body, unlike some of those presented in Table 1.

2. Materials and Methods

2.1. Mechanical Structure and Hardware

Our smart walker was designed by adapting a conventional commercial four-legged walker (Figure 2a), which was modified to include free wheels and attached to a Pioneer 3-DX robot (Figure 3). Thus, our smart walker can move freely in any direction, and once attached to the robot, its movements are driven by the robot. The human, in turn, commands the robot; therefore, the movement is guided by the human.
To build the whole structure of the walker, some aluminum pieces were made to attach the walker frame to the robot. Figure 3 shows details of the smart walker. Item A is the modified walker with four free wheels (Item D) and options for height configuration. Foam supports for the forearms (Item G) were built to give more comfort to the user. Item B shows the robot and the ultrasound sensors used to detect obstacles. The robot provides both the propulsion and the non-holonomic restrictions needed for this application. Items E and F are, respectively, a support to store the battery used by the SICK LMS-200 laser range finder (Item C, [40]) and a WiFi-to-Ethernet converter that allows the robot to communicate wirelessly.

2.2. Algorithms

The controller used as a basis to develop the human-robot formation controller presented in this paper is based on the studies of [41]. Here, the idea is to substitute the master robot (leader) with a human, with all of the sensors on the follower robot. The human does not carry any sensors; the data acquired by the robot are translated, rotated and projected into the human reference. In other words, the data collected by the robot are processed and projected on the human pose as if the human were the master robot. Thus, the human commands the formation. Using the laser range sensor, the smart walker can estimate the human speed, position and orientation from the distance to the human's legs and, therefore, adjust its own speed to maintain the formation. The linear and angular speeds output by the formation controller are then processed by an internal servo PID (proportional-integral-derivative) controller, which calculates the rotational speed of each wheel of the robot.
The steps of the controller are presented in the block diagram shown in Figure 4. Blocks, such as “LD Safety Rules” (LD meaning leg detection), “Overspeed” and “Backwards move”, shown in Figure 4, are related to the safety rules, explained in Section 2.2.4. The human-robot kinematics is explained in Section 2.2.2.

2.2.1. Legs and User Detection

To detect the human pose, the laser sensor first scans the area in front of it (180°) with 1° resolution, searching for the legs [40]. The control algorithm uses only the filtered data in a region of interest (ROI), defined from 68° to 112° and up to 1 m away, which is the area where the human's legs should be located. Inside this region, only the human's legs should be detected, as the walker frame lies outside of it.
The algorithm to detect the legs is based on the one developed in [42]. It works with the signal transitions, i.e., large variations in the laser signal, which can be found by differentiating the signal and analyzing $d\rho/d\theta$. By finding the peaks and valleys of this derivative, it is possible to infer where the transitions provoked by the legs are. Figure 5 illustrates what the transitions are and how they are analyzed (algorithm functioning).
The variables $T_1$ up to $T_4$ represent the transitions (large variations of the laser signal), and $LL$ and $RL$ are the left and right legs, respectively. Finally, the variable “user” denotes the position of the user itself.
Summarizing, the algorithm detects where the legs are located by analyzing the number, length and amplitude of the transitions in the derivative of the laser signal inside the ROI. These transitions express large variations in the laser reading, generating peaks and valleys, which are used to find the legs' extremities.
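As a concrete illustration of this step, the sketch below (a minimal Python example of the derivative-threshold idea; the function name, the 0.15-m threshold and the array layout are our assumptions, not the paper's implementation) crops one scan to the ROI, differentiates it and reports the angles of the candidate transitions:
```python
import numpy as np

def find_transitions(scan, threshold=0.15):
    """Locate candidate leg-edge transitions in one laser scan.

    scan: 181 range readings (m), one per degree from 0 to 180.
    Returns the angles (degrees) where |d(rho)/d(theta)| exceeds
    the threshold inside the ROI, i.e., candidate leg edges.
    """
    # Region of interest: 68-112 degrees, at most 1 m away.
    roi = np.clip(scan[68:113], None, 1.0)

    # Large variations of the derivative mark the leg boundaries.
    drho = np.diff(roi)
    idx = np.where(np.abs(drho) > threshold)[0]

    # Collapse consecutive indexes so each edge is reported once.
    edges = [i for n, i in enumerate(idx) if n == 0 or i - idx[n - 1] > 1]
    return [i + 68 for i in edges]  # back to absolute scan angles
```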
There are five possible cases analyzed by the algorithm to find each of the human's legs and to estimate the human's pose (legs' position and orientation). The first case happens when there are fewer than two transitions or more than four transitions. In such cases, the algorithm ignores the frame and waits for the next laser scan; if more than 20 consecutive frames are ignored, a safety rule stops the walker. The other cases are when two, three or four transitions are detected. Figure 6 shows examples of leg detection for: two transitions (a); four transitions (b); three transitions with the left leg closer to the laser (c); three transitions with the right leg closer to the laser (d); and a case in which only one leg is detected, the other being outside the safe zone where the laser sensor cannot detect it (e). The system identifies a leg by analyzing the width of the segment between two transitions: to be considered a leg, it should be larger than the predefined value of 9 cm (after projection). This value was based on the average leg width in the frontal plane at a height of 30 cm (the laser height from the ground), measured from people who work in our laboratory. The dots in the graphics represent the variations, i.e., the transitions.
The flowcharts in Figure 7 show the decision tree used to determine where the user is, by analyzing the signals from the laser sensor for each of the aforementioned cases. Figure 8 is an extension of Figure 7, detailing how the number of transitions determines the algorithm behavior.
If the system finds fewer than two transitions, which means detecting one leg or none, it ignores the frame and increments the safety counter; if this counter reaches a maximum value, the walker stops. If there are only two transitions, the legs are probably superposed, with no significant difference between them regarding the distance from the laser sensor.
Figure 7 and Figure 8 show the algorithm used to detect where the user's legs are. First, the algorithm reads the laser sensor, with its 180° angular range and 30-m distance range. Second, the readings are cropped to the region of interest, which is the area behind the walker, defined by the angles between 68° and 112° and up to 1 m away. Everything outside this area is ignored.
Then, using this filtered signal, the algorithm computes its derivative and finds where the major variations are, which most probably correspond to the user's legs. The transitions must be higher than a predefined threshold, which prevents noise or spurious transitions from being misinterpreted as legs. With the derivative of the laser signal, the algorithm finds the peaks and valleys, which indicate the transitions.
According to the number of transitions, distinct scripts are used to process the information. If there is only one transition, the laser reading is ignored and the robot waits for the next one, since with only one transition it is not possible to determine where the two legs are. The same behavior is adopted in the case of more than four transitions, since then it is impossible to tell which transitions represent the legs.
For the other cases, it is possible to calculate the position of each leg. For two transitions, the extreme positions of the transitions are analyzed, and the person is considered to be in the middle between these points. Mathematically, this is represented by Equations (1) and (2).
$$LL = T_1 + \frac{\phi}{2} \quad (1)$$
$$RL = T_2 - \frac{\phi}{2} \quad (2)$$
where $LL$ is the left leg and $RL$ is the right leg, $T_1$ and $T_2$ are the first and second transitions (from left to right), respectively, and $\phi$ is the angular size of the leg projected at the distance of each transition.
In the case of three transitions, there are two possibilities: either the left leg or the right leg is closer to the sensor. To determine which leg is closer, the first and the last transitions are analyzed. If the first transition ($T_1$) is closer to the sensor, the left leg is in front of the right leg; otherwise (if $T_3$ is closer to the sensor), the right leg is in front of the left leg.
In the first case, the mathematical expression that would represent each leg position is given by Equations (3) and (4).
$$LL = \frac{T_1 + T_2}{2} \quad (3)$$
$$RL = T_3 - \frac{\phi}{2} \quad (4)$$
where $T_3$ is the third transition (from left to right).
On the other hand, the second case is given by the mathematical expressions shown in Equations (5) and (6).
$$LL = T_1 + \frac{\phi}{2} \quad (5)$$
$$RL = \frac{T_2 + T_3}{2} \quad (6)$$
In the case of four transitions, the leg angles are defined by Equations (7) and (8).
$$LL = \frac{T_1 + T_2}{2} \quad (7)$$
$$RL = \frac{T_3 + T_4}{2} \quad (8)$$
After detecting each leg position, the user position is estimated as the average position of both legs. The values $LL$ and $RL$, as well as the transitions, are actually indexes (angles) in the vector of the laser signal. To find the distance, it is necessary to look up the amplitude (distance, represented by the variable $d$) at those angles. Therefore, the user distance and angle from the walker are defined as in Equations (9) and (10).
$$d_h^L = \frac{d(LL) + d(RL)}{2} \quad (9)$$
$$\varphi_h^L = \frac{LL + RL}{2} \quad (10)$$
where $d_h^L$ is the distance from the human to the robot, and $d(LL)$ and $d(RL)$ are the distances of the left and right legs (Equation (9)). In Equation (10), $\varphi_h^L$, $LL$ and $RL$ are, respectively, the human orientation and the left and right leg orientations (angles). The superscript $L$ denotes the laser reference, and the subscript $h$ denotes the human/user.
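Putting Equations (1)–(10) together, the dispatch on the number of transitions can be sketched as below (our own naming, continuing the earlier sketch; $\phi$ is simplified to a constant angular width, whereas the paper projects the 9-cm leg width at the measured distance):
```python
def locate_user(transitions, scan, phi=6.0):
    """Estimate the user's angle and distance from 2-4 transitions.

    transitions: edge angles in degrees, ordered left to right
                 (e.g., the output of find_transitions).
    scan:        the 181-sample range array, indexed by degree.
    phi:         assumed angular width of one leg (simplification).
    Returns (user_angle_deg, user_distance_m), or None if the frame
    must be ignored (fewer than two or more than four transitions).
    """
    n = len(transitions)
    if n < 2 or n > 4:
        return None                               # frame ignored
    if n == 2:                                    # legs together: Eqs. (1)-(2)
        ll = transitions[0] + phi / 2
        rl = transitions[1] - phi / 2
    elif n == 3:                                  # one leg ahead: Eqs. (3)-(6)
        t1, t2, t3 = transitions
        if scan[t1] < scan[t3]:                   # left leg closer to the laser
            ll, rl = (t1 + t2) / 2, t3 - phi / 2
        else:                                     # right leg closer to the laser
            ll, rl = t1 + phi / 2, (t2 + t3) / 2
    else:                                         # legs apart: Eqs. (7)-(8)
        t1, t2, t3, t4 = transitions
        ll, rl = (t1 + t2) / 2, (t3 + t4) / 2
    # User pose as the average of both legs, Eqs. (9)-(10).
    d_user = (scan[int(round(ll))] + scan[int(round(rl))]) / 2
    return (ll + rl) / 2, d_user
```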

2.2.2. Human-Robot Kinematics

The human-robot interaction is given by a system that helps the user to move with the aid of weight support and balance. In addition, unlike other works that use inertial measurement units and/or force sensors, there are no sensors attached to the user's body. All sensing is done by the robot, which is the follower in the formation. The laser sensor is used to detect where the user is, and the ultrasound sensors on the robot are used for obstacle detection. Therefore, this work presents a human-robot interaction in which the human does not need to wear any sensor, since all of the required data are acquired by the robot and its sensors.
The diagram of the human-robot interaction is depicted in Figure 9. This is the diagram used to generate the human-robot kinematics model and, further, the control laws. In the diagram depicted in Figure 9, it is possible to see the variables used by the controller algorithm to calculate the robot’s linear and angular speed to keep the formation. The variables presented in this picture are described in Table 2.
The mathematical model of the smart walker pose in the human reference is described by Equations (11)–(13). The Pioneer 3-DX is a differential-drive mobile robot with non-holonomic constraints.
$$\dot{x}_r^h = v_r^0\cos\alpha + \omega_h^0\,d\,\sin\theta \quad (11)$$
$$\dot{y}_r^h = v_r^0\sin\alpha + \omega_h^0\,d\,\cos\theta - v_h \quad (12)$$
$$\dot{\alpha} = \omega_r^0 - \omega_h^0 \quad (13)$$
where $\dot{x}_r^h$ and $\dot{y}_r^h$ are the variations of the robot position in the human reference due to the human's movements (linear speed $v_h$ and angular speed $\omega_h$), and $\alpha$ is the robot-human angle, also related to these displacements. $\theta$ is the angle of the robot in the human reference, and $d$ is the distance between the human and the robot. Velocities $v_r^0$ and $\omega_r^0$ are, respectively, the robot linear and angular speeds.
In this model, all variables with the index $h$ refer to the user (human), while variables with the index $r$ refer to the robot (smart walker). The indexes $L$ and $0$ denote, respectively, the laser sensor and the absolute references.
As shown in Equations (11)–(13), the absolute speeds of the human (both linear and angular) are needed to calculate the control actions. On the other hand, as shown in the diagram in Figure 4, the first step made by the controller after receiving the distance and angle from the laser sensor is to convert them into the Cartesian system using the laser sensor as the reference.
The following steps are used to find the linear and angular speeds the robot should perform to keep the formation. Some of these steps can be visualized in Figure 4, and a consolidated sketch of the whole pipeline is given after the list.
  • First, the coordinates are calculated in the Cartesian system (Equation (14)).
    $$\begin{bmatrix} x_h^L(k) \\ y_h^L(k) \end{bmatrix} = d(k)\begin{bmatrix} \cos\varphi(k) \\ \sin\varphi(k) \end{bmatrix} \quad (14)$$
    where $x_h^L(k)$ and $y_h^L(k)$ are the Cartesian coordinates in the laser sensor reference at instant $k$, calculated from the distance and angle given by the laser sensor ($d(k)$ and $\varphi(k)$).
  • Second, after computing the human coordinates in the laser reference, it is necessary to measure the human linear and angular speeds. To this end, it is necessary to calculate the robot angle variation, which is given by Equation (15). Figure 10 depicts the absolute robot angle variation.
    $$\Delta\alpha_r(k) = \alpha_r(k) - \alpha_r(k-1) \quad (15)$$
    where $\alpha_r$ is the robot absolute angle, and $\Delta\alpha_r$ is its variation between two consecutive instants.
  • Following the control algorithm, this variation is used to compute the rotation and translation matrices that project the human's previous location into the current one (Figure 11).
The projection shown in Figure 11 is represented in Equation (16).
$$\begin{bmatrix} x_h^{L'} \\ y_h^{L'} \end{bmatrix}_{k-1|k} = \begin{bmatrix} \cos(\Delta\alpha_r(k)) & \sin(\Delta\alpha_r(k)) \\ -\sin(\Delta\alpha_r(k)) & \cos(\Delta\alpha_r(k)) \end{bmatrix} \cdot \left( \begin{bmatrix} x_h^L \\ y_h^L \end{bmatrix}_{k-1} - \begin{bmatrix} v_r\,\Delta k \\ 0 \end{bmatrix} \right) \quad (16)$$
where:
  • $x_h^L$ and $y_h^L$ are the human position in the laser reference at instant $k-1$.
  • $x_h^{L'}$ and $y_h^{L'}$ are that position projected into the current instant $k$.
  • $v_r$ is the robot linear speed.
  • $\Delta k$ is the sample time between two consecutive instants (in the figure, the time needed for the walker to move from $Y^L$, at instant $k-1$, to $Y^{L'}$, at the current instant $k$).
  • The subscript $k-1|k$ means the positions at instant $k-1$ projected into instant $k$.
This information is useful to calculate the speed of the human using a single reference, which is k.
4. Once the human's previous location projected on the current robot reference and the current user location are available, it is possible to compute the human linear and angular speeds. The human speed in this case is the same in the robot reference and in the absolute reference. This occurs because it is a variation, and the robot displacement in the absolute reference was already taken into account in the last term of Equation (16).
Figure 12 shows the user speed calculation based on the robot movement. It is considered that the human did not move.
Therefore, it is possible to calculate the absolute human linear speed in both axes (x and y), in addition to the module of the speed, as shown in Equations (17)–(19), respectively.
$$v_{hx}(k) = \frac{x_h^L(k) - x_h^{L'}(k-1|k)}{\Delta k} \quad (17)$$
$$v_{hy}(k) = \frac{y_h^L(k) - y_h^{L'}(k-1|k)}{\Delta k} \quad (18)$$
$$v_h(k) = \sqrt{v_{hx}^2(k) + v_{hy}^2(k)} \quad (19)$$
This velocity value is independent of the reference system that is being used.
5. The next step consists of computing the human orientation in the robot reference, as shown in Figure 13.
This information is necessary to find the human angular speed; the orientation is exactly the angle between the two components of the human speed vector, given by Equation (20).
$$\beta(k) = \arctan\left(\frac{y_h^L(k) - y_h^{L'}(k-1|k)}{x_h^L(k) - x_h^{L'}(k-1|k)}\right) \quad (20)$$
where β is the human speed vector angle (angle between the components of the human speed).
6. Using the variation of the angle $\beta$, it is possible to find the human angular speed in the robot reference, as in Equation (21). The robot angular speed $\omega_r$ is provided by the robot encoders. Figure 14 shows how the angle variation relates to the user angular speed.
$$\omega_h(k) = \frac{\beta(k) - \beta(k-1)}{\Delta k} + \omega_r(k) \quad (21)$$
where $\omega_h$ and $\omega_r$ are the absolute angular speeds of the human and the robot, respectively.
7. From the human orientation in the robot's reference, it is possible to find the robot's orientation in the human reference through Equation (22). The angle between the laser reference and the robot reference is $\pi/2$ (assuming counterclockwise positive), as shown in Figure 15.
$$\alpha(k) = \beta(k) + \frac{\pi}{2} \quad (22)$$
8. Following the calculation of the control actions, the next step is to find the displacement vector $T$, which converts the robot's reference into the user's reference (Equation (23)).
$$T(k) = \begin{bmatrix} \cos(\pi/2) & -\sin(\pi/2) \\ \sin(\pi/2) & \cos(\pi/2) \end{bmatrix} \cdot \begin{bmatrix} x_h^L(k) \\ y_h^L(k) \end{bmatrix} \quad (23)$$
9. Finally, it is possible to compute the robot position in the human reference, as shown in Equation (24) and represented in Figure 16.
$$\begin{bmatrix} x_r^h(k) \\ y_r^h(k) \end{bmatrix} = \begin{bmatrix} \cos(\alpha_r(k) + \pi/2) & -\sin(\alpha_r(k) + \pi/2) \\ \sin(\alpha_r(k) + \pi/2) & \cos(\alpha_r(k) + \pi/2) \end{bmatrix} \cdot \left( \mathbf{0} - T(k) \right) \quad (24)$$
where $x_r^h$ and $y_r^h$ are the robot position in the human reference.
10. To define the control laws, it is important to first define the vector $h$, which contains the robot position in the human reference, shown in Equation (25). Figure 16 details the desired vector $h_d$ and the current vector $h$.
$$h = \begin{bmatrix} x_r^h \\ y_r^h \end{bmatrix} \quad (25)$$
11. It is also essential to determine the desired values, given by the $h_d$ vector, represented in Equation (26).
$$h_d = \begin{bmatrix} x_{r|d}^h \\ y_{r|d}^h \end{bmatrix} \quad (26)$$
where the variables with the subscript $d$ indicate the desired values, i.e., the set-points of the robot position vector and of each of its components.
12. The error vector is given by Equation (27).
$$\tilde{h} = h - h_d \quad (27)$$
13. By using the inverse kinematics model, the speed reference vector can be calculated as in Equation (28):
$$\dot{h}_{ref} = -K\tilde{h} - \dot{h}_r^h \quad (28)$$
where $K$ is a control gain and $\dot{h}_r^h$ is the human contribution to the whole formation movement, given by Equation (29):
$$\dot{h}_r^h = \begin{bmatrix} \omega_h\,d\,\sin\theta \\ \omega_h\,d\,\cos\theta - v_h \end{bmatrix} \quad (29)$$
where $\theta$ is the robot angle in the human reference (see Table 2).
14. Finally, the control laws are computed, as described in Equations (30) and (31).
$$v_c = |\dot{h}_{ref}|\cos\tilde{\alpha} \quad (30)$$
$$\omega_c = k_\omega\tilde{\alpha} + \dot{\alpha}_{ref} + \omega_h \quad (31)$$
where $v_c$ and $\omega_c$ are the controller outputs, i.e., the reference speeds the controller sends to the robot, and $\tilde{\alpha} = \alpha_{ref} - \alpha$ is the orientation error.
The kinematic controller is given by Equations (30) and (31). Even though the dynamic model can be affected by the user's weight, the usually low speeds during robot operation make the dynamic effects negligible. Therefore, a pure kinematic controller is sufficient to keep the user distance and angle, and the alterations of the dynamic model due to the user and structure weight can be ignored. Besides, part of the weight is supported not by the robot, but by the walker frame and the user's legs.
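The numbered steps above can be condensed into a single update routine. The sketch below is our reading of Equations (14)–(31), not the authors' code: the gains, the sample time, the desired pose $h_d$ and the simplifications noted in the comments (e.g., dropping the $\dot{\alpha}_{ref}$ feedforward and deriving the desired orientation from $\dot{h}_{ref}$) are illustrative assumptions.
```python
import numpy as np

DT = 0.1  # sample time (s), as listed in Table 2

def formation_step(d, phi, alpha_r, v_r, omega_r, state,
                   h_d=np.array([0.0, -0.6]), K=0.8, k_omega=1.2):
    """One iteration of the formation controller (sketch of Eqs. 14-31).

    d, phi       : user distance (m) and angle (rad) from the laser
    alpha_r      : robot absolute angle (rad), from odometry
    v_r, omega_r : robot linear/angular speeds, from odometry
    state        : dict with previous x, y, beta and alpha_r
    Returns (v_c, omega_c), the speed references sent to the robot.
    """
    # Eq. (14): polar laser reading -> Cartesian, laser reference.
    x, y = d * np.cos(phi), d * np.sin(phi)

    # Eqs. (15)-(16): project the previous human position into the
    # current frame, compensating the robot's rotation and translation.
    dalpha = alpha_r - state['alpha_r']
    R = np.array([[np.cos(dalpha),  np.sin(dalpha)],
                  [-np.sin(dalpha), np.cos(dalpha)]])
    xp, yp = R @ (np.array([state['x'], state['y']])
                  - np.array([v_r * DT, 0.0]))

    # Eqs. (17)-(21): human linear and angular speeds.
    v_h = np.hypot((x - xp) / DT, (y - yp) / DT)
    beta = np.arctan2(y - yp, x - xp)
    omega_h = (beta - state['beta']) / DT + omega_r

    # Eqs. (22)-(27): robot pose in the human reference, error vector.
    alpha = beta + np.pi / 2
    Rq = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by pi/2, Eq. (23)
    h = -(Rq @ np.array([x, y]))               # simplified Eq. (24)
    h_err = h - h_d                            # Eq. (27)

    # Eqs. (28)-(31): speed reference and control actions.
    hdot_h = np.array([omega_h * d * np.sin(alpha),      # theta ~ alpha here
                       omega_h * d * np.cos(alpha) - v_h])
    hdot_ref = -K * h_err - hdot_h
    alpha_err = np.arctan2(hdot_ref[1], hdot_ref[0]) - alpha
    v_c = np.linalg.norm(hdot_ref) * np.cos(alpha_err)
    omega_c = k_omega * alpha_err + omega_h    # feedforward term omitted

    state.update(x=x, y=y, beta=beta, alpha_r=alpha_r)
    return v_c, omega_c
```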

2.2.3. Control Stability Proof

The stability proof of the controller is made by analyzing the whole system with the direct Lyapunov method. Considering the state vector as the error vector, the positive definite function of Equation (32) is taken:
$$V = \frac{1}{2}\tilde{h}^T\tilde{h} \quad (32)$$
The derivative function of Equation (32) is shown in Equation (33).
$$\dot{V} = \tilde{h}^T\dot{\tilde{h}} \quad (33)$$
By substituting $\dot{\tilde{h}} = -K\tilde{h}$ (obtained by closing the loop with Equation (28)), this derivative is found to be negative definite, as shown in Equation (34), thus proving asymptotic stability at the equilibrium point of zero error.
$$\dot{V} = -\tilde{h}^T K\tilde{h} < 0, \quad \forall\,\tilde{h} \neq 0,\; K > 0 \quad (34)$$
Similarly, we can prove that the robot's orientation error converges to zero by taking the candidate function of Equation (35).
$$V = \frac{1}{2}\tilde{\alpha}^2 > 0 \quad (35)$$
Deriving Equation (35), we find:
$$\dot{V} = \tilde{\alpha}\dot{\tilde{\alpha}} \quad (36)$$
To prove that Equation (36) is negative definite, it is necessary to close the loop and isolate the term $\dot{\tilde{\alpha}}$. Therefore, considering that the robot tracks the controller speed, i.e., that $\omega_c$ of Equation (31) equals $\omega_r^0$ of Equation (13), we can obtain Equation (38) by substituting this into Equation (13).
$$\dot{\alpha} = \omega_r^0 - \omega_h^0 \quad (37)$$
$$\dot{\alpha} = \omega_c - \omega_h \quad (38)$$
$$\dot{\alpha} = k_\omega\tilde{\alpha} + \dot{\alpha}_{ref} + \omega_h - \omega_h \quad (39)$$
$$0 = k_\omega\tilde{\alpha} + \dot{\alpha}_{ref} - \dot{\alpha} \quad (40)$$
$$0 = k_\omega\tilde{\alpha} + \dot{\tilde{\alpha}}, \quad \dot{\tilde{\alpha}} = \dot{\alpha}_{ref} - \dot{\alpha} \quad (41)$$
$$\dot{\tilde{\alpha}} = -k_\omega\tilde{\alpha} \quad (42)$$
Now, by replacing Equation (42) in Equation (36), Equation (43) can be obtained, thus proving that $\dot{V}$ is negative definite.
$$\dot{V} = -k_\omega\tilde{\alpha}^2 < 0 \quad (43)$$
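As a quick numerical sanity check of this proof (our own illustration, not part of the paper), one can integrate the closed-loop error dynamics $\dot{\tilde{h}} = -K\tilde{h}$ and $\dot{\tilde{\alpha}} = -k_\omega\tilde{\alpha}$ and verify that the Lyapunov function decreases monotonically:
```python
import numpy as np

# Closed-loop error dynamics from Eqs. (34) and (42); gains illustrative.
K, k_omega, dt = 0.8, 1.2, 0.01
h_err = np.array([0.5, -0.3])   # initial position error (m)
a_err = 0.4                     # initial orientation error (rad)

V_prev = float('inf')
for _ in range(1000):
    V = 0.5 * (h_err @ h_err) + 0.5 * a_err ** 2
    assert V < V_prev           # V is strictly decreasing along trajectories
    V_prev = V
    h_err += dt * (-K * h_err)          # Euler step of h_err' = -K h_err
    a_err += dt * (-k_omega * a_err)    # Euler step of a_err' = -k_w a_err

print(f"final |h_err| = {np.linalg.norm(h_err):.2e} m, a_err = {a_err:.2e} rad")
```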

2.2.4. Safety Rules

Safety rules are a special part of the algorithm, apart from the controller, that analyzes whether the controller's output is safe. Normally, the controller's outputs are executed as computed; however, some special situations require a safety supervisor, for example, when the laser sensor detects only one leg, which may imply that the user is losing his or her balance and may fall. Table 3 shows the safety rules used in this work and when they are applicable. The safety supervisor can override the controller's output in order to guarantee the user's safety.
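The supervisor logic of Table 3 can be sketched as below (our own naming and speed limits; the 20-frame leg-loss limit and the 50-cm ultrasound distance come from the text, while the remaining thresholds are illustrative):
```python
def supervise(v_c, omega_c, both_legs_found, min_sonar_m, state,
              v_max=0.5, w_max=0.8, max_lost_frames=20):
    """Apply the safety rules of Table 3 to the controller outputs.

    v_c, omega_c    : speeds proposed by the formation controller
    both_legs_found : True if both legs were detected in this frame
    min_sonar_m     : closest front ultrasound reading (m)
    state           : dict holding the consecutive leg-loss counter
    """
    # Rules 1-2: count consecutive frames without both legs; brake on limit.
    state['lost'] = 0 if both_legs_found else state['lost'] + 1
    if state['lost'] >= max_lost_frames:
        return 0.0, 0.0          # possible fall: stop immediately

    # Rule 3: limit linear and angular speeds.
    v = max(-v_max, min(v_c, v_max))
    w = max(-w_max, min(omega_c, w_max))

    # Rule 4: no backwards movement (speed set to zero).
    if v < 0.0:
        v = 0.0

    # Rule 5: brake while an obstacle is within 50 cm of the robot.
    if min_sonar_m < 0.5:
        return 0.0, 0.0

    return v, w
```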

3. Obstacle Avoidance

The robot is equipped with ultrasound sensors that can be used to avoid obstacles. If obstacle avoidance is enabled, the robot stops whenever there is an object or person within 50 cm in front of it. Figure 17 shows how the ultrasound sensors act when an obstacle is found in front of the robot.
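In code, this check reduces to polling the front sonars each control cycle, as in the fragment below (read_front_sonars is a hypothetical interface to the robot's ultrasound ring):
```python
def obstacle_ahead(read_front_sonars, stop_distance_m=0.5):
    """Return True if any front ultrasound reading is below the stop distance.

    read_front_sonars: callable returning a list of ranges in meters
    (hypothetical interface; the Pioneer 3-DX exposes a sonar ring).
    """
    return min(read_front_sonars()) < stop_distance_m
```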

4. Experiments

Experiments were performed in order to validate the controller and the safety rules in real situations. This is a proof-of-concept application; therefore, the goal is to show that the system works, and it was not tested with people with disabilities.

4.1. Straight Line

In the first experiment set, the user was asked to walk, helped by the smart walker, along a 10-m straight path, three times, going from the start point to the finish point and returning to the start point. In the first instance, there is no obstacle along the path. In the second instance, a wood board is placed as an obstacle, which should be detected by the ultrasound sensors and brake the walker, following Safety Rule #5. The path and movements are described in Figure 18. As can be seen in the results (Figure 19), the errors in distance and angle converge towards zero, and since the path is a straight line, there is almost no angular speed.
The error tends to zero but does not settle there, due to the movement of the user. During walking, the human-robot distance changes not only because of the robot movement, but also because of the human movement. Therefore, the controller always forces the error towards zero, but whenever the user moves, the distance changes again, making the error nonzero. Since the speed is limited, the robot cannot react quickly enough to keep the error at zero at all times; however, the error stays bounded, which shows that the controller acts while the user walks.
In the straight line experiment, the mean angle error was −0.04 rad, while the mean distance error was 0.0009 m.
To validate the obstacle detection algorithm (Safety Rule #5), the ultrasound sensors are monitored, and the walker stops immediately if an obstacle is found within 50 cm of the robot. At the beginning of this experiment there is an obstacle, and when the robot reaches 50 cm from it, the walker stops. Then, the obstacle is removed, and the walker can move forward again.
Figure 20 shows some photos of the experiment and the diagram of the path with the obstacles. Every time the obstacle was put in front of the smart walker, it stopped, and once it was removed, the device started moving again. The results of this experiment are shown in Figure 21.
Similarly to the previous case, the errors stay bounded. When the walker approaches the obstacle, it stops in order not to collide with it; only when the obstacle is removed does it allow the user to walk again. In this experiment, the mean angle error was −0.02 rad, while the mean distance error was 0.05 m.

4.2. Lemniscate Path

The second experiment was conducted with the walker following a Lemniscate curve path (which is typically used to validate robot performance). In the first instance, there is no obstacle, while in the second instance, the walker should brake before colliding with the obstacle. The user started the curve at Point “A” and then walked following the whole curve, returning to the same point, as shown in Figure 22.
Mathematically, this Lemniscate curve is represented by Equations (44) and (45).
$$x(t) = \frac{a\cos(t)}{\sin^2(t) + 1} \quad (44)$$
$$y(t) = \frac{b\sin(t)\cos(t)}{\sin^2(t) + 1} \quad (45)$$
where a and b are the constants defining the length in each axis.
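For reference, the path of Equations (44) and (45) can be sampled as below (the values of a, b and the number of samples are arbitrary choices for illustration):
```python
import numpy as np

def lemniscate(a=2.0, b=2.0, n=400):
    """Sample the Lemniscate curve of Eqs. (44)-(45) over one period."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    denom = np.sin(t) ** 2 + 1.0
    return a * np.cos(t) / denom, b * np.sin(t) * np.cos(t) / denom
```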
The Lemniscate curve followed by the robot can be viewed in Figure 23. The error varies through time, but always tends to zero.
The shape generated by the robot odometry in Figure 23 resembles the Lemniscate curve. It does not fit the curve exactly for two reasons: (1) the human guided the robot and, therefore, did not walk exactly on the Lemniscate curve; (2) there are odometry errors due to sliding, in addition to other errors that naturally accumulate over time when using odometry. The control error tends to zero, with a mean distance error of 0.02 m and a mean angle error of −0.08 rad. It is important to emphasize that the error varies with the human movement, i.e., it keeps changing while the human moves, but the controller drives it towards zero over time.
After collecting the data from the experiments, some results, a discussion and conclusions can be made, which are shown in Section 5 and Section 6.

5. Discussion

The results of the experiments show that the controller maintained stability and helped the user in different paths, including complex curves, such as the Lemniscate one. In addition, the safety rules were functional when necessary.
The graphs in Figure 23 show that abrupt changes caused slight increases in the errors, which quickly went towards zero due to the action of the controller. Additionally, in the cases when the safety rules were needed, the robot responded suitably, detecting the obstacle and braking the smart walker.
Still regarding the safety rules, when only one leg was detected, the smart walker was stopped. This is an important issue addressed here, since one-leg detection may indicate that the user is about to fall.
The results of this research show that the controller kept the set-point distance and angle. Additionally, the safety rules also have been activated as expected.
This shows that the controller, sensors and actuators, together with the mechanical structure, i.e., the whole smart walker, can be a useful tool for mobility and rehabilitation and may also be used in clinics for such purposes.

6. Conclusions

This work presented a new controller and a safety supervisor to guide a smart walker with no sensors attached to the user. Additionally, it was shown how these concepts were applied to a modified walker frame attached to a mobile robot, which uses a laser sensor to detect the user's legs. In addition, ultrasound sensors on board this robot allowed braking the smart walker to avoid collisions with obstacles.
The algorithm used to detect the user’s legs was suitable for this application, and the whole system integration was tested with two different kinds of experiments, involving obstacles and free paths (without obstacles).
The results were satisfactory, and this controller, together with the safety rules, is a potential tool for helping people both with mobility and with physical rehabilitation, since the controller proved to guarantee safe walking. The safety rules behaved as expected, offering the user the safe experience needed in both mobility and rehabilitation.

Acknowledgments

The authors acknowledge the financial support from CNPq (#458651/2013-3) and technical support from the Federal University of Espirito Santo, the National University of San Juan and CONICET.

Author Contributions

Carlos Valadao envisioned and developed all of the algorithms for leg detection and the smart walker's controller, besides writing this manuscript. All of the work was supervised and technically advised by Teodiano Bastos-Filho, Eliete Caldeira, Anselmo Frizera-Neto and Ricardo Carelli, who also contributed to the editing of this manuscript. Teodiano Bastos-Filho and Ricardo Carelli provided the general direction of the research.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LD
Leg detection
LMS
Laser measurement systems
LRF
Laser range finder
PID
Proportional-integral-derivative
ROI
Region of interest

References

  1. Oxford Dictionaries. Available online: http://www.oxforddictionaries.com/definition/english/mobility (accessed on 10 May 2016).
  2. Yu, K.T.; Lam, C.P.; Chang, M.F.; Mou, W.H.; Tseng, S.H.; Fu, L.C. An interactive robotic walker for assisting elderly mobility in senior care unit. In Proceedings of the IEEE Workshop on Advanced Robotics and Its Social Impacts, ARSO, Seoul, Korea, 26–28 October 2010; pp. 24–29.
  3. Morris, A.; Donamukkala, R.; Anuj, K.; Aaron, S.; Matthews, T.J.; Dunbar-Jacob, J.; Thrun, S. A robotic walker that provides guidance. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; pp. 25–30.
  4. Valadão, C.; Cifuentes, C.; Frizera, A.; Carelli, R.; Bastos, T. Development of a Smart Walker to Assist Human Mobility. In Proceedings of the 4th IEEE Biosignals and Biorobotics Conference (ISSNIP), Rio de Janeiro, Brazil, 18–20 February 2013; pp. 1–5.
  5. Cook, A.M.; Polgar, J.M. Cook and Hussey’s Assistive Technologies: Principles and Practice, 3rd ed.; Mosby Elsevier: St. Louis, MO, USA, 2013; p. 592.
  6. Duxbury, A.S. Gait Disorders and Fall Risk: Detection and Prevention. Comp. Ther. 2000, 26, 238–245.
  7. Martins, M.M.; Santos, C.P.; Frizera-Neto, A.; Ceres, R. Assistive Mobility Devices Focusing on Smart Walkers: Classification and Review. Robot. Auton. Syst. 2012, 60, 548–562.
  8. World Health Organization. WHO Global Report on Falls Prevention in Older Age; World Health Organization Press: Geneva, Switzerland, 2007; p. 53.
  9. Bradley, S.M.; Hernandez, C.R. Geriatric assistive devices. Am. Fam. Physician 2011, 84, 405–411.
  10. United Nations. World Population Ageing 1950–2050. In Technical Report 26; United Nations: New York, NY, USA, 2002; pp. xxvii–xxxi.
  11. Kamata, M.; Shino, M. Mobility devices for the elderly: “Silver vehicle” feasibility. IATSS Res. 2006, 30, 52–59.
  12. Ceres, R.; Pons, J.; Calderon, L.; Jimenez, A.; Azevedo, L. A robotic vehicle for disabled children. IEEE Eng. Med. Biol. Mag. 2005, 24, 55–63.
  13. Frizera Neto, A. Interfaz Multimodal Para Modelado y Asistencia a la Marcha Humana Mediante Andadores Robóticos. Ph.D. Thesis, Universidad de Alcalá, Madrid, Spain, 2010.
  14. Bastos-Filho, T.F.; Cheein, F.A.; Muller, S.M.T.; Celeste, W.C.; De La Cruz, C.; Cavalieri, D.C.; Sarcinelli-Filho, M.; Amaral, P.F.S.; Perez, E.; Soria, C.M.; et al. Towards a new modality-independent interface for a robotic wheelchair. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 567–584.
  15. Wasson, G.; Gunderson, J.; Graves, S.; Felder, R. An Assistive Robotic Agent for Pedestrian Mobility. In Proceedings of the 5th International Conference on Autonomous Agents, Montreal, QC, Canada, 28 May–1 June 2001; pp. 169–173.
  16. Alwan, M.; Wasson, G.; Sheth, P.; Ledoux, A.; Huang, C. Passive derivation of basic walker-assisted gait characteristics from measured forces and moments. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004.
  17. Wasson, G.; Sheth, P.; Alwan, M.; Granata, K.; Ledoux, A.; Huang, C. User Intent in a Shared Control Framework for Pedestrian Mobility Aids. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, NV, USA, 27–31 October 2003; pp. 2962–2967.
  18. Wasson, G.; Sheth, P.; Ledoux, A.; Alwan, M. A physics-based model for predicting user intent in shared-control pedestrian mobility aids. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; pp. 1914–1919.
  19. Loterio, F.A.; Mayor, J.J.V.; Frizera Neto, A.; Filho, T.F.B. Assessment of applicability of robotic walker for post-stroke hemiparetic individuals through muscle pattern analysis. In Proceedings of the 5th ISSNIP-IEEE Biosignals and Biorobotics Conference (2014): Biosignals and Robotics for Better and Safer Living (BRC), Salvador, Brazil, 26–28 May 2014; pp. 1–5.
  20. Kikuchi, T.; Tanaka, T.; Tanida, S.; Kobayashi, K.; Mitobe, K. Basic study on gait rehabilitation system with intelligently controllable walker (i-Walker). In Proceedings of the 2010 IEEE International Conference on Robotics and Biomimetics (ROBIO 2010), Tianjin, China, 14–18 December 2010; pp. 277–282.
  21. Valadão, C.; Cifuentes, C.; Frizera, A.; Carelli, R.; Bastos, T. Development of a Smart Walker for People with Disabilities and Elderlies. In XV Reunión de Trabajo en Procesamiento de la Información y Control; RPIC: San Carlos de Bariloche, Argentina, 2013; pp. 977–982.
  22. Rodriguez, C.; Cifuentes, C.; Frizera, A.; Bastos, T. Metodologia para Obtenção de Comandos de Navegação de um Andador Robótico Através de Sensores de Força e Laser. In XI Simpósio Brasileiro de Automação Inteligente (SBAI); SBA: Fortaleza, Brazil, 2013; pp. 1–6.
  23. Lacey, G.; Mac Namara, S.; Dawson-Howe, K.M. Personal Adaptive Mobility Aid for the Infirm and Elderly Blind. In Assistive Technology and Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 1998; pp. 211–220.
  24. Figure-Pixabay. Available online: https://pixabay.com/ (accessed on 18 July 2016).
  25. Chan, A.D.C.; Green, J.R. Smart rollator prototype. In Proceedings of the MeMeA 2008—IEEE International Workshop on Medical Measurements and Applications, Ottawa, ON, Canada, 9–10 May 2008; pp. 97–100.
  26. Einbinder, E.; Horrom, T.A. Smart Walker: A tool for promoting mobility in elderly adults. J. Rehabil. Res. Dev. 2010, 47, xiii–xv.
  27. Rodriguez-Losada, D.; Matia, F.; Jimenez, A.; Galan, R.; Lacey, G. Implementing map based navigation in Guido, the robotic SmartWalker. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 3390–3395.
  28. Wachaja, A.; Agarwal, P.; Zink, M.; Adame, M.R.; Moller, K.; Burgard, W. Navigating blind people with a smart walker. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015; pp. 6014–6019.
  29. Rodriguez-Losada, D. A Smart Walker for the Blind. Robot. Autom. Mag. 2008, 15, 75–83.
  30. Kulyukin, V.; Kutiyanawala, A.; LoPresti, E.; Matthews, J.; Simpson, R. iWalker: Toward a rollator-mounted wayfinding system for the elderly. In Proceedings of the 2008 IEEE International Conference on RFID, Las Vegas, NV, USA, 16–17 April 2008; pp. 303–311.
  31. Hirata, Y.; Muraki, A.; Kosuge, K. Motion control of intelligent passive-type walker for fall-prevention function based on estimation of user state. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 3498–3503.
  32. Lacey, G.; Dawson-Howe, K. Evaluation of robot mobility aid for the elderly blind. In Proceedings of the Fifth International Symposium on Intelligent Robotic Systems, Stockholm, Sweden, 8–11 July 1997.
  33. Rentschler, A.J.; Simpson, R.; Cooper, R.A.; Boninger, M.L. Clinical evaluation of Guido robotic walker. J. Rehabil. Res. Dev. 2008, 45, 1281.
  34. Lee, G.; Ohnuma, T.; Chong, N.Y. Design and control of JAIST active robotic walker. Intell. Serv. Robot. 2010, 3, 125–135.
  35. Frizera-Neto, A.; Ceres, R.; Rocon, E.; Pons, J.L. Empowering and assisting natural human mobility: The Simbiosis walker. Int. J. Adv. Robot. Syst. 2011, 8, 34–50.
  36. Dubowsky, S.; Genot, F.; Godding, S.; Kozono, H.; Skwersky, A.; Yu, H.; Yu, L.S. PAMM—A robotic aid to the elderly for mobility assistance and monitoring: A “helping-hand” for the elderly. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; pp. 570–576.
  37. Hirata, Y.; Hara, A.; Kosuge, K. Passive-type intelligent walking support system “RT Walker”. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2004; pp. 3871–3876.
  38. MacNamara, S.; Lacey, G. A smart walker for the frail visually impaired. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; pp. 1354–1359.
  39. Valadão, C.T.; Lotério, F.; Cardoso, V.; Bastos-Filho, T.; Frizera-Neto, A.; Carelli, R. Adaptação de Andador Convencional para Reabilitação e Assistência a Pessoas com Restrições Motoras. In XXIV Congresso Brasileiro de Engenharia Biomédica; SBEB: Uberlândia, Brazil, 2014; pp. 533–536.
  40. Sick. Technical Documentation LMS200/211/221/291 Laser Measurement Systems; 2006. Available online: http://sicktoolbox.sourceforge.net/docs/sick-lms-technical-description.pdf (accessed on 18 July 2016).
  41. Roberti, F.; Marcos Toibero, J.; Frizera Vassallo, R.; Carelli, R. Control Estable de Formación Basado en Visión Omnidireccional para Robots Móviles No Holonómicos. In Revista Iberoamericana de Automática e Informática Industrial RIAI; Elsevier: Madrid, Spain, 2011; pp. 29–37.
  42. Schneider Junior, V.; Frizera Neto, A.; Valadão, C.; Elias, A.; Bastos Filho, T.; Filho, A. Detecção de pernas utilizando um sensor de v. In Congresso Brasileiro de Automática; Universidade Federal do Espírito Santo: Vitória-ES, Brazil, 2012; pp. 1364–1370.
Figure 1. UFES’s smart walker.
Figure 2. Types of walkers according to their mechanical structure. (a) Four-legged; (b) front-wheeled [24]; (c) rollator [24].
Figure 3. Diagram showing the structures that compose our smart walker [39].
Figure 4. Flowchart of the data acquisition, controller and safety rules applied to the smart walker.
Figure 5. Example of finding the user based on the position of each leg. (a) Laser signal inside the region of interest; (b) signal derivative and transitions (peaks and valleys); (c) transitions in the original signal; (d) legs and user position.
Figure 6. Diagram showing the operation of the leg detection algorithm. (a) Legs together - two transitions; (b) Legs separated - four transitions; (c) Three transitions - left leg closer to the laser sensor; (d) Three transitions - right leg closer to the laser sensor; (e) Only one leg detected.
Figure 7. Flowchart used to infer the human position by finding where the legs are (main part).
Figure 8. Details of the scripts for each kind of leg detection.
Figure 9. Diagram showing the human-robot interaction and the human, robot, laser sensor and absolute references.
Figure 10. Variation of the robot in the absolute reference and calculation of the angular speed.
Figure 11. Projection of the previous position in the current time.
Figure 12. Human speed vector calculation.
Figure 13. Calculation of the angle of the human in the laser reference.
Figure 14. User angular speed by β-angle variation.
Figure 15. Conversion from the laser reference to the robot reference. Note that the β angle goes from the $X^H$ axis up to the $Y^L$ axis. Since it goes clockwise, it is negative.
Figure 16. Robot position projected into the human reference.
Figure 17. Ultrasound sensors are used to avoid collision. In (a), it stops moving to avoid the obstacle; in (b), it keeps moving, since there is no obstacle.
Figure 18. Straight path where the user guided the robot. The human walked through that path, and the robot helped him to perform such an action. The human guides the robot, not the opposite.
Figure 19. Results for the straight path. (a) Graphical data about the experiment; (b) Graphical path of the experiment.
Figure 20. Diagram of the path with obstacles.
Figure 21. Results for the straight path with obstacles. (a) Graphical data about the experiment; (b) Graphical path of the experiment, showing the obstacles and the end of the path.
Figure 22. Lemniscate curve followed in the second set of experiments (with photos).
Figure 23. Results for the Lemniscate curve. (a) Graphical data about the experiment; (b) Graphical path of the experiment.
Table 1. Smart walkers’ controllers and sensors.

Smart Walker | Sensors | Controllers
RT Walker [37] | Force/moment sensing and encoders | Several algorithm controllers for motion (obstacle avoidance, path following, among others)
GUIDO Smart Walker [29] | Laser sensor, force sensors, switches, sonar and encoders | Shared control approach
PAM-AID [38] | Ultrasound and laser sensors | Algorithmic controller
PAMM [36] | Health sensors, external sensors, encoders, force sensors, among others | Admittance-based controller
JARoW [34] | Infrared sensors | Algorithmic controllers
iWalker [30] | RFID, encoders and external sensors | Algorithmic controllers
UFES’ Smart Walker [22] | IMUs, laser sensor and force sensor | Force and inverse kinematics controllers
Our device | Laser sensor, encoders and ultrasound | Formation-based controller
Table 2. Variables in Figure 9.

Variable | Details | Unit
$v_h$ | Human linear speed in the absolute reference | m/s
$\omega_h$ | Human angular speed in the absolute reference | rad/s
$d$ | Human-robot distance | m
$\theta$ | Robot angle in the human reference | rad
$\varphi$ | Human angle in the robot reference | rad
$x^L$ and $y^L$ | Laser sensor longitudinal and transversal axes | m
$x^H$ and $y^H$ | Human longitudinal and transversal axes | m
$x^R$ and $y^R$ | Robot longitudinal and transversal axes | m
$x_h^L$ and $y_h^L$ | Human position in the laser sensor reference | m
$x_h^R$ and $y_h^R$ | Human position in the robot reference | m
$x_r^h$ and $y_r^h$ | Robot position in the human reference | m
$\dot{h}_{ref}$ | Speed vector the robot should follow to keep the formation | m/s
$\alpha$ | Robot orientation in the human reference | rad
$\alpha_{ref}$ | Set-point orientation the robot should achieve to keep the formation | rad
$v_r$ | Robot linear speed | m/s
$\omega_r$ | Robot angular speed | rad/s
$\beta$ | Human orientation in the robot reference | rad
$v_{hx}$ | Human linear speed in the transversal axis | m/s
$v_{hy}$ | Human linear speed in the longitudinal axis | m/s
$k$ | Current sample instant (sample time 0.1 s, i.e., 100 ms) | —
$k-1$ | Previous sample instant | —
$k-1|k$ | Positions and angles of instant $k-1$ projected into instant $k$ | —
Table 3. Safety rules for the smart walker.

Situation | Action | Notes
No legs/only one leg detected, several times sequentially | Increase the counter | A counter increases each time both legs are not detected. If it reaches the limit, the walker stops immediately. The limit is defined in the code. If the legs are detected before the counter reaches the limit, the counter is zeroed.
Counter exceeded limit | Brake the robot | If the counter reaches the maximum limit, the robot is braked and its movement stops.
High speed | Limit speed | Applicable to both linear and angular speeds.
Backwards movement | Brake the robot | Braking the robot in this case means the speed is set to zero.
Obstacle (detected by ultrasound sensors) | Brake the robot | The robot remains stopped while the obstacle is not removed from the path.
