

Proceedings of the 2018 IEEE International Conference on Cyborg and Bionic Systems
Shenzhen, China, October 25-27, 2018

Upper-Body Motion Mode Recognition Based on IMUs for a Dynamic Spine Brace

Pihsaia S. Sun, Jingeng Mai, Zhihao Zhou, Sunil Agrawal, and Qining Wang∗

Abstract— This paper presents an upper-body motion mode recognition method based on inertial measurement units (IMUs) using cascaded classification approaches and integrated machine learning algorithms. The proposed method is designed to be applied to a dynamic spine brace in the future to assess its usability. This study focuses on the problem of classifying upper-body motion modes using four IMUs worn on the upper body of the subjects. Six locomotion modes and ten locomotion transitions were investigated. A quadratic discriminant analysis (QDA) classifier and a support vector machine (SVM) classifier were deployed in our study. With the selected cascade classification strategies, the system achieves satisfactory performance, with average recognition accuracies of 96.77% (QDA) and 97.64% (SVM). The obtained results demonstrate the effectiveness of the proposed method.

This work was supported by the National Natural Science Foundation of China (No. 91648207) and the National Key R&D Program of China (No. 2018YFF0300606). Additional funding was provided by Beijing Goodoing SpeedSmart, Co. Ltd. (Corresponding author: Qining Wang.)
P. S. Sun, J. Mai, Z. Zhou and Q. Wang are with the Robotics Research Group, College of Engineering, Peking University, Beijing 100871, China, with the Beijing Innovation Center for Engineering Science and Advanced Technology (BIC-ESAT), Peking University, China, and also with the Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Beijing 100871, China (e-mail: qiningwang@pku.edu.cn).
S. Agrawal is with the Robotics and Rehabilitation (ROAR) Laboratory, Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA.

I. INTRODUCTION

Wearable robotics is a promising and challenging field of robotics research. As rehabilitation is a key application of wearable robotics, orthoses and exoskeletons have attracted increasing research interest recently [1]-[3]. Exoskeletons for rehabilitation have been developed with different types of mechanical structures, actuators, and interfaces [1].

Generally, wearable exoskeletons are classified according to the human segments on which the robot kinematic chains are applied. Thus, robotic exoskeletons can be classified as upper-limb, lower-limb, and full-body exoskeletons [4]. Although rehabilitation robots have different forms and functionality, most current research still focuses mainly on limbs or full-body suits. There are only a few studies concerning robotic spine exoskeletons. For example, the SpineCor brace was designed using a series of elastic straps to provide spine correction [5]. However, the force application of this brace is still passive and requires extensive training.

A dynamic spine brace was recently proposed by the Robotics and Rehabilitation Laboratory at Columbia University [6]. It is a wearable upper-body suit that can apply dynamically controlled forces on different regions of the spine to help correct abnormal postures (shown in Fig. 1(a)). There is a potential need for this functionality in the rehabilitation of adolescents with idiopathic or neuromuscular scoliosis. Compared to passive spine braces that provide rigid, passive support, the dynamic spine brace is able to actively control the motion of different regions of the spine by using three rings, where the motion of successive rings is controlled by six parallel actuators. The novel design of the dynamic spine brace ensures its functionality in modifying spine posture. In addition, it does not restrict the user's mobility in daily activities.

However, for the dynamic spine brace there are still challenges in achieving responsive human-robot interaction through this interface. One of the most difficult challenges is to provide the robotic system with sensors and appropriate software that allow it to respond to the environment and react intuitively to human motion with both accuracy and immediacy. This is especially pertinent in situations where the interface must compensate for unexpected, sudden changes in human intention. Motion mode recognition therefore plays an important role in high-level decision making, and it is one of the factors that may differentiate this system and make it more usable.

Recent studies have demonstrated human motion recognition techniques using wearable sensors combined with machine learning algorithms. Inertial measurement units (IMUs) are widely used in upper-limb and lower-limb motion detection [7], [8]. Other studies have chosen surface electromyography (sEMG), capacitive sensing, or multi-sensor fusion strategies for lower-limb motion detection [9]-[11]. Although these studies have explored different motion sensing methods, they typically address the motion of individual limbs rather than the complex, coordinated motions of the upper body. Only a few studies address upper-body motion recognition, e.g. [12].

In this study, we applied upper-body motion mode recognition methods intended for implementation on the dynamic spine brace to improve its active motion adaptation and response. Four IMUs were used to detect upper-body motions. We designed several upper-body motion tasks covering up to sixteen locomotion modes. Three able-bodied subjects participated in the experiments. By using cascaded classification methods and integrated machine learning algorithms, an effective motion mode recognition method was designed, which will be applied to the dynamic spine brace in the future.

The rest of this paper is organized as follows. Section II describes the method for upper-body motion mode recognition. Section III presents the results and related discussion. We conclude in Section IV.



II. METHOD

A. Experimental protocol

Three subjects (all male, aged between 21 and 23 years) volunteered to participate in the study and provided written consent prior to testing. The sensing system consists of four wireless IMUs sampled at 100 Hz. Each IMU records three sets of kinematic information: three-axis acceleration, three-axis angular velocity, and three-axis Euler angles (yaw, roll, pitch).

Fig. 1. (a) Human intent recognition in the hierarchical control strategy for a dynamic spine brace. (b) Placement of the four IMUs and their corresponding locations on the human spine.

As shown in Fig. 1(b), the four IMUs were placed as follows: one unit was placed on the neck of the subject, corresponding to the location of the fourth cervical vertebra (C4); a second unit was placed on the upper back, corresponding to the location of the sixth thoracic vertebra (T6); the other two units were placed symmetrically on the subject's lower back, at the height of the fifth lumbar vertebra (L5).

All subjects were asked to perform a set of upper-body motion tasks in five different directions: center, right 45°, right, left 45°, and left. The tasks selected for upper-body movement were: (1) sitting on the chair, (2) stretching the arm to reach a point in front of the subject, (3) returning the arm to the initial position, and (4) keeping the upper limbs in a resting position. Subjects made the first movement along the center direction with their right hand, then used their right hand for the right side and their left hand for the left side. Each motion task was repeated fifteen times during the experiment. The reaching points were 12 inches in front of the subject's feet, 27 inches to the right and left of the subject's shoulder, and 32 inches above the ground. An experimental observer advised subjects about the activities to perform and recorded each task.

IMU signals were captured and used as data sets for the sixteen locomotion modes in our experiment. Each mode was labeled by the observer during the task. The collected data of the sixteen locomotion modes were categorized into static and dynamic modes in the first step of our cascaded classification (shown in Fig. 2). The static mode referred to movements in which the subject's upper body remained static, while the dynamic mode covered movements in which the subject stretched the arm or returned to the resting position. The static mode was then divided into six static modes according to their different locations, marked S1, S2, S3, S4, S5, and S6. For the dynamic mode, two types of locomotion transitions were investigated and classified into forward mode (static to dynamic) and return mode (dynamic to static). In each transition group, there were five modes corresponding to the different movement directions. Forward modes were marked F1, F2, F3, F4, and F5; similarly, return modes were marked R1, R2, R3, R4, and R5.

Fig. 2. S1-S6 are static modes. F1-F5 and R1-R5 are dynamic modes: F1-F5 are forward locomotion transitions and R1-R5 are return locomotion transitions.
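For clarity, the sixteen mode labels and their grouping can be written down as a small data structure. The sketch below (Python) is purely illustrative; the list names and the helper are ours, not part of the original study.

```python
from enum import Enum

class ModeGroup(Enum):
    STATIC = "static"    # S1-S6: the upper body holds a posture
    FORWARD = "forward"  # F1-F5: static -> dynamic (reaching out)
    RETURN = "return"    # R1-R5: dynamic -> static (coming back)

# The sixteen motion-mode labels used in this paper.
STATIC_MODES = [f"S{i}" for i in range(1, 7)]
FORWARD_MODES = [f"F{i}" for i in range(1, 6)]
RETURN_MODES = [f"R{i}" for i in range(1, 6)]
ALL_MODES = STATIC_MODES + FORWARD_MODES + RETURN_MODES

def group_of(mode: str) -> ModeGroup:
    """Map a mode label to its group, mirroring the first two cascade decisions."""
    return {"S": ModeGroup.STATIC, "F": ModeGroup.FORWARD, "R": ModeGroup.RETURN}[mode[0]]
```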
B. Data segmentation and feature set

In the data segmentation stage, the data streams from the IMUs were gathered and processed into segments. In our study, there were four IMUs with 9 signal channels each, so 36 channels of signals were sampled in total. Even though the data of each mode were labeled manually at the beginning and end of the locomotion activities, it was still difficult to segment a continuous data stream without breaking its continuity. Thus, to extract data segments effectively, a sliding window over the time-series data was introduced, with a 15 ms overlap between consecutive windows. Five time-domain features were computed from the IMU signals of all four sensors in each sliding window and concatenated into a 180-dimensional feature vector. The features were defined as follows:

f1 = avg(X),
f2 = std(X),
f3 = max(X),
f4 = min(X),
f5 = sum(X),

where X stands for the N-length data vector of one signal channel within an analysis window; avg(X), std(X), max(X), min(X), and sum(X) denote its mean, standard deviation, maximum, minimum, and sum, respectively.
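As an illustration of the feature pipeline described above, the following sketch extracts the five time-domain features per channel over sliding windows of a 36-channel IMU stream, yielding a 180-dimensional vector per window. It is a minimal reconstruction under stated assumptions: the 150 ms (15-sample) window follows Section III, while the step size, array layout, and function names are ours.

```python
import numpy as np

def sliding_windows(data: np.ndarray, win_len: int, step: int):
    """Yield windows of shape (win_len, n_channels) from data of shape (n_samples, n_channels)."""
    for start in range(0, data.shape[0] - win_len + 1, step):
        yield data[start:start + win_len]

def window_features(window: np.ndarray) -> np.ndarray:
    """Five time-domain features per channel: mean, std, max, min, sum.

    For 36 channels (4 IMUs x 9 signals each) this yields a 180-dimensional vector.
    """
    feats = [window.mean(axis=0),
             window.std(axis=0),
             window.max(axis=0),
             window.min(axis=0),
             window.sum(axis=0)]
    return np.concatenate(feats)

if __name__ == "__main__":
    stream = np.random.randn(1000, 36)  # stand-in for a real 100 Hz IMU recording
    X = np.array([window_features(w)
                  for w in sliding_windows(stream, win_len=15, step=1)])
    print(X.shape)                      # (n_windows, 180)
```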

C. Classification method

A cascaded classification scheme combined with a quadratic discriminant analysis (QDA) classifier and a support vector machine (SVM) classifier was deployed for motion mode detection in this study (shown in Fig. 3). All data collected from the motion task experiment were divided into the sixteen locomotion modes. First, the locomotion modes were separated into static and dynamic modes. Once static and dynamic modes were discriminated, the dynamic-mode data were passed to the next classification stage, where two classifiers were trained for the forward-mode data and the return-mode data. Finally, the static-mode data were classified into 6 classes, and each group of dynamic-mode data was classified into 5 classes.

Among the sixteen modes, we thus distinguish static and dynamic modes: the static mode refers to phases in which the subject's posture remains static, while the dynamic mode covers the stretching and return movements of the arm. Within the dynamic mode, we further distinguish the transition movements by their direction, forward or return. In total, we measured data for 6 static modes and 10 locomotion transitions.

Fig. 3. Cascaded classification: (1) static and dynamic, (2) static modes and locomotion transitions, (3) forward modes and return modes.
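To make the cascade concrete, the sketch below chains the decision stages using scikit-learn's QDA and SVM implementations. It is a minimal illustration of the strategy described above, not the authors' implementation; the class and helper names, and the use of default hyperparameters, are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC

class CascadedModeClassifier:
    """Three-stage cascade: (1) static vs. dynamic, (2) forward vs. return,
    (3) final mode label within each branch (S1-S6, F1-F5, R1-R5)."""

    def __init__(self, make_clf=QuadraticDiscriminantAnalysis):
        self.stage1 = make_clf()      # static vs. dynamic
        self.stage2 = make_clf()      # forward vs. return (dynamic windows only)
        self.static_clf = make_clf()  # S1-S6
        self.fwd_clf = make_clf()     # F1-F5
        self.ret_clf = make_clf()     # R1-R5

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        grp = np.array([label[0] for label in y])      # 'S', 'F', or 'R'
        self.stage1.fit(X, grp == "S")                 # True = static
        self.static_clf.fit(X[grp == "S"], y[grp == "S"])
        dyn = grp != "S"
        self.stage2.fit(X[dyn], grp[dyn])              # 'F' vs. 'R'
        self.fwd_clf.fit(X[grp == "F"], y[grp == "F"])
        self.ret_clf.fit(X[grp == "R"], y[grp == "R"])
        return self

    def predict(self, X):
        X = np.asarray(X)
        out = np.empty(len(X), dtype=object)
        static = self.stage1.predict(X).astype(bool)
        if static.any():
            out[static] = self.static_clf.predict(X[static])
        dyn_idx = np.flatnonzero(~static)
        if dyn_idx.size:
            branch = self.stage2.predict(X[dyn_idx])   # 'F' or 'R'
            fwd, ret = dyn_idx[branch == "F"], dyn_idx[branch == "R"]
            if fwd.size:
                out[fwd] = self.fwd_clf.predict(X[fwd])
            if ret.size:
                out[ret] = self.ret_clf.predict(X[ret])
        return out

# An SVM-based cascade can be built the same way:
# svm_cascade = CascadedModeClassifier(make_clf=lambda: SVC(kernel="rbf"))
```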
D. Evaluation methods

In this study, 10-fold (leave-one-fold-out) cross-validation was used for training and testing the classifiers. In this procedure, the data of one fold were used as the testing set and the remaining data as the training set; the process was repeated ten times until every fold had served as the testing set. In this analysis, the overall recognition error (RE) was defined as

RE = \frac{N_{mis}}{N_{total}} \times 100\%, \quad (1)

where N_{mis} is the number of wrongly recognized testing samples and N_{total} is the total number of testing samples. Each motion mode was evaluated separately. To understand the recognition performance of each locomotion pattern, we used a confusion matrix to present the recognition results in our dynamic mode evaluation. The confusion matrix was defined as

C = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1m} \\ c_{21} & c_{22} & \cdots & c_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mm} \end{bmatrix}, \quad (2)

where each element c_{ij} was defined as

c_{ij} = \frac{n_{ij}}{n_i} \times 100\%, \quad (3)

with n_{ij} the number of testing samples of mode i recognized as mode j, and n_i the total number of testing samples of mode i.
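The evaluation quantities defined above map onto standard routines. The sketch below estimates the recognition error of Eq. (1) and the row-normalized confusion matrix of Eqs. (2)-(3) under 10-fold cross-validation; the use of scikit-learn utilities and the stratified, shuffled fold split are assumptions rather than details taken from the paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

def evaluate(clf, X, y, n_folds=10):
    """Return (recognition error in %, row-normalized confusion matrix in %)."""
    X, y = np.asarray(X), np.asarray(y)
    labels = np.unique(y)
    counts = np.zeros((len(labels), len(labels)))
    n_wrong = 0
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        n_wrong += np.sum(pred != y[test_idx])
        counts += confusion_matrix(y[test_idx], pred, labels=labels)
    re = 100.0 * n_wrong / len(y)                                  # Eq. (1)
    cm = 100.0 * counts / counts.sum(axis=1, keepdims=True)        # Eqs. (2)-(3): c_ij = n_ij / n_i
    return re, cm
```

The same routine can be run with either the QDA-based or the SVM-based cascade sketched above, which is how the two columns of results in Section III could be reproduced in principle.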
III. RESULTS AND DISCUSSION

A. Cascaded classification

Since cascaded classification was applied in this study, recognition results for the sixteen modes were obtained from three classifiers: (1) static and dynamic, (2) static modes and locomotion transitions, and (3) forward modes and return modes. First, all data were classified as either static or dynamic; the static-mode data were then classified into the six designated static modes. Meanwhile, another classifier was applied to the dynamic-mode data to determine the locomotion transition group based on the direction of the movement (forward or return). All forward-mode data were then classified according to the direction of motion, and the same method was applied to the return-mode data. Recognition results were calculated with a 150-ms sliding window for classification with the QDA and SVM classifiers. The recognition performance of the first classifier was good, with an average recognition accuracy of 98.04% ± 1.54 with the QDA classifier and 98.90% ± 1.63 with the SVM classifier. Compared with the first-layer classification, it was more difficult to distinguish the locomotion transitions (forward and return) in the second classifier. In locomotion transition mode recognition, the average recognition accuracy was 97.46% ± 1.14 with the QDA classifier and 98% ± 1.47 with the SVM classifier.

B. Classification of each mode

In static mode recognition, the accuracy reached 100%, indicating that all six static modes were recognized correctly. The results for each locomotion transition mode also showed high accuracy, with averages of 99.98% ± 0.01 and 99.34% ± 0.38 for the forward and return transition modes, respectively. The recognition performance of each locomotion transition mode is shown in Table I and Table II. There were no errors in the classification of right- and left-side motions. F4 had a 0.14% misclassification error with the QDA classifier and a 0.63% misclassification error with the SVM classifier because of its location adjacent to F1. The same reason explains the misclassification of R1 as R3 (2.63% with the QDA classifier and 0.61% with the SVM classifier) and of R5 as R4 (1.92% error rate).

TABLE I
RECOGNITION ACCURACY CONFUSION MATRIX WITH QDA CLASSIFIER

Forward   F1       F2     F3     F4       F5
F1        100%     0%     0%     0%       0%
F2        0%       100%   0%     0%       0%
F3        0%       0%     100%   0%       0%
F4        0.14%    0%     0%     99.86%   0%
F5        0%       0%     0%     0%       100%

Return    R1       R2     R3     R4       R5
R1        97.37%   0%     2.63%  0%       0%
R2        0%       100%   0%     0%       0%
R3        0%       0%     100%   0%       0%
R4        0%       0%     0%     100%     0%
R5        0%       0%     0%     1.92%    98.08%

TABLE II
RECOGNITION ACCURACY CONFUSION MATRIX WITH SVM CLASSIFIER

Forward   F1       F2     F3     F4       F5
F1        100%     0%     0%     0%       0%
F2        0%       100%   0%     0%       0%
F3        0%       0%     100%   0%       0%
F4        0.63%    0%     0%     99.37%   0%
F5        0%       0%     0%     0%       100%

Return    R1       R2     R3     R4       R5
R1        99.39%   0%     0.61%  0%       0%
R2        0%       100%   0%     0%       0%
R3        0%       0%     100%   0%       0%
R4        0%       0%     0%     100%     0%
R5        0%       0%     0%     0%       100%

C. Overall recognition accuracy

The recognition accuracy of each mode of locomotion and transition was satisfactory (100% in static mode, 99.98% ± 0.01 in forward mode, and 99.34% ± 0.38 in return mode). The recognition accuracy of static versus dynamic modes was 98.04% ± 1.54 (QDA) and 98.90% ± 1.63 (SVM). The recognition accuracy of the locomotion transition modes was 97.46% ± 1.14 (QDA) and 98% ± 1.47 (SVM). Since a three-layer cascaded classifier was applied, the overall recognition accuracy was calculated by gathering the final results of each mode. The overall accuracy of the three-stage cascaded classification was 96.77% ± 1.25 with the QDA classifier and 97.64% ± 2.08 with the SVM classifier. The results show that the SVM classifier yielded slightly higher recognition accuracy than the QDA classifier, but it required much longer model training time owing to the complexity of the algorithm. In summary, our experimental results show that the proposed recognition method achieves high accuracy in upper-body motion mode recognition.
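One reading of "gathering the final results of each mode" is to score the cascade end to end: a window counts as correct only when its final predicted label matches the ground-truth mode, regardless of which stage made the error. The helper below computes such an overall accuracy; this interpretation, like the function name, is an assumption and not a statement of how the authors aggregated their figures.

```python
import numpy as np

def overall_accuracy(y_true, y_pred) -> float:
    """End-to-end cascade accuracy in %: a sample is correct only if the final
    predicted mode label (after all cascade stages) matches the true label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(y_true == y_pred)

# With the hypothetical helpers sketched earlier:
# re, cm = evaluate(CascadedModeClassifier(), X, y)
# print(100.0 - re)   # overall recognition accuracy in %
```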
IV. CONCLUSION

Upper-body motion mode recognition is an important consideration in the use of a robotic spine exoskeleton. Motivated by the mobility and safety of human motion, we applied an integrated motion sensing method with machine learning techniques that can be used on a dynamic spine brace to enhance its usability. Six locomotion modes and ten locomotion transitions were investigated. With the selected cascade classification strategies, the system achieved good recognition performance across these sixteen modes. The overall recognition accuracies were 96.77% ± 1.25 and 97.64% ± 2.08 for the QDA and SVM classifiers, respectively. Future work will focus on real-time motion mode recognition and on integrating the proposed method into the actual control of the dynamic spine brace. More upper-body motion recognition experiments will be conducted with different parameters and sensing systems.

REFERENCES

[1] T. Yan, M. Cempini, C. M. Oddo, and N. Vitiello, "Review of assistive strategies in powered lower-limb orthoses and exoskeletons," Robotics and Autonomous Systems, vol. 64, pp. 120-136, 2015.
[2] A. J. Young and D. P. Ferris, "State of the art and future directions for lower limb robotic exoskeletons," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 2, pp. 171-182, 2017.
[3] A. M. Dollar and H. Herr, "Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art," IEEE Transactions on Robotics, vol. 24, no. 1, pp. 144-158, 2008.
[4] E. Rocon, A. F. Ruiz, R. Raya, A. Schiele, and J. L. Pons, "Human-robot physical interaction," in Wearable Robots: Biomechatronic Exoskeletons, pp. 127-163, 2009.
[5] M. S. Wong, J. C. Cheng, T. P. Lam, B. K. Ng, S. W. Sin, S. L. Lee-Shum, D. H. Chow, and S. Y. Tam, "The effect of rigid versus flexible spinal orthosis on the clinical efficacy and acceptance of the patients with adolescent idiopathic scoliosis," Spine, vol. 33, no. 12, pp. 1360-1365, 2008.
[6] J. H. Park, P. Stegall, and S. K. Agrawal, "Dynamic brace for correction of abnormal postures of the human spine," in Proc. of the IEEE International Conference on Robotics and Automation, 2015, pp. 5922-5927.
[7] N. Ahmad, R. A. R. Ghazilla, N. M. Khairi, and V. Kasi, "Reviews on various inertial measurement unit (IMU) sensor applications," International Journal of Signal Processing Systems, vol. 1, no. 2, pp. 256-262, 2013.
[8] A. J. Young, A. M. Simon, and L. J. Hargrove, "A training method for locomotion mode prediction using powered lower limb prostheses," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 22, no. 3, pp. 671-677, 2014.
[9] E. Zheng, L. Wang, K. Wei, and Q. Wang, "A noncontact capacitive sensing system for recognizing motion modes of transtibial amputees," IEEE Transactions on Biomedical Engineering, vol. 61, no. 12, pp. 2911-2920, 2014.
[10] H. Huang, F. Zhang, L. J. Hargrove, Z. Dou, D. R. Rogers, and K. B. Englehart, "Continuous locomotion-mode identification for prosthetic legs based on neuromuscular-mechanical fusion," IEEE Transactions on Biomedical Engineering, vol. 58, no. 10, pp. 2867-2875, 2011.
[11] L. J. Hargrove, A. M. Simon, A. J. Young, R. D. Lipschutz, S. B. Finucane, D. G. Smith, and T. A. Kuiken, "Robotic leg control with EMG decoding in an amputee with nerve transfers," New England Journal of Medicine, vol. 369, no. 13, pp. 1237-1242, 2013.
[12] J. Cheng, O. Amft, and P. Lukowicz, "Active capacitive sensing: Exploring a new wearable sensing modality for activity recognition," in Proc. International Conference on Pervasive Computing, 2010.

