
Received September 9, 2021, accepted September 28, 2021, date of publication October 4, 2021, date of current version October 18, 2021.
Digital Object Identifier 10.1109/ACCESS.2021.3117336

Human Activity Recognition Based on Acceleration Data From Smartphones Using HMMs

SYLVAIN ILOGA 1,2,3, ALEXANDRE BORDAT 2,4, JULIEN LE KERNEC 2,5, (Senior Member, IEEE), AND OLIVIER ROMAIN 2, (Member, IEEE)
1 Department of Computer Science, Higher Teachers’ Training College, University of Maroua, Maroua, Cameroon
2 ETIS UMR 8051, CNRS, ENSEA, CY Cergy Paris University, 95000 Cergy, France
3 IRD, UMMISCO, University of Sorbonne, 93143 Bondy, France
4 Bluelinea, 78990 Élancourt, France
5 James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, U.K.

Corresponding author: Sylvain Iloga (sylvain.iloga@gmail.com)


This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) under
Grant EP/R041679/1 (INSHEP).

ABSTRACT Smartphones are among the most popular wearable devices to monitor human activities.
Several existing methods for Human Activity Recognition (HAR) using data from smartphones are based on
conventional pattern recognition techniques, but they generate handcrafted feature vectors. This drawback
is overcome by deep learning techniques which unfortunately require lots of computing resources, while
generating less interpretable feature vectors. The current paper addresses these limitations through the
proposal of a Hidden Markov Model (HMM)-based technique for HAR. More formally, the sequential
variations of spatial locations within the raw data vectors are initially captured in Markov chains, which
are later used for the initialization and the training of HMMs. Meta-data extracted from these models are
then saved as the components of the feature vectors. The meta-data are related to the overall time spent by
the model observing every symbol for a long time span, irrespective of the state from which this symbol is
observed. Classification experiments involving four classification tasks have been carried out on the recently
constructed UniMiB SHAR database which contains 17 classes, including 9 types of activities of daily living
and 8 types of falls. As a result, the proposed approach has shown best accuracies between 92% and 98.85%
for all the classification tasks. This performance is more than 10% better than prior work for 2 out of
4 classification tasks.

INDEX TERMS Human activity recognition, activities of daily living, fall detection, hidden Markov models,
smartphone sensors.

I. INTRODUCTION
Human Activity Recognition (HAR) has gained in importance for many decades for its capability to learn meaningful and high-level knowledge about various types of human activities including (but not limited to):
1) Ambulation: walking, running, climbing stairs, etc.
2) Activities of daily living (ADL): eating, drinking, reading, etc.
3) Falls: fall forward, fall backward, syncope, etc.
More detailed descriptions of the existing types of human activities are available in [1]1 and [2].2 A review of state-of-the-art techniques for abnormal HAR is also proposed in [3]. There are two main categories of HAR: video-based HAR and sensor-based HAR. Video-based HAR performs high-level analysis of videos or images containing human motions from cameras. No further details related to this category of HAR are provided in the current paper, which rather focuses on the second category.

The associate editor coordinating the review of this manuscript and approving it for publication was Yue Zhang.

1 See Table 1 of [1]
2 See Table 3 of [2]

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
139336 VOLUME 9, 2021
S. Iloga et al.: HAR Based on Acceleration Data From Smartphones Using HMMs

Nevertheless, relevant surveys on video-based HAR are available in [4]–[6]. Sensor-based HAR is more popular and widely used because it better preserves privacy than video-based HAR. It relies on motion data from several types of smart sensors including:
1) Body-worn sensors: These sensors are worn by the user to describe the body movements. They are generally embedded in smartphones, watches, and standalone devices which include sensors such as accelerometers, gyroscopes, etc.
2) Object sensors: They are attached to objects to capture object movements. Radio frequency identifiers (RFID) deployed in a smart home environment or accelerometers fixed on objects (e.g., glass, cup) are generally used for this purpose.
3) Ambient sensors: They are used to capture the interaction between humans and the environment in a smart environment. There are many kinds of ambient sensors such as radars, microphones, pressure sensors, temperature sensors, WiFi, Bluetooth, etc.
4) Hybrid sensors: Here, the three former types of sensors are combined. Further details are available in [7].
Besides HAR, the aforementioned sensors are also suited to several other topics including indoor positioning methods [8] and pedestrian dead reckoning [9], [10]. Detailed presentations of these types of sensors and related papers are provided in [11].3 When experiments are performed in smart homes with a variety of sensors, the HAR data processing needs to be distributed over a group of heterogeneous, autonomous and interacting entities in order to be more efficient. An efficient multi-agent approach for HAR in this context has recently been proposed in [12].
HAR can be treated as a typical pattern recognition problem, and existing papers in HAR can be organized into two main categories:
1) Conventional pattern recognition techniques [13]–[46], where the feature extraction and the model building steps are separated.
2) Deep learning techniques [43], [44], [47]–[88], where the feature extraction and model building processes are performed simultaneously in the deep learning models.
Conventional pattern recognition techniques embed the following drawbacks analyzed in [11]4:
1) The features are extracted via a handcrafted process that heavily relies on human experience and domain knowledge.
2) Only shallow features can be learned according to human expertise. Such shallow features can only be used for recognizing low-level activities (walking, running, etc.), but they can hardly enable the accurate inference of complex activities like having a coffee, for example.
3) These techniques often require a large amount of well-labeled data to train the model. However, most of the activity data remain unlabeled in real applications.
The aforementioned drawbacks are overcome by deep learning sensor-based solutions. However, deep learning techniques embed the following limitations analyzed in [2]5:
1) They require lots of computing resources.
2) The parameters of the resulting models are difficult to adjust.
3) The components of the resulting feature vectors are less interpretable. More details related to this limitation are available in [89], where an overview of explainable artificial intelligence for deep neural networks is proposed.
The current paper addresses these limitations through the proposal of a Hidden Markov Model (HMM)-based technique for HAR which derives interpretable feature vectors from the model's meta-data. The parameters of the resulting HMMs are understandable and can therefore be easily adjusted. Furthermore, the models' training time is reasonable compared to deep models. Raw data from triaxial smartphone sensors are preferred here because studies demonstrated that samples from smartphone sensors (e.g., accelerometer and gyroscope) are accurate enough to be used in the clinical domain, such as ADLs recognition [23].
More precisely, given a signal window w, we first represent w as a sequence w = w1 . . . wT of 3-dimensional vectors. The sequential variations of spatial locations within these 3-dimensional vectors are then captured to transform w into the Markov chain δw whose content later serves to fix the parameters of an initial HMM associated with w. These parameters are then iteratively adjusted at each iteration of the Baum-Welch algorithm to obtain the final HMM λw. Thereafter, meta-data derived from λw are saved as the components of the descriptor vector →w associated with w. The performances of the proposed approach are evaluated through flat classification experiments on UniMiB SHAR [23], a database containing acceleration patterns captured by smartphones and constructed in 2017 for the objective evaluation of ADLs recognition and fall detection techniques.
The rest of this paper is organized as follows: The state of the art is presented in Section II, followed by a summarized presentation of HMMs in Section III. A detailed description of the approach proposed in this paper is given in Section IV. Experimental results are presented in Section V and the last section is devoted to the conclusion.

II. STATE OF THE ART
A. RELATED WORK
1) CONVENTIONAL PATTERN RECOGNITION TECHNIQUES
Conventional pattern recognition solutions for HAR generally rely on the following process depicted in Figure 1:

3 See Section 3 of [11]
4 See Section 2.2 of [11]
5 See Table 5 of [2]


FIGURE 1. HAR using conventional pattern recognition techniques.

FIGURE 2. HAR using deep learning techniques.

1) Signal preprocessing: Here, sensing devices capture human motions and save them in raw data vectors. Small time windows (a couple of seconds) of the input signal h are sequentially captured and sampled at a specific sampling frequency depending on the targeted application.
2) Feature extraction: Handcrafted features are extracted from each raw data vector w associated with a given time window through the computation of informative statistics. More precisely, time-domain features (variance, mean, root mean square, zero-crossing rate, etc.), frequency-domain features (Fast Fourier Transform, Discrete Cosine Transform, etc.) and other features (Principal Component Analysis, Linear Discriminant Analysis, etc.) are manually computed and saved as the components of the feature vector →w of each data vector w. More detailed lists of typical handcrafted features used in HAR are available in [1]6 and [2].7
3) Model building: Here, a conventional machine learning model (classifier) χ is trained to learn the characteristics of the human activities associated with the training feature vectors. The following models are most often used for this purpose:
• k Nearest Neighbor (KNN) [13]–[25].
• Support Vector Machines (SVMs) [16]–[20], [22], [23], [25]–[28].
• Decision trees (DT) [13], [17], [19]–[22], [25], [29]–[34].
• Random Forest (RF) [19], [22], [23].
• Multilayer perceptron (MLP) [17], [21], [33], [35].
• Artificial Neural Networks (ANN) [22], [23].
• Naive Bayes and Bayesian Networks [13], [14], [17], [19], [22], [25], [31], [36].
• Fuzzy Inference System (FIS) [14], [37]–[39].
• Boosting and Bagging [40], [41].
• Hidden Markov models (HMMs) [36], [42]–[46].
4) Classification step: The previously trained model (classifier) is now used for inferring the corresponding human activity. The accuracy is the most used metric for evaluating the performances of these classifiers. Other metrics like the F1-measure, the Precision and the Recall are also used, though rarely.

2) DEEP LEARNING TECHNIQUES
With deep learning techniques for HAR, the feature extraction and model building processes are performed simultaneously in the deep learning models, as shown in Figure 2. Here, the feature vectors are automatically learned through the network χ instead of being manually designed. A detailed study of deep neural networks for HAR is available in [90]. An evaluation framework allowing a rigorous comparison between handcrafted features and features generated by several deep models is proposed in [91]. The following deep models are most often used in HAR:
• Deep Neural Networks (DNN) [47]–[50].
• Convolutional Neural Networks (CNN) [49], [51]–[76].
• Recurrent Neural Networks (RNN) [49], [72], [76]–[80].
• Deep Belief Networks (DBN) and Restricted Boltzmann Machines (RBM) [43], [44], [60], [81]–[87].
• Stacked autoencoders (SAE) [67], [88].
• Hybrid models [7], [49], [60], [67], [72].
Although handcrafted features are considered a drawback in HAR, these features can nevertheless enhance the performances of CNN in some face-related problems including age/gender estimation, face detection and emotion recognition [92]. Handcrafted features have also been combined with CNN-generated features for HAR [73].

3) PERFORMANCES OF EXISTING WORK
The implementation of a typical HAR system requires the design of a suitable dataset containing the raw data from which the signal windows will be derived. The datasets utilized in existing papers contain data related to human activities performed by various participants whose characteristics (gender, age, weight, height, etc.) vary from one dataset to the other. Some authors prefer private custom datasets [13]–[16], [26], [27], [29]–[32], [34], [37], [39]–[42], [45]–[48], [50], [53], [58], [60], [65], [66], [68]–[71], [77], [82], [84]–[86], but it is difficult to perform comparisons with these works in such conditions. Other authors adopt

6 See Table 2 of [1]
7 See Table 4 of [2]


TABLE 1. Performances of relevant existing work evaluated on publicly available datasets. The values of the accuracy and the F1-measure are in (%).
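As a reminder of how the two metrics reported in Tables 1 and 2 are computed, the following is a short illustrative sketch (not code from the paper; the activity labels are made up): accuracy is the percentage of correctly classified windows, while the F1-measure is the harmonic mean of precision and recall per class, macro-averaged over classes.

```python
# Illustrative sketch of the evaluation metrics used in Tables 1 and 2.
# y_true / y_pred are per-window ground-truth and predicted activity labels.

def accuracy(y_true, y_pred):
    """Percentage of windows whose predicted label matches the ground truth."""
    return 100.0 * sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Macro-averaged F1-measure (in %): per-class harmonic mean of
    precision and recall, averaged over all classes."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return 100.0 * sum(f1s) / len(f1s)
```

Macro-averaging weights every class equally, which matters for a dataset like UniMiB SHAR whose 17 classes are not equally represented.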


TABLE 2. Performances of relevant existing work evaluated on custom datasets. Accuracies are in (%).

the use of publicly available datasets to enable further comparisons [17]–[25], [28], [33], [35], [36], [43], [49], [51], [52], [54]–[59], [61], [62], [64], [67], [72], [74]–[81], [83], [86]–[88]. A deep survey of the evolution of modern datasets for HAR is available in [93]. The performances of a given HAR system depend on several parameters including: the type of sensors, the number of participants, the protocol of data collection, the selected features and the selected models. Tables 1 and 2 present the performances of relevant existing work with experiments on publicly available and custom datasets. In these tables, the column entitled '#HA' contains the number of human activities involved in the considered work and the last column contains the accuracy of each reviewed work (except for [36], [49], [61], [79], [88] where the F1-measure is rather provided).
Among the datasets experimented on by existing deep learning techniques listed in Table 1, OPPORTUNITY, PAMAP2 and UCI-HAD are the most used. However, these datasets have not been considered in the current work for several reasons. OPPORTUNITY and PAMAP2 were respectively designed with 4 and 9 participants, which are low values. Additionally, the subset of human activities selected by the authors for these two datasets varies from one work to the other. The dataset UCI-HAD was designed with enough participants (30 participants) but it only enables us to identify 6 human activities. The advantages of the dataset UniMiB SHAR (selected here) compared to the other publicly available datasets listed in Table 1 are thoroughly analyzed in [23].8 A summarized description of the UniMiB SHAR dataset is given in Section V-A.

B. PROBLEM STATEMENT
Existing approaches for HAR rely on conventional pattern recognition techniques or deep learning techniques for deriving feature vectors from the raw data acquired by diverse sensors. These feature vectors are used for classification purposes. Conventional pattern recognition techniques only enable the learning of shallow features extracted via handcrafted processes and require a large amount of well-labeled data. Deep learning techniques for HAR overcome these

8 Cf. Sections 1 and 2 of [23]


FIGURE 3. HAR using the proposed HMM-based learning technique.

drawbacks, but they require lots of computing resources and they generate less interpretable feature vectors. Additionally, it is challenging to adjust the parameters of the resulting deep models.
The current paper attempts to provide a solution to these limitations. This solution is based on the following observation: Every human activity is a sequential process; hence it embeds a natural temporality that is meaningful for its characterization. For this reason, human activity is generally captured at a precise sampling frequency by dedicated sensors which sequentially record several values in a raw data vector. Unfortunately, the natural temporality embedded in the raw data vectors is actually ignored during the computation of existing feature vectors. Our opinion is that the sequential variations of spatial locations within each raw data vector enable the derivation of relevant feature vectors which may provide a better characterization of the considered human activity.
The current paper follows the principle presented in Figure 3 to analyze these sequential variations in order to generate one new feature vector →w from each raw data vector w through a machine learning process. HMMs have been selected in this paper on the one hand because they are suitable for sequential data and their training time is reasonable compared to deep models. On the other hand, these models are managed by algorithms whose robustness and efficiency are widely recognized. HMMs have already been used in HAR systems as classifiers [36], [42], [45], [46]. They have also been combined with deep learning techniques [43], [44]. But they are used here in a different way and for a very different purpose (i.e., feature vectors are extracted from their meta-data).

III. PRESENTATION OF HMMs
A. HMM DEFINITION
A HMM λ = (A, B, π) is fully characterized by [94]:
1) The number N of states of the model. The set of states is S = {s1, s2, . . . , sN}. The state of the model at time x is generally noted qx ∈ S.
2) The number M of symbols. The set of symbols is ϑ = {v1, v2, . . . , vM}. The symbol observed at time x is generally noted ox ∈ ϑ.
3) The state transition probability distribution A = {A[si, sj]} where A[si, sj] = Prob(qx+1 = sj | qx = si) with 1 ≤ i, j ≤ N.
4) The symbol probability distributions B = {B[si, vk]} where B[si, vk] = Prob(vk at time x | qx = si) with 1 ≤ i ≤ N and 1 ≤ k ≤ M.
5) The initial state probability distribution π = {π[si]} where π[si] = Prob(q1 = si) with 1 ≤ i ≤ N.

FIGURE 4. HMM used as sequence generator.

B. HMM USED AS SEQUENCE GENERATOR
A HMM λ = (A, B, π) can be used to generate a sequence O = o1 o2 . . . oX composed of X symbols observed by the sequence of states q = q1 q2 . . . qX, as described in the Markov chain (MC) shown in Figure 4. In order to obtain the MC presented in Figure 4, the following algorithm is executed:
1) Select the initial state sj ∈ S according to the distribution π and set x = 0.
2) Set x = x + 1 and change the current state to qx = sj.
3) Select the symbol ox ∈ ϑ to be observed at state qx according to the distributions in B.
4) If (x < X) go to step 5, else terminate.
5) Select the state transition to be realized from the current state qx to another state sj ∈ S according to the distribution A, then go to step 2.

C. MANIPULATION OF HMMs
Consider a sequence of symbols O = o1 o2 . . . oX and a HMM λ = (A, B, π). The probability Prob(O|λ) to observe O given λ is efficiently calculated by the Forward-Backward algorithm [94], which runs in θ(X.N²). Given a sequence of symbols O = o1 o2 . . . oX, it is possible to iteratively re-estimate the parameters of a HMM λ = (A, B, π) in order to maximize the value of Prob(O|λ̄), where λ̄ = (Ā, B̄, π̄) is the re-estimated model. The Baum-Welch algorithm [94] is generally used to perform this re-estimation. This algorithm runs in θ(γ.X.N²) where γ is the user-defined maximum number of iterations. In this paper, the value γ = 100 is selected following [95].

FIGURE 5. Extraction of the 3 data vectors x, y and z from the raw data generated by the triaxial accelerometer of a smartphone located inside the subject's waist pocket during a fall.

FIGURE 6. Methodology for deriving the feature vector →w associated with the signal window w.

D. STATIONARY DISTRIBUTION OF A HMM
A vector ϕ = (ϕ[s1], . . . , ϕ[sN]) is a stationary distribution of a HMM λ = (A, B, π) if:
1) Σj ϕ[sj] = 1
2) ∀j, ϕ[sj] ≥ 0
3) ϕ = ϕ.A ⇔ (ϕ[sj] = Σi ϕ[si] × A[si, sj], ∀j)
ϕ[sj] estimates the overall proportion of time spent by λ in state sj over a long time span. ϕ can be extracted from any line of the matrix A^r = A × A × . . . × A (r times) when r → +∞. Therefore, the computation of ϕ requires θ(r.N³) arithmetic operations.
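The matrix-power procedure just described can be sketched in a few lines of code. The sketch below is illustrative (it is not code from the paper): it raises a hypothetical 3-state transition matrix A to a large power r and reads ϕ from the first row of the result, matching the θ(r.N³) cost quoted above.

```python
# Illustrative sketch of Section III-D (not from the paper): approximate the
# stationary distribution phi of a state transition matrix A by computing A^r
# for a large r and reading any row of the result. Each of the r matrix
# products costs N^3 multiplications, hence the theta(r.N^3) cost.

def mat_mul(P, A):
    """Multiply two N x N matrices given as lists of rows."""
    n = len(A)
    return [[sum(P[i][k] * A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def stationary(A, r=100):
    """Return the first row of A^r as an approximation of phi."""
    P = A
    for _ in range(r - 1):
        P = mat_mul(P, A)
    return P[0]

# Hypothetical 3-state transition matrix (each row sums to 1).
A = [[0.7, 0.2, 0.1],
     [0.1, 0.8, 0.1],
     [0.3, 0.3, 0.4]]

phi = stationary(A)
# phi satisfies the three conditions above: it sums to 1, is non-negative,
# and is left-invariant under A (phi = phi . A).
```

For this particular A, ϕ converges to (9/28, 15/28, 4/28): over a long time span the chain spends roughly 32%, 54% and 14% of its time in the three states respectively.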

IV. THE PROPOSED APPROACH
A. MAIN IDEA

FIGURE 7. Proposed representation of a signal window w = w1 . . . wT derived from the raw data vectors.
Given a human activity, we assume that the experimental data are recorded by a triaxial smartphone accelerometer which generates 3 vectors x = (x1, . . . , xT), y = (y1, . . . , yT) and z = (z1, . . . , zT) for each signal window w. These vectors are respectively obtained after sampling the signal on each Cartesian axis at a unique sampling frequency. Figure 5 depicts this process for fall detection where the smartphone is located inside the subject's waist pocket. In that figure, a signal window extracted from the raw data generated by the smartphone triaxial accelerometer is sampled on each Cartesian axis to derive the 3 data vectors x, y and z.
Existing techniques for HAR generally concatenate these 3 raw data vectors to obtain a unique vector w = (x1, . . . , xT, y1, . . . , yT, z1, . . . , zT). All the (3T)-dimensional raw data vectors resulting from the application of this principle on all the signal windows of the database are then used for handcrafted feature extraction or for deep feature extraction.
The main idea of this work is related to the fact that the former raw data vectors x, y and z can also be viewed as one sequence w = w1 w2 . . . wT composed of 3-dimensional data vectors, as depicted in Figure 7, where each wi = (xi, yi, zi) with (1 ≤ i ≤ T). Hence, the sequential variations of the spatial locations within the T vectors composing w can be captured into a MC δw which can later be used to initialize and train a dedicated HMM λw. Thereafter, one single feature vector →w derived from the model's meta-data can finally be associated with w for classification purposes.

B. METHODOLOGY
As summarized in Figure 6, the proposed methodology for deriving the feature vector →w associated with the
w associated with the


sequence w = w1 w2 . . . wT is composed of the three following steps:
1) The spatial locations of the T data vectors composing the input sequence w = w1 w2 . . . wT are captured by transforming w into the MC δw through a calculation involving each wi and the K central vectors derived from the off-line K-means clustering of the training data vectors. More explanations about this step are provided in Section IV-C.
2) The content of δw is used for initializing a HMM which is then trained using the Baum-Welch algorithm to learn the sequential variations occurring inside δw (and consequently, inside w). The resulting model is λw. Section IV-D is devoted to the presentation of this step.
3) Meta-data are finally extracted from λw to derive the feature vector →w. More precisely, →w has M components, where M is the number of symbols of λw, the kth component wk of →w being the overall proportion of time spent by λw observing the symbol vk ∈ ϑ over a long time span, irrespective of the state from which vk is observed. This step is fully presented in Section IV-E.

C. TRANSFORMATION INTO MARKOV CHAIN
As shown in Figure 4, a MC is composed of symbols and states, both belonging to finite sets. In order to transform a signal window into a MC, these two finite sets must be defined.
To determine the set of symbols, we initially cluster the data vectors derived from all the signal windows found in the training database. More formally, let H = {H1, . . . , Hn} be the set of activities found in the training database, each activity being represented by |Hj| signal windows, with (1 ≤ j ≤ n). Given that each signal window w is now considered as a sequence w1 . . . wT of 3-dimensional data vectors, the experimental database becomes a collection composed of T × (Σ_{j=1}^{n} |Hj|) data vectors. The k-means clustering algorithm [96] is then executed off-line for organizing this collection of training data vectors into K clusters, where K is a positive user-defined integer. The resulting set ϑ = {v1, . . . , vK} of clusters is finally considered as the set of symbols of the model, in such a way that all the vectors found inside a given cluster are associated with the same symbol. If we note v(wi) the cluster containing the data vector wi, then the signal window w = w1 . . . wT is associated with the sequence of symbols v(w1) . . . v(wT). The k-means clustering algorithm is preferred in this work due to its simplicity of implementation and the quality of its resulting clusters.
To determine the set of states, we focus on the spatial locations of the data vectors inside each cluster. More formally, consider a data vector wi and let ṽ(wi) be the center vector of cluster v(wi). We first evaluate the distance between wi and ṽ(wi), then we compare the resulting distance to the highest distance between any data vector of cluster v(wi) and ṽ(wi). This comparison leads to the computation of a percentage α(wi) which spatially characterizes each data vector wi inside its cluster. Given a selected distance measure dist between vectors, the computation scheme of α(wi) is shown in (1).

α(wi) = 100 × (U/V) in (%), where
U = dist(wi, ṽ(wi)) and
V = max{dist(wj, ṽ(wi)), ∀wj ∈ v(wi)}    (1)

Our objective is to consider all the possible values of α(wi) as the set of states. In these conditions, Figure 8 depicts the resulting 'pseudo' MC δ̄w associated with w = w1 w2 . . . wT.

FIGURE 8. Pseudo MC δ̄w associated with w.

FIGURE 9. MC δw associated with w.

δ̄w is not a valid MC because the spatial locations can take any value in the continuous interval [0, 100]. However, the states of a MC must always belong to a finite set. To overcome this limitation, the interval [0, 100] is first split into (m+1) slices {s0, s1, . . . , sm} following [97]9 as shown in (2), where m is a user-defined integer.

s0 = {0} and sj = ](100/m) × (j − 1), (100/m) × j], (1 ≤ j ≤ m)    (2)

If the value of m is very high, the width of each slice sj becomes tiny and all the elements in sj converge to one unique value which is (100/m) × j. In that case, the elements of sj can be approximated by this single value, which we identify here by the index j of slice sj. In these conditions, the finite set {s0, s1, . . . , sm} of slices can be considered here as the set of states. This reasoning enables defining the valid MC δw associated with w = w1 w2 . . . wT by replacing every α(wi) appearing in δ̄w by the value β(wi), which is the index j of the slice sj containing α(wi), as shown in (3). Figure 9 shows the resulting MC.

(β(wi) = j) ⇔ (α(wi) ∈ sj), (0 ≤ j ≤ m)    (3)

Proceeding this way, δw effectively embeds information related to the sequential variations of spatial locations within w because:
1) The sequence v(w1) . . . v(wT) of symbols embeds information related to the sequential variations of clusters within w.

9 See Equation 7 of [97]

2) The corresponding sequence β(w1) . . . β(wT) of states embeds information related to the sequential variations of spatial locations inside the various clusters within w.
Consequently, if a HMM λw is initialized and trained according to the content of δw, this model will learn all the information related to these sequential variations.

D. HMM INITIALIZATION AND TRAINING
1) DESIGN OF THE INITIAL HMM
Given a positive user-defined constant ε, the parameters of the initial HMM λ⁰w associated with w are set to statistically capture the state transitions and the symbol probability distributions from the content of δw as follows:
1) The set of symbols is the set ϑ = {v1, . . . , vK} of clusters generated by the k-means clustering algorithm, the number K = M of clusters (symbols) being user-defined.
2) The set of states is S = {s0, s1, . . . , sm} whose content is computed in (2), where m is the user-defined number of slices used to split the interval [0, 100]. Consequently, the number of states is N = m + 1.
3) The probability of transiting from state sj to state sk is calculated in (4), where transit(sj, sk, δw) is the number of transitions from state sj to state sk in δw and transit(sj, −, δw) is the number of transitions from state sj to any destination in δw.

A⁰w[sj, sk] = transit(sj, sk, δw) / (transit(sj, −, δw) + ε)    (4)

4) The probability to observe symbol vk at state sj is calculated in (5), where observe(vk, sj, δw) is the number of times symbol vk is observed at state sj in δw, and observe(−, sj, δw) is the number of occurrences of state sj in δw.

B⁰w[sj, vk] = observe(vk, sj, δw) / (observe(−, sj, δw) + ε)    (5)

5) The probability that the observation starts with state sj is calculated in (6), where start(sj, δw) = 1 if δw starts with state sj, 0 otherwise.

π⁰w[sj] = start(sj, δw) / (1 + ε)    (6)

2) READJUSTMENT OF THE INITIAL HMM
The parameters of λ⁰w are not probability distributions. This inconvenience is intentionally introduced by adding ε to the denominators of its various components in order to avoid eventual divisions by zero and zero probabilities. In this work, we experimentally fixed ε = 1. An equitable redistribution of the missing quantity is applied to each element of each line in λ⁰w = (A⁰w, B⁰w, π⁰w) to obtain the readjusted initial model λ¹w = (A¹w, B¹w, π¹w) whose parameters are:
1) A¹w[sj, sk] = A⁰w[sj, sk] + (1/(m+1)) × (1 − Σ_{l=0}^{m} A⁰w[sj, sl])
2) B¹w[sj, vk] = B⁰w[sj, vk] + (1/K) × (1 − Σ_{l=1}^{K} B⁰w[sj, vl])

3) TRAINING OF THE HMM
The readjusted initial HMM λ¹w is trained to learn the sequential variations occurring inside δw using the Baum-Welch algorithm. The resulting HMM λw is the final model associated with w. During this training phase, the training sequences are exclusively composed of symbols appearing in δw.

E. FEATURE VECTOR COMPUTATION
The feature vector →w = (w0, w1, . . . , wm) associated with w is finally derived from λw = (Aw, Bw, πw) by analyzing the behavior of λw regarding each symbol vk. More precisely, we propose to consider wk as the overall proportion of time spent by λw observing symbol vk over the long term, irrespective of the state from which this observation is realized. In order to compute wk, one must first evaluate the overall proportion of time spent by λw observing vk in each state si over the long term as follows:
1) Evaluate the overall proportion of time spent by λw in state si over the long term. This proportion is given by the ith component ϕw[si] of the stationary distribution of λw.
2) Multiply the result obtained at step 1 by the probability of observing vk in state si, which is Bw[si, vk].
The value of wk is finally obtained by repeating this process for every state si and summing the resulting proportions (7).

→w = (w0, w1, . . . , wm) where
wk = Σ_{i=1}^{K} (ϕw[si] × Bw[si, vk]) with (0 ≤ k ≤ m)    (7)

V. EXPERIMENTAL RESULTS
A. EXPERIMENTAL DATASET
Among the publicly available databases recorded with smartphones listed in Table 1, the dataset UniMiB SHAR has been selected in this work [23]. It is a database of acceleration patterns measured by smartphones to be used as a common benchmark for the objective evaluation of both ADLs recognition and fall detection techniques. This dataset contains 17 human activities including 9 different types of ADLs and 8 different types of falls. Table 3 presents the description of each ADL/fall.
During the construction of the selected database, human activities were performed by 30 subjects (including 24 females) between 18 and 60 years of age. Each ADL/fall type was performed twice by each subject: the first time with the smartphone in the right pocket and the second time with the smartphone in the left pocket. Signal windows of 3 s each were saved during every experimental trial of a given ADL/fall performed by each subject. For each signal window w, the accelerometer recorded three data vectors (samples) x, y and z, each having T = 151 components. The database contains a total of 11,771 samples not equally distributed across activity types: 7,759 samples describing ADLs and
l=1 Bw [sj , vl ]
0
4.192 samples describing falls. Further details about the data
3) πw1 [sj ] = πw0 [sj ] + m+1 1
1− m l=0 πw [sl ]
0
P 
acquisition, the experimental protocols, the characteristics of

139344 VOLUME 9, 2021


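The initialization (Eqs. (4)-(6)) and readjustment steps above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation (which was written in Matlab and C); it assumes the Markov chain δw is given as two parallel integer lists holding, for each time step, the visited state and the observed symbol, and all function names are hypothetical.

```python
import numpy as np

def initial_hmm(states, symbols, n_states, n_symbols, eps=1.0):
    """Initial model (A0, B0, pi0) from the Markov chain delta_w,
    following Eqs. (4)-(6): eps is added to every denominator to
    avoid divisions by zero and zero probabilities (eps = 1 here)."""
    A = np.zeros((n_states, n_states))
    B = np.zeros((n_states, n_symbols))
    for t in range(len(states) - 1):            # transit(sj, sk, delta_w)
        A[states[t], states[t + 1]] += 1.0
    for s, v in zip(states, symbols):           # observe(vk, sj, delta_w)
        B[s, v] += 1.0
    A = A / (A.sum(axis=1, keepdims=True) + eps)  # Eq. (4)
    B = B / (B.sum(axis=1, keepdims=True) + eps)  # Eq. (5)
    pi = np.zeros(n_states)
    pi[states[0]] = 1.0 / (1.0 + eps)             # Eq. (6)
    return A, B, pi

def readjusted_hmm(A, B, pi):
    """Equitably redistribute the mass removed by eps so that every
    row of A and B, and pi itself, become probability distributions."""
    A = A + (1.0 - A.sum(axis=1, keepdims=True)) / A.shape[1]
    B = B + (1.0 - B.sum(axis=1, keepdims=True)) / B.shape[1]
    pi = pi + (1.0 - pi.sum()) / pi.shape[0]
    return A, B, pi
```

After readjustment, every row of A1 and B1, as well as πw1, sums to one, so λ1w is a valid HMM that can be handed to the Baum-Welch algorithm.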
S. Iloga et al.: HAR Based on Acceleration Data From Smartphones Using HMMs

TABLE 3. Descriptions of the 17 human activities of UniMiB SHAR.

the subjects, the signal segmentation and the signal processing are available in [23].

B. EXPERIMENTAL SETTINGS
The classification experiments performed in this work were realized on a personal computer having 16 GB of main memory and the following processor: Intel(R) Core(TM) i7-8665U CPU @ 1.90 GHz (2.11 GHz). We evaluated four different classification tasks, following [23]:
1) AF-17, which contains 17 classes (9 ADL classes and 8 FALL classes).
2) A-9, which contains 9 ADL classes.
3) F-8, which contains 8 FALL classes.
4) AF-2, which contains 2 classes obtained by considering all the ADLs as one class and all the FALLs as one class.
The Euclidean distance was selected in this work as the distance dist between two vectors required in (1). (2) was used to split the interval [0, 100] into 51 slices (i.e., we fixed m = 50) following [97], where the authors used analog reasoning to split the same interval to perform the comparison of finite sets of histograms using HMMs. Hence, the number of states of the HMMs designed in the current work is 51. To analyze the impact of the user-defined number K of clusters discovered by the k-means clustering algorithm on the performances of the proposed approach, we experimented with the 5 following values of K: 20, 40, 60, 80 and 100. Consequently, the number of symbols of the HMMs designed in this work also varies accordingly. The first step of the machine learning process presented in Figure 6, corresponding to the transformation of every signal w into the MC δw, was entirely developed in Matlab. This choice was dictated by the fact that the database files were available as Matlab tables. Therefore, we executed a Matlab version of the k-means clustering algorithm during this step. Depending on the classification task and on the selected number of clusters, the off-line clustering step could sometimes take over an hour.
The two remaining steps of the proposed machine learning process were both developed in C. Given a signal window w and its associated HMM λw = (Aw, Bw, πw), the stationary distribution ϕw of λw is obtained in this work by extracting the first line of the matrix (Aw)^r with r = 100. After discovering the K clusters for each classification task, the computation of each feature vector w⃗ associated with w took between 50 and 3500 ms, depending on the content of the signal window. The overall time taken for the computation of the feature vectors associated with all the signal windows varied from one classification task to another, depending on the considered number of signal windows. For each classification task in {AF-2, F-8, A-9, AF-17} and for each number of clusters in {20, 40, 60, 80, 100}, the resulting descriptor vectors have been saved into online available 'arff' files10 which are taken as inputs by WEKA. The Matlab items (codes and tables) required to perform the off-line k-means clustering and the transformation into Markov chains are also available through this same URL.

C. CLASSIFICATION PERFORMANCES
Classification experiments were realized with the software WEKA [98] through 5-fold cross-validation, following [23]. The following classifiers have been selected in this paper; their corresponding names in WEKA are shown in brackets:
1) k-NN (IBk) with k = 1, used with both the Euclidean and the Manhattan distances.
2) SVMs with a polynomial kernel (SMO).
3) Multilayer Perceptron (MLP).
4) Decision trees (J48).
5) Random Forest (RF): bootstrap-aggregated decision trees with 300 bagged classification trees.
Table 4a presents the best classification accuracies for each classification task using the proposed feature vectors, irrespective of the number of clusters generating

10 http://perso-etis.ensea.fr/sylvain.iloga/index.html

TABLE 4. Classification results. Accuracies are in (%). The best accuracies are in bold.
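The stationary-distribution computation described in the settings above (first line of (Aw)^r with r = 100) and the derivation of the feature vector in Eq. (7) can be sketched as follows. This is an illustrative numpy sketch, not the authors' C implementation; the function names are hypothetical.

```python
import numpy as np

def stationary_distribution(A, r=100):
    """First line of A^r: for an ergodic chain, every row of A^r
    approaches the stationary distribution as r grows."""
    return np.linalg.matrix_power(A, r)[0]

def feature_vector(A, B, r=100):
    """Eq. (7): component k is the long-run proportion of time the
    model spends observing symbol v_k, summed over all states."""
    phi = stationary_distribution(A, r)
    return phi @ B   # w_k = sum_i phi[s_i] * B[s_i, v_k]
```

Because ϕw sums to one and every row of Bw sums to one, the resulting feature vector is itself a probability distribution over the K symbols.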

these accuracies. According to Table 4a, the selected classifiers can be organized in descending order of performance for all the classification tasks as follows: RF, IBk (Manhattan), IBk (Euclidean), J48, MLP and SMO. The content of Table 4a demonstrates the high quality of the proposed descriptor vectors derived from the proposed HMM-based learning process, with best accuracies always above 75%, 84% and 92% for the J48, the IBk and the RF classifiers respectively. This table also reveals the unsatisfactory performances exhibited by the SMO and the MLP classifiers for the AF-17 classification task and the very poor performances of these same classifiers for the F-8 classification task. Detailed classification performances for each classification task are presented in Tables 4b to 4f, respectively for 20, 40, 60, 80 and 100 clusters. According to these tables, the variations of the user-defined number K of clusters do not significantly influence the performances of the proposed technique. Indeed, the gaps between the classification accuracies for the 5 experimental values of K are low. Consequently, it is not worth selecting a high value of K. We therefore recommend low values between 20 and 40.

D. COMPARISONS WITH RELATED WORK
We have compared the best classification performances obtained in this paper with those obtained in [23], where the authors conducted classification experiments on the same database and for the same classification tasks, using the following classifiers:
1) k-NN with k = 1.
2) SVMs with a radial basis kernel.
3) Artificial Neural Networks.
4) RF with 300 bagged classification trees.
Comparison results presented in Table 5 reveal that the approach proposed in this paper always outperforms [23], with positive accuracy gains reaching +13.45% and +10.36% respectively for F-8 and AF-17.

TABLE 5. Comparisons with [23]. Accuracies are in (%). The best accuracies are in bold.

E. TIME COST
1) THEORETICAL TIME COST
The main contribution of the current paper is the computation of the proposed feature vector w⃗ associated with an input sequence w of raw data vectors, as can be observed in Figure 6. This computation embeds an offline k-means clustering whose time cost is not considered in this evaluation because it is realized 'offline'. The remaining steps of the computation of w⃗ are:
1) The transformation of w into the Markov chain δw (see Section IV-C).
2) The HMM initialization using the content of δw to obtain the initial model λ0w (see Section IV-D1).
3) The readjustment of λ0w to obtain λ1w (see Section IV-D2).
4) The HMM training of λ1w with the Baum-Welch algorithm to obtain the final model λw (see Section IV-D3).
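As a minimal illustration of the simplest classifier listed above (k-NN with k = 1 under the Euclidean distance), the decision rule can be sketched in a few lines. This is a plain Python sketch with hypothetical names; the experiments themselves used WEKA's IBk implementation.

```python
import math

def nearest_neighbor_label(train, query):
    """1-NN: return the label of the training feature vector that is
    closest to the query vector under the Euclidean distance."""
    best_label, best_dist = None, math.inf
    for vector, label in train:
        d = math.dist(vector, query)   # Euclidean distance
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label
```

The Manhattan variant used in the experiments only replaces `math.dist` with a sum of absolute coordinate differences.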


5) The extraction of meta-data from λw to derive w⃗ (see Section IV-E).
The time cost of the first 3 steps was experimentally low and very small compared to the time cost of the last two steps. Consequently, only the HMM training and the meta-data extraction are time consuming. According to Section III-C, the HMM training phase runs in θ(γ·T·(m+1)²). The main operation realized during the meta-data extraction is the computation of the stationary distribution of the HMM, which runs in θ(r·(m+1)³) as stated in Section III-D. Therefore, the overall time cost of our contribution is approximated by θ(r·(m+1)³ + γ·T·(m+1)²) where:
1) r is the user-defined number of matrix products needed to compute the stationary distribution. In this paper r = 100.
2) γ is the user-defined maximum number of iterations of the Baum-Welch algorithm. In this paper γ = 100.
3) T is the number of vectors in the sequence w. In this paper T = 151.
4) m is the user-defined number of slices used to split the interval [0, 100]. In this paper m = 50.
This time cost can be further reduced by gradually reducing the values of parameters like r or γ without negatively impacting the classification results. If the stationary distribution is discovered after r0 iterations with (r0 < r), this stationary distribution will not change during the (r − r0) remaining iterations. Similarly, if the Baum-Welch algorithm reaches its local optimum after γ0 iterations with (γ0 < γ), this local optimum will not change during the (γ − γ0) remaining iterations. The experimental value T = 151 cannot be modified because it was fixed during the design of the experimental database. Similarly, the experimental value m = 50 cannot be changed here because it was fixed after several experiments performed in [97].

2) EXPERIMENTAL TIME COST
In order to measure the speed-up of the two most time-consuming stages of the processing chain (i.e., the HMM training and the meta-data extraction stages), we executed and benchmarked the program on two different architectures: a desktop and the Nvidia Jetson TX2 [99], comparable in terms of hardware to an embedded platform.
The main characteristics of the experimental desktop are the following:
• CPU: AMD Ryzen 5 5600X (6 cores) with a dedicated core frequency of 4649.877 MHz when used
• RAM: 32 GB DDR4 3600 MHz G.Skill Trident Z Neo
• GPU: RTX 3080 Gaming X Trio MSI
• Motherboard: Asus ROG STRIX B550-F GAMING WiFi
• Power supply: RMX 850W Corsair (certified 89% efficiency)
The System-On-a-Chip (SOC) used on the Nvidia Jetson TX2 has 4 cores in common with the SOC of the Exynos 7 Octa (5433) [100] clocked at 1.9 GHz, which is used on the Galaxy Note 4 and the Galaxy Tab S2. The experiment presented in this section has been performed using the 11,771 MCs obtained when the k-means algorithm is executed with K = 20 (our smallest experimental value of K).
The Nvidia Jetson TX2 has 8 GB of 128-bit DDR4 main memory. It runs on L4T (Linux for Tegra, i.e., Linux Kernel 4.9) and has a Parker SOC consisting of:
1) A Pascal GPU with 256 CUDA cores (not used in this experiment).
2) One HMP (6 cores) including 2 Denver cores (a custom core designed by Nvidia to run the ARMv8 ISA) and 4 Arm Cortex-A57 cores (also running the ARMv8 ISA). All the cores are compatible with the ARMv8 ISA, which is the 64-bit architecture of Arm.
The Nvidia Jetson TX2 has several power supply modes that influence the frequency of the different cores. The modes used during our tests were the 'Max-N' mode, which allows reaching 2 GHz on each core, and the 'Max-P Core-All' mode, allowing each core to be used at 1.4 GHz (i.e., a trade-off between performance and energy consumption). During the current experiment, the program was executed on cores excluded from the Linux scheduler, allowing the core to be dedicated entirely to the program and thus not distorting the performance measurements. The following units were selected to measure the performance of our program using the Performance Monitoring Unit and syscalls:
1) Milliseconds (ms)
2) Instructions per cycle (IPC)
3) Instructions per second (IPS)
4) Floating point operations per second (FLOPS)
Tables 6 and 7 present the performances of our program on the experimental devices. According to these tables, the best performance is obtained by the desktop, with a mean time cost of less than 2 seconds. Nevertheless, the performance on the Nvidia Jetson TX2 is also very interesting, with a mean time cost of around 12 seconds when the Denver core at 2 GHz is used.

TABLE 6. Time cost in (ms) on the experimental devices.

TABLE 7. Means of IPC, IPS and FLOPS on the experimental devices.

A wattmeter was additionally used for measuring the energy consumption in order to deduce the Average Power
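Plugging the experimental values above into the overall cost expression gives a rough count of the dominant operations per signal window. This is a back-of-envelope sketch only: the θ(·) notation hides constant factors, so the result is an order-of-magnitude estimate, not a cycle count.

```python
def dominant_operation_count(r, gamma, T, m):
    """Rough count of the dominant operations in
    theta(r*(m+1)**3 + gamma*T*(m+1)**2): the r matrix products for
    the stationary distribution plus the Baum-Welch iterations."""
    n = m + 1                      # number of HMM states
    return r * n**3 + gamma * T * n**2
```

With the paper's settings (r = 100, γ = 100, T = 151, m = 50) this evaluates to roughly 5.25 × 10^7 elementary operations per signal window, which is consistent with the per-window timings of 50 to 3500 ms reported earlier.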


TABLE 8. APC and FOM.

Consumption (APC) of each experimental device. This enabled us to calculate the Figure Of Merit (FOM) of each device by multiplying the Average Execution Time (AET) by the APC, as shown in Equation (8). According to the FOM presented in Table 8, the Nvidia Jetson TX2 is 2.138 times more efficient than the desktop at 4.6 GHz for the execution of our program.

Efficiency(Time, Power) = (AET) × (APC)   (8)

F. MAIN ASSETS
The technique proposed in this paper:
1) Considers the sequential variations of spatial locations inside the raw data vectors, unlike existing techniques.
2) Uses HMMs for the extraction of feature vectors, unlike existing techniques which only use these same models during the classification step.
3) Generates feature vectors whose components are interpretable, unlike existing techniques generating handcrafted or less interpretable feature vectors.
4) Performs the feature vector extraction in reasonable time compared to deep learning techniques.
5) Efficiently performs HAR and outperforms prior work for the selected database.
6) Has been demonstrated viable on embedded platform processors from 2014 mobile phones and tablets, namely ARM and Denver cores clocked at 1.4 GHz and 2 GHz. This shows that current mobile phones and tablets would be much closer to the performances of a PC.

VI. CONCLUSION
This paper addresses the problem of HAR based on acceleration data from smartphones. Existing approaches for this purpose rely either on conventional pattern recognition techniques or on deep learning techniques. Conventional pattern recognition techniques generate shallow handcrafted feature vectors which heavily rely on human experience/expertise. Deep learning techniques are preferable, but they require lots of computing resources while generating less interpretable feature vectors. The current paper attempts to overcome these limitations by proposing an efficient HMM-based technique that generates interpretable feature vectors while requiring a reasonable time cost, with a demonstrated feasibility of implementation on embedded processors, namely Denver and ARM cores.
Four different classification tasks have been tested on the UniMiB SHAR dataset containing 17 human activities, including 9 types of ADLs and 8 types of falls. Classification results have demonstrated the efficiency of the proposed approach, with best accuracies between 92% and 98.85% for all the classification tasks. This performance is more than 10% better than the state of the art for two classification tasks.
The main contribution of the current work is the HMM-based sequential learning of the sample (raw data vector) w associated with a human activity. Meta-data extracted from the resulting HMM λw are then used for deriving the corresponding feature vector w⃗. Consequently, the number of samples in the experimental database does not impact the components of w⃗, since each sample in the database is handled individually, irrespective of the other samples in the database. For this reason, the proposed approach will still exhibit good classification results even for large-scale databases. Only the overall computation time for all the samples in the database will increase in these conditions. Parallel computation of all the feature vectors in the database can also be implemented to reduce this overall computation time.
The current work has a dual impact on further research in HAR. Firstly, it has been theoretically and experimentally demonstrated that learning a human activity as a sequential process enhances the quality of the resulting feature vectors and consequently induces better classification results. A lot of the research in HAR only considers discrete activities, as opposed to activities in a continuum. The proposed method enables extracting salient features from a stream of data using HMMs. Secondly, the proposed method is an advance towards real-time implementation, since it can be efficiently ported to embedded platforms. Indeed, the current work uses HMMs for generating the feature vectors, and the resulting feature vectors exhibit good classification performance even with a basic classifier like the k-NN. Given that efficient hardware implementations of the Baum-Welch [101] and the k-NN [102] algorithms on Field-Programmable Gate Array (FPGA) chips are available, this method can therefore be deployed using hardware platforms with a lower footprint than GPUs in terms of energy, and using fewer resources on an FPGA due to the simplicity of the implementation compared to CNN or DNN approaches.

REFERENCES
[1] O. D. Lara and M. A. Labrador, ''A survey on human activity recognition using wearable sensors,'' IEEE Commun. Surveys Tuts., vol. 15, no. 3, pp. 1192–1209, 3rd Quart., 2013.
[2] Y. Wang, S. Cang, and H. Yu, ''A survey on wearable sensor modality centred human activity recognition in health care,'' Expert Syst. Appl., vol. 137, pp. 167–190, Dec. 2019.
[3] C. Dhiman and D. K. Vishwakarma, ''A review of state-of-the-art techniques for abnormal human activity recognition,'' Eng. Appl. Artif. Intell., vol. 77, pp. 21–45, Jan. 2018.
[4] S. Vishwakarma and A. Agrawal, ''A survey on activity recognition and behavior understanding in video surveillance,'' Vis. Comput., vol. 29, no. 10, pp. 983–1009, Oct. 2013.
[5] L. Onofri, P. Soda, M. Pechenizkiy, and G. Iannello, ''A survey on using domain and contextual knowledge for human activity recognition in video streams,'' Expert Syst. Appl., vol. 63, pp. 97–111, Nov. 2016.
[6] D. R. Beddiar, B. Nini, M. Sabokrou, and A. Hadid, ''Vision-based human activity recognition: A survey,'' Multimedia Tools Appl., vol. 79, nos. 41–42, pp. 30509–30555, Nov. 2020.


[7] V. Ghate, ''Hybrid deep learning approaches for smartphone sensor-based human activity recognition,'' Multimedia Tools Appl., vol. 2021, pp. 1–20, Feb. 2021.
[8] S. Xu, R. Chen, Y. Yu, G. Guo, and L. Huang, ''Locating smartphones indoors using built-in sensors and Wi-Fi ranging with an enhanced particle filter,'' IEEE Access, vol. 7, pp. 95140–95153, 2019.
[9] H. Ju, S. Y. Park, and C. G. Park, ''A smartphone-based pedestrian dead reckoning system with multiple virtual tracking for indoor navigation,'' IEEE Sensors J., vol. 18, no. 16, pp. 6756–6764, Aug. 2018.
[10] S. Qiu, Z. Wang, H. Zhao, K. Qin, Z. Li, and H. Hu, ''Inertial/magnetic sensors based pedestrian dead reckoning by means of multi-sensor fusion,'' Inf. Fusion, vol. 39, pp. 108–119, Jan. 2018.
[11] J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu, ''Deep learning for sensor-based activity recognition: A survey,'' Pattern Recognit. Lett., vol. 119, pp. 3–11, Mar. 2019.
[12] A. Jarraya, A. Bouzeghoub, A. Borgi, and K. Arour, ''DCR: A new distributed model for human activity recognition in smart homes,'' Expert Syst. Appl., vol. 140, Feb. 2020, Art. no. 112849.
[13] L. Bao and S. S. Intille, ''Activity recognition from user-annotated acceleration data,'' in Proc. Int. Conf. Pervas. Comput. Berlin, Germany: Springer, 2004, pp. 1–17.
[14] L. C. Jatoba, U. Grossmann, C. Kunze, J. Ottenbacher, and W. Stork, ''Context-aware mobile health monitoring: Evaluation of different pattern recognition methods for classification of physical activity,'' in Proc. 30th Annu. Int. Conf. Eng. Med. Biol. Soc., Aug. 2008, pp. 5250–5253.
[15] T. Brezmes, J.-L. Gorricho, and J. Cotrina, ''Activity recognition from accelerometer data on a mobile phone,'' in Proc. Int. Work-Conf. Artif. Neural Netw. Berlin, Germany: Springer, 2009, pp. 796–799.
[16] K. Altun and B. Barshan, ''Human activity recognition using inertial/magnetic sensor units,'' in Proc. Int. Workshop Hum. Behav. Understand. Berlin, Germany: Springer, 2010, pp. 38–51.
[17] M. Shoaib, H. Scholten, and P. J. M. Havinga, ''Towards physical activity recognition using smartphone sensors,'' in Proc. IEEE 10th Int. Conf. Ubiquitous Intell. Comput., Dec. 2013, pp. 80–87.
[18] C. Medrano, R. Igual, I. Plaza, and M. Castro, ''Detecting falls as novelties in acceleration patterns acquired with smartphones,'' PLoS ONE, vol. 9, no. 4, Apr. 2014, Art. no. e94811.
[19] M. Shoaib, S. Bosch, O. D. Incel, H. Scholten, and P. J. M. Havinga, ''Fusion of smartphone motion sensors for physical activity recognition,'' Sensors, vol. 14, no. 6, pp. 10146–10176, 2014.
[20] J.-L. Reyes-Ortiz, L. Oneto, A. Samà, X. Parra, and D. Anguita, ''Transition-aware human activity recognition using smartphones,'' Neurocomputing, vol. 171, pp. 754–767, Jan. 2016.
[21] G. Vavoulas, C. Chatzaki, T. Malliotakis, M. Pediaditis, and A. M. Tsiknakis, ''The MobiAct dataset: Recognition of activities of daily living using smartphones,'' in Proc. ICT4AgeingWell, 2016, pp. 143–151.
[22] T. Sztyler and H. Stuckenschmidt, ''On-body localization of wearable devices: An investigation of position-aware activity recognition,'' in Proc. IEEE Int. Conf. Pervas. Comput. Commun. (PerCom), Mar. 2016, pp. 1–9.
[23] D. Micucci, M. Mobilio, and P. Napoletano, ''UniMiB SHAR: A new dataset for human activity recognition using acceleration data from smartphones,'' 2016, arXiv:1611.07688. [Online]. Available: http://arxiv.org/abs/1611.07688
[24] G. Vavoulas, M. Pediaditis, C. Chatzaki, E. G. Spanakis, and A. M. Tsiknakis, ''The MobiFall dataset: Fall detection and classification with a smartphone,'' in Artificial Intelligence: Concepts, Methodologies, Tools, and Applications. Hershey, PA, USA: IGI Global, 2017, pp. 1218–1231.
[25] J. Santoyo-Ramón, E. Casilari, and J. Cano-García, ''Analysis of a smartphone-based architecture with multiple mobility sensors for fall detection with supervised learning,'' Sensors, vol. 18, no. 4, p. 1155, Apr. 2018.
[26] Z.-Y. He and L.-W. Jin, ''Activity recognition from acceleration data using AR model representation and SVM,'' in Proc. ICMLC, vol. 4, Jul. 2008, pp. 2245–2250.
[27] Z. He and L. Jin, ''Activity recognition from acceleration data based on discrete cosine transform and SVM,'' in Proc. IEEE Int. Conf. Syst., Man Cybern., Oct. 2009, pp. 5041–5044.
[28] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, ''A public domain dataset for human activity recognition using smartphones,'' in Proc. ESANN, vol. 3, 2013, p. 3.
[29] U. Maurer, A. Smailagic, D. P. Siewiorek, and M. Deisher, ''Activity recognition and monitoring using multiple sensors on different body positions,'' in Proc. Int. Workshop Wearable Implant. Body Sensor Netw., 2006, p. 4.
[30] J. Parkka, M. Ermes, P. Korpipaa, J. Mantyjarvi, J. Peltola, and I. Korhonen, ''Activity classification using realistic data from wearable sensors,'' IEEE Trans. Inf. Technol. Biomed., vol. 10, no. 1, pp. 119–128, Jan. 2006.
[31] E. M. Tapia, S. S. Intille, W. Haskell, K. Larson, J. Wright, A. King, and R. Friedman, ''Real-time recognition of physical activities and their intensities using wireless accelerometers and a heart rate monitor,'' in Proc. 11th IEEE Int. Symp. Wearable Comput., Oct. 2007, pp. 37–40.
[32] M. Ermes, J. Parkka, and L. Cluitmans, ''Advancing from offline to online activity recognition with wearable sensors,'' in Proc. 30th Annu. Int. Conf. Eng. Med. Biol. Soc., Aug. 2008, pp. 4451–4454.
[33] J. R. Kwapisz, G. M. Weiss, and S. A. Moore, ''Activity recognition using cell phone accelerometers,'' ACM SIGKDD Explor. Newslett., vol. 12, no. 2, pp. 74–82, Dec. 2010.
[34] S. D. Lara and M. A. Labrador, ''A mobile platform for real-time human activity recognition,'' in Proc. IEEE Consum. Commun. Netw. Conf. (CCNC), Jan. 2012, pp. 667–671.
[35] K. H. Walse, R. V. Dharaskar, and V. M. Thakare, ''PCA based optimal ANN classifiers for human activity recognition using mobile sensors data,'' in Proc. 1st Int. Conf. Inf. Commun. Technol. Intell. Syst., vol. 1. Cham, Switzerland: Springer, 2016, pp. 429–436.
[36] M. H. Kabir, M. R. Hoque, K. Thapa, and S.-H. Yang, ''Two-layer hidden Markov model for human activity recognition in home environments,'' Int. J. Distrib. Sensor Netw., vol. 12, no. 1, Jan. 2016, Art. no. 4560365.
[37] Y.-P. Chen, J.-Y. Yang, S.-N. Liou, G.-Y. Lee, and J.-S. Wang, ''Online classifier construction algorithm for human activity detection using a tri-axial accelerometer,'' Appl. Math. Comput., vol. 205, no. 2, pp. 849–860, 2008.
[38] T.-P. Kao, C.-W. Lin, and J.-S. Wang, ''Development of a portable activity detector for daily activity recognition,'' in Proc. IEEE Int. Symp. Ind. Electron., Jul. 2009, pp. 115–120.
[39] M. Berchtold, M. Budde, D. Gordon, H. R. Schmidtke, and M. Beigl, ''ActiServ: Activity recognition service for mobile phones,'' in Proc. Int. Symp. Wearable Comput. (ISWC), Oct. 2010, pp. 1–8.
[40] D. Minnen, T. Westeyn, D. Ashbrook, P. Presti, and T. Starner, ''Recognizing soldier activities in the field,'' in Proc. 4th Int. Workshop Wearable Implant. Body Sensor Netw. (BSN). Springer, 2007, pp. 236–241.
[41] D. Lara, A. J. Pérez, M. A. Labrador, and J. D. Posada, ''Centinela: A human activity recognition system based on acceleration and vital sign data,'' Pervasive Mobile Comput., vol. 8, no. 5, pp. 717–729, Oct. 2012.
[42] C. Zhu and W. Sheng, ''Human daily activity recognition in robot-assisted living using multi-sensor fusion,'' in Proc. IEEE Int. Conf. Robot. Autom., May 2009, pp. 2154–2159.
[43] M. Abu Alsheikh, A. Selim, D. Niyato, L. Doyle, S. Lin, and H.-P. Tan, ''Deep activity recognition models with triaxial accelerometers,'' 2015, arXiv:1511.04664. [Online]. Available: http://arxiv.org/abs/1511.04664
[44] L. Zhang, X. Wu, and D. Luo, ''Recognizing human activities from raw accelerometer data using deep neural networks,'' in Proc. IEEE 14th Int. Conf. Mach. Learn. Appl. (ICMLA), Dec. 2015, pp. 865–870.
[45] S. Fallmann and J. Kropf, ''Human activity recognition of continuous data using hidden Markov models and the aspect of including discrete data,'' in Proc. Int. Conf. Ubiquitous Intell. Comput., Adv. Trusted Comput., Scalable Comput. Commun., Cloud Big Data Comput., Jul. 2016, pp. 121–126.
[46] M. O. Padar, A. E. Ertan, and C. Candan, ''Classification of human motion using radar micro-Doppler signatures with hidden Markov models,'' in Proc. IEEE Radar Conf. (RadarConf), May 2016, pp. 1–6.
[47] P. Vepakomma, D. De, S. K. Das, and S. Bhansali, ''A-Wristocracy: Deep learning on wrist-worn sensing for recognition of user complex activities,'' in Proc. IEEE 12th Int. Conf. Wearable Implant. Body Sensor Netw. (BSN), Jun. 2015, pp. 1–6.
[48] L. Zhang, X. Wu, and D. Luo, ''Human activity recognition with HMM-DNN model,'' in Proc. IEEE 14th Int. Conf. Cognit. Informat. Cognit. Comput., Jul. 2015, pp. 192–197.
[49] N. Y. Hammerla, S. Halloran, and T. Ploetz, ''Deep, convolutional, and recurrent models for human activity recognition using wearables,'' 2016, arXiv:1604.08880. [Online]. Available: http://arxiv.org/abs/1604.08880


[50] S. Zhang, W. W. Ng, J. Zhang, and C. D. Nugent, ''Human activity recognition using radial basis function neural network trained via a minimization of localized generalization error,'' in Proc. Int. Conf. Ubiquitous Comput. Ambient Intell. Cham, Switzerland: Springer, 2017, pp. 498–507.
[51] M. Zeng, L. T. Nguyen, B. Yu, O. J. Mengshoel, J. Zhu, P. Wu, and J. Zhang, ''Convolutional neural networks for human activity recognition using mobile sensors,'' in Proc. 6th Int. Conf. Mobile Comput., Appl. Services, 2014, pp. 197–205.
[52] Y. Zheng, Q. Liu, E. Chen, Y. Ge, and J. L. Zhao, ''Time series classification using multi-channels deep convolutional neural networks,'' in Proc. Int. Conf. Web-Age Inf. Manage. Cham, Switzerland: Springer, 2014, pp. 298–310.
[53] Y. Chen and Y. Xue, ''A deep learning approach to human activity recognition based on single accelerometer,'' in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 2015, pp. 1488–1492.
[54] S. Ha, J.-M. Yun, and S. Choi, ''Multi-modal convolutional neural networks for activity recognition,'' in Proc. IEEE Int. Conf. Syst., Man, Cybern., Oct. 2015, pp. 3017–3022.
[55] W. Jiang and Z. Yin, ''Human activity recognition using wearable sensors by deep convolutional neural networks,'' in Proc. 23rd ACM Int. Conf. Multimedia, Oct. 2015, pp. 1307–1310.
[56] J. Yang, M. N. Nguyen, P. P. San, X. Li, and S. Krishnaswamy, ''Deep convolutional neural networks on multichannel time series for human activity recognition,'' in Proc. IJCAI, vol. 15, 2015, pp. 3995–4001.
[57] Y. Chen, K. Zhong, J. Zhang, Q. Sun, and X. Zhao, ''LSTM networks for mobile human activity recognition,'' in Proc. Int. Conf. Artif. Intell., Technol. Appl., 2016, pp. 50–53.
[58] H. Gjoreski, J. Bizjak, M. Gjoreski, and M. Gams, ''Comparing deep and classical machine learning methods for human activity recognition using wrist accelerometer,'' in Proc. Workshop Deep Learn. Artif. Intell., New York, NY, USA, vol. 10, 2016, p. 970.
[59] S. Ha and S. Choi, ''Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors,'' in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jul. 2016, pp. 381–388.
[60] C. Liu, L. Zhang, Z. Liu, K. Liu, X. Li, and Y. Liu, ''Lasagna: Towards deep hierarchical understanding and searching over mobile sensing data,'' in Proc. 22nd Annu. Int. Conf. Mobile Comput. Netw., Oct. 2016, pp. 334–347.
[61] F. J. O. Morales and D. Roggen, ''Deep convolutional feature transfer across mobile activity recognition domains, sensor modalities and loca-
[71] M. Panwar, S. Ram Dyuthi, K. Chandra Prakash, D. Biswas, A. Acharyya, K. Maharatna, A. Gautam, and G. R. Naik, ''CNN based approach for activity recognition using a wrist-worn accelerometer,'' in Proc. 39th Annu. Int. Conf. Eng. Med. Biol. Soc. (EMBC), Jul. 2017, pp. 2438–2441.
[72] S. Yao, S. Hu, Y. Zhao, A. Zhang, and T. Abdelzaher, ''DeepSense: A unified deep learning framework for time-series mobile sensing data processing,'' in Proc. 26th Int. Conf. World Wide Web, Apr. 2017, pp. 351–360.
[73] M. Dong, J. Han, Y. He, and X. Jing, ''HAR-Net: Fusing deep representation and hand-crafted features for human activity recognition,'' in Int. Conf. Signal Inf. Process., Netw. Comput. Singapore: Springer, 2018, pp. 32–40.
[74] F. M. Rueda, R. Grzeszick, G. A. Fink, S. Feldhorst, and M. T. Hompel, ''Convolutional neural networks for human activity recognition using body-worn sensors,'' Informatics, vol. 5, no. 2, p. 26, 2018.
[75] S. Wan, L. Qi, X. Xu, C. Tong, and Z. Gu, ''Deep learning models for real-time human activity recognition with smartphones,'' Mobile Netw. Appl., vol. 25, pp. 743–755, Dec. 2019.
[76] K. Xia, J. Huang, and H. Wang, ''LSTM-CNN architecture for human activity recognition,'' IEEE Access, vol. 8, pp. 56855–56866, 2020.
[77] M. Edel and E. Koppe, ''Binarized-BLSTM-RNN based human activity recognition,'' in Proc. Int. Conf. Indoor Positioning Indoor Navigat. (IPIN), Oct. 2016, pp. 1–7.
[78] A. Murad and J.-Y. Pyun, ''Deep recurrent neural networks for human activity recognition,'' Sensors, vol. 17, no. 11, p. 2556, 2017.
[79] Y. Guan and T. Plötz, ''Ensembles of deep LSTM learners for activity recognition using wearables,'' Interact., Mobile, Wearable Ubiquitous Technol., vol. 1, no. 2, pp. 1–28, 2017.
[80] C. Xu, D. Chai, J. He, X. Zhang, and S. Duan, ''InnoHAR: A deep neural network for complex human activity recognition,'' IEEE Access, vol. 7, pp. 9893–9902, 2019.
[81] T. Plötz, N. Y. Hammerla, and P. L. Olivier, ''Feature learning for activity recognition in ubiquitous computing,'' in Proc. 22nd Int. Joint Conf. Artif. Intell., 2011, pp. 1729–1734.
[82] H. Fang and C. Hu, ''Recognizing human activity in smart home using deep learning algorithm,'' in Proc. 33rd Chin. Control Conf., Jul. 2014, pp. 4716–4720.
[83] T. Hayashi, M. Nishida, N. Kitaoka, and K. Takeda, ''Daily activity recognition based on DNN using environmental sound and acceleration signals,'' in Proc. 23rd Eur. Signal Process. Conf. (EUSIPCO), Aug. 2015,
tions,’’ in Proc. ACM Int. Symp. Wearable Comput., Sep. 2016, pp. 92–99. pp. 2306–2310.
[62] D. Ravi, C. Wong, B. Lo, and G.-Z. Yang, ‘‘Deep learning for human [84] N. D. Lane and P. Georgiev, ‘‘Can deep learning revolutionize mobile
activity recognition: A resource efficient implementation on low-power sensing?’’ in Proc. 16th Int. Workshop Mobile Comput. Syst. Appl.,
devices,’’ in Proc. IEEE 13th Int. Conf. Wearable Implant. Body Sensor pp. 117–122, 2015.
Netw. (BSN), Jun. 2016, pp. 71–76. [85] L. Zhang, X. Wu, and D. Luo, ‘‘Real-time activity recognition on smart-
[63] D. Ravì, C. Wong, B. Lo, and G.-Z. Yang, ‘‘A deep learning approach phones using deep neural networks,’’ in Proc. IEEE 12th Int. Conf.
to on-node sensor data analytics for mobile or wearable devices,’’ IEEE Ubiquitous Intell. Comput., Aug. 2015, pp. 1236–1242.
J. Biomed. Health Inform., vol. 21, no. 1, pp. 56–64, Jan. 2017. [86] S. Bhattacharya and N. D. Lane, ‘‘From smart to deep: Robust activity
[64] C. A. Ronao and S.-B. Cho, ‘‘Human activity recognition with smart- recognition on smartwatches using deep learning,’’ in Proc. IEEE Int.
phone sensors using deep learning neural networks,’’ Expert Syst. Appl., Conf. Pervas. Comput. Commun. Workshops, Mar. 2016, pp. 1–6.
vol. 59, pp. 235–244, Oct. 2016.
[87] V. Radu, N. D. Lane, S. Bhattacharya, C. Mascolo, M. K. Marina, and
[65] J. Wang, X. Zhang, Q. Gao, H. Yue, and H. Wang, ‘‘Device-free wireless
F. Kawsar, ‘‘Towards multimodal deep learning for activity recognition
localization and activity recognition: A deep learning approach,’’ IEEE
on mobile devices,’’ in Proc. ACM Int. Joint Conf. Pervas. Ubiquitous
Trans. Veh. Technol., vol. 66, no. 7, pp. 6258–6267, Jul. 2017.
Comput., Adjunct, Sep. 2016, pp. 185–188.
[66] T. Zebin, P. J. Scully, and K. B. Ozanyan, ‘‘Human activity recognition
[88] B. Almaslukh, J. Almuhtadi, and A. Artoli, ‘‘An effective deep autoen-
with inertial sensors using a deep learning approach,’’ in Proc. IEEE
coder approach for online smartphone-based human activity recogni-
SENSORS, May 2016, pp. 1–3.
tion,’’ Int. J. Comput. Sci. Netw. Secur., vol. 17, no. 4, pp. 160–165, 2017.
[67] Y. Zheng, Q. Liu, E. Chen, Y. Ge, and J. L. Zhao, ‘‘Exploiting multi-
channels deep convolutional neural networks for multivariate time series [89] W. Samek, G. Montavon, S. Lapuschkin, C. J. Anders, and K.-R. Müller,
classification,’’ Frontiers Comput. Sci., vol. 10, no. 1, pp. 96–112, ‘‘Explaining deep neural networks and beyond: A review of methods and
Feb. 2016. applications,’’ Proc. IEEE, vol. 109, no. 3, pp. 247–278, Mar. 2021.
[68] Y. Kim and Y. Li, ‘‘Human activity classification with transmission [90] E. Sansano, R. Montoliu, and Ó. Belmonte Fernández, ‘‘A study of deep
and reflection coefficients of on-body antennas through deep convolu- neural networks for human activity recognition,’’ Comput. Intell., vol. 36,
tional neural networks,’’ IEEE Trans. Antennas Propag., vol. 65, no. 5, no. 3, pp. 1113–1139, Aug. 2020.
pp. 2764–2768, May 2017. [91] F. Li, K. Shirahama, M. A. Nisar, L. Köping, and M. Grzegorzek, ‘‘Com-
[69] S.-M. Lee, S. Min Yoon, and H. Cho, ‘‘Human activity recogni- parison of feature learning methods for human activity recognition using
tion from accelerometer data using convolutional neural network,’’ in wearable sensors,’’ Sensors, vol. 18, no. 2, p. 679, 2018.
Proc. IEEE Int. Conf. Big Data Smart Comput. (BigComp), Feb. 2017, [92] S. Hosseini, S. Hee Lee, and N. Ik Cho, ‘‘Feeding hand-crafted features
pp. 131–134. for enhancing the performance of convolutional neural networks,’’ 2018,
[70] S. Mohammed and I. Tashev, ‘‘Unsupervised deep representation learn- arXiv:1801.07848. [Online]. Available: http://arxiv.org/abs/1801.07848
ing to remove motion artifacts in free-mode body sensor networks,’’ in [93] R. Singh, A. Sonawane, and R. Srivastava, ‘‘Recent evolution of modern
Proc. IEEE 14th Int. Conf. Wearable Implant. Body Sensor Netw. (BSN), datasets for human activity recognition: A deep survey,’’ Multimedia
May 2017, pp. 183–188. Syst., vol. 26, no. 2, pp. 83–106, 2019.

139350 VOLUME 9, 2021



SYLVAIN ILOGA received the Ph.D. degree in computer science from the University of Yaoundé 1, Cameroon, in January 2018. Since January 2010, he has been a Teacher with the Department of Computer Science, Higher Teachers' Training College, Maroua, Cameroon. From September 2017 to August 2019, he completed a research and teaching internship with the Department of Electronic Engineering and Industrial Computing, IUT of Cergy-Pontoise, Neuville University, France. He was promoted to the rank of Lecturer in Cameroon in May 2018. Subsequently, in January 2019, he obtained his qualification for the functions of Lecturer in France, section 27 (Computer Science). His research interests include the design of taxonomies for hierarchical classification, sequential data mining, machine learning using hidden Markov models, and the implementation of reconfigurable architectures based on FPGA technology.

ALEXANDRE BORDAT graduated from the École Nationale Supérieure de l'Électronique et de ses Applications (ENSEA), France, where he is currently pursuing the Ph.D. degree in embedded systems. He is also an embedded systems engineer.

JULIEN LE KERNEC (Senior Member, IEEE) received the B.Eng. and M.Eng. degrees in electronic engineering from Cork Institute of Technology, Ireland, in 2004 and 2006, respectively, and the Ph.D. degree in electronic engineering from University Pierre and Marie Curie, France, in 2011. He is currently a Senior Lecturer with the School of Engineering, University of Glasgow. He is also a Senior Lecturer with the University of Electronic Science and Technology of China and an Adjunct Associate Professor with the ETIS Laboratory, University of Cergy-Pontoise, France. His research interests include radar system design, software-defined radio/radar, signal processing, and health applications.

OLIVIER ROMAIN (Member, IEEE) received the Engineering degree in electronics from ENS Cachan, the master's degree in electronics from Louis Pasteur University, and the Ph.D. degree in electronics from Pierre and Marie Curie University, Paris. From 2012 to 2019, he was the Head of the Department of Architecture, ETIS-UMR8051 Laboratory. Since January 2020, he has been the Director of the ETIS-UMR8051 Laboratory. He is currently a University Professor of electrical engineering with CY Cergy Paris University. His research interests include systems on chips for diffusion and biomedical applications.

