Real-time Subject-dependent EEG-based
Emotion Recognition Algorithm

Yisi Liu and Olga Sourina

Fraunhofer IDM@NTU, Nanyang Technological University, Singapore
{LIUYS,EOSourina}@ntu.edu.sg

Abstract. In this paper, we proposed a real-time subject-dependent EEG-based emotion recognition algorithm and tested it on databases from our own experiments and on the benchmark database DEAP. The algorithm consists of two parts: feature extraction and data classification with a Support Vector Machine (SVM). Using a Fractal Dimension feature in combination with statistical and Higher Order Crossings (HOC) features gave the best accuracy with adequate computational time. The features were calculated from EEG using a sliding window. The proposed algorithm can recognize up to 8 emotions (happy, surprised, satisfied, protected, angry, frightened, unconcerned, and sad) in real time using 4 electrodes. Two experiments with audio and visual stimuli were implemented, and the Emotiv EPOC device was used to collect EEG data.

Keywords: emotion recognition, EEG, emotion recognition algorithms, Emotiv EPOC, Valence-Arousal-Dominance model

1 Introduction

Recently, EEG devices have become wireless, more portable, wearable, and easy to use, so more research can be done on real-time emotion recognition algorithms. Emotion recognition algorithms can be subject-dependent or subject-independent. Subject-dependent algorithms have better accuracy than subject-independent algorithms, but a system training session has to be designed and implemented for each individual user in the subject-dependent case.
In this paper, we proposed and implemented a real-time subject-dependent
algorithm based on the Valence-Arousal-Dominance (VAD) emotion model. A
combination of features including Fractal Dimension (FD) was used because
FD values reflect nonlinearity of EEG signals. Fractal Dimension analysis is a
suitable approach for analyzing nonlinear systems and can be used in real-time
EEG signal processing [4, 72]. Early works show that Fractal Dimension can reflect changes in EEG signals [58] and that it varies across different mental tasks [48]. In [63, 66], music was used as a stimulus to elicit emotions, and
Fractal Dimension was applied for the analysis of the EEG signal. In [5], it was
demonstrated that the difference between positive and negative emotions can be
discovered by estimating the dimensional complexity of the signal. Recent supporting evidence such as [23] and [24] shows that Fractal Dimension can reflect
the activity of the sensorimotor cortex. More supporting evidence to successful
use of Fractal Dimension in EEG analysis in different applications is described
in [48, 58, 63, 66]. These works show that Fractal Dimension based EEG analysis
is a potentially promising approach in EEG-based emotion recognition.
Our hypothesis is that changes in feelings can be noticed in EEG as fractal dimension changes. In 2008, we started to use fractal dimension to recognize positive and negative emotions from EEG [63]. In 2010, we proposed to use the Higuchi algorithm for fractal feature extraction for real-time emotion recognition. We calculated subject-dependent thresholds for emotion recognition, and we visualized emotions in real time on a virtual avatar [43, 44]. In the same year, [33] and [26] also confirmed that the Higuchi fractal dimension can be used in EEG-based emotion recognition algorithms. In 2011, we studied fractal dimension methods such as box-counting and Higuchi using mono-fractal signals generated by Brownian and Weierstrass functions [73], and in [64] both algorithms were applied to recognize high/low arousal and positive/negative valence. In [46],
two affective EEG databases were presented; two experiments were conducted
to set up the databases. Audio and visual stimuli were used to evoke emotions
during the experiments. In [46] and this work, we proposed to use a FD feature
to improve emotion recognition algorithm accuracy. The algorithm consists of
two parts: feature extraction and classification with the Support Vector Machine
(SVM) classifier. Use of a Fractal Dimension feature in combination with sta-
tistical and Higher Order Crossings (HOC) features gave the results with the
best accuracy and with adequate computational time. The features’ values were
calculated from EEG using a sliding window.
In the VAD model, the emotions are described as follows: a “satisfied” emo-
tion is defined as a positive/ low arousal/ high dominance emotion, a “happy”
emotion is defined as a positive/ high arousal/ high dominance emotion, a “sur-
prised” emotion is defined as a positive/ high arousal/ low dominance emotion,
a “protected” emotion is defined as a positive/ low arousal/ low dominance
emotion, a “sad” emotion is defined as a negative/ low arousal/ low dominance
emotion, an “unconcerned” emotion is defined as a negative/ low arousal/ high
dominance emotion, an “angry” emotion is defined as a negative/ high arousal/
high dominance emotion, and a “frightened” emotion is defined as a negative/ high
arousal/ low dominance emotion [51]. The proposed algorithm should be
tested on the EEG databases labeled with emotions where emotions were induced
by visual, audio, and combined (music video) stimuli, and the best combination
of features should be proposed. In this paper, 2 series of experiments on emotion
induction with audio stimuli and with visual stimuli were designed and imple-
mented based on the Valence-Arousal-Dominance emotion model. The sounds
were chosen to induce happy, surprised, satisfied, protected, angry, frightened,
unconcerned, and sad emotions from International Affective Digitized Sounds
(IADS) [13], and the visual stimuli were chosen from International Affective
Picture System (IAPS) database [41]. The data were collected from 14 subjects
in Experiment 1 and 16 subjects in Experiment 2. The questionnaire and Self-Assessment Manikin (SAM) [12] technique were applied. Two databases with
EEG data labeled with 8 emotions were created. Recently, the DEAP benchmark
database that used music video stimuli for emotion induction became available
[35]. The proposed algorithm was tested on the benchmark DEAP database and
on our own two experiments’ databases.
The paper is organized as follows. In Section 2, emotion classification mod-
els, and EEG-based emotion recognition algorithms are reviewed. Mathematical
models of the channel choice algorithm, statistical features, Higher Order Cross-
ings (HOC), Fractal Dimension (FD) algorithm and the Support Vector Machine
classifier that were used for feature extraction and classification are introduced
in Section 3. In Section 4, the proposed and implemented experiments are given.
The affective EEG database DEAP is also briefly described. A real-time subject-
dependent algorithm is described in Section 5. The algorithm results and dis-
cussion are given in Section 6. Finally, Section 7 concludes the paper.

2 Background

2.1 Emotion Classification Models

The most widely used approach to represent emotion is the bipolar model with valence and arousal dimensions proposed by Russell [60]. In this model, the valence dimension ranges from “negative” to “positive”, and the arousal dimension ranges from “not aroused” to “excited”. The 2-Dimensional (2D) model can locate the discrete emotion labels in its space [50], and it can even define emotions for which there are no discrete emotion labels. However, emotions such as fear and anger cannot be distinguished if they are defined by the 2D model, as they both have the same high arousal and negative valence values.
In order to get a comprehensive description of emotions, Mehrabian and Russell proposed the 3-Dimensional (3D) Pleasure (Valence)-Arousal-Dominance (PAD) model in [51] and [52]. The “pleasure-displeasure” dimension of the model corresponds to the valence dimension mentioned above, evaluating the pleasure level of the emotion. The “arousal-non-arousal” dimension is equivalent to the arousal dimension, referring to the alertness of an emotion. The “dominance-submissiveness” dimension is a newly added dimension, which is also called the control dimension of emotion [51, 52]. It ranges from a feeling of being in control during an emotional experience to a feeling of being controlled by the emotion [10], and it makes the dimensional model more complete. With the dominance dimension, more emotion labels can be located in the 3D space. For example, happiness and surprise are both emotions with positive valence and high arousal, but they can be differentiated by their dominance level, since happiness comes with high dominance, whereas surprise comes with low dominance [51].
In our work, we use the 3-dimensional emotion classification model.
2.2 EEG-based Emotion Recognition Algorithms

EEG-based emotion recognition algorithms differ in their dependence on subjects during recognition: they can be implemented in either a subject-dependent or a subject-independent way. The advantage of subject-dependent recognition is that a higher accuracy can be achieved since the classification is tailored to each individual, but the disadvantage is that a classifier needs to be trained for every new subject.
[15, 29, 42, 43, 61, 74] are examples of subject-dependent algorithms. In [42,
43] and [61], the discrete emotion model was used. Four emotions such as joy,
anger, sadness, and pleasure were recognized in [42] with 32 channels and an
average accuracy of 82.29% using the differential asymmetry of hemispheric
EEG power spectra and SVM as a classifier. In [43], 6 emotions such as plea-
sure, satisfaction, happiness, sadness, frustration, and fear were recognized with
the proposed subject-dependent algorithm. Three emotional reactions including
“pleasant”, “neutral” and “unpleasant” were recognized in [61] with 4 channels,
and the average accuracy is 66.7%. In [15, 29, 74], the dimensional emotion model
was used. Positive and negative emotional states based on the valence dimension
in the 2D emotional model were recognized in [74], and the accuracy obtained
was 73% using 3 channels. [15] recognized positive and negative states with 4
channels and obtained an accuracy of 57.04%. [29] recognized positive/ negative
valence states with the best mean accuracy of 83.1%, and strong/calm arousal
states with the best mean accuracy of 66.51% using 32 channels.
[8, 16, 62, 70] are examples of subject-independent algorithms. [16, 62, 70] employed the discrete emotion model. By using the powers of different frequency bands of EEG signals as features, [16] obtained a maximum accuracy of 56% for detecting 3 emotional states such as boredom, engagement, and anxiety. By using statistical features and SVM, five emotions such as joy, anger, sadness, fear, and relaxation were recognized with an accuracy of 41.7% in [70]. Although [8] achieved an average accuracy of 94.27% across five emotions, 256 channels were required for the recognition. [62] used the dimensional emotion model and obtained accuracies of 62.1% and 50.5% for detecting 3 levels of arousal and valence respectively, and 32 channels were needed in the algorithm.
In [56], both subject-dependent and subject-independent algorithms were
proposed. 4 channels were used in the recognition. In the subject-dependent case
of [56], the accuracy ranges from 70% to 100%, and in the subject-independent
case, the accuracy drops by 10% to 20%. It is important to notice that in the
reviewed works, the reported accuracy was calculated on their own datasets.
All the above-mentioned works show that the accuracy of subject-dependent algorithms is generally higher than that of subject-independent algorithms. Thus, we developed a subject-dependent algorithm in our work. The number of emotions that can be recognized and the number of electrodes that are used are very important for comparing algorithms. For example, although the accuracy obtained in [42] is higher than in [74], 32 channels were needed in [42] compared with 3 channels used in [74]. Besides that, if the algorithm is developed for real-time applications, the time needed for feature extraction and
the number of channels used should be minimized. If more electrodes are used,
the comfort level of the user who wears the device decreases as well. Thus, our
main objective is to propose an algorithm performing with adequate accuracy
in real-time applications.

3 Method

3.1 The Fisher Discriminant Ratio

The Fisher Discriminant Ratio (FDR) is a classical approach that is used to select channels [7, 19, 40]. The output of FDR is a score corresponding to each
channel. The selection of channels will follow the rank of their FDR scores. The
formula of FDR value calculation is as follows:
\[
FDR(p) = \frac{\sum_{i=1}^{C}\sum_{j=1}^{C} \left(\mu_p^i - \mu_p^j\right)^2}{\sum_{i=1}^{C} \left(\sigma_p^i\right)^2}. \tag{1}
\]

where \(C\) is the number of classes, \(p\) is the channel index, \(\mu_p^i\) is the mean value of the feature from the \(p\)th channel for the \(i\)th class, and \(\sigma_p^i\) is the standard deviation of the feature from the \(p\)th channel for the \(i\)th class [34].
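To make the channel scoring concrete, the following is a minimal sketch of Eq. (1), assuming a Python/NumPy implementation (the names `features` and `fdr_scores` are hypothetical and only illustrate the computation, they are not part of the original work):

```python
# Minimal sketch of FDR-based channel scoring (Eq. (1)), assuming `features`
# maps class label -> array of shape (n_trials, n_channels) with one feature
# value (e.g. the FD of a window) per trial and channel.
import numpy as np

def fdr_scores(features):
    """Return one Fisher Discriminant Ratio score per channel."""
    classes = list(features.keys())
    mu = np.array([features[c].mean(axis=0) for c in classes])   # (C, n_channels)
    var = np.array([features[c].var(axis=0) for c in classes])   # (C, n_channels)
    # Numerator: sum of squared pairwise differences of class means.
    num = ((mu[:, None, :] - mu[None, :, :]) ** 2).sum(axis=(0, 1))
    # Denominator: sum of per-class variances.
    den = var.sum(axis=0)
    return num / den

# Channels can then be ranked by descending FDR score:
# ranking = np.argsort(-fdr_scores(features))
```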

3.2 Statistical, Higher Order Crossings and Fractal Dimension Feature Extraction

Although the EEG signal is nonlinear [38, 39], little has been done to investigate its nonlinear nature in emotion recognition research. Linear analysis such as the Fourier Transform only preserves the power spectrum of the signal but destroys its spike-wave structure [67].
In this work, we proposed to use the Fractal Dimension feature in combination with statistical features [57] and Higher Order Crossings (HOC) features [32, 55] to improve emotion recognition accuracy. Statistical and HOC features were used as they gave the highest emotion recognition accuracy, as described in [55, 57, 70]. The Higuchi algorithm [25] was used for the FD value calculation, since it gave better accuracy than other FD algorithms, as shown in [73]. Details of these algorithms are given below.

Statistical Features

1. The means of the raw signals
\[
\mu_X = \frac{1}{N}\sum_{n=1}^{N} X(n). \tag{2}
\]
2. The standard deviations of the raw signals
\[
\sigma_X = \sqrt{\frac{1}{N}\sum_{n=1}^{N} \left(X(n) - \mu_X\right)^2}. \tag{3}
\]

3. The means of the absolute values of the first differences of the raw signals
\[
\delta_X = \frac{1}{N-1}\sum_{n=1}^{N-1} \left|X(n+1) - X(n)\right|. \tag{4}
\]

4. The means of the absolute values of the first differences of the normalized signals
\[
\bar{\delta}_X = \frac{1}{N-1}\sum_{n=1}^{N-1} \left|\bar{X}(n+1) - \bar{X}(n)\right| = \frac{\delta_X}{\sigma_X}. \tag{5}
\]
where \(\bar{X}(n) = (X(n) - \mu_X)/\sigma_X\) denotes the normalized signal.

5. The means of the absolute values of the second differences of the raw signals
\[
\gamma_X = \frac{1}{N-2}\sum_{n=1}^{N-2} \left|X(n+2) - X(n)\right|. \tag{6}
\]

6. The means of the absolute values of the second differences of the normalized signals
\[
\bar{\gamma}_X = \frac{1}{N-2}\sum_{n=1}^{N-2} \left|\bar{X}(n+2) - \bar{X}(n)\right| = \frac{\gamma_X}{\sigma_X}. \tag{7}
\]

Thus, the feature vector composed is
\[
FV_{Statistical} = [\mu_X, \sigma_X, \delta_X, \bar{\delta}_X, \gamma_X, \bar{\gamma}_X]. \tag{8}
\]
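As an illustration, a minimal Python/NumPy sketch of Eqs. (2)-(8) for one EEG window could look as follows; the function name `statistical_features` is our own and the language choice is an assumption, not part of the original work:

```python
# Minimal sketch of the six statistical features of Eqs. (2)-(8),
# computed from one EEG window `x` (a 1-D NumPy array).
import numpy as np

def statistical_features(x):
    x = np.asarray(x, dtype=float)
    mu = x.mean()                              # Eq. (2)
    sigma = x.std()                            # Eq. (3)
    delta = np.abs(np.diff(x)).mean()          # Eq. (4): first differences
    delta_norm = delta / sigma                 # Eq. (5): normalized signal
    gamma = np.abs(x[2:] - x[:-2]).mean()      # Eq. (6): second differences
    gamma_norm = gamma / sigma                 # Eq. (7): normalized signal
    return np.array([mu, sigma, delta, delta_norm, gamma, gamma_norm])  # Eq. (8)
```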

HOC-Based Features The algorithm of HOC is given as follows.


The input data is a finite zero-mean series \(\{X_n\}\), \(n = 1, \ldots, N\).
First, a sequence of filters is applied to the input data:
\[
\nabla^{k-1} X_n \equiv \sum_{j=1}^{k} (-1)^{j-1}\, \frac{(k-1)!}{(j-1)!\,(k-j)!}\, X_{n-j+1}. \tag{9}
\]
where \(\nabla^{k-1}\) denotes the sequence of filters; when \(k = 1\), it becomes the identity filter.
Then the number of zero-crossings associated with a particular filter is counted. To obtain the counts of zero-crossings,
\[
Z_n(k) = \begin{cases} 1, & \text{if } \nabla^{k-1} X_n \ge 0 \\ 0, & \text{if } \nabla^{k-1} X_n < 0 \end{cases} \tag{10}
\]
is used, and
\[
D_k = \sum_{n=2}^{N} \left[Z_n(k) - Z_{n-1}(k)\right]^2 \tag{11}
\]
represents the number of zero crossings.


As a result, the feature vector [55] is constructed as
\[
FV_{HOC} = [D_1, D_2, \ldots, D_k]. \tag{12}
\]
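A minimal Python/NumPy sketch of Eqs. (9)-(12) is shown below; note that applying the binomial filter of Eq. (9) is equivalent to taking repeated backward differences, which is what the sketch does (the function name `hoc_features` is illustrative only):

```python
# Minimal sketch of the HOC feature vector (Eqs. (9)-(12)): apply successive
# backward differences to the zero-mean series and count zero-crossings.
import numpy as np

def hoc_features(x, order=36):
    """Return [D_1, ..., D_order]; `order` corresponds to k in Eq. (12)."""
    x = np.asarray(x, dtype=float)
    y = x - x.mean()                     # the input is assumed to be zero-mean
    features = []
    for k in range(1, order + 1):
        z = (y >= 0).astype(int)         # Eq. (10)
        d_k = int(np.sum(np.diff(z) ** 2))   # Eq. (11): number of sign changes
        features.append(d_k)
        y = np.diff(y)                   # next filter: one more backward difference
    return np.array(features)
```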

Higuchi Algorithm Let \(X(1), X(2), \ldots, X(N)\) be a finite set of time series samples. Then, the newly constructed time series is
\[
X_t^m : X(m),\ X(m+t),\ \ldots,\ X\!\left(m + \left\lfloor \frac{N-m}{t} \right\rfloor \cdot t\right). \tag{13}
\]

where \(m = 1, 2, \ldots, t\) is the initial time and \(t\) is the interval time [25].
For example, if \(t = 3\) and \(N = 100\), the newly constructed time series are:
\(X_3^1 : X(1), X(4), \ldots, X(100)\), \(X_3^2 : X(2), X(5), \ldots, X(98)\),
\(X_3^3 : X(3), X(6), \ldots, X(99)\).
\(t\) sets of \(L_m(t)\) are calculated by
\[
L_m(t) = \frac{1}{t}\left[\left(\sum_{i=1}^{\left\lfloor \frac{N-m}{t} \right\rfloor} \left|X(m + i t) - X(m + (i-1) \cdot t)\right|\right) \frac{N-1}{\left\lfloor \frac{N-m}{t} \right\rfloor \cdot t}\right]. \tag{14}
\]
\(\langle L(t) \rangle\) denotes the average value of \(L_m(t)\), and the following relationship holds:
\[
\langle L(t) \rangle \propto t^{-\dim_H}. \tag{15}
\]

Then, the fractal dimension \(\dim_H\) can be obtained by logarithmic plotting between different \(t\) (ranging from 1 to \(t_{max}\)) and the associated \(\langle L(t) \rangle\) [25]:
\[
\dim_H = \frac{\ln \langle L(t) \rangle}{-\ln t}. \tag{16}
\]
Thus, the feature vector composed is

\[
FV_{FD(Higuchi)} = [\dim_H]. \tag{17}
\]
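A minimal Python/NumPy sketch of the Higuchi procedure (Eqs. (13)-(16)) is given below; it estimates \(\dim_H\) as the slope of \(\ln\langle L(t)\rangle\) against \(\ln(1/t)\), and the function name `higuchi_fd` is illustrative only:

```python
# Minimal sketch of the Higuchi fractal dimension (Eqs. (13)-(16)) for a
# window x of N samples.
import numpy as np

def higuchi_fd(x, t_max=32):
    x = np.asarray(x, dtype=float)
    n = len(x)
    ts = np.arange(1, t_max + 1)
    mean_lengths = []
    for t in ts:
        lengths = []
        for m in range(1, t + 1):                     # initial times m = 1..t
            idx = np.arange(m - 1, n, t)              # X(m), X(m+t), ...
            if len(idx) < 2:
                continue
            num_intervals = len(idx) - 1              # floor((N - m) / t)
            curve_len = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / (num_intervals * t)      # normalisation, Eq. (14)
            lengths.append(curve_len * norm / t)
        mean_lengths.append(np.mean(lengths))         # <L(t)>
    # Slope of ln<L(t)> versus ln(1/t) gives dim_H, Eqs. (15)-(16).
    slope, _ = np.polyfit(np.log(1.0 / ts), np.log(mean_lengths), 1)
    return slope
```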

3.3 Support Vector Machine Classifier

The goal of the SVM method is to find a hyperplane in a high-dimensional space that can be used for classification [18]. SVM is a powerful classifier: it projects low-dimensional features into a higher-dimensional space using kernel functions, which can solve inseparable cases [53]. Different types of kernel functions are used in implemented classifiers. The polynomial kernel used in our work is defined as follows [17]:
\[
K(x \cdot z) = (gamma \cdot x^{T} \cdot z + coef)^{d}. \tag{18}
\]


where \(x, z \in \mathbb{R}^n\), \(gamma\) and \(coef\) are the kernel parameters, \(d\) denotes the order of the polynomial kernel, and \(T\) is the transpose operation. More information on SVM classifiers can be found in [18].
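As a minimal sketch, the classification step could be set up as follows using scikit-learn's SVC, which wraps LIBSVM; the kernel parameter values (gamma = 1, coef0 = 1, d = 5) follow the choice reported later in Section 5.2, and the use of scikit-learn rather than the original LIBSVM/Matlab toolchain is an assumption of this illustration:

```python
# Minimal sketch of the SVM classification step with the polynomial kernel
# of Eq. (18); parameter values as reported in Section 5.2.
from sklearn.svm import SVC

clf = SVC(kernel="poly", degree=5, gamma=1.0, coef0=1.0)

# X_train / X_test: feature vectors (one row per sliding window),
# y_train: integer emotion labels -- assumed to be available.
# clf.fit(X_train, y_train)
# y_pred = clf.predict(X_test)
```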

4 Experiment

We designed and carried out two experiments with audio and visual external
stimuli to collect EEG data based on the Valence-Arousal-Dominance emotion
model. The obtained EEG data with different emotional labels were used to test
the proposed algorithm.

4.1 Stimuli

In Experiment 1, sound clips selected from the International Affective Digitized Sounds (IADS) database [13], which follows the Valence-Arousal-Dominance emotion model, were used to induce emotions. The choice of sound clips is based on
their Valence, Arousal and Dominance level rating in the IADS database. The
experiment is composed of 8 sessions, and 5 clips targeting one emotion were
played in each session. The details of stimuli used to target emotions in each
session are given in Table 1.
In Experiment 2, we elicited emotions with visual stimuli selected from the International Affective Picture System (IAPS) database [41]. The experiment was
also composed of 8 sessions, and 4 pictures targeting one emotion were shown in
each session. The details of stimuli targeting emotions in each session are given
in Table 2.

Table 1: Stimuli used in Experiment 1.


Session No. Targeted States Stimuli No.
Session1 Positive/ Low arousal/ Low dominance (PLL) 170, 262, 368, 602, 698
Session2 Positive/ Low arousal/ High dominance (PLH) 171, 172, 377, 809, 812
Session3 Positive/ High arousal/ Low dominance (PHL) 114, 152, 360, 410, 425
Session4 Positive/ High arousal/ High dominance (PHH) 367, 716, 717, 815, 817
Session5 Negative/ Low arousal/ Low dominance (NLL) 250, 252, 627, 702, 723
Session6 Negative/ Low arousal/ High dominance (NLH) 246, 358, 700, 720, 728
Session7 Negative/ High arousal/ Low dominance (NHL) 277, 279, 285, 286, 424
Session8 Negative/ High arousal/ High dominance (NHH) 116, 243, 280, 380, 423
Table 2: Stimuli used in Experiment 2.


Session No. Targeted States Stimuli No.
Session1 Positive/ Low arousal/ Low dominance (PLL) 7632, 5890, 5982, 7497
Session2 Positive/ Low arousal/ High dominance (PLH) 5000, 1604, 2370, 5760
Session3 Positive/ High arousal/ Low dominance (PHL) 5260, 1650, 8400, 849
Session4 Positive/ High arousal/ High dominance (PHH) 5626, 8034, 8501, 8200
Session5 Negative/ Low arousal/ Low dominance (NLL) 2682, 2753, 9010, 9220
Session6 Negative/ Low arousal/ High dominance (NLH) 2280, 7224, 2810, 9832
Session7 Negative/ High arousal/ Low dominance (NHL) 6230, 6350, 9410, 9940
Session8 Negative/ High arousal/ High dominance (NHH) 2458, 3550.2, 2130, 7360

4.2 Subjects
In Experiment 1, a total of 14 subjects (9 females and 5 males) participated in the experiment. In Experiment 2, a total of 16 subjects (9 females and 7 males) participated. All of them are university students and staff aged from 20 to 35 years, without auditory deficits or any history of mental illness.

4.3 Procedure
After a participant was invited to the project room, the experiment protocol and the usage of a self-assessment questionnaire were explained to him/her. The subjects needed to complete the questionnaire after the exposure to the audio/visual stimuli. The Self-Assessment Manikin (SAM) technique [12] was employed, which uses the 3D model with valence, arousal, and dominance dimensions and nine levels indicating the intensity in each dimension. In the questionnaire, the subjects were also asked to describe their feelings in any words, including emotions such as happy, surprised, satisfied, protected, angry, frightened, unconcerned, sad, or any other emotions they felt. The experiments were done with one subject at a time. The audio experiment was conducted following the standard procedure for emotion induction with audio stimuli [22, 42]. Therefore, in Experiment 1, the participants were asked to close their eyes to avoid artifacts and to focus on hearing. In Experiment 2, the subjects were asked to avoid making movements except working on the keyboard.
The design of each session in Experiment 1 is as follows.
1. A silent period is given to the participant to calm down (12 seconds).
2. The subject is exposed to the sound stimuli (5 clips x 6 seconds/clip = 30 seconds).
3. The subject completes the self-assessment questionnaire.
In summary, each session lasted 42 seconds plus the self-assessment time.
The construction of each session in Experiment 2 is as follows.
1. A black screen is shown to the participant (3 seconds).
2. A white cross in the center of the screen is given to inform the subject that
visual stimuli will be shown (4 seconds).
3. The subject is exposed to the pictures (4 pictures x 10 seconds/picture = 40 seconds).
4. The black screen is shown to the participant again (3 seconds).
5. The subject completes the self-assessment questionnaire.
In summary, each session lasted 50 seconds plus the self-assessment time.

4.4 EEG recording


In this work, we used the Emotiv [2] device with 14 electrodes located at AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4 (plus CMS/DRL as references) for both Experiment 1 and 2; these locations are standardized by the American Electroencephalographic Society [3]. The technical parameters of the device are as follows: bandwidth of 0.2-45 Hz, digital notch filters at 50 Hz and 60 Hz, an A/D converter with 16-bit resolution, and a sampling rate of 128 Hz. The data are transferred via a wireless receiver. Recently, Emotiv devices have become popular for research [54, 59]. The reliability and validity of the EEG data collected by the Emotiv device were assessed in [21, 68]. EEG data recorded from a standard EEG device and from Emotiv were compared, and the results showed that the Emotiv device could be used in place of a standard EEG device in real-time applications where fewer electrodes are needed [68] and that it is credible for use in applications such as games [21].

4.5 Analysis of Self-Assessment Questionnaire


Even though the stimuli were selected with targeted emotional states, we found from the self-report questionnaire records that some emotions were not confirmed by the subjects. Our analysis was based on the questionnaire, which recorded the participants' feelings. We did not consider the data from the cases when the targeted emotion was not induced according to the self-assessment questionnaire record.
The aim of this work is to develop an algorithm that detects up to 8 emotions defined by combinations of high/low arousal levels, positive/negative valence levels, and high/low dominance levels. In the benchmark DEAP database (described in Section 4.6) and in the two experiments' databases, the self-assessment questionnaire has a 9-level rating in each emotional dimension, so level 5 was used as the threshold to identify high and low values in each dimension, as shown in Table 3. Here, 5 is considered an intermediate level that belongs to neither a high nor a low state, so the data with a rating of 5 were not used in the following processing. For example, if the targeted emotion is Positive/Low arousal/Low dominance, then the subject's data will be considered compatible with the targeted emotion if the subject's rating for the valence dimension is larger than 5, the rating for the arousal dimension is lower than 5, and the rating for the dominance dimension is lower than 5.
Real-time Subject-dependent EEG-based Emotion Recognition Algorithm 11

Table 3: The conditions of different states in the analysis of the self-assessment questionnaire.

Emotional Dimension    Targeted States    Conditions
Valence Dimension      Positive           Valence rating > 5
                       Negative           Valence rating < 5
Arousal Dimension      High               Arousal rating > 5
                       Low                Arousal rating < 5
Dominance Dimension    High               Dominance rating > 5
                       Low                Dominance rating < 5
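A minimal sketch of this rating-to-label mapping, assuming SAM ratings on the 1-9 scale, is given below (the function name `rating_to_state` is our own):

```python
# Minimal sketch of the mapping in Table 3; ratings equal to 5 are discarded
# as intermediate and the remaining ratings are thresholded at 5.
def rating_to_state(valence, arousal, dominance):
    if 5 in (valence, arousal, dominance):
        return None                       # intermediate rating: data not used
    return ("P" if valence > 5 else "N") + \
           ("H" if arousal > 5 else "L") + \
           ("H" if dominance > 5 else "L")

# Example: rating_to_state(7, 3, 2) -> "PLL" (positive / low arousal / low dominance)
```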

4.6 Affective EEG database DEAP


Since EEG-based emotion analysis is attracting more and more attention, the DEAP database labeled with emotions was established and published [35]. It has a relatively large number of subjects (32) who participated in the data collection. The stimuli used to elicit emotions in the experiment are 40 one-minute long music videos. In the DEAP database, a Biosemi ActiveTwo device with 32 EEG channels [1] was used for the data recording, which can give a more comprehensive understanding of brain activity.
There are a number of datasets available in the DEAP database. Here, we used the dataset after preprocessing [36]. The sampling rate of the originally recorded data is 512 Hz, and the preprocessed data are downsampled to 128 Hz. Artifacts such as EOG were removed from the DEAP EEG data during the preprocessing. As suggested by the developers of DEAP, this dataset is well-suited to those who want to test their own algorithms. Thus, in our work, we used this dataset to validate the algorithm. More details about the DEAP database can be found in [35] and [36].

5 Implementation
5.1 Fractal Features
In this work, the FD values were proposed to be used as features to improve the accuracy of emotion recognition from EEG signals. To calculate one FD value per finite set of time series samples, the Higuchi algorithm described in Section 3.2 was implemented and validated on standard mono-fractal signals generated by the Weierstrass function, for which the theoretical FD values are known in advance [49]. The size of the finite set N defines the size of the window in our emotion recognition algorithm. Fig. 1 shows the result of the calculation of FD values of the signals generated by the Weierstrass function with different window sizes. As can be seen from the graph, the FD values calculated with a window size of 512 samples are closer to the theoretical values. Thus, in our algorithm, a window size of 512 samples was chosen. For each window size N, we used different tmax values ranging from 8 to 64 in (15)-(16) to compute the FD values. With N = 512, the value of tmax was set to 32 since it has the lowest error rate, as shown in Fig. 2.

Fig. 1: FD values of the signals generated by the Weierstrass function calculated by the Higuchi algorithm with different window sizes.

Fig. 2: Abs(error) for different tmax with N = 512.
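A minimal sketch of this window-size check is given below: it generates a Weierstrass-type signal with a known theoretical FD (here D = 2 - H) and compares the Higuchi estimate for several window sizes. The particular Weierstrass form and its parameters are illustrative assumptions, not the exact settings of [49], and `higuchi_fd` refers to the sketch in Section 3.2:

```python
# Minimal sketch: compare Higuchi FD estimates with the theoretical FD of a
# Weierstrass-type signal for different window sizes.
import numpy as np

def weierstrass(n_samples, H=0.5, lam=5.0, n_terms=20):
    """Illustrative Weierstrass-type signal; theoretical FD assumed to be 2 - H."""
    t = np.linspace(0.0, 1.0, n_samples)
    k = np.arange(n_terms)[:, None]
    return np.sum(lam ** (-k * H) * np.cos(lam ** k * np.pi * t), axis=0)

for window in (128, 256, 512, 1024):
    sig = weierstrass(window, H=0.5)            # theoretical FD = 1.5
    print(window, higuchi_fd(sig, t_max=32))    # higuchi_fd: sketch from Section 3.2
```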

5.2 Channel Choice

The classical FDR method [34] was applied for the channel selection [7, 19, 40]. A non-overlapping sliding window with a size of 512 samples was used for the FD feature calculation. The channel ranking for the 32 subjects from the DEAP database was calculated. In the DEAP database, there are 40 experimental trials labelled with arousal, valence, and dominance ratings, and in our case (recognition of 8 emotions), for every subject, one trial was selected per emotion. As a result, up to
8 trials of EEG data with the corresponding emotions (PLL, PLH, PHL, PHH, NLL, NLH, NHL, and NHH) were selected for the following processing. If a subject has more than one trial labeled with the same emotion, the trial with the most extreme ratings is selected. For example, for the state of Arousal > 5, Valence > 5, Dominance > 5, a trial with an arousal rating of 9, a valence rating of 8, and a dominance rating of 8 will be used instead of a trial with an arousal rating of 6, a valence rating of 6, and a dominance rating of 7. EEG data collected during the first 53 seconds of video playback were used to calculate the FD values in each session. The mean FDR scores for each channel were computed across all 32 subjects. By using the data provided by the DEAP database, we have more subjects and can obtain a more general channel rank pattern. The final channel rank is FC5, F4, F7, AF3, CP6, T7, C3, FC6, P4, Fp2, F8, P3, CP5, O1, F3, P8, CP2, CP1, P7, Fp1, PO4, O2, Pz, Oz, T8, FC2, Fz, AF4, PO3, Cz, C4, FC1. The ranking of each
channel is visualized in Fig. 3 using EEGLAB [20]. The visualization is based on the mean FDR scores across subjects for each channel. It is a standard approach to use the channel rank to show a spatial pattern: for example, [7] visualizes the weights of the spatial filters obtained from the sparse common spatial pattern (SCSP) algorithm, and [40] uses the channel rank score to show the brain regions activated during motor imagination tasks. The values were scaled for better visualization. From the figure, it can be seen that the frontal lobe is the most active, because the most discriminant channels belong to the frontal lobe. Previous research has confirmed that the orbitofrontal cortex, anterior cingulate cortex, amygdala, and insula are highly involved in emotion processing [69]. For example, it has been shown that negative emotions increase amygdala activation [14]. However, subcortical structures such as the amygdala cannot be detected directly by EEG signals, which are recorded from the scalp. The amygdala connects and interacts with the frontal cortex, and emotions are experienced as a result [11, 31]. The visualization in Fig. 3 complies with the above-mentioned findings about the importance of the frontal lobe in emotion processing. Then,
we followed the final channel rank to calculate the classification accuracy and
to choose the number of channels for our algorithm. The Support Vector Machine classifier described in Section 3.3, implemented in LIBSVM [17] with a polynomial kernel for multiclass classification, was used to compute the accuracy of emotion recognition using different numbers of channels following the rank. 5-fold cross-validation of the data was applied: first, the raw data were partitioned into 5 sets without overlapping, and then features were extracted from each set. During the classification phase, 4 sets were used as training sets, and 1 set was used as validation data for testing. The process was run 5 times, and every set was used as the testing data once. The mean accuracy of the classification over the 5 runs was computed as the final estimate of the classification accuracy. Cross-validation allows us to avoid the problem of overfitting [28]. The parameters of the SVM classifier were set according to [55], where high accuracy of emotion classification was achieved with different feature types, as follows: the value of gamma in (18) was set to 1, coef was set to 1, and the order d was set to 5. A grid-search approach was also applied to select the SVM kernel parameters, and the classification accuracy of emotion recognition showed that the above-mentioned parameters were the optimal choice. Fig. 4 shows the mean accuracy of emotion classification over the number of the best channels following the obtained channel rank for subjects who have EEG data labeled with 8, 7, 6, and 5 emotions. Here, FD features were used in the classification,
and they were calculated using a sliding window of size 512 moving by 128 new samples (an overlap of 384/512) each time. As shown in Fig. 4, in order to minimize the number of channels used, based on the accuracy, the top 4 channels are considered the optimal choice for emotion recognition with adequate accuracy. These 4 channels, at the FC5, F4, F7, and AF3 electrode positions, correspond to frontal lobe locations (Fig. 5). This complies with the research results described in [9, 30, 37, 71], where the correlation between emotions and signal activity in the frontal lobe was studied and a close relationship was confirmed. The frontal lobe is believed to execute processes that require intelligence, such as determination and assessment. Human emotions are also associated with the frontal lobe: [9] finds that damage to the prefrontal cortex may cause a weakened or even disabled generation of certain emotions; [37] shows that the prefrontal lobe area plays a role in linking reward to subjective pleasantness; [30] confirms that there is a lateralization pattern in the frontal lobe when processing positive and negative emotions; [71] discovers an asymmetrical pattern in the frontal lobe during the observation of pleasant/unpleasant advertisements.
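A minimal sketch of the accuracy-versus-channel-count evaluation described in this section is given below. It is a simplification: cross-validation is applied here to precomputed window features via scikit-learn's `cross_val_score`, whereas in our procedure the raw data are partitioned into 5 sets before feature extraction; `windows`, `labels`, `channel_rank`, and `higuchi_fd` (the Section 3.2 sketch) are assumed to be available:

```python
# Minimal sketch: mean 5-fold CV accuracy of the SVM using the top-k channels
# from the FDR ranking, with one FD value per window and channel as features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def accuracy_for_top_channels(windows, labels, channel_rank, k, fd_fn):
    """windows: (n_windows, n_channels, 512); labels: one emotion per window."""
    chans = channel_rank[:k]
    feats = np.array([[fd_fn(w[c]) for c in chans] for w in windows])
    clf = SVC(kernel="poly", degree=5, gamma=1.0, coef0=1.0)
    return cross_val_score(clf, feats, labels, cv=5).mean()

# for k in range(1, 33):
#     print(k, accuracy_for_top_channels(windows, labels, channel_rank, k, higuchi_fd))
```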

Fig. 3: Visualization of the ranking of 32 channels on the scalp.

5.3 Feature Extraction and Classification

We present a subject-dependent algorithm for human emotion recognition from EEG based on the Valence-Arousal-Dominance emotion model. The algorithm consists of two parts: feature extraction with a sliding window and data classification with a Support Vector Machine (SVM), in order to accomplish efficient emotion recognition. In our work, a 2-42 Hz bandpass filter was applied to the data since it could remove artifacts such as muscle contraction and control [6, 33]. We extract features from the entire 2-42 Hz EEG band instead of decomposing it into different frequency bands such as Alpha or Beta waves, as was done in [70] and [74]. As shown in Section 5.2, the FC5, F4, F7, and AF3 channels were chosen for the algorithm implementation. The feature vector \(FV\) for emotion classification is defined as follows:
\[
FV = [FV_1, FV_2, FV_3, FV_4]. \tag{19}
\]
where 1 denotes the FC5 channel, 2 denotes the F4 channel, 3 denotes the F7 channel, 4 denotes the AF3 channel, and \(FV_i\) is the feature vector per channel. Here, \(FV_i\) is composed of the statistical features given in (8), the HOC features given in (12), the FD feature given in (17), or the combinations of different features \(FV_{combination1}\) and \(FV_{combination2}\) as given below in (20) and (21). Normalization is applied to the FD, statistical, and HOC features across the four channels in (19).

\[
FV_{combination1} = [\mu_X, \sigma_X, \delta_X, \bar{\delta}_X, \gamma_X, \bar{\gamma}_X, \dim_H]. \tag{20}
\]
\[
FV_{combination2} = [D_1, D_2, \ldots, D_k, \mu_X, \sigma_X, \delta_X, \bar{\delta}_X, \gamma_X, \bar{\gamma}_X, \dim_H]. \tag{21}
\]

Here, \(FV_{combination1}\) employs the 6 statistical features and 1 FD feature, and \(FV_{combination2}\) employs the HOC features, the 6 statistical features, and 1 FD feature. In (20) and (21), normalization is applied to the statistical features, HOC features, and FD feature across the four channels before combining the features. As determined in Sections 5.1 and 5.2, to obtain training and testing samples for the SVM classifier, a sliding window of size 512 with an overlap of 384 samples was used to calculate the statistical features (as in (8)), HOC features (as in (12)), and the combined features (as in (20) and (21)). In the DEAP database, the EEG data collected during the 60 seconds when the videos were played were used. As only 5 clips were selected in each session of Experiment 1, EEG data collected during the first 30 seconds when the sound clips were played to the subjects were used. In Experiment 2, EEG data collected during the first 30 seconds when the pictures were shown to the subjects were used. Different HOC orders (k in (12)) were tested using the data labeled with all possible 4-emotion combinations from the 6 subjects who had EEG data with 8 emotions in the DEAP database. The results are shown in Table 4. To save computation time and reduce the size of the feature dimension, k = 36 is the optimal choice. This choice of k is also consistent with the setting in [55], where the optimal HOC order k is set to 36. After feature extraction, the Support Vector Machine classifier with a polynomial kernel implemented in LIBSVM [17] (as described in Section 3.3) was used to classify the data. Since the experiment duration in Experiments 1 and 2 is relatively shorter than in the DEAP database, 4-fold cross-validation was applied.
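A minimal sketch of this feature extraction step is given below; it reuses the `statistical_features`, `hoc_features`, and `higuchi_fd` sketches from Section 3.2, and the per-feature-type normalization across channels is omitted for brevity:

```python
# Minimal sketch of Section 5.3: slide a 512-sample window (128-sample step,
# i.e. 384-sample overlap) over the four chosen channels and build the
# combined feature vector of Eq. (21) for each window.
import numpy as np

CHANNELS = ["FC5", "F4", "F7", "AF3"]

def sliding_windows(signal, size=512, step=128):
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def combined_feature_vector(eeg_window, hoc_order=36):
    """eeg_window: dict mapping channel name -> 512-sample array."""
    fv = []
    for ch in CHANNELS:
        x = np.asarray(eeg_window[ch], dtype=float)
        fv.extend(hoc_features(x, order=hoc_order))   # HOC features, Eq. (12)
        fv.extend(statistical_features(x))            # 6 statistical features, Eq. (8)
        fv.append(higuchi_fd(x, t_max=32))            # 1 FD feature, Eq. (17)
    return np.array(fv)                               # FV = [FV_1, ..., FV_4], Eq. (19)
```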

6 Results and Discussion


After the analysis of the questionnaires, in the DEAP database we obtained EEG data labeled with 8 emotions from 6 subjects, namely Subjects 7, 8, 10,
Fig. 4: Mean accuracy of emotion classification of the subjects' data with 5 to 8 emotions.

Fig. 5: Positions of the 4 channels (FC5, F4, F7, and AF3).

Table 4: Parameter choice for HOC features.

                 k = 10    k = 20    k = 36    k = 50
Mean Accuracy    49.76%    49.67%    50.13%    50.15%
16, 19, and 20. For each subject, EEG data from one trial labeled with one of the eight emotions were used in the following processing. In Experiment 1, we obtained EEG data labeled with 5 emotions from 1 subject, with 3 emotions from 4 subjects, and with 2 emotions from 6 subjects. In Experiment 2, we obtained EEG data labeled with 6 emotions from 2 subjects, with 5 emotions from 1 subject, with 4 emotions from 2 subjects, with 3 emotions from 4 subjects, and with 2 emotions from 5 subjects.
Fractal Dimension analysis can be used to quantify the nonlinear properties of EEG signals [4]. In this algorithm, we propose to combine the FD feature with the other best features to improve the classification performance of emotion recognition. Using just the 1 FD feature alone gives better accuracy than using the 6 statistical features or the 36 HOC features for some subjects. For example, as shown in Fig. 6, the FD feature outperforms the other two types of features in the recognition of NHL and PLL, NHL and NLH, NHL and NLL, PHH and NLH, and PLL and NLH for Subject 10; and NHL and PHL, NHH and PHH, and NHH and PHL for Subject 19 in the DEAP database.

Fig. 6: The comparison of classification accuracies using statistical, FD, and HOC features for Subjects 10 and 19.

In Fig. 7, the FD value spatial patterns show that FD values can be used to differentiate the 8 emotions. Here, the FD values are calculated from the 6 subjects in the DEAP database who have data for all 8 emotions available. The pattern is obtained as follows. First, the FD values are calculated using the 512-sample sliding window with 75% overlap for each channel and subject. Then, the calculated values are averaged across all 57 samples of FD values from each channel per emotion. Next, the mean FD values are scaled to [-1, 1] across all 32 channels for each subject per emotion. Finally, the scaled mean FD values are averaged across all 6 subjects per emotion and visualized on the brain map. As can be seen from Fig. 7, different emotions have different spatial patterns, and the frontal lobe is always active (in red or yellow). Higher FD values of EEG reflect higher brain activity. The FD value can be used to differentiate the valence dimension in the Valence-Arousal-Dominance model [47]. It can be seen from Fig. 7 that for negative emotions such as (a) frightened, (b) angry, (g) unconcerned, and (h) sad, the spatial pattern shows that the right hemisphere is more active than the left one, and the right hemisphere in (a) frightened and (b) angry is more active than the right hemisphere in (g) unconcerned and (h) sad. For positive emotions such as (c) happy, (d) surprise, (e) protected, and (f) satisfied, the spatial pattern shows that the left hemisphere is more active than the right one, and the left hemisphere in (c) happy and (d) surprise is more active than the left hemisphere in (e) protected and (f) satisfied.

Fig. 7: The visualization of FD pattern for 6 subjects from DEAP with 8 emo-
tions: (a) frightened, (b) angry, (c) happy, (d) surprise, (e) protected, (f) satisfied,
(g) unconcerned, and (h) sad.

In Table 5, comparisons of the mean accuracy of emotion classification for 8 emotions using the data from the DEAP database are shown for the combination of HOC, 6 statistical, and 1 FD features; the combination of 6 statistical and 1 FD features; the 6 statistical features; and the HOC features, respectively. The accuracy for fewer emotional states was computed as the mean value over all possible combinations of emotions in the group across all subjects. For example, the mean accuracy of 2 out of 8 emotions was calculated as follows. Since we have 8 emotions in total, the 2-emotion combinations could be (PHH and PHL), (PLH and NHH), (PLL and NLL), etc. For each subject, there are 28 possible combinations for choosing 2 emotions from 8 emotions. The mean accuracy over the 28 combinations for each subject was calculated, and the mean accuracy over the 6 subjects is given in the table. As expected, the classification accuracy increases when the number of emotions to be recognized is reduced. A one-way ANOVA was performed on the results of the recognition of 4 emotions; the statistical test compared the accuracy obtained using the combination of HOC, 6 statistical, and FD features with the accuracy obtained using the other features. As shown in Table 6, the statistical results show that the proposed combined features including the Fractal Dimension feature (HOC, 6 statistical, and 1 FD) are statistically superior to using solely HOC (p = 6.8926e-71) or the 6 statistical features (p = 0.0056). As can be seen from Table 5, using the combination of HOC, 6 statistical, and 1 FD features gives slightly higher accuracy than using the combination of 6 statistical and 1 FD features; however, no significant difference was found between these two combined feature sets (p = 0.42). Thus, both combinations of features could be used.

Table 5: The classification accuracy (%) computed using the DEAP database.

Feature type                Number of emotions recognized
                            8       7       6       5       4       3       2
HOC + 6 statistical + FD    53.7    56.24   59.3    63.07   67.9    74.36   83.73
6 statistical + FD          52.66   55.28   58.37   62.2    67.08   73.69   83.2
6 statistical               50.36   53.04   56.19   60.07   65.07   71.9    82
HOC                         32.6    35.55   39.23   43.92   50.13   58.88   72.66

Table 6: F-values and p-values of the ANOVA tests applied on the accuracy of the proposed combined features (HOC, 6 statistical, FD) versus the other features.

Feature               F-value    p-value
Statistical features    7.72     <0.01
HOC                   385.4      <0.01
6 statistical + FD      0.44      0.42
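A minimal sketch of such a one-way ANOVA comparison, assuming SciPy is used (the original analysis tooling is not specified here) and with purely illustrative placeholder accuracy values rather than the paper's results, is shown below:

```python
# Minimal sketch of the significance test behind Table 6: one-way ANOVA
# between per-combination accuracies of two feature sets.
import numpy as np
from scipy.stats import f_oneway

# Illustrative placeholder accuracies only (not the values reported above).
acc_combined = np.array([0.68, 0.70, 0.66, 0.69])   # HOC + 6 statistical + FD
acc_other = np.array([0.50, 0.52, 0.49, 0.51])      # e.g. HOC alone

f_val, p_val = f_oneway(acc_combined, acc_other)
print(f"F = {f_val:.2f}, p = {p_val:.4g}")
```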

We also validated our algorithm on the data from our own databases (Experiments 1 and 2). The results are presented in Tables 7 and 8, respectively. The accuracy for fewer emotional states is the mean across all subjects who have data labeled with the corresponding number of emotions. For example, the accuracy for recognition of 2 emotions in Table 7 is the average across all 11 subjects in Experiment 1 with their corresponding 2-emotion recognition results. The results in Tables 7 and 8 also support our conclusion that the combination of HOC, 6 statistical, and 1 FD features, or of the 6 statistical features with 1 FD feature, is the optimal choice for real-time applications. The algorithm accuracy improves from 68.85% to 87.02% or 86.17% in Experiment 1 and from 63.71% to 76.53% or 76.09% in Experiment 2 when the combination of HOC, 6 statistical, and 1 FD features or of the 6 statistical features and 1 FD feature is used, compared to HOC features alone. As can be seen from Tables 5, 7, and 8, the classification accuracies for the same number of emotions are comparable among the three databases, which gives positive support to the use of the proposed algorithm in real-time EEG-based emotion recognition. The computation time to extract one new sample of the combined feature in Matlab is less than 0.1 second, and classifying this sample with the SVM takes less than 0.05 seconds. Thus, the algorithm can be used in real time.
Since the DEAP dataset has up to 32 channels, we also investigated the relationship between the number of channels and the classification accuracy in Table 9. The increase in the number of channels follows the channel rank given in Section 5.2. With 32 channels we can improve the accuracy of our algorithm from 53.7% to 69.53% for recognition of 8 emotions and from 83.73% to 90.35% for recognition of 2 emotions.
When comparing our algorithm with others, it can recognize more emotions, obtains better accuracy with fewer electrodes, and can be used in real time. For example, 32 channels were used in [42], whereas only 4 channels are needed in our algorithm with an accuracy of 87.02% for recognition of 2 emotions and 53.7% for recognition of 8 emotions. Using the same number of channels, [15] achieved 57.04% accuracy for recognition of 2 emotions, which is lower than ours.

Table 7: The classification accuracy (%) computed using the Experiment 1 database.

Feature type                Number of emotions recognized
                            5       4       3       2
HOC + 6 statistical + FD    61.67   67.08   74.44   87.02
6 statistical + FD          55      62.08   75.11   86.17
6 statistical               56.67   61.67   72.72   84.94
HOC                         35      42.08   53.17   68.85

Table 8: The classification accuracy (%) computed using the Experiment 2 database.

Feature type                Number of emotions recognized
                            6       5       4       3       2
HOC + 6 statistical + FD    56.6    60.6    58.36   65.52   76.53
6 statistical + FD          59.03   62.08   59.86   65.78   76.09
6 statistical               56.94   62.45   55.78   63.73   76.45
HOC                         35.42   42.82   42.46   44.43   63.71
Table 9: Investigation of using more channels in the DEAP database (accuracy in %).

Number of channels    Number of emotions recognized
                      8       7       6       5       4       3       2
1                     38.33   41.76   45.82   50.76   57.04   65.5    77.98
2                     42.03   45.37   49.26   53.95   59.82   67.6    79.06
3                     49.27   52.23   55.79   60.12   65.54   72.57   82.53
4                     53.7    56.24   59.3    63.07   67.9    74.36   83.73
16                    65.63   67.93   70.53   73.58   77.3    82.09   88.79
32                    69.53   71.43   73.73   76.53   80      84.41   90.35

7 Conclusion

In this paper, we proposed a real-time subject-dependent algorithm based on the Valence-Arousal-Dominance emotion model. The algorithm can recognize up to 8 emotions (happy, surprised, satisfied, protected, angry, frightened, unconcerned, and sad) with a best average accuracy of 53.7% using 4 electrodes; 2 emotions can be recognized with a best average accuracy of 87.02% using 4 electrodes. The algorithm consists of two parts: feature extraction and classification. The combination of features (HOC, 6 statistical, and 1 FD) that gave the best emotion classification accuracy was chosen for the algorithm implementation. The algorithm uses just 4 channels, which makes it more applicable since less time is needed to mount 4 electrodes. The algorithm was tested using two experimental EEG databases with data collected with the Emotiv EPOC device: one with audio stimuli and the other with visual stimuli. It was also tested on the DEAP benchmark database, where music video stimuli were used for emotion induction. The accuracy of the proposed algorithm was similar on all databases. By using different databases, it is confirmed that the proposed algorithm is device independent, as we obtained similar accuracy using EEG data collected by two different devices: the 14-channel Emotiv EPOC and the 32-channel Biosemi ActiveTwo. It is also confirmed that the algorithm is stimulus independent, since it was tested on EEG databases created using audio, visual, and music video stimuli. The channel selection was performed using the DEAP database, as it had 32 subjects and a combination of audio and visual stimuli; the FC5, F4, F7, and AF3 channels were chosen for the algorithm implementation. The accuracy of the algorithm was tested on all databases following this fixed channel choice. The proposed algorithm can be used in any EEG-enabled application such as advertising [45], music therapy [65], and other serious games developments [27]. The combination of EEG and other biosignals should be investigated in the future.

Acknowledgments. This research was done for Fraunhofer IDM@NTU, which is funded by the National Research Foundation (NRF) and managed through the multi-agency Interactive & Digital Media Programme Office (IDMPO) hosted by the Media Development Authority of Singapore (MDA).
References
1. Biosemi, http://www.biosemi.com
2. Emotiv, http://www.emotiv.com
3. American electroencephalographic society guidelines for standard electrode posi-
tion nomenclature. Journal of Clinical Neurophysiology 8(2), 200–202 (1991)
4. Accardo, A., Affinito, M., Carrozzi, M., Bouquet, F.: Use of the fractal dimen-
sion for the analysis of electroencephalographic time series. Biological Cybernetics
77(5), 339–350 (1997)
5. Aftanas, L.I., Lotova, N.V., Koshkarov, V.I., Popov, S.A.: Non-linear dynamical
coupling between different brain areas during evoked emotions: An EEG investi-
gation. Biological Psychology 48(2), 121–138 (1998)
6. Anderson, E.W., Potter, K.C., Matzen, L.E., Shepherd, J.F., Preston, G.A., Silva,
C.T.: A user study of visualization effectiveness using EEG and cognitive load.
Computer Graphics Forum 30(3), 791–800 (2011)
7. Arvaneh, M., Cuntai, G., Kai Keng, A., Chai, Q.: Optimizing the channel selec-
tion and classification accuracy in EEG-Based BCI. Biomedical Engineering, IEEE
Transactions on 58(6), 1865–1873 (2011)
8. Aspiras, T.H., Asari, V.K.: Log power representation of EEG spectral bands for
the recognition of emotional states of mind. In: 8th International Conference on
Information, Communications and Signal Processing (ICICS) 2011. pp. 1 – 5 (2011)
9. Bechara, A., Damasio, H., Damasio, A.R.: Emotion, decision making and the or-
bitofrontal cortex. Cerebral Cortex 10(3), 295–307 (2000)
10. Bolls, P.D., Lang, A., Potter, R.F.: The effects of message valence and listener
arousal on attention, memory, and facial muscular responses to radio advertise-
ments. Communication Research 28(5), 627–651 (2001)
11. Bos., D.O.: EEG-based emotion recognition (2006), http://hmi.ewi.utwente.nl/
verslagen/capita-selecta/CS-Oude_Bos-Danny.pdf
12. Bradley, M.M.: Measuring emotion: The self-assessment manikin and the semantic
differential. Journal of Behavior Therapy and Experimental Psychiatry 25(1), 49–
59 (1994)
13. Bradley, M.M., Lang, P.J.: The international affective digitized sounds (2nd edi-
tion; IADS-2): Affective ratings of sounds and instruction manual. Tech. rep., Uni-
versity of Florida, Gainesville (2007)
14. Burgdorf, J., Panksepp, J.: The neurobiology of positive emotions. Neuroscience
& Biobehavioral Reviews 30(2), 173–187 (2006)
15. Cao, M., Fang, G., Ren, F.: EEG-based emotion recognition in chinese emotional
words. In: Proceedings of CCIS 2011. pp. 452–456 (2011)
16. Chanel, G., Rebetez, C., Betrancourt, M., Pun, T.: Emotion assessment from phys-
iological signals for adaptation of game difficulty. IEEE Transactions on Systems,
Man, and Cybernetics Part A:Systems and Humans 41(6), 1052–1063 (2011)
17. Chang, C.C., Lin, C.J.: LIBSVM : a library for support vector machines (2001),
http://www.csie.ntu.edu.tw/~cjlin/libsvm
18. Cristianini, N., Shawe-Taylor, J.: An introduction to Support Vector Machines:
and other kernel-based learning methods. Cambridge University Press, New York
(2000)
19. D’Alessandro, M., Esteller, R., Vachtsevanos, G., Hinson, A., Echauz, J., Litt, B.:
Epileptic seizure prediction using hybrid feature selection over multiple intracranial
EEG electrode contacts: a report of four patients. Biomedical Engineering, IEEE
Transactions on 50(5), 603–615 (2003)
20. Delorme, A., Makeig, S.: EEGLAB: An open source toolbox for analysis of single-
trial EEG dynamics including independent component analysis. Journal of Neu-
roscience Methods 134(1), 9–21 (2004)
21. Duvinage, M., Castermans, T., Dutoit, T., Petieau, M., Hoellinger, T., Saedeleer,
C.D., Seetharaman, K., Cheron, G.: A P300-based quantitative comparison be-
tween the emotiv epoc headset and a medical EEG device. In: Proceedings of
the 9th IASTED International Conference on Biomedical Engineering. pp. 37–42
(2012)
22. Gao, T., Wu, D., Huang, Y., Yao, D.: Detrended fluctuation analysis of the human
EEG during listening to emotional music. J Elect. Sci. Tech. Chin 5, 272–277 (2007)
23. Hadjidimitriou, S., Zacharakis, A., Doulgeris, P., Panoulas, K., Hadjileontiadis,
L., Panas, S.: Sensorimotor cortical response during motion reflecting audiovisual
stimulation: evidence from fractal EEG analysis. Medical and Biological Engineer-
ing and Computing 48(6), 561–572 (2010)
24. Hadjidimitriou, S.K., Zacharakis, A.I., Doulgeris, P.C., Panoulas, K.J., Hadjileon-
tiadis, L.J., Panas, S.M.: Revealing action representation processes in audio per-
ception using fractal EEG analysis. Biomedical Engineering, IEEE Transactions
on 58(4), 1120–1129 (2011)
25. Higuchi, T.: Approach to an irregular time series on the basis of the fractal theory.
Physica D: Nonlinear Phenomena 31(2), 277–283 (1988)
26. Hosseini, S.A., Khalilzadeh, M.A.: Emotional stress recognition system using EEG
and psychophysiological signals: Using new labelling process of EEG signals in emo-
tional stress state. In: Biomedical Engineering and Computer Science (ICBECS),
2010 International Conference on. pp. 1–6. IEEE (2010)
27. Hou, X., Sourina, O.: Emotion-enabled haptic-based serious game for post stroke
rehabilitation. In: Proceedings of VRST 2013. pp. 31–34 (2013)
28. Hsu, C.W., Chang, C.C., Lin, C.J.: A practical guide to support vector classifica-
tion. Tech. rep., National Taiwan University, Taipei (2003)
29. Huang, D., Guan, C., Kai Keng, A., Haihong, Z., Yaozhang, P.: Asymmetric spatial
pattern for EEG-based emotion detection. In: Neural Networks (IJCNN), The 2012
International Joint Conference on. pp. 1–7 (2012)
30. Jones, N.A., Fox, N.A.: Electroencephalogram asymmetry during emotionally
evocative films and its relation to positive and negative affectivity. Brain and Cog-
nition 20(2), 280–299 (1992)
31. Kandel, E.R., Schwartz, J.H., Jessell, T.M., et al.: Principles of neural science,
vol. 4. McGraw-Hill New York (2000)
32. Kedem, B.: Time Series Analysis by Higher Order Crossing. IEEE Press, New York
(1994)
33. Khosrowabadi, R., Wahab bin Abdul Rahman, A.: Classification of EEG correlates
on emotion using features from Gaussian mixtures of EEG spectrogram. In: Infor-
mation and Communication Technology for the Muslim World (ICT4M), 2010
International Conference on. pp. E102–E107. IEEE (2010)
34. Kil, D.H., Shin, F.B.: Pattern recognition and prediction with applications to signal
characterization. AIP series in modern acoustics and signal processing, AIP Press,
Woodbury, N.Y. (1996)
35. Koelstra, S., Muhl, C., Soleymani, M., Lee, J.S., Yazdani, A., Ebrahimi, T., Pun,
T., Nijholt, A., Patras, I.: DEAP: A database for emotion analysis using physio-
logical signals. Affective Computing, IEEE Transactions on 3(1), 18–31 (2012)
36. Koelstra, S., Muhl, C., Soleymani, M., Lee, J.S., Yazdani, A., Ebrahimi, T., Pun,
T., Nijholt, A., Patras, I.: DEAP dataset (2012), http://www.eecs.qmul.ac.uk/
mmv/datasets/deap
37. Kringelbach, M.L.: The human orbitofrontal cortex: Linking reward to hedonic
experience. Nature Reviews Neuroscience 6(9), 691–702 (2005)
38. Kulish, V., Sourin, A., Sourina, O.: Analysis and visualization of human electroen-
cephalograms seen as fractal time series. Journal of Mechanics in Medicine and
Biology, World Scientific 26(2), 175–188 (2006)
39. Kulish, V., Sourin, A., Sourina, O.: Human electroencephalograms seen as fractal
time series: Mathematical analysis and visualization. Computers in Biology and
Medicine 36(3), 291–302 (2006)
40. Lal, T.N., Schroder, M., Hinterberger, T., Weston, J., Bogdan, M., Birbaumer, N.,
Scholkopf, B.: Support vector channel selection in BCI. Biomedical Engineering,
IEEE Transactions on 51(6), 1003–1010 (2004)
41. Lang, P., Bradley, M., Cuthbert, B.: International affective picture system (IAPS):
Affective ratings of pictures and instruction manual. technical report a-8. Tech.
rep., University of Florida, Gainesville, FL. (2008)
42. Lin, Y.P., Wang, C.H., Jung, T.P., Wu, T.L., Jeng, S.K., Duann, J.R., Chen,
J.H.: EEG-based emotion recognition in music listening. IEEE Transactions on
Biomedical Engineering 57(7), 1798–1806 (2010)
43. Liu, Y., Sourina, O., Nguyen, M.K.: Real-time EEG-based human emotion recog-
nition and visualization. In: Proc. 2010 Int. Conf. on Cyberworlds. pp. 262–269.
Singapore (2010)
44. Liu, Y., Sourina, O., Nguyen, M.K.: Real-time EEG-based emotion recognition and
its applications. Transactions on Computational Science XII, LNCS 6670, 256–277
(2011)
45. Liu, Y., Sourina, O.: EEG-based emotion-adaptive advertising. In: Proc. ACII
2013. pp. 843–848. Geneva (2013)
46. Liu, Y., Sourina, O.: EEG databases for emotion recognition. In: Proc. 2013 Int.
Conf. on Cyberworlds. Japan (2013)
47. Liu, Y., Sourina, O.: Real-time fractal-based valence level recognition from EEG.
In: Transactions on Computational Science XVIII, pp. 101–120. Springer (2013)
48. Lutzenberger, W., Elbert, T., Birbaumer, N., Ray, W.J., Schupp, H.: The scalp
distribution of the fractal dimension of the EEG and its variation with mental
tasks. Brain Topography 5(1), 27–34 (1992)
49. Maragos, P., Sun, F.K.: Measuring the fractal dimension of signals: morphological
covers and iterative optimization. IEEE Transactions on Signal Processing 41(1),
108–121 (1993)
50. Mauss, I.B., Robinson, M.D.: Measures of emotion: A review. Cognition and Emo-
tion 23(2), 209–237 (2009)
51. Mehrabian, A.: Framework for a comprehensive description and measurement of
emotional states. Genetic, social, and general psychology monographs 121(3), 339–
361 (1995)
52. Mehrabian, A.: Pleasure-arousal-dominance: A general framework for describing
and measuring individual differences in temperament. Current Psychology 14(4),
261–292 (1996)
53. Noble, W.S.: What is a support vector machine? Nat Biotech 24(12), 1565–1567
(2006)
54. O’Regan, S., Faul, S., Marnane, W.: Automatic detection of EEG artefacts arising
from head movements. In: Engineering in Medicine and Biology Society (EMBC),
2010 Annual International Conference of the IEEE. pp. 6353–6356 (2010)
55. Petrantonakis, P.C., Hadjileontiadis, L.J.: Emotion recognition from EEG us-
ing higher order crossings. IEEE Transactions on Information Technology in
Biomedicine 14(2), 186–197 (2010)
56. Petrantonakis, P.C., Hadjileontiadis, L.J.: Adaptive emotional information retrieval from EEG signals in the time-frequency domain. IEEE Transactions on Signal Processing 60(5), 2604–2616 (2012)
57. Picard, R.W., Vyzas, E., Healey, J.: Toward machine emotional intelligence: Anal-
ysis of affective physiological state. IEEE Transactions on Pattern Analysis and
Machine Intelligence 23(10), 1175–1191 (2001)
58. Pradhan, N., Narayana Dutt, D.: Use of running fractal dimension for the analysis
of changing patterns in electroencephalograms. Computers in Biology and Medicine
23(5), 381–388 (1993)
59. Ranky, G.N., Adamovich, S.: Analysis of a commercial EEG device for the control
of a robot arm. In: Bioengineering Conference, Proceedings of the 2010 IEEE 36th
Annual Northeast. pp. 1–2 (2010)
60. Russell, J.A.: Affective space is bipolar. Journal of Personality and Social Psychol-
ogy 37(3), 345–356 (1979)
61. Schaaff, K., Schultz, T.: Towards emotion recognition from electroencephalographic
signals. In: Affective Computing and Intelligent Interaction and Workshops, 2009.
ACII 2009. 3rd International Conference on. pp. 1–6 (2009)
62. Soleymani, M., Pantic, M., Pun, T.: Multimodal emotion recognition in response
to videos. IEEE Transactions on Affective Computing 3(2), 211–223 (2012)
63. Sourina, O., Kulish, V.V., Sourin, A.: Novel tools for quantification of brain re-
sponses to music stimuli. In: Proc of 13th International Conference on Biomedical
Engineering ICBME 2008. pp. 411–414 (2008)
64. Sourina, O., Liu, Y.: A fractal-based algorithm of emotion recognition from EEG
using arousal-valence model. In: BIOSIGNALS. pp. 209–214 (2011)
65. Sourina, O., Liu, Y., Nguyen, M.K.: Real-time EEG-based emotion recognition for
music therapy. Journal on Multimodal User Interfaces 5(1-2), 27–35 (2012)
66. Sourina, O., Sourin, A., Kulish, V.: EEG data driven animation and its application.
In: Computer Vision/Computer Graphics CollaborationTechniques. pp. 380–388.
Springer (2009)
67. Stam, C.J.: Nonlinear dynamical analysis of EEG and MEG: Review of an emerging
field. Clinical Neurophysiology 116(10), 2266–2301 (2005)
68. Stytsenko, K., Jablonskis, E., Prahm, C.: Evaluation of consumer EEG device Emo-
tiv EPOC. In: Poster session presented at MEi:CogSci Conference 2011. Ljubljana
(2011)
69. Szily, E., Kéri, S.: Emotion-related brain regions. Ideggyógyászati szemle 61(3-4),
77 (2008)
70. Takahashi, K.: Remarks on emotion recognition from multi-modal bio-potential
signals. In: Industrial Technology, 2004 IEEE International Conference on. vol. 3,
pp. 1138–1143 (2004)
71. Vecchiato, G., Toppi, J., Astolfi, L., De Vico Fallani, F., Cincotti, F., Mattia,
D., Bez, F., Babiloni, F.: Spectral EEG frontal asymmetries correlate with the
experienced pleasantness of tv commercial advertisements. Medical and Biological
Engineering and Computing 49(5), 579–583 (2011)
72. Wang, Q., Sourina, O., Nguyen, M.K.: EEG-based ”serious” games design for med-
ical applications. In: Proc. 2010 Int. Conf. on Cyberworlds. pp. 270–276. Singapore
(2010)
73. Wang, Q., Sourina, O., Nguyen, M.: Fractal dimension based neurofeedback in
serious games. The Visual Computer 27(4), 299–309 (2011)
74. Zhang, Q., Lee, M.: Analysis of positive and negative emotions in natural scene
using brain activity and gist. Neurocomputing 72(4-6), 1302–1306 (2009)
