1 Introduction
Recently, EEG devices have become wireless, more portable, wearable, and easier
to use, so more research can be done on real-time emotion recognition algo-
rithms. Emotion recognition algorithms can be subject-dependent or subject-
independent. Subject-dependent algorithms have better accuracy than subject-
independent ones, but a system training session has to be designed and
implemented for each individual user in the subject-dependent case.
In this paper, we proposed and implemented a real-time subject-dependent
algorithm based on the Valence-Arousal-Dominance (VAD) emotion model. A
combination of features including Fractal Dimension (FD) was used because
FD values reflect nonlinearity of EEG signals. Fractal Dimension analysis is a
suitable approach for analyzing nonlinear systems and can be used in real-time
EEG signal processing [4, 72]. Early works show that Fractal Dimension can
reflect changes in EEG signals [58] and that it varies across different
mental tasks [48]. In [63, 66], music was used as a stimulus to elicit emotions, and
Fractal Dimension was applied for the analysis of the EEG signal. In [5], it was
demonstrated that the difference between positive and negative emotions can be
2 Background
The most widely used approach to represent emotion is the bipolar model with
valence and arousal dimensions proposed by Russell [60]. In this model, valence
dimension ranges from “negative” to “positive”, and arousal dimension ranges
from “not aroused” to “excited”. The 2-Dimensional (2D) model can locate the
discrete emotion labels in its space [50], and it can even define emotions that
have no discrete labels. However, fear and anger cannot be distinguished in
the 2D model, as they both have the same high arousal and negative valence
values.
In order to get a comprehensive description of emotions, Mehrabian and Rus-
sell proposed 3-Dimensional (3D) Pleasure (Valence)-Arousal-Dominance (PAD)
model in [51] and [52]. The “pleasure-displeasure” dimension of the model
corresponds to the valence dimension mentioned above, evaluating the pleasure
level of the emotion. The “arousal-non-arousal” dimension is equivalent to the
arousal dimension, referring to the alertness of an emotion. The “dominance-
submissiveness” dimension is a newly extended dimension, also called the
control dimension of emotion [51, 52]. It ranges from a feeling of being in
control during an emotional experience to a feeling of being controlled by the
emotion [10], and it makes the dimensional model more complete. With the
dominance dimension, more emotion labels can be located in the 3D space. For
example, happiness and surprise are both emotions with positive valence and
high arousal, but they can be differentiated by their dominance level, since
happiness comes with high dominance whereas surprise comes with low
dominance [51].
In our work, we use the 3-dimensional emotion classification model.
the number of channels used should be minimized. If more electrodes are used,
the comfort level of the user who wears the device decreases as well. Thus, our
main objective is to propose an algorithm performing with adequate accuracy
in real-time applications.
3 Method
where C is the number of classes, p is the channel index, μ_ip is the mean value
of the feature from the p-th channel for the i-th class, and σ_ip is the standard
deviation of the feature from the p-th channel for the i-th class [34].
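As an illustration, a minimal Python sketch of FDR-based channel ranking is given below. The paper's exact FDR equation is not reproduced in this excerpt, so the pairwise multiclass form used here, as well as the array names, should be treated as assumptions.

```python
import numpy as np

def fdr_score(values, labels):
    """FDR of one channel's feature values (1-D array) given class labels.

    Uses the standard pairwise multiclass Fisher ratio, an assumption
    since the paper's exact equation is not shown in this excerpt.
    """
    classes = np.unique(labels)
    score = 0.0
    for a in range(len(classes)):
        for b in range(a + 1, len(classes)):
            xa = values[labels == classes[a]]
            xb = values[labels == classes[b]]
            score += (xa.mean() - xb.mean()) ** 2 / (xa.var() + xb.var() + 1e-12)
    return score

# rank channels by FDR score (features: windows x channels, labels: per window)
# ranking = np.argsort([fdr_score(features[:, p], labels)
#                       for p in range(features.shape[1])])[::-1]
```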
Although the EEG signal is nonlinear [38, 39], little has been done to investi-
gate its nonlinear nature in emotion recognition research. Linear analyses such
as the Fourier Transform preserve only the power spectrum of the signal and
destroy the spike-wave structure [67].
In this work, we proposed to use the Fractal Dimension feature in combination
with statistical features [57] and Higher Order Crossings (HOC) [32, 55] to improve
emotion recognition accuracy. Statistical and HOC features were used as they
gave the highest emotion recognition accuracy, as described in [55, 57, 70].
In this work, the Higuchi algorithm [25] was proposed for the FD value
calculation, as it gave better accuracy than other FD algorithms, as shown
in [73]. Details of these algorithms are given below.
Statistical Features
1. The means of the raw signals

\[
\mu_X = \frac{1}{N} \sum_{n=1}^{N} X(n) . \tag{2}
\]

2. The standard deviations of the raw signals

\[
\sigma_X = \sqrt{ \frac{1}{N} \sum_{n=1}^{N} \left( X(n) - \mu_X \right)^2 } . \tag{3}
\]

3. The means of the absolute values of the first differences of the raw signals

\[
\delta_X = \frac{1}{N-1} \sum_{n=1}^{N-1} \left| X(n+1) - X(n) \right| . \tag{4}
\]

4. The means of the absolute values of the first differences of the normalized signals

\[
\bar{\delta}_X = \frac{1}{N-1} \sum_{n=1}^{N-1} \left| \bar{X}(n+1) - \bar{X}(n) \right| = \frac{\delta_X}{\sigma_X} . \tag{5}
\]

5. The means of the absolute values of the second differences of the raw signals

\[
\gamma_X = \frac{1}{N-2} \sum_{n=1}^{N-2} \left| X(n+2) - X(n) \right| . \tag{6}
\]

6. The means of the absolute values of the second differences of the normalized signals

\[
\bar{\gamma}_X = \frac{1}{N-2} \sum_{n=1}^{N-2} \left| \bar{X}(n+2) - \bar{X}(n) \right| = \frac{\gamma_X}{\sigma_X} . \tag{7}
\]

Here X̄(n) = (X(n) − μ_X)/σ_X denotes the normalized signal.
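For concreteness, a minimal Python sketch of these six features is given below; the function name and the assumption that one windowed channel signal is passed at a time are illustrative.

```python
import numpy as np

def statistical_features(x):
    """Six statistical features of one EEG channel window [57].

    Items 1-6 above: mean, standard deviation, and the means of the
    absolute first/second differences of the raw and normalized signal.
    """
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    delta = np.abs(np.diff(x)).mean()          # (4): first differences
    delta_bar = delta / sigma                  # (5): equals delta / sigma
    gamma = np.abs(x[2:] - x[:-2]).mean()      # (6): second differences
    gamma_bar = gamma / sigma                  # (7): equals gamma / sigma
    return np.array([mu, sigma, delta, delta_bar, gamma, gamma_bar])
```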
is used, and

\[
D_k = \sum_{n=2}^{N} \left[ Z_n(k) - Z_{n-1}(k) \right]^2 . \tag{11}
\]
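A minimal sketch of the HOC feature computation is given below. The construction of Z_n(k) is not reproduced in this excerpt; following [32, 55], it is assumed here to be the 0/1 sign indicator of the (k−1)-times backward-differenced, zero-mean signal, so that (11) counts its sign changes.

```python
import numpy as np

def hoc_features(x, k_max=36):
    """Higher Order Crossings D_1..D_{k_max} for one zero-mean window.

    Z_n(k) is taken as the 0/1 sign indicator of the (k-1)-times
    differenced signal (an assumption; see [32, 55]), and D_k of
    equation (11) counts its sign changes.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                            # HOC assumes a zero-mean series
    d = []
    for _ in range(k_max):
        z = (x >= 0).astype(int)                # Z_n(k)
        d.append(int(np.sum(np.diff(z) ** 2)))  # (11): number of sign changes
        x = np.diff(x)                          # apply the difference operator
    return np.array(d)
```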
where m = 1, 2, ..., t is the initial time and t is the interval time [25].
For example, if t = 3 and N = 100, the newly constructed time series are:
X_3^1 : X(1), X(4), ..., X(100); X_3^2 : X(2), X(5), ..., X(98);
X_3^3 : X(3), X(6), ..., X(99).
t sets of L_m(t) are calculated by

\[
L_m(t) = \frac{1}{t} \left\{ \left( \sum_{i=1}^{\left\lfloor \frac{N-m}{t} \right\rfloor} \left| X(m+it) - X\bigl(m+(i-1)t\bigr) \right| \right) \frac{N-1}{\left\lfloor \frac{N-m}{t} \right\rfloor t} \right\} . \tag{14}
\]

⟨L(t)⟩ denotes the average value of L_m(t) over the t sets, and one relationship exists:

\[
\langle L(t) \rangle \propto t^{-\dim_H}, \tag{15}
\]

which gives

\[
\dim_H = \frac{\ln \langle L(t) \rangle}{-\ln t} . \tag{16}
\]
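The following Python sketch implements the Higuchi procedure of (14)-(16). In line with common practice, dim_H is estimated from a least-squares fit of ln⟨L(t)⟩ against ln t over t = 1, ..., t_max rather than from a single interval time; this fitting choice is an assumption of the sketch.

```python
import numpy as np

def higuchi_fd(x, t_max=32):
    """Higuchi fractal dimension [25] of a finite window of samples."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_t, log_l = [], []
    for t in range(1, t_max + 1):
        lengths = []
        for m in range(1, t + 1):                  # initial times m = 1..t
            k = (n - m) // t                       # floor((N - m) / t)
            idx = m - 1 + np.arange(k + 1) * t     # X(m), X(m+t), ... (0-based)
            dist = np.abs(np.diff(x[idx])).sum()
            lengths.append(dist * (n - 1) / (k * t) / t)   # equation (14)
        log_t.append(np.log(t))
        log_l.append(np.log(np.mean(lengths)))     # ln <L(t)>
    slope, _ = np.polyfit(log_t, log_l, 1)         # <L(t)> ~ t^(-dim_H), (15)
    return -slope                                  # least-squares form of (16)
```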
Thus, the feature vector is composed as
The goal of the SVM method is to find a hyperplane in a high-dimensional space
that can be used for classification [18]. SVM is a powerful classifier. It projects
low-dimensional features into a higher dimension using kernel functions, which
makes linearly inseparable cases solvable [53]. Different types of kernel functions
are used in implemented classifiers. The polynomial kernel used in our work is
defined as follows [17]:

\[
K(x_i, x_j) = \left( \gamma \, x_i^{T} x_j + \mathrm{coef} \right)^{d} . \tag{18}
\]
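For reference, a sketch of this classifier setup is given below. scikit-learn's SVC wraps LIBSVM [17], and the parameter values follow those reported in Section 5 (gamma = 1, coef = 1, d = 5); X_train, y_train, and X_test are placeholders for the extracted feature vectors and labels.

```python
from sklearn.svm import SVC  # scikit-learn's SVC wraps LIBSVM [17]

# Polynomial kernel of equation (18): K(x_i, x_j) = (gamma * x_i . x_j + coef)^d
clf = SVC(kernel="poly", gamma=1.0, coef0=1.0, degree=5)
# clf.fit(X_train, y_train)            # train on extracted feature vectors
# predictions = clf.predict(X_test)    # predict emotion labels
```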
4 Experiment
We designed and carried out two experiments with audio and visual external
stimuli to collect EEG data based on the Valence-Arousal-Dominance emotion
model. The obtained EEG data with different emotional labels were used to test
the proposed algorithm.
4.1 Stimuli
4.2 Subjects
In Experiment 1, a total of 14 subjects (9 females and 5 males) participated
in the experiment. In Experiment 2, a total of 16 subjects (9 females and
7 males) participated. All of them were university students and staff aged
from 20 to 35 years, without auditory deficits or any history of mental illness.
4.3 Procedure
After a participant was invited to a project room, the experiment protocol and
the usage of a self-assessment questionnaire were explained to him/her. The sub-
jects needed to complete the questionnaire after the exposure to the audio/visual
stimuli. The Self-Assessment Manikin (SAM) technique [12] was employed,
which uses the 3D model with valence, arousal, and dominance dimensions and
nine levels indicating the intensity in each dimension. In the questionnaire, the
subjects were also asked to describe their feelings in any words, including
emotions like happy, surprised, satisfied, protected, angry, frightened,
unconcerned, sad, or any other emotions they felt. The experiments were done
with one subject at a time. The audio experiment was conducted following the standard proce-
dure for emotion induction with audio stimuli [22, 42]. Therefore, in Experiment
1, the participants were asked to close their eyes to avoid artifacts and to be
focused on hearing. In Experiment 2, the subjects were asked to avoid making
movement except working on the keyboard.
The design of each session in Experiment 1 is as follows.
1. A silent period is given to the participant to calm down (12 seconds).
2. The subject is exposed to the sound stimuli (5 clips × 6 seconds/clip = 30
seconds).
3. The subject completes the self-assessment questionnaire.
In summary, each session lasted 42 seconds plus the self-assessment time.
The construction of each session in Experiment 2 is as follows.
1. A black screen is shown to the participant (3 seconds).
2. A white cross in the center of the screen is given to inform the subject that
visual stimuli will be shown (4 seconds).
3. The subject is exposed to the pictures (4 pictures × 10 seconds/picture = 40
seconds).
4. The black screen is shown to the participant again (3 seconds).
5. The subject completes the self-assessment questionnaire.
In summary, each session lasted 50 seconds plus the self-assessment time.
5 Implementation
5.1 Fractal Features
In this work, the FD values were proposed to be used as features to improve the
accuracy of emotion recognition from EEG signals. To calculate one FD value per
finite set of time series samples, the Higuchi algorithm described in Section 3.2
was implemented and validated on the standard mono fractal signal generated by
the Weierstrass function where the theoretical FD values were known in advance
[49]. The size of the finite set N defines the size of the window in our emotion
recognition algorithm. Fig. 1 shows the result of the calculation of FD values of
the signals generated by the Weierstrass function with different window sizes. As
seen from the graph, the FD values calculated with a window size of 512 samples
are closer to the theoretical values. Thus, in our algorithm, the window size of
512 samples was chosen. For each window size N, we used different t_max values
ranging from 8 to 64 in (15)-(16) to compute the FD values. With N = 512, the
value of t_max was set to 32 since it has the lowest
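A sketch of this validation step might look as follows, reusing the higuchi_fd function sketched in Section 3. The Weierstrass cosine function has a known theoretical FD of 2 − H [49]; the parameters lam and n_terms are illustrative assumptions, as the paper does not state the exact values used.

```python
import numpy as np

def weierstrass(n=512, H=0.5, lam=1.5, n_terms=30):
    """Weierstrass cosine function with theoretical FD = 2 - H [49]."""
    t = np.linspace(0.0, 1.0, n)
    return sum(lam ** (-k * H) * np.cos(2.0 * np.pi * lam ** k * t)
               for k in range(n_terms))

# compare Higuchi estimates (512-sample window, t_max = 32, as chosen above)
# with the theoretical values for several H
for H in (0.3, 0.5, 0.7):
    w = weierstrass(H=H)
    print(f"theoretical FD = {2 - H:.2f}, estimated FD = {higuchi_fd(w):.2f}")
```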
The classical FDR method [34] was applied for the channel selection [7, 19, 40].
A non-overlapping sliding window of 512 samples was used for the FD feature
calculation, and the channel ranking was computed for the 32 subjects of the
DEAP database. In the DEAP database there are 40 experimental trials
labelled with arousal, valence, and dominance ratings; in our case (recognition
of 8 emotions), one trial was selected per emotion for every subject. As a
result, EEG data from up to 8 trials labeled with the corresponding emotions
(PLL, PLH, PHL, PHH, NLL, NLH, NHL, and NHH) were selected for the
following processing. If a subject had more than one trial labeled with the
same emotion, the trial with the most extreme ratings was selected. For
example, for the state Arousal > 5, Valence > 5, Dominance > 5, a trial with
arousal rating 9, valence rating 8, and dominance rating 8 would be used
instead of a trial with arousal rating 6, valence rating 6, and dominance
rating 7. EEG data collected during the first 53 seconds of video playback
were used to calculate FD values in each session. The mean FDR scores for
each channel were computed across all 32 subjects. Using the data provided
by the DEAP database gives us more subjects and hence a more general
channel ranking pattern. The final channel rank is FC5, F4, F7,
AF3, CP6, T7, C3, FC6, P4, Fp2, F8, P3, CP5, O1, F3, P8, CP2, CP1, P7, Fp1,
PO4, O2, Pz, Oz, T8, FC2, Fz, AF4, PO3, Cz, C4, FC1. The ranking of each
channel is visualized in Fig. 3 using EEGLAB [20]. The visualization is based on
the mean FDR scores across different subjects for each channel. Using the
channel rank to show spatial patterns is a standard approach: for example, [7]
visualizes the weights of the spatial filters obtained from the sparse common
spatial pattern (SCSP) algorithm, and [40] uses the channel rank score to show
the brain regions activated during motor imagination tasks. The values were scaled
for better visualization. From the figure, it can be seen that the frontal lobe is
the most active because the most discriminant channels belong to the frontal
lobe. Previous research has confirmed that the orbitofrontal cortex, anterior
cingulate cortex, amygdala, and insula are highly involved in emotion processing
[69]; for example, negative emotions increase amygdala activation [14]. However,
subcortical structures such as the amygdala cannot be detected directly by EEG
signals, which are recorded from the scalp. The amygdala connects and interacts
with the frontal cortex, and emotions are experienced as a result [11, 31]. The
visualization in Fig. 3 complies with the above-mentioned findings about the
importance of the frontal lobe in emotion processing. Then, we followed the
final channel rank to calculate the classification accuracy and to choose the
number of channels for our algorithm. The Support Vector
Machine classifier described in Section 3.3, implemented by LIBSVM [17] with
polynomial kernel for multiclass classification was used to compute the accu-
racy of emotion recognition using a different number of channels following the
rank. 5-fold cross-validation of the data was applied: first, the raw data were
partitioned into 5 non-overlapping sets, and then features were extracted from
each set. During the classification phase, 4 sets were used for training, and
1 set was used as validation data for testing. The process was run 5 times, so
that every set was used as the testing data once. The mean accuracy of the
classification over the 5 runs was taken as the final estimate of the
classification accuracy. Cross-validation allows us to avoid the problem of
overfitting [28].
The parameters of the SVM classifier were set according to [55], where high
accuracy of emotion classification was achieved with different feature types, as
follows: the value of gamma in (18) was set to 1, coef was set to 1, and the
order d was set to 5. A grid-search approach was also applied to select the
SVM kernel parameters, as done in [70] and [74]. As shown in Section 5.2,
the FC5, F4, F7, and AF3
channels were chosen for the algorithm implementation. The feature vector FV
for emotion classification is defined as follows:

\[
FV = [FV_1, FV_2, FV_3, FV_4], \tag{19}
\]

where 1 denotes the FC5 channel, 2 denotes the F4 channel, 3 denotes the F7
channel, 4 denotes the AF3 channel, and FV_i is the feature vector per channel.
Here, FV_i is composed of the statistical features given in (8), the HOC features
given in (12), the FD features given in (17), or the combinations of different
features FV_combination1 and FV_combination2 given below in (20) and (21).
Normalization is applied to the FD, statistical, and HOC features across the
four channels in (19).
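A possible reading of this assembly step is sketched below, reusing the feature functions sketched in Section 3. The per-feature-type min-max normalization across the four channels is an assumption, since the exact combinations (20)-(21) are not reproduced in this excerpt.

```python
import numpy as np

def build_feature_vector(windows_by_channel):
    """Assemble FV of equation (19) from the four chosen channels.

    `windows_by_channel` maps channel names to one 512-sample window;
    the normalization scheme here is an illustrative assumption.
    """
    channels = ["FC5", "F4", "F7", "AF3"]
    stat = np.array([statistical_features(windows_by_channel[c]) for c in channels])
    hoc = np.array([hoc_features(windows_by_channel[c]) for c in channels])
    fd = np.array([[higuchi_fd(windows_by_channel[c])] for c in channels])

    def norm(block):
        # scale each feature across the four channels to [0, 1]
        rng = block.max(axis=0) - block.min(axis=0)
        return (block - block.min(axis=0)) / np.where(rng == 0, 1, rng)

    stat, hoc, fd = norm(stat), norm(hoc), norm(fd)
    # concatenate FV_1..FV_4, one block of mixed features per channel
    return np.concatenate([np.hstack([stat[i], hoc[i], fd[i]])
                           for i in range(len(channels))])
```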
16, 19, and 20. For each subject, EEG data from one trial labeled with one of
the eight emotions were used in the following processing. In Experiment 1, we
obtained EEG data labeled with 5 emotions from 1 subject, with 3 emotions
from 4 subjects, and with 2 emotions from 6 subjects. In Experiment 2, we
obtained EEG data labeled with 6 emotions from 2 subjects, with 5 emotions
from 1 subject, with 4 emotions from 2 subjects, with 3 emotions from
4 subjects, and with 2 emotions from 5 subjects.
Fractal Dimension analysis can be used to quantify the nonlinear properties
of EEG signals [4]. In this algorithm, we propose to combine the FD feature
with the other best-performing features to improve the classification
performance of emotion recognition. For some subjects, using just 1 FD feature
alone gives better accuracy than using 6 statistical features or 36 HOC
features. For example, as shown in Fig. 6, the FD feature outperforms the
other two feature types in the recognition of NHL vs. PLL, NHL vs. NLH,
NHL vs. NLL, PHH vs. NLH, and PLL vs. NLH for Subject 10, and of NHL vs.
PHL, NHH vs. PHH, and NHH vs. PHL for Subject 19 in the DEAP database.
each subject per emotion. Finally, the scaled mean FD values from step 2 are
averaged across all 6 subjects per emotion and visualized on the brain map. As
can be seen from Fig. 7, different emotions have different spatial patterns,
and the frontal lobe is always active (in red or yellow). Higher FD values of
EEG reflect higher activity of the brain, and the FD value can be used to
differentiate the valence dimension in the Valence-Arousal-Dominance model
[47]. It can be seen from Fig. 7 that in negative emotions such as (a)
frightened, (b) angry, (g) unconcerned, and (h) sad, the spatial pattern shows
the right hemisphere more active than the left one, and the right hemisphere
is more active in (a) frightened and (b) angry than in (g) unconcerned and
(h) sad. In positive emotions such as (c) happy, (d) surprise, (e) protected,
and (f) satisfied, the spatial pattern shows the left hemisphere more active
than the right one, and the left hemisphere is more active in (c) happy and
(d) surprise than in (e) protected and (f) satisfied.
Fig. 7: The visualization of FD pattern for 6 subjects from DEAP with 8 emo-
tions: (a) frightened, (b) angry, (c) happy, (d) surprise, (e) protected, (f) satisfied,
(g) unconcerned, and (h) sad.
for each subject was calculated, and the mean accuracy over 6 subjects is given
in the table. As expected, the classification accuracy increases when the number
of emotions recognized is reduced. A one-way ANOVA was performed on the
results of the recognition of 4 emotions, comparing the accuracy obtained using
the combination of HOC, 6 statistical, and FD features with that obtained
using the other features. As shown in Table 6, the statistical results showed
that the proposed combined features including the Fractal Dimension feature
(HOC, 6 statistical, and 1 FD) are statistically superior to using solely
HOC (p=6.8926e-071) or 6 statistical features (p=0.0056). As can be seen from
Table 5, using the combination of HOC, 6 statistical, and 1 FD features gives
slightly higher accuracy than using the combination of 6 statistical and 1 FD;
however, no significant difference was found between these two combined
feature sets (p=0.42). Thus, both combinations of features could be used.
Table 6: F-values and p-values of the ANOVA tests applied on the accuracy of
the proposed combined features (HOC, 6 statistical, FD) and the other features.

Feature                F-value   p-value
Statistical features   7.72      <0.01
HOC                    385.4     <0.01
6 statistical, FD      0.44      0.42
We also validated our algorithm on the data from our own databases
(Experiments 1 and 2). The results are presented in Tables 7 and 8,
respectively. The accuracy for fewer emotional states is the mean across all
subjects who have data labeled with the corresponding number of emotions. For
example, the accuracy for 2-emotion recognition in Table 7 is the average
across all 11 subjects in Experiment 1 with their corresponding 2-emotion
recognition results. The results in Tables 7 and 8 also support our conclusion
that the combination of HOC, 6 statistical, and 1 FD features, or of
6 statistical features with 1 FD feature, is the optimal choice for real-time
applications. The algorithm accuracy improves from 68.85% to 87.02% or 86.17%
in Experiment 1 and from 63.71% to 76.53% or 76.09% in Experiment 2 when
combinations of HOC, 6 statistical and
7 Conclusion
References
1. Biosemi, http://www.biosemi.com
2. Emotiv, http://www.emotiv.com
3. American Electroencephalographic Society guidelines for standard electrode posi-
tion nomenclature. Journal of Clinical Neurophysiology 8(2), 200–202 (1991)
4. Accardo, A., Affinito, M., Carrozzi, M., Bouquet, F.: Use of the fractal dimen-
sion for the analysis of electroencephalographic time series. Biological Cybernetics
77(5), 339–350 (1997)
5. Aftanas, L.I., Lotova, N.V., Koshkarov, V.I., Popov, S.A.: Non-linear dynamical
coupling between different brain areas during evoked emotions: An EEG investi-
gation. Biological Psychology 48(2), 121–138 (1998)
6. Anderson, E.W., Potter, K.C., Matzen, L.E., Shepherd, J.F., Preston, G.A., Silva,
C.T.: A user study of visualization effectiveness using EEG and cognitive load.
Computer Graphics Forum 30(3), 791–800 (2011)
7. Arvaneh, M., Cuntai, G., Kai Keng, A., Chai, Q.: Optimizing the channel selec-
tion and classification accuracy in EEG-based BCI. IEEE Transactions on
Biomedical Engineering 58(6), 1865–1873 (2011)
8. Aspiras, T.H., Asari, V.K.: Log power representation of EEG spectral bands for
the recognition of emotional states of mind. In: 8th International Conference on
Information, Communications and Signal Processing (ICICS) 2011. pp. 1–5 (2011)
9. Bechara, A., Damasio, H., Damasio, A.R.: Emotion, decision making and the or-
bitofrontal cortex. Cerebral Cortex 10(3), 295–307 (2000)
10. Bolls, P.D., Lang, A., Potter, R.F.: The effects of message valence and listener
arousal on attention, memory, and facial muscular responses to radio advertise-
ments. Communication Research 28(5), 627–651 (2001)
11. Bos, D.O.: EEG-based emotion recognition (2006), http://hmi.ewi.utwente.nl/
verslagen/capita-selecta/CS-Oude_Bos-Danny.pdf
12. Bradley, M.M.: Measuring emotion: The self-assessment manikin and the semantic
differential. Journal of Behavior Therapy and Experimental Psychiatry 25(1), 49–
59 (1994)
13. Bradley, M.M., Lang, P.J.: The international affective digitized sounds (2nd edi-
tion; IADS-2): Affective ratings of sounds and instruction manual. Tech. rep., Uni-
versity of Florida, Gainesville (2007)
14. Burgdorf, J., Panksepp, J.: The neurobiology of positive emotions. Neuroscience
& Biobehavioral Reviews 30(2), 173–187 (2006)
15. Cao, M., Fang, G., Ren, F.: EEG-based emotion recognition in Chinese emotional
words. In: Proceedings of CCIS 2011. pp. 452–456 (2011)
16. Chanel, G., Rebetez, C., Betrancourt, M., Pun, T.: Emotion assessment from phys-
iological signals for adaptation of game difficulty. IEEE Transactions on Systems,
Man, and Cybernetics Part A: Systems and Humans 41(6), 1052–1063 (2011)
17. Chang, C.C., Lin, C.J.: LIBSVM: A library for support vector machines (2001),
http://www.csie.ntu.edu.tw/~cjlin/libsvm
18. Cristianini, N., Shawe-Taylor, J.: An introduction to Support Vector Machines:
and other kernel-based learning methods. Cambridge University Press, New York
(2000)
19. D’Alessandro, M., Esteller, R., Vachtsevanos, G., Hinson, A., Echauz, J., Litt, B.:
Epileptic seizure prediction using hybrid feature selection over multiple intracranial
EEG electrode contacts: a report of four patients. IEEE Transactions on
Biomedical Engineering 50(5), 603–615 (2003)
20. Delorme, A., Makeig, S.: EEGLAB: An open source toolbox for analysis of single-
trial EEG dynamics including independent component analysis. Journal of Neu-
roscience Methods 134(1), 9–21 (2004)
21. Duvinage, M., Castermans, T., Dutoit, T., Petieau, M., Hoellinger, T., Saedeleer,
C.D., Seetharaman, K., Cheron, G.: A P300-based quantitative comparison be-
tween the emotiv epoc headset and a medical EEG device. In: Proceedings of
the 9th IASTED International Conference on Biomedical Engineering. pp. 37–42
(2012)
22. Gao, T., Wu, D., Huang, Y., Yao, D.: Detrended fluctuation analysis of the human
EEG during listening to emotional music. J Elect. Sci. Tech. Chin 5, 272–277 (2007)
23. Hadjidimitriou, S., Zacharakis, A., Doulgeris, P., Panoulas, K., Hadjileontiadis,
L., Panas, S.: Sensorimotor cortical response during motion reflecting audiovisual
stimulation: evidence from fractal EEG analysis. Medical and Biological Engineer-
ing and Computing 48(6), 561–572 (2010)
24. Hadjidimitriou, S.K., Zacharakis, A.I., Doulgeris, P.C., Panoulas, K.J., Hadjileon-
tiadis, L.J., Panas, S.M.: Revealing action representation processes in audio per-
ception using fractal EEG analysis. IEEE Transactions on Biomedical Engineering
58(4), 1120–1129 (2011)
25. Higuchi, T.: Approach to an irregular time series on the basis of the fractal theory.
Physica D: Nonlinear Phenomena 31(2), 277–283 (1988)
26. Hosseini, S.A., Khalilzadeh, M.A.: Emotional stress recognition system using EEG
and psychophysiological signals: Using new labelling process of EEG signals in emo-
tional stress state. In: Biomedical Engineering and Computer Science (ICBECS),
2010 International Conference on. pp. 1–6. IEEE (2010)
27. Hou, X., Sourina, O.: Emotion-enabled haptic-based serious game for post stroke
rehabilitation. In: Proceedings of VRST 2013. pp. 31–34 (2013)
28. Hsu, C.W., Chang, C.C., Lin, C.J.: A practical guide to support vector classifica-
tion. Tech. rep., National Taiwan University, Taipei (2003)
29. Huang, D., Guan, C., Kai Keng, A., Haihong, Z., Yaozhang, P.: Asymmetric spatial
pattern for EEG-based emotion detection. In: Neural Networks (IJCNN), The 2012
International Joint Conference on. pp. 1–7 (2012)
30. Jones, N.A., Fox, N.A.: Electroencephalogram asymmetry during emotionally
evocative films and its relation to positive and negative affectivity. Brain and Cog-
nition 20(2), 280–299 (1992)
31. Kandel, E.R., Schwartz, J.H., Jessell, T.M., et al.: Principles of neural science,
vol. 4. McGraw-Hill New York (2000)
32. Kedem, B.: Time Series Analysis by Higher Order Crossings. IEEE Press, New York
(1994)
33. Khosrowabadi, R., Wahab bin Abdul Rahman, A.: Classification of EEG correlates
on emotion using features from Gaussian mixtures of EEG spectrogram. In: Infor-
mation and Communication Technology for the Muslim World (ICT4M), 2010
International Conference on. pp. E102–E107. IEEE (2010)
34. Kil, D.H., Shin, F.B.: Pattern recognition and prediction with applications to signal
characterization. AIP series in modern acoustics and signal processing, AIP Press,
Woodbury, N.Y. (1996)
35. Koelstra, S., Muhl, C., Soleymani, M., Lee, J.S., Yazdani, A., Ebrahimi, T., Pun,
T., Nijholt, A., Patras, I.: DEAP: A database for emotion analysis using physio-
logical signals. IEEE Transactions on Affective Computing 3(1), 18–31 (2012)
36. Koelstra, S., Muhl, C., Soleymani, M., Lee, J.S., Yazdani, A., Ebrahimi, T., Pun,
T., Nijholt, A., Patras, I.: DEAP dataset (2012), http://www.eecs.qmul.ac.uk/
mmv/datasets/deap
37. Kringelbach, M.L.: The human orbitofrontal cortex: Linking reward to hedonic
experience. Nature Reviews Neuroscience 6(9), 691–702 (2005)
38. Kulish, V., Sourin, A., Sourina, O.: Analysis and visualization of human electroen-
cephalograms seen as fractal time series. Journal of Mechanics in Medicine and
Biology, World Scientific 26(2), 175–188 (2006)
39. Kulish, V., Sourin, A., Sourina, O.: Human electroencephalograms seen as fractal
time series: Mathematical analysis and visualization. Computers in Biology and
Medicine 36(3), 291–302 (2006)
40. Lal, T.N., Schroder, M., Hinterberger, T., Weston, J., Bogdan, M., Birbaumer, N.,
Scholkopf, B.: Support vector channel selection in BCI. IEEE Transactions on
Biomedical Engineering 51(6), 1003–1010 (2004)
41. Lang, P., Bradley, M., Cuthbert, B.: International affective picture system (IAPS):
Affective ratings of pictures and instruction manual. Technical Report A-8,
University of Florida, Gainesville, FL (2008)
42. Lin, Y.P., Wang, C.H., Jung, T.P., Wu, T.L., Jeng, S.K., Duann, J.R., Chen,
J.H.: EEG-based emotion recognition in music listening. IEEE Transactions on
Biomedical Engineering 57(7), 1798–1806 (2010)
43. Liu, Y., Sourina, O., Nguyen, M.K.: Real-time EEG-based human emotion recog-
nition and visualization. In: Proc. 2010 Int. Conf. on Cyberworlds. pp. 262–269.
Singapore (2010)
44. Liu, Y., Sourina, O., Nguyen, M.K.: Real-time EEG-based emotion recognition and
its applications. Transactions on Computational Science XII, LNCS 6670, 256–277
(2011)
45. Liu, Y., Sourina, O.: EEG-based emotion-adaptive advertising. In: Proc. ACII
2013. pp. 843–848. Geneva (2013)
46. Liu, Y., Sourina, O.: EEG databases for emotion recognition. In: Proc. 2013 Int.
Conf. on Cyberworlds. Japan (2013)
47. Liu, Y., Sourina, O.: Real-time fractal-based valence level recognition from EEG.
In: Transactions on Computational Science XVIII, pp. 101–120. Springer (2013)
48. Lutzenberger, W., Elbert, T., Birbaumer, N., Ray, W.J., Schupp, H.: The scalp
distribution of the fractal dimension of the EEG and its variation with mental
tasks. Brain Topography 5(1), 27–34 (1992)
49. Maragos, P., Sun, F.K.: Measuring the fractal dimension of signals: morphological
covers and iterative optimization. IEEE Transactions on Signal Processing 41(1),
108–121 (1993)
50. Mauss, I.B., Robinson, M.D.: Measures of emotion: A review. Cognition and Emo-
tion 23(2), 209–237 (2009)
51. Mehrabian, A.: Framework for a comprehensive description and measurement of
emotional states. Genetic, social, and general psychology monographs 121(3), 339–
361 (1995)
52. Mehrabian, A.: Pleasure-arousal-dominance: A general framework for describing
and measuring individual differences in temperament. Current Psychology 14(4),
261–292 (1996)
53. Noble, W.S.: What is a support vector machine? Nat Biotech 24(12), 1565–1567
(2006)
54. O’Regan, S., Faul, S., Marnane, W.: Automatic detection of EEG artefacts arising
from head movements. In: Engineering in Medicine and Biology Society (EMBC),
2010 Annual International Conference of the IEEE. pp. 6353–6356 (2010)
55. Petrantonakis, P.C., Hadjileontiadis, L.J.: Emotion recognition from EEG us-
ing higher order crossings. IEEE Transactions on Information Technology in
Biomedicine 14(2), 186–197 (2010)