Consumer Grade Brain Sensing for Emotion Recognition
Payongkit Lakhan, Nannapas Banluesombatkul, Vongsagon Changniam, Ratwade Dhithijaiyratn,
Pitshaporn Leelaarporn, Ekkarat Boonchieng, Supanida Hompoonsup and Theerawit Wilaiprasitporn

arXiv:1810.04582v4 [eess.SP] 9 Aug 2019

Abstract—For several decades, electroencephalography (EEG) has featured as one of the most commonly used tools in emotional state recognition via monitoring of distinctive brain activities. An array of datasets have been generated with the use of diverse emotion-eliciting stimuli, and the resulting brainwave responses have conventionally been captured with high-end EEG devices. However, the applicability of these devices is to some extent limited by practical constraints, and they may prove difficult to deploy in the highly mobile contexts omnipresent in everyday happenings. In this study, we evaluate the potential of OpenBCI to bridge this gap by first comparing its performance to research grade EEG systems, employing the same algorithms that were applied on benchmark datasets. Moreover, for the purpose of emotion classification, we propose a novel method to facilitate the selection of audio-visual stimuli of high/low valence and arousal. Our setup entailed recruiting 200 healthy volunteers of varying ages to identify the top 60 affective video clips from a total of 120 candidates through standardized self-assessment, genre tags, and unsupervised machine learning. An additional 43 participants were enrolled to watch the pre-selected clips, during which emotional EEG brainwaves and peripheral physiological signals were collected. These recordings were analyzed and the extracted features fed into a classification model to predict whether the elicited signals were associated with a high or low level of valence and arousal. As it turned out, our prediction accuracies were decidedly comparable to those of previous studies that utilized more costly EEG amplifiers for data acquisition.

Index Terms—Consumer grade EEG, Low-cost EEG, OpenBCI, Emotion recognition, Affective computing

This work was supported by the Robotics AI and Intelligent Solution Project, PTT Public Company Limited, the Thailand Research Fund and the Office of Higher Education Commission under Grant MRG6180028, and the Junior Science Talent Project, NSTDA, Thailand.
P. Lakhan, N. Banluesombatkul, P. Leelaarporn and T. Wilaiprasitporn are members of the Bio-inspired Robotics and Neural Engineering Lab at the School of Information Science and Technology, Vidyasirimedhi Institute of Science & Technology, Rayong, Thailand (corresponding author: theerawit.w at vistec.ac.th).
V. Changniam is with the Department of Tool and Materials Engineering, King Mongkut's University of Technology Thonburi, Bangkok, Thailand.
R. Dhithijaiyratn is with the Department of Electrical Engineering, Chulalongkorn University, Bangkok, Thailand.
E. Boonchieng is with the Center of Excellence in Community Health Informatics, Chiang Mai University, Chiang Mai, Thailand.
S. Hompoonsup is with the Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok, Thailand.

I. INTRODUCTION

EMOTION plays an integral role in human social interactions and is typically evoked as a result of psychological or physiological responses to external stimuli. Due to their distinctly personal nature, emotions or affective states are traditionally assessed by psychologists through self-report means and can be classified into well-defined categories [1]. Over the years, emotion studies have broadened from psychophysiological focuses to engineering applications [2], and numerous tools and algorithms have been developed in an attempt to tackle the challenges of emotion recognition encountered by the latter. In particular, the 21st century brought a dramatic increase in the number of investigative efforts into brain activities during emotional processing. This is in part owing to the advent of a non-invasive technique known as electroencephalography (EEG). A standard EEG device consists of multiple electrodes that can capture both spatial and temporal information, very much like a video capture [3]. Other brain imaging techniques such as functional Magnetic Resonance Imaging (fMRI) might offer higher spatial resolution, but EEG devices come with a lower cost, higher temporal resolution, lighter weight, and simpler assembly. These attributes may enable researchers to monitor instantaneous changes in the brain as emotion is being elicited by extraneous influence.

Most emotion induction approaches make use of audio or visual stimuli to rouse emotions of varying arousal and valence levels on command. Apposite brainwave and physiological measures are gathered concurrently, and these multimodal recordings are then subjected to analysis by an emotion recognition algorithm to assign the most likely emotion label, followed by an evaluation of overall prediction accuracy. This prototypical workflow is routinely adopted in emotion recognition research. One such study applied an EEG-based feature extraction technique to classify six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) evoked during the viewing of pictures of facial expressions [4]. In a similar fashion, Lin and colleagues [5] extracted bio-signals using EEG and classified users' affective states while listening to music into four groups, namely anger, pleasure, sadness, and joy. Nie and colleagues [6], on the other hand, utilized combined audio and visual stimuli in the form of 12 movie clips and classified the trajectory of emotions into positive and negative values using the self-assessment manikin (SAM) and features extracted from EEG time waves. In lieu of external stimuli, Kothe and colleagues [7] relied on the power of imagination to induce emotions. In their study, the participants were asked to recall memories that matched the description of targeted emotions, including positive and negative valence, during which their EEG signals were being recorded. In most cases, there is considerable room for improvement when it comes to prediction accuracy; this is equally true for studies that deploy either external or internal emotion-eliciting stimuli.

Several EEG-based benchmark databases for emotion recognition have been generated by independent groups, a fair number of which are freely available to the public.
One of the first publicly accessible datasets, MAHNOB-HCI, was created to meet the growing demand from the scientific community for multifaceted information. The dataset contains EEG signals, physiological responses, facial expressions, and gaze data collected during participants' video viewing [8]. The affective clips were short sections cut from a collection of commercial films. The experimental design was built upon a preliminary study whereby online volunteers reported the emotions experienced during their watching of the clips. In the main study, the participants were instructed to submit a subjective rating on arousal, valence, dominance, and content predictability. Another dataset, called the Database for Emotion Analysis using Physiological Signals or DEAP, was similarly motivated; it contains brainwave recordings from EEG in combination with peripheral physiological signals [9]. The stimulus of choice was an assortment of music videos that were pre-selected based on the demographic background of the participants, most of whom were European. These music videos were retrieved from a website that collects songs and music videos tagged with different emotion descriptors by users. Each of the music videos received a score from the participants on arousal, valence, dominance, liking, and familiarity.

As opposed to the previously mentioned datasets, a multimodal dataset built for the decoding of user physiological responses to affective multimedia content, or DECAF, compared the signals obtained from EEG to the recordings from an ELEKTA Neuromag, a Magnetoencephalogram (MEG) sensor [10]. Video selection was reliant on the affective notations reported by volunteers prior to the study. In the data collection phase, participants were told to rate the music videos and movie clips in terms of arousal, valence, and dominance. Other types of measurable signals that correlated with implicit affective states were simultaneously monitored and collected with horizontal Electrooculogram (hEOG), Electrocardiogram (ECG), Near-Infra-Red (NIR) facial videos, and trapezius-Electromyogram (tEMG). Constructed in 2015, the SJTU Emotion EEG Dataset or SEED comprises EEG, ECG, EMG, EOG, and skin resistance (SC) data acquired from 15 Chinese students [11]. Fifteen emotion-charged clips were selected from a pool of popular locally produced Chinese films. The participants were directed to categorize the clips as either positive, neutral, or negative. However, the classification of emotions in this dataset did not include valence and arousal as labels. This study was an extension of an earlier study by the same research group where movie clips were scored on three attributes, i.e., valence, arousal, and dominance [6]. The most recent dataset is possibly DREAMER, created using consumer grade EEG and ECG. The EEG system used was an Emotiv EPOC, which is famous for its easy-to-wear design and wireless headset [12]. To gather the relevant data, the team instructed their recruits to fill in the SAM form to rate their levels of arousal, valence, and dominance. Short video clips were chosen as the means of emotion elicitation, and the emotions most likely to be invoked by these clips were predetermined.

The key objective of our investigation was to evaluate the usability of an open-source consumer grade EEG amplifier, OpenBCI, in an emotion recognition application. In particular, by adopting the same paradigm and classification algorithm as used by high-end EEG works, we were able to appraise the prediction accuracies of OpenBCI-derived data in a systematic manner against previous studies and found them to be more or less comparable [8], [9], [12]. We developed a classification model to predict whether the recorded EEG data had a high or low level of valence and arousal. The algorithm essentially imparts simple feature extraction (e.g., power spectral density (PSD)), and the classifier was constructed with a support vector machine (SVM) architecture. In sum, our classification results appeared on par if not better when compared to the results from benchmark datasets that were generated with high-end EEG devices. Our two principal contributions can be summarized as follows:
• We propose a robust stimulus selection method and validate a clustering approach for emotion labeling.
• We provide concrete evidence for the capability of consumer grade OpenBCI in emotion recognition studies by quality-assessing our performance accuracies against results from public repositories.
The remainder of this paper provides an overview of the rising presence of consumer grade EEG devices in emotion studies (Section II), methodology in Section III, results in Section IV, discussions in Section V, and conclusion in Section VI.

II. CONSUMER GRADE EEG FOR EMOTION RECOGNITION

According to our literature survey of recently published articles using "Low-Cost EEG Emotion" as a search phrase, the majority of studies used EEG headsets from EMOTIV EPOC+ [18], while others used EMOTIV INSIGHT, MYNDPLAY [19], NeuroSky [20], and MUSE [21]. Only a few studies on emotion recognition used OpenBCI [22]. Table I presents a comparison between a brand-new consumer grade EEG by the name of OpenBCI and other consumer grade EEGs on the market. Each product has been scientifically validated against high-end EEGs by measuring established brain responses. Three studies validated EPOC+ against EEG from ANT Neuro [23], Neuroscan [24], and g.tec [25]. The studies included the measurement of P300, Event-Related Potential (ERP), and emotion, respectively. Frontal EEGs recorded from NeuroSky and MUSE were validated against two baseline EEGs named B-Alert [26] and Enobio [27]. MUSE was also validated against the EEG from Brain Products GmbH [28] for ERP research. Recently, one research group reported the performance of OpenBCI, which is based on the Texas Instruments ADS1299 biopotential amplifier, using movement-related cortical potential measurement. In comparison to the product from Neuroscan [24], there were no statistically significant differences between the two makes.
TABLE I: Comparison of consumer grade EEG devices on the market.

Product Name Price [USD] Sampling Rate [Hz] No. of Channels Open Source Raw Data Scientific Validation
OpenBCI 750/1800 250 8/16 Yes Yes MRCPs [13]
EMOTIV EPOC+ 799 256 14 SDK Yes* P300 [14], ERP [15], emotion [12]
EMOTIV INSIGHT 299 128 5 SDK Yes* -
Myndplay Myndband 200 512 3 SDK* Yes -
NeuroSky MindWave Mobile 99 512 1 Yes Yes frontal EEG [16]
InteraXon MUSE 199 220 4 SDK Yes frontal EEG [16], ERP [17]

* At additional cost. SDK stands for Software Development Kit.

Research on emotion-related topics that adopts consumer grade EEG can be separated into two domains depending on the type of emotion-eliciting stimuli. The first domain consists of audio-visual stimulation (video clips). Two studies [12], [29] set out to construct emotion datasets for public release and to develop recognition algorithms for prospective applications. One research team integrated EMOTIV EPOC+ with a bracelet sensor [30] and an eye tracker [31] to produce a consumer grade emotion recognition system [32]. An evolutionary computation was then proposed as a competitive algorithm in emotion recognition, and EMOTIV INSIGHT was used for performance evaluation [33]. Furthermore, MUSE and certain other physiological datasets were used in another work involving boredom detection during video clip viewing [34]. Data collected from the MUSE headband were also shown to be able to predict affective states and their magnitudes [35]. As an example of its practical application, a real-time emotion recognition framework was recently proposed, using EMOTIV EPOC+ to record participants' brainwaves as they were being shown Chinese film excerpts [36]. The second domain involves auditory stimulation, most often music, as the means to invoke desired emotions. The use of EMOTIV EPOC+ was demonstrated for the automatic detection of brain responses to three types of music (neutral, joyful, and melancholic); EEG connectivity features were reported to play an essential role in the recognition tasks [37]. Another study, using the same device, observed a correlation of EEG with classical music rhythm [38], [39]. Music therapy is one of the applications emanating from music-elicited emotion [40]. Moreover, in the study of auditory-related emotion, one research group reported the development of envisioned speech recognition using EEG [41].

Motivated by the findings of the aforementioned research domains on emotion-related works, researchers from other fields have also used consumer grade EEG devices. Examples of such studies are provided in this paragraph to enable the reader to get a sense of the impact of consumer grade devices. Two papers have reviewed past literature on the study of human behavior using brain signals for consumer neuroscience and neuromarketing applications [42], [43]. In addition, a consumer-related research study showcased the effect of color priming on dress shopping, measured by EEG [44]. Another group reported the feasibility of using asymmetry in frontal EEG band power (especially the Alpha band) as a metric for user engagement while reading the news [45]. Band power asymmetry was also introduced as a study feature on the effects of meditation on emotional responses [46]. Some groups explored simultaneous EEG recordings from multiple subjects, such as the research on synchronized EEGs while students were studying in the same classroom [47] and similar research on the emotional engagement between a performer and multiple audiences [48]. Other research groups have studied longitudinal EEG data from individual subjects, such as an investigation involving 103 days of single channel EEG data on emotion recognition in daily life [49]. Nowadays, fundamental knowledge on emotion recognition or affective computing using brain wave activity has been applied in broad areas such as studies of people with depression [50], stress in construction workers [51], and interactions between the ambient temperature in a building and the occupant [52].

As previously mentioned, this paper describes a feasibility study of emotion recognition by OpenBCI. In recent years, there have been a few related works, one of which demonstrated the usefulness of OpenBCI as a biofeedback instrument (EEG, EMG) for an application titled IBPoet [53]. Three parties participated in the IBPoet demonstration: the reader, the readee, and the audience. Once the reader had relayed the selected poem, the readee sent biofeedback (emotional responses) to the reader via a vibration band and heat glove. The reader and the system adapted according to the emotional responses to give the audience a greater feeling of pleasure. Similarly, another research group used the same device to evaluate and practice oral presentation [54]. The other findings are related to typical research on emotion recognition, but the numbers of experimental subjects were very small (some had only several subjects) [55]–[58]. Moreover, EEG software for automated emotion recognition to support consumer grade devices, including OpenBCI, was proposed in late 2018 [59]. Thus, it could be inferred that the demand for OpenBCI is increasing. Therefore, this study focuses on the feasibility of using OpenBCI in emotion recognition research.

III. METHODOLOGY

This section begins with an experiment to select effective film trailers for emotion elicitation (Experiment I). Consumer grade EEG and peripheral physiological sensors are introduced in Experiment II. The selected trailers from Experiment I are used to invoke emotions, and the corresponding EEG and peripheral physiological signals are recorded and stored for subsequent analysis. All experiments fully abide by the 1975 Declaration of Helsinki (revised in 2000), and have been granted ethical approval by the Internal Review Board of Chiang Mai University, Thailand.
A. Experiment I: Affective Video Selection

The Internet Movie Database (IMDb) provides categorical labels for film trailers based on the main genres, including drama, comedy, romance, action, sci-fi, horror, and mystery. We simplified these categories into three umbrellas: Happiness, Excitement, and Fear. The selection procedure illustrated in Figure 1 was followed to identify trailers with explicit emotional content. Five participants randomly picked 40 mainstream film trailers per umbrella, to give 120 trailer clips in total. The trailer clips contained an English soundtrack with native language (Thai) subtitles. Afterward, the 120 clips were randomly divided into four groups of 30 clips each. Four groups of 50 participants (n = 200), with near equal numbers of males and females and ranging from 15 to 22 years old, were each assigned to watch one of the trailer groups. To deal with varying video durations, only the final minute of each trailer was used for the experiment. At the end of each clip, the participants were required to assess their experienced emotions through a qualitative measure, specifically by choosing a number along a continuous scale of 1–9 that they felt best represented their personal levels of Valence (V), Arousal (A), Happiness (H), Fear (F), and Excitement (E).

[Fig. 1 overview. Panel (a): 1) select film trailers categorized by IMDb; 2) five participants select 120 famous film trailers (40 per umbrella); 3) randomly split them into four groups of 30 trailers; 4) 200 participants (50 per group) view the trailers of one group; 5) each participant assesses his/her emotions for each trailer (H, F, E on a 1–9 scale). Panel (b): 1) gather all H, F, E scores from all trailers and participants; 2) perform K-means clustering (k = 3) on those points; 3) remove data points that are not in their trailer's majority cluster; 4) select the 20 trailers closest to the centroid of each cluster.]

Fig. 1: Experiment I starts with (a) random selection of 120 IMDb movie trailers by five individuals. Two hundred participants are then presented 30 trailers each and afterward asked to assess how each trailer makes them feel. (b) K-means clustering is performed to identify trailers that are effective at inducing emotions in viewers.

To analyze the qualitative data reported by all participants, we attempted two standard clustering methods, i.e., K-means clustering and the Gaussian Mixture Model (GMM). The H, F, and E scores were the features that corresponded to distinctive emotional states. We then used the Davies-Bouldin index (DB-index) as a metric to evaluate the models for K = 2, 3, 4, ..., 10. Further, we applied the Elbow method to locate the optimal K; K-means clustering with three clusters was chosen. Since the same clips were scored by different participants, it was possible that their points might not all be allocated to the same cluster. Hence, we used only the data points that belonged to their clip's majority cluster (as inferred from the calculated mode value) and filtered out the rest. All remaining data points from the same clip were averaged, leaving only a single representative point per clip. Finally, we calculated the Euclidean distance of each point from its cluster centroid. In order to obtain the clips that best represented their classes, the 20 points closest to each cluster centroid were selected, which added up to a total of 60 clips.
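To make this clip-ranking step concrete, the following is a minimal Python sketch using scikit-learn. The variable names (ratings, clip_ids) and the data layout are our own assumed placeholders, not part of the released materials; the DB-index would in practice be evaluated for K = 2, ..., 10 (and for GMM models) before fixing K = 3 as described above.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# ratings: one row per (participant, clip) with columns [H, F, E]; clip_ids: matching clip labels
def select_clips(ratings, clip_ids, n_clusters=3, n_keep=20):
    km = KMeans(n_clusters=n_clusters, random_state=0).fit(ratings)
    print("DB-index:", davies_bouldin_score(ratings, km.labels_))
    unique_clips = np.unique(clip_ids)
    keep_rows, clip_cluster = [], {}
    for clip in unique_clips:
        rows = np.where(clip_ids == clip)[0]
        majority = np.bincount(km.labels_[rows]).argmax()        # majority cluster of this clip
        clip_cluster[clip] = majority
        keep_rows.append(rows[km.labels_[rows] == majority])     # drop points outside the majority cluster
    # average the surviving points of each clip into one representative point
    reps = {c: ratings[r].mean(axis=0) for c, r in zip(unique_clips, keep_rows)}
    selected = {}
    for k in range(n_clusters):                                  # the 20 clips nearest to each centroid
        clips_k = [c for c in reps if clip_cluster[c] == k]
        dists = {c: np.linalg.norm(reps[c] - km.cluster_centers_[k]) for c in clips_k}
        selected[k] = sorted(dists, key=dists.get)[:n_keep]
    return selected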
B. Experiment II: Emotion Recognition Using OpenBCI

1) Sensors: We used OpenBCI to record emotion-related EEG signals. OpenBCI is the only consumer grade EEG on the market that offers open-source code (software) and hardware (schematics). The advantage of an open-source device is that it directly allows the development of real-time research applications via typical programming languages without additional charges. Moreover, electrode placement is flexible compared to other devices. To cover brain wave information across the scalp, an eight-channel EEG electrode montage was selected (Fp1, Fp2, Fz, Cz, T3, T4, Pz, and Oz), with reference and ground on both mastoids. This montage is a subset of the international standard 10–20 system for EEG electrode placement. In addition to EEG, we also recorded peripheral physiological signals from a wearable Empatica E4 device [66], which is equipped with multiple sensors, including electrodermal activity (EDA), skin temperature (Temp), and blood volume pulse (BVP) with their derivatives (heart rate, inter-beat interval, etc.). Interestingly, the E4 was recently utilized by a research team to record electrodermal activity in 25 healthy participants in a study of arousal and valence recognition [67].
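As a small illustration (not code released with the paper), the eight-channel OpenBCI stream can be wrapped into an MNE structure roughly as follows; the 250 Hz rate matches Table I, and eeg_data is an assumed placeholder for samples pulled from the board.

import numpy as np
import mne

# Eight-channel montage used in this study (10-20 positions; T3/T4 are the older names for T7/T8)
ch_names = ['Fp1', 'Fp2', 'Fz', 'Cz', 'T3', 'T4', 'Pz', 'Oz']
sfreq = 250.0  # OpenBCI sampling rate in Hz (see Table I)
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types='eeg')

# eeg_data is a placeholder (channels x samples, in volts); in practice it comes from the amplifier
eeg_data = 1e-5 * np.random.randn(len(ch_names), int(56 * sfreq))
raw = mne.io.RawArray(eeg_data, info)
raw.rename_channels({'T3': 'T7', 'T4': 'T8'})      # modern labels so a standard montage can be attached
raw.set_montage('standard_1020')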
2) Participants: Forty-three healthy individuals aged between 16 and 34 (21 male and 22 female) were recruited as volunteers for the study. They were fitted with two monitoring devices for simultaneous recording: OpenBCI (EEG) and the Empatica E4. The E4 was strapped on the wrist, with extension cables and gel electrodes (Kendall foam electrodes) attached to measure EDA on the middle knuckles of the index and middle fingers. During the main experiment, the participants were instructed to place their chin on the designated chin rest and stay as still as possible in order to minimize artifact interference. We also strove to maintain the impedance below 5 kΩ for EEG. The experimental setup is depicted in Figure 2.

Fig. 2: Experimental setup.

3) Experimental Protocol: We developed software to present the emotion-inducing video clips sequentially and to record synchronous EEG and peripheral physiological signals with the two wearable devices (see Figure 2). Although we intended to elicit a specific emotion from each clip as defined by its IMDb tags, each participant might have experienced emotions that were incongruent with the expectation. Owing to this, when labelling the data, each participant was instructed to rate his/her own emotional reaction to the clip while being blinded to its IMDb tags. To avoid potential misunderstanding, before commencing the main experiment, we methodically described the procedures and the meaning of every emotion score. Additionally, an example clip, which was not used in the actual experiment, was shown during a mock trial to test the participants' understanding. We then presented a questionnaire for them to rate their experienced emotions pertaining to Arousal, Valence, Happiness, Fear, and Excitement, on a scale of 1 to 9. Afterward, we checked their answers to ensure that they understood correctly. Subsequently, the actual experiment was launched, as shown in Figure 3. Firstly, we collected personal information (age, gender, hours of sleep, level of tiredness, and favorite movie genre). Secondly, a film trailer was played for 1 minute. Then, we presented a screen displaying the self-assessment questionnaire, similar to the one featured in the trial run. Finally, another film trailer was played, followed by the questionnaire, and the process repeated for all 15 clips. More specifically, a fixed set of 9 (out of 15) clips was played to all 43 participants; these included the top three clips from each cluster from Experiment I. The other six video clips were selected randomly: two per cluster.

[Fig. 3 timeline: input personal information → 1st film trailer → self emotional assessment → 2nd film trailer → ... → 15th film trailer → self emotional assessment → rate liking scores for each clip]

Fig. 3: Each experimental run begins with the acquisition of demographic data, including the participant's personal information, hours of sleep, level of tiredness, and favorite movie genre. Video clips are played in full-screen mode and the emotion self-assessment form is displayed at the end of each clip. Screenshots shown here are of six movies by the titles in [60]–[65].
4) Feature Extraction: In order to obtain stable and pertinent emotional responses from the EEG and peripheral physiological signals, we started the recording after each movie clip had been played for two seconds and stopped it two seconds before the clip ended. This means that, on the whole, 56 seconds of signals were collected for a single video clip. Typical EEG pre-processing steps were applied, including notch filtering using iirnotch, common average referencing (CAR) to find the reference signal for all electrode channels, and independent component analysis (ICA) to remove artifact components. Even though no EOG was recorded, the ICA components were subject to inspection. On average, zero to one component with characteristics most similar to those of EOG was removed, in keeping with the manual provided by MNE [68]. Both CAR and ICA were implemented using the MNE-Python package [69]. Conventional feature extraction, as shown in Table II, was computed from the pre-processed EEG, EDA, BVP, and Temp signals. These features were based on previous works [8], [9], [12] and were included as the baseline for comparison with the results of this study. To extract features within a specified range of EEG frequencies, we applied lfilter (from the SciPy package) for bandpass filtering [70].

TABLE II: Description of features extracted from the EEG and Empatica signals.

Signal    Extracted Features
EEG (32)  θ (3–7 [Hz]), α (8–13 [Hz]), β (14–29 [Hz]) and γ (30–47 [Hz]) power spectral density for each channel
EDA (21)  average skin resistance, average of derivative, average of derivative for negative values only, proportion of negative samples in the derivative vs all samples, number of local minima, average rising time, 14 spectral power values in the 0–2.4 [Hz] bands, zero crossing rate of skin conductance slow response 0–0.2 [Hz], zero crossing rate of skin conductance very slow response 0–0.08 [Hz]
BVP (13)  average and standard deviation of HR, HRV, and inter-beat intervals, energy ratio between the frequency bands 0.04–0.15 [Hz] and 0.15–0.5 [Hz], spectral power in the bands 0.1–0.2 [Hz], 0.2–0.3 [Hz], 0.3–0.4 [Hz], low frequency 0.01–0.08 [Hz], medium frequency 0.08–0.15 [Hz] and high frequency 0.15–0.5 [Hz] components of the HRV power spectrum
Temp (4)  average, average of its derivative, spectral power in the bands 0–0.1 [Hz] and 0.1–0.2 [Hz]
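A condensed sketch of this pre-processing and EEG feature chain is given below. It follows the steps named above (iirnotch, CAR, ICA, band-limited PSD), but the function boundaries, the 50 Hz notch frequency, and the Welch-based PSD estimate are our assumptions rather than the authors' released code.

import numpy as np
from scipy.signal import iirnotch, filtfilt, welch
import mne

BANDS = {'theta': (3, 7), 'alpha': (8, 13), 'beta': (14, 29), 'gamma': (30, 47)}

def preprocess(raw, line_freq=50.0):
    # Notch filter (SciPy iirnotch), common average reference, and ICA inspection via MNE
    fs = raw.info['sfreq']
    b, a = iirnotch(w0=line_freq, Q=30.0, fs=fs)
    raw.apply_function(lambda x: filtfilt(b, a, x))            # mains notch, channel by channel
    raw.set_eeg_reference('average', projection=False)         # CAR
    ica = mne.preprocessing.ICA(random_state=0).fit(raw)
    # components resembling EOG would be appended to ica.exclude after visual inspection
    return ica.apply(raw)

def band_power_features(raw):
    # 4 band powers x 8 channels = 32 EEG features for one 56-s clip (band-major ordering)
    freqs, psd = welch(raw.get_data(), fs=raw.info['sfreq'], nperseg=int(raw.info['sfreq'] * 2))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs <= hi)
        feats.append(psd[:, mask].mean(axis=1))                # mean PSD per channel in this band
    return np.concatenate(feats)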
(F P1 , F P2 , Oz ), (T3 , T4 , Fz ), (T3 , T4 , Cz ), (T3 , T4 , Pz )
and (T3 , T4 , Oz ). These were taken from a combination
of either the temporal or frontal lobe and one channel
points below. All analyses were based on binary classification from the sagittal line. In addition, we sought to pinpoint
(low-high valence and arousal). The first analysis relied on a the important frequency bands within the EEG signals,
straightforward threshold setting for labeling of two separable provided the four fundamental bands: θ (3–7 [Hz]), α (8–
classes, while the second and third relied on the use of K- 13 [Hz]), β (14–29 [Hz]), and γ (30–47 [Hz]). For binary
means clustering technique. One sample point correlated to a classification (low-high V and A), we performed labeling
recording from one participant in response to the short video in a similar manner to Low-High: Valence/Arousal with
clip; there were 645 (43 participants × 15 clips) sample points Clustering and compared the results among the sets of
overall. channels and frequency bands.
• Low-High: Valence/Arousal with Threshold
We labeled each sample on the basis of associated V and We implemented leave-one-clip-out cross-validation using
A scores and manually set the threshold at 4.5 (midpoint Support Vector Machine (SVM), as illustrated in Figure 4.
between 1 to 9) for binary classification. In essence, a Since a fixed set of 9 video clips were commonly seen by
sample with V or A score lower than 4.5 was labeled the participants (of all 15 videos watched by each), nine-fold
as low V or A, and vice versa. Two binary classification cross-validation was conducted. In each fold, one clip was set
tasks were then carried out, high vs low valence and high aside as a test sample and the rest were used as a training set.
vs low arousal. Table II displays the set of input features We normalized all features by using MinMaxScaler to scale
for the model, the total being 70 features. them into a common range [71]. In the training session, the
IV. RESULTS

The result descriptions are organized sequentially according to the order of the experiments. In Experiment I, the output from K-means clustering facilitated the sorting of emotion-eliciting video clips into appropriate groups for use in Experiment II. The resulting datasets were analyzed with a simple machine learning algorithm and the outputs were contrasted with relevant datasets from previous studies on affective computing. The performance appeared to be on par if not better when compared to the existing works employing a similar emotion-eliciting method and emotion recognition algorithm.

A. Experiment I: Affective Video Selection

To start off, we analyzed the participants' self-ratings on Happiness (H), Fear (F), and Excitement (E). In Figure 6 (a), all data points are scattered across a three-dimensional space (H, F, E). We adopted the K-means clustering and GMM methods and employed the Davies-Bouldin index (DB-index) and the Elbow method to select the optimal K parameter. The outcomes are shown in Table III; the minimal, or best, DB-index achieved corresponded to K=3 in K-means clustering. Moreover, in Figure 5, the Elbow method further confirmed the optimal number of clusters for K-means to be three. In Figure 6, all data points were assigned a color according to their designated cluster, as calculated by the K-means method (see (b)). The points corresponding to clips that were not allocated to the same cluster as their majority class (i.e., mode) were filtered out. Following this, the scatter points in each cluster were obtained as illustrated in Figure 6 (c). Lastly, only the 20 points in each cluster (60 clips in total) nearest to the centroids were retained (see Figure 6 (d)). We hypothesized that the clips (Table IV) chosen with the assistance of unsupervised learning would prove to be effective tools for the emotion induction exercise.

TABLE III: Clustering methods and associated Davies-Bouldin indices.

Method        Davies-Bouldin Score
GMM:
  2 clusters  1.4816
  3 clusters  3.9107
  4 clusters  2.2370
  5 clusters  1.3389
  6 clusters  4.7667
K-Means:
  2 clusters  1.1771
  3 clusters  0.8729
  4 clusters  0.8866
  5 clusters  0.8956
  6 clusters  0.8842
Optimum shown in boldface.

Fig. 5: Elbow method to determine the optimal number of K-means clusters for affective video grouping (K=3).

Fig. 6: (a) Scatter plot of all qualitative samples from Experiment I. K-means clustering on (a) yields the output clusters shown in (b). After removing the data points that do not belong to the majority class (as inferred from the mode value), the remainder is shown in (c). In (d), 20 points per cluster are retained according to the nearest distances to the centroids for emotion elicitation in Experiment II.
TABLE IV: Comprehensive list of 60 affective video titles featured in Experiment II to allow monitoring of emotionally
relevant EEG and peripheral physiological signals. Means and standard deviations of quantified attributes (Valence, Arousal,
Happiness, Fear, Excitement) are listed in adjacent columns. ’Samp’ (short for sample) denotes the number of participants
having seen the particular clip and submitted subjective ratings.
ID Movie Title Affective Tags (IMDb) Valence Arousal Happiness Fear Excitement Samp
1 The Help Drama 4.28 ± 2.08 3.34 ± 1.96 4.41 ± 2.12 1.26 ± 0.37 2.56 ± 1.66 82
2 Star Wars: The Last Jedi Action, Adventure, Fantasy, Sci-Fi 4.98 ± 2.15 4.36 ± 2.08 4.06 ± 2.16 1.79 ± 1.00 5.09 ± 2.12 81
3 Suicide Squad Action, Adventure, Fantasy, Sci-Fi 4.73 ± 2.02 4.12 ± 1.90 4.36 ± 2.18 1.73 ± 1.41 4.37 ± 2.14 81
4 Pacific Rim Action, Adventure, Sci-Fi 4.10 ± 2.03 3.81 ± 2.16 3.93 ± 2.04 1.31 ± 0.52 4.25 ± 2.12 44
5 War for the Planet of the Apes Action, Adventure, Drama, Sci-Fi, Thriller 4.73 ± 2.06 4.04 ± 2.20 2.57 ± 1.90 2.67 ± 1.76 4.83 ± 2.08 41
6 God Help the Girl Drama, Musical, Romance 4.38 ± 2.45 3.09 ± 2.02 5.00 ± 2.45 1.27 ± 0.47 3.01 ± 1.85 43
7 Rogue One: A Star Wars Story Action, Adventure, Sci-Fi 5.09 ± 1.78 4.65 ± 2.15 3.34 ± 2.17 2.04 ± 1.23 5.28 ± 2.27 42
8 Blade Runner 2049 Drama, Mystery, Sci-Fi 4.44 ± 2.47 4.34 ± 2.38 3.15 ± 2.02 2.39 ± 1.73 4.81 ± 2.29 44
9 Hope Springs Comedy, Drama, Romance 4.78 ± 2.46 3.56 ± 2.12 5.09 ± 2.34 1.19 ± 0.17 2.84 ± 1.89 45
10 Ghost in the Shell Action, Drama, Sci-Fi 4.77 ± 2.10 4.28 ± 2.27 3.41 ± 2.13 2.07 ± 1.57 5.01 ± 2.27 47
11 Point Break Action, Crime, Sport 4.65 ± 2.33 4.49 ± 2.40 3.31 ± 2.33 2.08 ± 1.41 5.12 ± 2.56 40
12 The Hunger Games Adventure, Sci-Fi, Thriller 5.42 ± 2.26 4.76 ± 2.18 3.66 ± 2.09 2.06 ± 1.34 5.38 ± 2.05 42
13 Crazy, Stupid, Love. Comedy, Drama, Romance 4.82 ± 2.54 4.01 ± 2.52 5.12 ± 2.64 1.21 ± 0.25 3.06 ± 2.35 43
14 Arrival Drama, Mystery, Sci-Fi 4.84 ± 2.22 4.82 ± 2.28 3.28 ± 2.03 2.64 ± 1.90 5.66 ± 2.40 44
15 Mr. Hurt Comedy, Romance 4.50 ± 2.21 3.59 ± 2.20 4.66 ± 2.19 1.23 ± 0.21 2.41 ± 1.43 42
16 American Assassin Action, Thriller 4.19 ± 2.18 4.80 ± 2.33 2.90 ± 1.88 2.48 ± 1.92 5.03 ± 2.36 43
17 G.I. Joe: Retaliation Action, Adventure, Sci-Fi 4.69 ± 2.39 4.11 ± 2.31 3.46 ± 2.09 1.55 ± 0.93 5.02 ± 2.54 48
18 Beginners Comedy, Drama, Romance 4.42 ± 2.45 3.08 ± 2.18 4.97 ± 2.20 1.23 ± 0.27 2.42 ± 1.48 44
19 Open Grave Horror, Mystery, Thriller 3.70 ± 1.90 4.70 ± 2.30 1.90 ± 1.14 4.03 ± 2.40 4.55 ± 2.46 43
20 Flipped Comedy, Drama, Romance 5.05 ± 2.69 3.75 ± 2.40 5.43 ± 2.48 1.16 ± 0.08 2.88 ± 2.12 44
21 The Choice Drama, Romance 4.58 ± 2.11 4.12 ± 1.98 4.63 ± 2.16 1.43 ± 0.68 3.24 ± 1.71 82
22 Danny Collins Biography, Comedy, Drama 4.54 ± 2.20 3.55 ± 2.07 5.01 ± 2.09 1.17 ± 0.09 2.76 ± 1.68 82
23 The Big Sick Comedy, Drama, Romance 4.44 ± 2.10 3.21 ± 1.80 4.56 ± 2.15 1.28 ± 0.35 2.93 ± 1.90 86
24 Monsters University Animation, Adventure, Comedy 5.55 ± 2.07 4.01 ± 2.12 6.15 ± 2.12 1.24 ± 0.23 4.44 ± 2.14 47
25 Kung Fu Panda 3 Animation, Action, Adventure 6.11 ± 2.17 4.46 ± 2.31 6.44 ± 2.37 1.25 ± 0.24 4.14 ± 2.11 47
26 Baby Driver Action, Crime, Drama 5.29 ± 2.10 4.65 ± 2.27 5.19 ± 2.32 1.53 ± 0.88 5.30 ± 2.28 44
27 The Good Dinosaur Animation, Adventure, Comedy 6.43 ± 2.15 4.63 ± 2.33 6.47 ± 2.18 1.19 ± 0.41 4.22 ± 2.07 43
28 About Time Comedy, Drama, Fantasy 5.25 ± 2.60 4.29 ± 2.68 5.97 ± 2.32 1.22 ± 0.34 4.25 ± 2.24 43
29 Ordinary World Comedy, Drama, Music 4.88 ± 1.72 3.93 ± 1.72 5.47 ± 1.68 1.22 ± 0.20 3.45 ± 1.98 49
30 Lion Biography, Drama 5.36 ± 2.30 4.62 ± 2.65 5.01 ± 2.49 1.79 ± 1.25 3.93 ± 2.35 48
31 Shrek Forever After Animation, Adventure, Comedy 5.87 ± 2.12 3.85 ± 2.37 6.29 ± 2.29 1.26 ± 0.29 4.35 ± 2.49 44
32 Chappie Action, Crime, Drama 5.47 ± 2.26 4.43 ± 2.20 4.54 ± 2.45 1.69 ± 0.73 5.31 ± 2.31 47
33 Guardians of the Galaxy Vol. 2 Action, Adventure, Sci-Fi 6.15 ± 2.40 4.61 ± 2.34 5.85 ± 2.40 1.44 ± 1.01 5.56 ± 2.50 48
34 The Intern Comedy, Drama 6.34 ± 2.02 5.06 ± 2.13 6.31 ± 1.98 1.23 ± 0.39 3.74 ± 2.34 42
35 La La Land Comedy, Drama, Music 5.44 ± 2.24 4.09 ± 2.37 5.55 ± 2.29 1.49 ± 1.28 3.15 ± 1.99 47
36 Ice Age: Collision Course Animation, Adventure, Comedy 6.38 ± 2.36 4.96 ± 2.40 6.92 ± 1.94 1.21 ± 0.26 4.87 ± 2.36 43
37 Frozen Animation, Adventure, Comedy 5.88 ± 2.40 4.17 ± 2.38 6.31 ± 2.41 1.24 ± 0.40 4.35 ± 2.36 47
38 Transformers: The Last Knight Action, Adventure, Sci-Fi 4.57 ± 2.06 4.18 ± 1.96 4.10 ± 2.33 1.91 ± 1.22 4.87 ± 1.97 42
39 Divergent Adventure, Mystery, Sci-Fi 5.87 ± 1.81 4.87 ± 2.00 4.75 ± 2.27 1.95 ± 1.34 5.84 ± 2.05 49
40 Why Him? Comedy 5.85 ± 2.24 4.60 ± 2.40 6.03 ± 2.30 1.25 ± 0.39 4.06 ± 2.44 43
41 The Boy Horror, Mystery, Thriller 3.85 ± 2.09 4.92 ± 1.97 1.78 ± 1.06 5.21 ± 2.23 5.01 ± 2.16 82
42 Jigsaw Crime, Horror, Mystery 4.04 ± 2.14 4.68 ± 2.15 2.07 ± 1.37 4.65 ± 2.18 4.91 ± 2.18 86
43 Shutter Horror, Mystery, Thriller 3.53 ± 2.20 4.71 ± 2.25 1.68 ± 0.97 5.23 ± 2.34 4.63 ± 2.29 81
44 Ladda Land Horror 4.61 ± 2.20 4.81 ± 2.23 1.95 ± 1.47 5.62 ± 2.13 4.88 ± 2.07 42
45 No One Lives Horror, Thriller 4.20 ± 2.04 4.84 ± 2.28 1.94 ± 1.35 4.97 ± 2.56 5.07 ± 2.33 47
46 Tales from the Crypt Horror 3.83 ± 2.22 4.41 ± 2.21 2.21 ± 1.88 4.67 ± 2.36 4.68 ± 2.41 44
47 Orphan Horror, Mystery, Thriller 4.07 ± 2.35 5.18 ± 2.11 1.85 ± 1.40 5.11 ± 2.15 4.68 ± 2.27 40
48 Unfriended Drama, Horror, Mystery 4.34 ± 2.57 5.40 ± 2.57 1.98 ± 1.93 5.34 ± 2.53 5.37 ± 2.55 43
49 Poltergeist Horror, Thriller 4.13 ± 2.44 5.28 ± 2.55 1.91 ± 1.70 5.85 ± 2.36 5.20 ± 2.60 43
50 Jeruzalem Horror 4.20 ± 2.12 4.82 ± 2.14 2.02 ± 1.42 5.00 ± 2.23 4.90 ± 2.27 49
51 Leatherface Crime, Horror, Thriller 3.92 ± 2.11 4.77 ± 2.39 1.89 ± 1.27 4.93 ± 2.53 4.90 ± 2.50 47
52 The Babadook Drama, Horror 3.62 ± 2.06 4.84 ± 2.19 1.67 ± 1.05 5.34 ± 2.11 4.74 ± 2.17 43
53 Oculus Horror, Mystery 4.12 ± 2.11 5.81 ± 1.90 1.76 ± 1.29 6.07 ± 1.81 5.36 ± 2.28 42
54 The Witch Horror, Mystery 3.69 ± 2.17 4.69 ± 2.44 1.89 ± 1.38 5.12 ± 2.40 4.54 ± 2.37 42
55 Trick ’r Treat Comedy, Horror, Thriller 4.50 ± 2.27 5.59 ± 2.30 1.93 ± 1.42 5.68 ± 2.26 5.55 ± 2.34 42
56 The Woman in Black Drama, Fantasy, Horror 4.23 ± 2.29 4.67 ± 2.27 1.94 ± 1.54 5.21 ± 2.33 4.79 ± 2.16 42
57 The Possession Horror, Thriller 4.83 ± 2.47 5.32 ± 2.29 1.82 ± 1.44 5.90 ± 2.42 5.66 ± 2.38 42
58 Crimson Peak Drama, Fantasy, Horror 3.93 ± 2.15 4.79 ± 2.33 2.33 ± 1.71 4.81 ± 2.43 5.10 ± 2.16 43
59 Program na winyan akat Horror, Thriller 3.95 ± 2.11 4.98 ± 2.48 1.72 ± 1.20 5.52 ± 2.45 5.00 ± 2.26 43
60 The Pact Horror, Mystery, Thriller 4.37 ± 2.00 5.56 ± 2.40 1.86 ± 1.42 6.23 ± 2.05 5.62 ± 2.50 44

B. Experiment II: Emotion Recognition Using OpenBCI

As outlined in the Methodology, our investigation was divided into three subtasks. The following paragraphs provide details on the experimental results of these subtasks.

• Low-High: Valence/Arousal with Threshold
For this analysis, ground truth labels were generated based on the self emotional assessment scores (V and A) and the threshold empirically set at 4.5. Scores higher than the threshold were assigned the high-level label, and vice versa for any scores lower than the threshold. The input features extracted from EEG, E4, and the Fusion of EEG and E4 were used to train and test the model. Table VI presents the mean accuracy and mean nine-fold F1 score. The condition with EEG as input features reached a mean accuracy of 66.67% and a mean F1 score of 0.6440 for V. For A, the condition with the Fusion of EEG and E4 as input features reached a mean accuracy of 71.57% and a mean F1 score of 0.6916. Furthermore, we performed a repeated measures ANOVA with Greenhouse-Geisser correction for the statistical analysis. It was found that the mean accuracy of low-high A classification was significantly different among the applications of EEG, E4, and Fusion data (F(1.994, 15.953) = 15.791, p < 0.05). Further, the Bonferroni post hoc test revealed that Fusion (71.574 ± 1.578%) was significantly better than using either EEG only (67.442 ± 1.644%) or E4 only (62.866 ± 0.253%).
• Low-High: Valence/Arousal with Clustering
As shown in Table IV, the V and A scores appeared to be widely distributed, given their considerable standard deviation values. It could be the case that a larger population is by default associated with a higher standard deviation. Hence, the fixed threshold strategy might not be suitable in this scenario. To address this potential issue, we tested two clustering methods, K-means and GMM. As shown in Table V, K-means clustering with 4 clusters was likely the most suitable method, as indicated by its minimal DB-index. Moreover, as shown in Figure 7, comparing the clustering results between these methods and different numbers of clusters, our chosen solution was better than the others. Therefore, it was adopted for the labeling of low-high levels of V and A. We empirically labeled according to the VA model [1], with the Blue and Red groups for low A (LA) and the Green and Yellow groups for high A (HA). In terms of valence, the Red and Green groups were selected for low V (LV) and the Blue and Yellow groups for high V (HV). After that, binary classification was carried out; the resulting mean accuracy and mean nine-fold F1 scores are laid out in Table VI. The condition with the Fusion of EEG and E4 as the input features reached mean accuracies of 72.35% and 68.47% for V and A, respectively. In terms of F1 score, the Fusion features provided the best results for V and A (0.6932 and 0.6620, respectively). We also performed a repeated measures ANOVA with Greenhouse-Geisser correction. It appeared that the mean accuracy of low-high V classification was significantly different between using EEG, E4, and Fusion (F(1.574, 12.591) = 10.057, p < 0.05). Also, the Bonferroni post hoc test revealed that using Fusion (72.350 ± 1.191%) was significantly better than using only E4 (63.567 ± 1.450%). Table VI reports the V and A classification results from the DREAMER [12], DEAP [9], MAHNOB-HCI [8], and DECAF [10] datasets, all using identical feature extraction and classification methods. The observed performance of our data seemed to lend support to the capability of the consumer grade brain sensor, OpenBCI, as a versatile tool in emotion recognition studies.

• EEG Electrode Channels and Frequency Bands
The results from the channel selection are presented in Table VII as the mean accuracy and mean F1 score from the nine folds. In the low-high classification task for V, the condition with the EEG channels Fz, Cz, Pz, Oz as the input features achieved a mean accuracy of 71.84% and a mean F1 score of 0.6741. For A, the mean accuracy reached 63.57% and the mean F1 score reached 0.6258 using T3, T4, Oz. Finally, the results obtained from varying EEG frequency bands, shown in Table VIII as the mean accuracy and mean F1 score from the nine folds, indicate that the classification accuracy for V reached 70.03% using EEG features from the α band, and for A it reached 67.96% using EEG features from the β band. In terms of F1 score, the results were similar: the features from the α band provided the best results for V and the features from the β band provided the best results for A (0.6640 and 0.6590, respectively). We then carried out a repeated measures ANOVA; however, there appeared to be no significant difference between the selection results of frequency and channel.

TABLE V: Davies-Bouldin indices achieved with GMM and K-means clustering of the Valence and Arousal scores (optimum shown in boldface).

Method        Davies-Bouldin Score
GMM:
  2 clusters  7.3844
  3 clusters  7.2030
  4 clusters  1.6682
  5 clusters  1.2772
  6 clusters  2.4652
K-Means:
  2 clusters  0.8216
  3 clusters  1.0010
  4 clusters  0.8095
  5 clusters  0.8556
  6 clusters  0.8255

TABLE VI: Accuracy and F1 score for low-high classification, along with results from public repositories that applied identical feature extraction and recognition methods in their analysis. Class ratios are displayed as [low : high]; optimal values among the respective groups are shown in boldface.

                                 Accuracy            F1 Score
Modality                         Valence  Arousal    Valence  Arousal
OpenBCI (EEG)                    0.6667   0.6744     0.6440   0.6470
E4                               0.6383   0.6228     0.6200   0.5912
OpenBCI & E4                     0.6538   0.7157     0.6428   0.6916
Class Ratio                      1.38:1   1.66:1     1.38:1   1.66:1
OpenBCI (EEG), K-means           0.7080   0.6331     0.6767   0.6233
E4, K-means                      0.6357   0.6072     0.5959   0.5761
OpenBCI & E4, K-means            0.7235   0.6847     0.6932   0.6620
Class Ratio                      1.90:1   0.68:1     1.90:1   0.68:1
DREAMER (EEG) [12]               0.6249   0.6217     0.5184   0.5767
DREAMER (ECG) [12]               0.6237   0.6237     0.5305   0.5798
DREAMER (Fusion) [12]            0.6184   0.6232     0.5213   0.5750
DEAP (EEG) [9]                   0.5760   0.6200     0.5630   0.5830
DEAP (Peripheral) [9]            0.6270   0.5700     0.6080   0.5330
MAHNOB-HCI (EEG) [8]             0.5700   0.5240     0.5600   0.4200
MAHNOB-HCI (Peripheral) [8]      0.4550   0.4620     0.3900   0.3800
DECAF (MEG) [10]                 0.5900   0.6200     0.5500   0.5800
DECAF (Peripheral) [10]          0.6000   0.5500     0.5900   0.5400

TABLE VII: Accuracy and F1 scores associated with channel selection (optimal values column-wise shown in boldface).

                   Accuracy            F1 Score
Channel            Valence  Arousal    Valence  Arousal
Fp1, Fp2, Fz       0.6331   0.5814     0.6044   0.5590
Fp1, Fp2, Cz       0.6770   0.6047     0.6550   0.5934
Fp1, Fp2, Pz       0.6409   0.5788     0.6143   0.5565
Fp1, Fp2, Oz       0.5917   0.6021     0.5445   0.5681
T3, T4, Fz         0.6331   0.6227     0.5963   0.6150
T3, T4, Cz         0.6486   0.6202     0.5898   0.6135
T3, T4, Pz         0.6486   0.6279     0.6119   0.6175
T3, T4, Oz         0.460    0.6357     0.5875   0.6258
Fz, Cz, Pz, Oz     0.7184   0.5840     0.6741   0.5667

TABLE VIII: Accuracy and F1 scores from different EEG frequency bands (optimal values column-wise shown in boldface).

                   Accuracy            F1 Score
Frequency Band     Valence  Arousal    Valence  Arousal
θ                  0.6718   0.6770     0.6363   0.6518
α                  0.7003   0.6434     0.6640   0.6123
β                  0.6925   0.6796     0.6557   0.6590
γ                  0.6719   0.6667     0.6286   0.6394

Fig. 7: (a)–(e) GMM and (f)–(j) K-means plots with increasing numbers of clusters (K=2 to K=6).
V. DISCUSSION

The practice of evoking emotions on command is commonplace in emotion research and has lately gained a foothold in extended applications such as the classification of affective states from correlated EEG and peripheral physiological signals. A number of previous datasets have been generated using audio-visual stimuli, more specifically video clips, to invoke desired emotions in individuals while at the same time recording their EEG and bodily responses. In the present study, we propose a novel method for affective stimulus selection, involving video ratings by 200 participants aged 15–22 and the K-means clustering method for video ranking. A complete list of the movie titles used in our study is provided in Table IV. These videos can readily be found on YouTube and other online websites, should there be a need for further use.

The emotion recognition task in this study resorted to the measurement of valence and arousal experienced by the viewers during video presentation. In most studies, ground truth labeling relies on an empirically set threshold for high-low partitioning [8]–[10], [12]. However, the list of selected titles in Table IV shows large standard deviations for the self-assessment scores. In a large population, simple thresholding might not work well, since different people are likely to experience varying levels of low or high valence and arousal. Thus, a simple mathematically based algorithm, K-means clustering, was proposed for labeling the ground truth. As reported in Table VI, the proposed method for ground truth labeling may lend support to attaining high classification performance, especially in valence classification. Moreover, the valence and arousal classification results among EEG, peripheral signals, and their fusion were notably consistent compared to the conventional ground truth labeling method. Hence, labeling by K-means clustering might be more suitable in future emotion studies with large numbers of participants. Moreover, the outcome of emotion recognition from OpenBCI was comparable to state-of-the-art works featuring high-end or expensive EEGs (both in accuracy and F1 score), as can be seen in Table VI, including one involving MEG, a functional brain mapping method considerably more expensive than EEG.

Table VII and Table VIII display the study results of EEG factors, i.e., electrode channel selection for the future development of user-friendly devices in emotion recognition, and frequency selection to identify the more effective EEG frequency bands as classification features. Referring to the tables, T3, T4, Oz achieved the best results for arousal classification both in terms of accuracy and F1 score, while the middle line (Fz, Cz, Pz, Oz) was promising for further improvement of an emotion recognition algorithm for low-high valence classification. Taking both results together, T3, T4, Oz, Fz, Cz, Pz with all EEG frequency bands as input features offer the best path for developing practical, usable consumer grade devices for mental state recognition, especially those concerning valence and arousal. True to any scientific investigation, there are limitations inherent to this study, one of which comes down to the ICA implementation. We were able to pinpoint the axes with EMG traits; however, we decided not to remove any of them, as this might lead to a potential loss of important information.

To our knowledge, this study is the first to carry out an evaluation of OpenBCI applicability in the domain of emotion recognition. In comparison to medical grade EEG amplifiers with greater numbers of electrodes and higher sampling frequencies, OpenBCI demonstrably could hold its own. A consumer grade, open-source device has the potential to be a real game changer for programmers or researchers on the quest for better emotion recognition tools. The device may facilitate further progress toward online applications since it is inexpensive and possibly affordable even to those with more limited purchasing power. In the same vein, emotion recognition using peripheral physiological data from a real-time bracelet sensor or E4 remains a challenge. The E4 is an easy-to-use wearable device resembling a watch, which may be useful as a long-term affective state or mental health monitoring system.
11

dynamic time warping, to process physiological signals and [10] M. K. Abadi, R. Subramanian, S. M. Kia, P. Avesani, I. Patras, and
perform affective computing on E4-derived datasets [72]. We N. Sebe, “Decaf: Meg-based multimodal database for decoding affective
physiological responses,” IEEE Transactions on Affective Computing,
might also be able to adapt this work for OpenBCI. Besides, vol. 6, no. 3, pp. 209–222, 2015.
we have two ongoing projects on deep learning and EEG [11] W.-L. Zheng and B.-L. Lu, “Investigating critical frequency bands and
in which the data from this study could be incorporated for channels for eeg-based emotion recognition with deep neural networks,”
IEEE Transactions on Autonomous Mental Development, vol. 7, no. 3,
further investigation [73], [74]. pp. 162–175, 2015.
VI. CONCLUSION

We have presented evidence in support of the applicability of OpenBCI, an open-source, consumer grade EEG device, to emotion recognition research. A brief summary of our effort to collect this evidence is as follows. EEG and peripheral physiological signals were collected from 43 participants while they were shown a sequence of short movie trailers expected to invoke distinctive emotions. The participants were prompted to score their experienced valence and arousal for each video. Subsequently, we applied the K-means clustering algorithm to these valence and arousal ratings in order to establish ground truth labels for the low and high clusters, in contrast to previously reported studies that performed labeling by empirical thresholding. We found that our prediction outcomes fell within a similar performance range to those derived from state-of-the-art datasets acquired with medical grade EEG systems. The ultimate goal of this study is to inform and inspire researchers and engineers alike about the practicality of OpenBCI as a recording device for the development of online, emotional EEG-related applications. Our experimental data are available to academic peers and researchers upon request.
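For readers who wish to reproduce the labeling step, the snippet below shows one possible way to derive binary low/high labels from self-assessment ratings with K-means. It is a simplified sketch: the 1–9 rating scale, example scores, and per-dimension clustering are assumptions for illustration rather than the exact procedure in our pipeline.

"""Minimal sketch: binary low/high ground-truth labels from self-assessment
ratings via K-means (k=2), instead of a fixed empirical threshold.
The rating scale and example scores are illustrative assumptions."""
import numpy as np
from sklearn.cluster import KMeans

def kmeans_binary_labels(ratings, random_state=0):
    """Cluster 1-D ratings into two groups; return 1 for the cluster
    with the higher center ('high'), 0 for the lower one ('low')."""
    ratings = np.asarray(ratings, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=random_state).fit(ratings)
    high_cluster = int(np.argmax(km.cluster_centers_.ravel()))
    return (km.labels_ == high_cluster).astype(int)

# Example: per-trial valence and arousal self-assessment scores (1-9 scale).
valence = [2, 3, 8, 7, 2, 9, 6, 1, 8, 3]
arousal = [5, 2, 7, 8, 1, 9, 6, 2, 7, 3]
print("valence labels:", kmeans_binary_labels(valence))
print("arousal labels:", kmeans_binary_labels(arousal))

Unlike a fixed cut-off (for example, treating any rating above the scale midpoint as high), the cluster boundary adapts to the actual distribution of the collected ratings.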
ACKNOWLEDGMENT

The authors would like to thank Sombat Ketrat for his support and suggestions on server installation for data storage and computing. Our thanks also extend to Irawadee Thawornbut for her assistance in setting up the preliminary study.

REFERENCES
[1] J. A. Russell, "A circumplex model of affect," Journal of Personality and Social Psychology, vol. 39, no. 6, p. 1161, 1980.
[2] R. W. Picard, Affective Computing. MIT Press, 2000.
[3] P. Reis, F. Hebenstreit, F. Gabsteiger, V. von Tscharner, and M. Lochmann, "Methodological aspects of eeg and body dynamics measurements during motion," Frontiers in Human Neuroscience, vol. 8, p. 156, 2014.
[4] P. C. Petrantonakis and L. J. Hadjileontiadis, "Emotion recognition from eeg using higher order crossings," IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 186–197, 2009.
[5] Y.-P. Lin, C.-H. Wang, T.-P. Jung, T.-L. Wu, S.-K. Jeng, J.-R. Duann, and J.-H. Chen, "Eeg-based emotion recognition in music listening," IEEE Transactions on Biomedical Engineering, vol. 57, no. 7, pp. 1798–1806, 2010.
[6] D. Nie, X.-W. Wang, L.-C. Shi, and B.-L. Lu, "Eeg-based emotion recognition during watching movies," in 2011 5th International IEEE/EMBS Conference on Neural Engineering. IEEE, 2011, pp. 667–670.
[7] C. A. Kothe, S. Makeig, and J. A. Onton, "Emotion recognition from eeg during self-paced emotional imagery," in 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, Sep. 2013, pp. 855–858.
[8] M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, "A multimodal database for affect recognition and implicit tagging," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 42–55, Jan 2012.
[9] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, "Deap: A database for emotion analysis; using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
[10] M. K. Abadi, R. Subramanian, S. M. Kia, P. Avesani, I. Patras, and N. Sebe, "Decaf: Meg-based multimodal database for decoding affective physiological responses," IEEE Transactions on Affective Computing, vol. 6, no. 3, pp. 209–222, 2015.
[11] W.-L. Zheng and B.-L. Lu, "Investigating critical frequency bands and channels for eeg-based emotion recognition with deep neural networks," IEEE Transactions on Autonomous Mental Development, vol. 7, no. 3, pp. 162–175, 2015.
[12] S. Katsigiannis and N. Ramzan, "Dreamer: a database for emotion recognition through eeg and ecg signals from wireless low-cost off-the-shelf devices," IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 1, pp. 98–107, 2018.
[13] U. Rashid, I. Niazi, N. Signal, and D. Taylor, "An eeg experimental study evaluating the performance of texas instruments ads1299," Sensors, vol. 18, no. 11, p. 3721, 2018.
[14] M. Duvinage, T. Castermans, M. Petieau, T. Hoellinger, G. Cheron, and T. Dutoit, "Performance of the emotiv epoc headset for p300-based applications," Biomedical Engineering Online, vol. 12, no. 1, p. 56, 2013.
[15] N. A. Badcock, P. Mousikou, Y. Mahajan, P. De Lissa, J. Thie, and G. McArthur, "Validation of the emotiv epoc® eeg gaming system for measuring research quality auditory erps," PeerJ, vol. 1, p. e38, 2013.
[16] E. Ratti, S. Waninger, C. Berka, G. Ruffini, and A. Verma, "Comparison of medical and consumer wireless eeg systems for use in clinical trials," Frontiers in Human Neuroscience, vol. 11, p. 398, 2017.
[17] O. E. Krigolson, C. C. Williams, A. Norton, C. D. Hassall, and F. L. Colino, "Choosing muse: Validation of a low-cost, portable eeg system for erp research," Frontiers in Neuroscience, vol. 11, p. 109, 2017.
[18] https://www.emotiv.com/.
[19] https://store.myndplay.com/.
[20] http://neurosky.com/.
[21] https://choosemuse.com/.
[22] http://openbci.com/.
[23] https://www.ant-neuro.com/.
[24] https://compumedicsneuroscan.com/.
[25] http://www.gtec.at/.
[26] https://www.advancedbrainmonitoring.com/.
[27] https://www.neuroelectrics.com/.
[28] https://www.brainproducts.com/.
[29] J. A. M. Correa, M. K. Abadi, N. Sebe, and I. Patras, "Amigos: A dataset for affect, personality and mood research on individuals and groups," IEEE Transactions on Affective Computing, 2018.
[30] https://www.empatica.com/research/e4/.
[31] https://www.tobiipro.com/.
[32] J.-M. López-Gil, J. Virgili-Gomà, R. Gil, T. Guilera, I. Batalla, J. Soler-González, and R. García, "Method for improving eeg based emotion recognition by combining it with synchronized biometric and eye tracking technologies in a non-invasive and low cost way," Frontiers in Computational Neuroscience, vol. 10, p. 85, 2016.
[33] B. Nakisa, M. N. Rastgoo, D. Tjondronegoro, and V. Chandran, "Evolutionary computation algorithms for feature selection of eeg-based emotion recognition using mobile sensors," Expert Systems with Applications, vol. 93, pp. 143–155, 2018.
[34] J. Kim, J. Seo, and T. H. Laine, "Detecting boredom from eye gaze and eeg," Biomedical Signal Processing and Control, vol. 46, pp. 302–313, 2018.
[35] K. Dhindsa and S. Becker, "Emotional reaction recognition from eeg," in 2017 International Workshop on Pattern Recognition in Neuroimaging (PRNI), June 2017, pp. 1–4.
[36] Y. Liu, M. Yu, G. Zhao, J. Song, Y. Ge, and Y. Shi, "Real-time movie-induced discrete emotion recognition from eeg signals," IEEE Transactions on Affective Computing, vol. 9, no. 4, pp. 550–562, Oct 2018.
[37] H. Shahabi and S. Moghimi, "Toward automatic detection of brain responses to emotional music through analysis of eeg effective connectivity," Computers in Human Behavior, vol. 58, pp. 231–239, 2016.
[38] A. Martínez-Rodrigo, A. Fernández-Sotos, J. M. Latorre, J. Moncho-Bogani, and A. Fernández-Caballero, "Neural correlates of phrase rhythm: An eeg study of bipartite vs. rondo sonata form," Frontiers in Neuroinformatics, vol. 11, p. 29, 2017.
[39] A. Fernández-Sotos, A. Martínez-Rodrigo, J. Moncho-Bogani, J. M. Latorre, and A. Fernández-Caballero, "Neural correlates of phrase quadrature perception in harmonic rhythm: An eeg study using a brain-computer interface," International Journal of Neural Systems, vol. 28, no. 05, p. 1750054, 2018, PMID: 29298521.
[40] R. Ramirez, J. Planas, N. Escude, J. Mercade, and C. Farriols, "Eeg-based analysis of the emotional effect of music therapy on palliative care cancer patients," Frontiers in Psychology, vol. 9, p. 254, 2018.
[41] P. Kumar, R. Saini, P. P. Roy, P. K. Sahu, and D. P. Dogra, "Envisioned speech recognition using eeg sensors," Personal and Ubiquitous Computing, vol. 22, no. 1, pp. 185–199, 2018.
[42] M.-H. Lin, S. N. Cross, W. J. Jones, and T. L. Childers, "Applying eeg in consumer neuroscience," European Journal of Marketing, vol. 52, no. 1/2, pp. 66–91, 2018.
[43] M. Yadava, P. Kumar, R. Saini, P. P. Roy, and D. P. Dogra, "Analysis of eeg signals and its application to neuromarketing," Multimedia Tools and Applications, vol. 76, no. 18, pp. 19087–19111, 2017.
[44] A. J. Casson and E. V. Trimble, "Enabling free movement eeg tasks by eye fixation and gyroscope motion correction: Eeg effects of color priming in dress shopping," IEEE Access, vol. 6, pp. 62975–62987, 2018.
[45] I. Arapakis, M. Barreda-Angeles, and A. Pereda-Banos, "Interest as a proxy of engagement in news reading: Spectral and entropy analyses of eeg activity patterns," IEEE Transactions on Affective Computing, pp. 1–1, 2018.
[46] N. Jadhav, R. Manthalkar, and Y. Joshi, "Effect of meditation on emotional response: An eeg-based study," Biomedical Signal Processing and Control, vol. 34, pp. 101–113, 2017.
[47] A. T. Poulsen, S. Kamronn, J. Dmochowski, L. C. Parra, and L. K. Hansen, "Eeg in the classroom: Synchronised neural recordings during video presentation," Scientific Reports, vol. 7, p. 43916, 2017.
[48] P. Zioga, F. Pollick, M. Ma, P. Chapman, and K. Stefanov, "Enheduanna - a manifesto of falling live brain-computer cinema performance: Performer and audience participation, cognition and emotional engagement using multi-brain bci interaction," Frontiers in Neuroscience, vol. 12, p. 191, 2018.
[49] Y. Dai, X. Wang, P. Zhang, and W. Zhang, "Wearable biosensor network enabled multimodal daily-life emotion recognition employing reputation-driven imbalanced fuzzy classification," Measurement, vol. 109, pp. 408–424, 2017.
[50] H. Cai, X. Zhang, Y. Zhang, Z. Wang, and B. Hu, "A case-based reasoning model for depression based on three-electrode eeg data," IEEE Transactions on Affective Computing, 2018.
[51] H. Jebelli, S. Hwang, and S. Lee, “Eeg-based workers’ stress recognition
at construction sites,” Automation in Construction, vol. 93, pp. 315–324,
2018.
[52] X. Shan, E.-H. Yang, J. Zhou, and V. W.-C. Chang, “Human-building
interaction under various indoor temperatures through neural-signal
electroencephalogram (eeg) methods,” Building and Environment, vol.
129, pp. 46–53, 2018.
[53] J. Rosenthal and G. Benabdallah, “Ibpoet: an interactive & biosensitive
poetry composition device,” in Proceedings of the 2017 ACM Interna-
tional Joint Conference on Pervasive and Ubiquitous Computing and
Proceedings of the 2017 ACM International Symposium on Wearable
Computers. ACM, 2017, pp. 281–284.
[54] R. Munoz, R. Villarroel, T. S. Barcelos, A. Souza, E. Merino, R. Guiñez,
and L. A. Silva, “Development of a software that supports multimodal
learning analytics: A case study on oral presentations,” Journal of
Universal Computer Science, vol. 24, no. 2, pp. 149–170, 2018.
[55] M. Mohammadpour, S. M. R. Hashemi, and N. Houshmand, “Classi-
fication of eeg-based emotion for bci applications,” in 2017 Artificial
Intelligence and Robotics (IRANOPEN), April 2017, pp. 127–131.
[56] M. Mohammadpour, M. M. AlyanNezhadi, S. M. R. Hashemi, and
Z. Amiri, “Music emotion recognition based on wigner-ville distribution
feature extraction,” in 2017 IEEE 4th International Conference on
Knowledge-Based Engineering and Innovation (KBEI), Dec 2017, pp.
0012–0016.
[57] M. A. Hafeez, S. Shakil, and S. Jangsher, “Stress effects on exam
performance using eeg,” in 2018 14th International Conference on
Emerging Technologies (ICET), Nov 2018, pp. 1–4.
[58] L. Rahman and K. Oyama, “Long-term monitoring of nirs and eeg
signals for assessment of daily changes in emotional valence,” in 2018
IEEE International Conference on Cognitive Computing (ICCC), July
2018, pp. 118–121.
[59] R. Munoz, R. Olivares, C. Taramasco, R. Villarroel, R. Soto, M. F.
Alonso-Sánchez, E. Merino, and V. H. C. de Albuquerque, “A new
eeg software that supports emotion recognition by using an autonomous
approach,” Neural Computing and Applications, pp. 1–17.
[60] C. Buck and J. Lee, "Frozen," USA: Walt Disney Pictures, 2013.
[61] M. Bay, “Transformers: The last knight,” USA: Paramount Pictures,
2017.
[62] J. Collet-Serra, “Orphan,” USA: Warner Bros. Pictures, 2009.
[63] D. Chazelle, “La la land,” USA: Lionsgate, 2016.
[64] N. Meyers, “The intern,” USA: Warner Bros. Pictures, 2015.
[65] D. Villeneuve, "Arrival," USA: Paramount Pictures, 2016.
[66] "The e4 wristband is a wearable research device that offers real-time physiological data acquisition and software for in-depth analysis and visualization," https://www.empatica.com/en-eu/research/e4/.
[67] A. Greco, G. Valenza, L. Citi, and E. P. Scilingo, "Arousal and valence recognition of affective sounds based on electrodermal activity," IEEE Sensors Journal, vol. 17, no. 3, pp. 716–725, 2017.
[68] "Artifact correction with ica," https://martinos.org/mne/stable/auto_tutorials/preprocessing/plot_artifacts_correction_ica.html#sphx-glr-auto-tutorials-preprocessing-plot-artifacts-correction-ica-py, accessed: 2019-05-22.
[69] A. Gramfort, M. Luessi, E. Larson, D. Engemann, D. Strohmeier, C. Brodbeck, R. Goj, M. Jas, T. Brooks, L. Parkkonen, and M. Hämäläinen, "Meg and eeg data analysis with mne-python," Frontiers in Neuroscience, vol. 7, p. 267, 2013.
[70] E. Jones, T. Oliphant, P. Peterson et al., "SciPy: Open source scientific tools for Python," 2001–, [Online; accessed 2019-05-22]. Available: http://www.scipy.org/
[71] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
[72] A. Albraikan, D. P. Tobón, and A. El Saddik, "Toward user-independent emotion recognition using physiological signals," IEEE Sensors Journal, 2018.
[73] T. Wilaiprasitporn, A. Ditthapron, K. Matchaparn, T. Tongbuasirilai, N. Banluesombatkul, and E. Chuangsuwanich, "Affective eeg-based person identification using the deep learning approach," IEEE Transactions on Cognitive and Developmental Systems, pp. 1–1, 2019.
[74] A. Ditthapron, N. Banluesombatkul, S. Ketrat, E. Chuangsuwanich, and T. Wilaiprasitporn, "Universal joint feature extraction for p300 eeg classification using multi-task autoencoder," IEEE Access, vol. 7, pp. 68415–68428, 2019.