
Future Generation Computer Systems 118 (2021) 219–229


Early detection of cyberbullying on social media networks



Manuel F. López-Vizcaíno, Francisco J. Nóvoa, Victor Carneiro, Fidel Cacheda
CITIC Research Center, Computer Science and Information Technologies Department, University of A Coruña, Campus de Elviña, A
Coruña, 15071, Spain

Article history:
Received 15 June 2020
Received in revised form 9 November 2020
Accepted 8 January 2021
Available online 13 January 2021

Keywords: Cyberbullying; Social networks; Early detection; Machine learning

Abstract

Cyberbullying is an important issue for our society and has a major negative effect on the victims, which can be highly damaging due to the frequency and high propagation provided by Information Technologies. Therefore, the early detection of cyberbullying in social networks becomes crucial to mitigate the impact on the victims. In this article, we aim to explore different approaches that take time into account in the detection of cyberbullying in social networks. We follow a supervised learning method with two specific early detection models, named threshold and dual. The former follows a simpler approach, while the latter requires two machine learning models. To the best of our knowledge, this is the first attempt to investigate the early detection of cyberbullying. We propose two groups of features and two early detection methods, specifically designed for this problem. We conduct an extensive evaluation using a real world dataset, following a time-aware evaluation that penalizes late detections. Our results show how we can improve baseline detection models by up to 42%.

© 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

Bullying can be defined as an aggressive, intentional act or behaviour that is carried out by a group or an individual against a victim who cannot easily defend him or herself, repeatedly and over time [1]. When the aggression occurs using Information Technologies (IT), such as the Internet, we talk of cyberbullying [2].

Cyberbullying has been identified as a major issue [3] and has been documented as a national health problem [4] due to the continuous growth of online communication and social media [5]. The percentage of individuals who have experienced cyberbullying at some point during their lifetime has doubled, from 18% in 2007 to 36% in 2019 according to [6], and this is only expected to continue rising given the high use of IT, social networks and mobile devices by children and teenagers [7].

The negative effects of cyberbullying share many similarities with traditional bullying [8] but, at the same time, it could be more damaging due to the frequency and high propagation allowed by technology [9]. Studies have linked cyberbullying with negative effects on psychological and physical health, academic performance [10,11], depression [12] and a higher risk of suicidal ideation [13,14].

Consequently, the early detection of cyberbullying in social networks is paramount to mitigate and reduce its negative effects on the victims. Moreover, the repetitive nature of cyberbullying makes it extremely important to detect and terminate the cyberaggression as soon as possible, in order to, on one side, identify the aggressors and, on the other side, support the victims.

In this work we aim to explore different approaches that take into account not only the appropriate detection of cyberbullying in social networks, but also the time required for the detection. In the literature there are multiple and diverse works that explore different approaches to detect cyberbullying in social networks but, to the best of our knowledge, this is the first attempt to investigate techniques specifically designed for the early detection of cyberbullying. We follow a supervised learning approach, as the majority of previous works do, but focus on two specific early detection methods, named threshold and dual. The former follows a simpler approach, while the latter requires two machine learning models. We also propose two new groups of features, specifically designed for this problem. The first group is intended to capture textual similarities among comments using a Bag-of-Words (BoW) model, while the second group captures repetitive time aspects of the comments. We conduct an extensive evaluation using a real world dataset and follow a time-aware approach that penalizes late detections. Our results show how the threshold model is able to improve baseline detection models by 26% and the dual model by up to 42%.

The main contributions of this work can be summarized as:

• We define and characterize the cyberbullying early detection problem.

∗ Corresponding author.
E-mail addresses: manuel.fernandezl@udc.es (M.F. López-Vizcaíno), francisco.javier.novoa@udc.es (F.J. Nóvoa), victor.carneiro@udc.es (V. Carneiro), fidel.cacheda@udc.es (F. Cacheda).

https://doi.org/10.1016/j.future.2021.01.006
0167-739X/© 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

• We present two specific machine learning models (i.e. threshold and dual) for the cyberbullying early detection problem and we show the impact of two sets of features (i.e. BoW and time features) that contribute to improving the performance in the cyberbullying early detection problem.
• We carry out extensive experiments using a real world dataset, following a time-aware evaluation that proves the performance improvement over the baselines.

The article is organized as follows: in Section 2 we present the state of the art on early phenomena and cyberbullying detection, with a special focus on social networks. Section 3 describes all the details of the cyberbullying early detection problem and Section 4 presents the experimental evaluation and performance improvements over the baselines. Finally, Section 5 includes our conclusions and future work on this line of research.

2. Related works

2.1. Early phenomena detection on social networks

From a generic perspective, this work is related to the early detection of different phenomena or anomalies on social networks. For example, over the last years there has been a rising interest in the detection of fake news [15–20], rumours [21–24], misinformation [25–27] or fake profiles [28–30] using the information published on social networks, but without considering the time required for the detection. In fact, the works that explore the early detection perspective are limited. For example, [31] explores the prediction of fake news before they have been propagated on social media. With this purpose, the authors propose a theory-driven model that represents the news content at four language levels (lexicon-level, syntax-level, semantic-level and discourse-level), achieving 88% accuracy and outperforming all baselines considered.

Qin et al. aim at improving the early detection of rumours on social networks by using novelty-based features that consider the increased presence of unconfirmed information in a message with respect to trusted sources of information [32]. Their experiments using data collected from Sina Weibo, a Chinese social media service, show that their proposed method performs significantly better in terms of effectiveness than other real-time and early detection baselines. Also, [33] presents a rumour detection approach that clusters tweets by their likelihood of actually containing a disputed factual claim. The authors include the earliness of detection in the evaluation by measuring how soon a method is able to detect a rumour, assuming a batch processing of one hour.

Also recently, the workshop on early risk prediction on the Internet (eRisk) at the Conference and Labs for the Evaluation Forum (CLEF) has provided different challenges oriented to the problems of detecting depression, anorexia and self-harm on the Reddit social network [34]. One of the best performing methods for the early detection of depression employed linguistic metainformation extracted from the subjects' writings and developed a classifier using recurrent neural networks [35,36]. Alternatively, the model proposed in [37,38] uses a word-based approach that estimates risk based on different word statistics (within-class frequency, within-class significance and inter-class term significance), which obtained good results in the detection of depression and self-harm. We have also explored the early detection of depression on social media by developing specific learning models (e.g. singleton and dual), significantly improving on previous works' performance [39,40].

2.2. Cyberbullying detection

Cyberbullying detection has been explored quite extensively, starting with user studies from the social sciences and psychology fields and, more recently, moving to computer science, aiming at developing models for its automatic detection.

Al-Garadi et al. [5] presented an extensive analysis of cyberbullying prediction models on social media and point at some open challenges, such as the prediction of cyberbullying severity, human data characteristics or language dynamics. There are multiple studies that explore different machine learning alternatives for the detection of cyberbullying. In [41], the authors explore the use of Support Vector Machines (SVM) and a lexical syntactical feature to predict offensive language, achieving high precision values. Dadvar et al. [42] use labelled text instances to train an SVM model for creating a gender-specific cyberbullying detection system that improves the discrimination capacity. In [43], the authors also use SVM to predict cyberbullying in the Ask.fm social network. They present a new annotation scheme to describe the severity of cyberbullying and conclude that the detection of fine-grained categories (e.g. threats) is more challenging due to data sparsity.

In [44] the authors work on the detection of cyberbullying in a multimodal social media environment, identifying several features, not only textual, but also audio and visual. Their results suggest that audio–visual features can help improve the performance of purely textual cyberbullying detectors.

Van et al. also investigate the automatic detection of cyberbullying-related posts on social media [45], both in English and Dutch. Using six natural language processing feature groups (word n-grams, subjectivity lexicons, character n-grams, term lists and topic models), the proposed model outperforms the baseline considered and identifies false positives on implicit cyberbullying or offences through irony.

The authors of [46,47] study the detection of cyberbullying in the Vine social network using several machine learning models, achieving the best results with AdaBoost, closely followed by Random Forest. Hosseinmardi et al. [48] work on the detection of cyberbullying incidents on the Instagram social network. They use Naïve Bayes and SVM classifiers, with the latter obtaining the best performance by incorporating multimodal text and image features as well as media session data. Some other works focus on the features considered to detect cyberbullying, for example, by analysing the social network structure among users [49,50], combining text and image analysis techniques [51], profanity features [47,52–54], sentiment analysis [43,55–57] or location features [58], among others.

An extensive review of published research on the automatic detection of cyberbullying can be found in [59] and [60]. From a global point of view, most works employ textual features, followed by sentiment attributes. User features (e.g. age, gender) and social features (e.g. number of friends or followers) are also commonly considered. Interestingly, few works incorporate temporal features into their models, such as [61,62], in order to capture the temporal and repetitive aspects of cyberbullying. Cheng et al. [61] incorporate a time interval prediction into their prediction model, outperforming state-of-the-art models in terms of F1 and Area Under the Curve (AUC). In [62], the authors model the temporal dynamics of cyberbullying in online sessions and show how the inclusion of these temporal features increases the performance of cyberbullying detection.

However, all previous works measure their performance regarding how successfully the model can distinguish between cyberbullying and non-cyberbullying cases using standard evaluation metrics, such as accuracy, precision, recall, F-measure or area under the curve [5,59,60], without taking the time required

to produce the prediction into account. In this sense, to the best of our knowledge, our work is the first attempt to measure the cyberbullying detection performance taking into account not only the accuracy of the system, but also the time required for the prediction.

3. Cyberbullying early detection

The problem of cyberbullying early detection on social networks can be considered different from cyberbullying prediction. In this case, there is a set of social media sessions, which we denote as S, where some may correspond to cyberbullying aggressions. We define the social media sessions set as:

S = {s_1, s_2, ..., s_|S|}

where |S| denotes the number of sessions and s_i refers to session i.

Each social media session, s ∈ S, is formed by a sequence of posts, denoted as P_s, and a binary indicator, b_s, that specifies whether this specific session is considered cyberbullying (b_s = true) or not (b_s = false). The sequence of posts for a specific session changes throughout time and is given by:

P_s = (⟨P_1^s, t_1^s⟩, ⟨P_2^s, t_2^s⟩, ..., ⟨P_n^s, t_n^s⟩)

where the tuple ⟨P_k^s, t_k^s⟩, k ∈ [1, n], represents the kth post for social media session s and t_k^s is the timestamp when post P_k^s was published.

At the same time, each post P_k^s is specified by a vector of m features:

P_k^s = [f_k^s1, f_k^s2, ..., f_k^sm], k ∈ [1, n]

Given a social media session s, the objective is to detect whether the session corresponds to cyberbullying while processing as few posts from P_s as possible. Therefore, our target is to learn a function f(b_si | s_i, ⟨P_1^si, t_1^si⟩, ..., ⟨P_k^si, t_k^si⟩) to predict whether a session is cyberbullying or not.

In this sense, the function receives as input posts 1 to k and returns one of three possible values, {0, 1, 2}, following the methodology proposed in [63]: 0 corresponds to a session s that is considered normal (i.e. non-cyberbullying), 1 if it is considered cyberbullying, and 2 if no definitive decision can be emitted for session s after processing k posts and more posts must be read (i.e. delay).
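The three-valued protocol just described can be sketched as a streaming decision loop (an illustrative toy, not the authors' implementation; the function names and the stand-in classifier are hypothetical):

```python
# Minimal sketch of the three-valued early detection protocol:
# scan a session's posts chronologically and stop at the first firm decision.

NORMAL, CYBERBULLYING, DELAY = 0, 1, 2

def early_detect(posts, classify):
    """`posts` is a list of (features, timestamp) tuples; `classify` maps the
    posts seen so far to 0 (normal), 1 (cyberbullying) or 2 (delay).
    Returns (decision, number_of_posts_processed)."""
    seen = []
    for features, timestamp in posts:
        seen.append((features, timestamp))
        decision = classify(seen)
        if decision != DELAY:
            return decision, len(seen)
    # No firm decision after the whole session: fall back to "normal".
    return NORMAL, len(seen)

# Toy stand-in classifier: flag a session once two profane posts are seen,
# and stop delaying after five posts.
def toy_classifier(seen):
    profane = sum(1 for feats, _ in seen if feats.get("profanity", 0) > 0)
    if profane >= 2:
        return CYBERBULLYING
    return DELAY if len(seen) < 5 else NORMAL

session = [({"profanity": 1}, 0), ({"profanity": 0}, 10), ({"profanity": 1}, 12)]
print(early_detect(session, toy_classifier))  # → (1, 3)
```

Note that the loop never needs the whole session: the decision and the number of posts consumed are returned together, which is exactly what the time-aware metrics of Section 3.5 penalize.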
3.1. Dataset

To study the cyberbullying early detection problem we use a public dataset collected from the Vine social network [46,47]. The dataset was collected using a snowball sampling method, where an initial user u is selected as a seed and then the collection continues with the users following u. The authors provide a detailed study to ensure the representativeness of the social network.

For each user, standard information was collected (i.e. user name, full name, profile description, number of followers, number of videos posted, number of followings) along with all videos posted, their comments, number of likes and number of re-posts. A social media session is composed of a posted video along with all the likes and comments associated. Sessions with less than 15 comments were removed from the dataset by the authors in order to have a sufficient number of comments [47]. A total of 961 sessions were labelled as cyberbullying or normal using crowdsourcing and, following [47], we required at least 60% confidence from the labellers for a session to be considered cyberbullying.

Table 1
Dataset statistics.

                          Cyberbullying    Normal       Total
Media sessions            190              771          961
Comments                  16,332           61,129       77,461
Comments/session          85.96            79.29        80.61
Likes                     261,009          1,667,943    1,928,952
Likes/session             1373.73          2163.35      2007.23
Average followers/user    132,299          188,413      176,611
Average following/user    4759             1971         2557
Average time span (s)     210              240          234

Table 1 shows the main statistics for the dataset. In our case, each comment is considered a post for a social media session and, instead of aggregating all comments for a session [46,47], we work with each comment individually and determine, processing as few comments from a session as possible, if the session can be considered cyberbullying or normal. Note that operating with individual comments allows us to easily aggregate comments up to a certain point, i.e. k.

3.2. Features

For our experiments we start with the features that provided the best results in [47]. These features are grouped by:

• Profile owner features: capture the characteristics of the user who posted the initial video. These features include the numbers of followers and following, and the polarity and subjectivity of the user's profile description.
• Media session features: number of likes, comments and sharings, and the polarity and subjectivity of the media caption.
• Comment features: intended to determine the negativity associated with the comment. These features include the percentage of negative comments, profane words in the comment, and the average polarity and subjectivity of the comments, differentiating between owner and other comments.
• Video features: intended to capture the nature of the video, these features validate the emotions and content in the video.
• LDA features: top ten topics extracted using Latent Dirichlet Allocation from all comments.

We further extend the features with two new groups of characteristics that we consider may be relevant for the cyberbullying early detection problem: Bag-of-Words (BoW) similarity and time aspects.

Following previous works, such as [4,41,43], we consider BoW similarity. In our case, we aim at computing these features without supervision. For this purpose, the training dataset is divided into two disjoint sets: cyberbullying and non-cyberbullying sessions. The main goal of these features is to estimate the likeness between a given comment and cyberbullying versus normal comments, without considering a set of predefined terms (e.g. profane words).

For each comment, we calculate the average, standard deviation, minimum, maximum and median of the TF–IDF (Term Frequency–Inverse Document Frequency) similarity obtained comparing this comment to every other cyberbullying comment. Then, the same process is repeated for the similarities with non-cyberbullying comments. In both cases, the active comment is removed from the corresponding sample.
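These similarity statistics can be sketched as follows (a self-contained toy using a simple smoothed TF–IDF and cosine similarity rather than the paper's exact weighting scheme; the corpus and all names are illustrative):

```python
# Sketch of the BoW similarity features: for one comment, summarise its
# TF-IDF cosine similarity against a pool of comments (e.g. all
# cyberbullying comments), excluding the comment itself.
import math
import statistics
from collections import Counter

def tfidf(docs):
    """Smoothed TF-IDF vectors (dicts) for a list of tokenised documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    return [{t: c * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def bow_similarity_features(idx, vectors, pool_indices):
    """avg/std/min/max/median similarity of comment `idx` vs a pool."""
    sims = [cosine(vectors[idx], vectors[j]) for j in pool_indices if j != idx]
    return {
        "avg": statistics.mean(sims),
        "std": statistics.pstdev(sims),
        "min": min(sims),
        "max": max(sims),
        "median": statistics.median(sims),
    }

# Toy corpus: comments 0-2 labelled cyberbullying, 3-4 normal.
comments = [["you", "suck"], ["you", "really", "suck"], ["go", "away"],
            ["nice", "video"], ["cool", "video"]]
vecs = tfidf(comments)
feats = bow_similarity_features(0, vecs, pool_indices=[0, 1, 2])
print(feats["max"] >= feats["median"] >= feats["min"])  # → True
```

Computing the same five statistics against the non-cyberbullying pool yields the second half of the feature group, for ten BoW features per comment.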
Since cyberbullying implies a certain repetition over time, we consider it relevant to include some features that capture different time aspects of the comments. This is suggested by the dataset statistics in Table 1, where we can observe a shorter average time span for cyberbullying sessions, which is confirmed by a Welch two sample t-test with a p-value close to zero.

Fig. 1. Density plot for the time difference between two consecutive comments for cyberbullying and non-cyberbullying sessions in the Vine dataset. For each comment in the dataset, the time difference with the previous comment from the same media session was computed, and the ground truth label was used to split between cyberbullying and non-cyberbullying. For the first comment of each media session, the difference is calculated with respect to the time when the video was posted.

Fig. 1 represents the time between two consecutive comments, both for cyberbullying and non-cyberbullying comments. From the figure we observe that there is a higher number of comments produced in a very short time span (a few seconds) for cyberbullying, and then, in the long tail, both types behave uniformly.

For this purpose, we include features that measure the time difference between two consecutive comments. We differentiate two cases: the time difference with the last comment and the time difference considering all previous comments. In both cases, we aggregate them by calculating the average, median, maximum and minimum values.
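A minimal sketch of these time aggregations (illustrative names, assuming timestamps in seconds relative to the video post):

```python
# Inter-comment time-difference features for the comments seen so far:
# the last gap plus average/median/max/min over all previous gaps.
import statistics

def time_features(timestamps):
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    if not gaps:
        return {}  # a single comment has no gap yet
    return {
        "last_gap": gaps[-1],
        "gap_avg": statistics.mean(gaps),
        "gap_median": statistics.median(gaps),
        "gap_max": max(gaps),
        "gap_min": min(gaps),
    }

# Comments posted at 0s, 3s, 5s and 65s after the video: gaps are [3, 2, 60].
print(time_features([0, 3, 5, 65]))
```

Because the features are recomputed from the comments seen so far, they can be updated incrementally as each new comment arrives, which fits the stream processing setting used later.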
3.3. Baselines

As baselines we consider the models reported in [47] that achieved the best performance results: Random Forest (RF), AdaBoost (AB), Extra Tree (ET), linear Support Vector Classification (SVC) and Logistic Regression (LR).

Since standard classification models provide a binary output (i.e. no delay result can be generated), in order to predict if a session is considered cyberbullying or not we use a simple adaptation of the baselines considering a fixed number of input comments, where a delay is produced until this fixed number is reached. For example, if the number of input comments is fixed at 5, a delay will be produced for comments 1 to 4 and then a final decision will be emitted. For each baseline model we consider four pre-established input comment counts: 1, 5, 10 and 15. Therefore, for instance, the Random Forest model will be presented with 1, 5, 10 and 15 comments and, in each case, a final decision will be generated, creating four early detection variants for the RF model, namely, RF1, RF5, RF10 and RF15.
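The fixed-k adaptation can be sketched as a thin wrapper around any binary classifier (an illustrative toy, not the paper's evaluation code; the class names and the stand-in model are hypothetical):

```python
# Sketch of the fixed-k baseline adaptation: emit DELAY until exactly k
# comments are available, then hand the first k comments to the binary model.

NORMAL, CYBERBULLYING, DELAY = 0, 1, 2

class FixedKBaseline:
    def __init__(self, model, k):
        self.model = model  # binary classifier: returns 0 or 1
        self.k = k          # number of comments to wait for

    def decide(self, comments_so_far):
        if len(comments_so_far) < self.k:
            return DELAY
        return self.model.predict(comments_so_far[: self.k])

class AlwaysPositive:
    """Stand-in for a trained model."""
    def predict(self, comments):
        return CYBERBULLYING

rf5 = FixedKBaseline(AlwaysPositive(), k=5)
print([rf5.decide(list(range(n))) for n in (1, 4, 5, 8)])  # → [2, 2, 1, 1]
```

Wrapping, e.g., a trained Random Forest with k = 5 would correspond to the RF5 variant above.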
sense, and inspired by multiclass classifiers one-versus-all, the
3.4. Early detection models dual model consists of two independent learning models, each
one trained with an independent set of features. One model is
We also consider specific early detection models, that we trained to detect positive cases (denoted as m+ ), while the other
adapt for the cyberbullying early detection problem. In particular, is trained to detect negative cases (denoted as m− ). Again, we
we adapt two early detection models that have reported good adapt the proposal from [40] by defining a decision function of
results [39,40]: threshold and dual. the form:
Taking into account that this is not a classical binary classifi-
δ2 (m+ , m− , th+ (), th− ())
cation problem because a non-final decision can be emitted, the
threshold model is based on one learning model, which is trained where, m+ is the learning model responsible for positive predic-
using the features described in Section 3.2, and it integrates a tions and m− is the model in charge of negative predictions. As
decision function based on the class probabilities to determine in the previous case, th+ () and th− () are the threshold functions
if enough evidence is available to proceed with a firm decision. for positive and negative cases, which, in this case, are associated

with their respective models, m+ and m−. Following the lead of the former model, we considered different constant functions for both thresholds.

Based on the baseline models, we have defined different threshold and dual model implementations that are expected to capture the special characteristics of the cyberbullying early detection problem better than the standard baselines, since they have been specifically designed for the early detection problem.
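The two decision functions can be sketched as follows (a minimal illustration with constant thresholds th() = ℓ, checking the positive threshold first as in our experiments; the names, the example threshold values and the single-probability interface are assumptions, not the exact implementation):

```python
# Sketch of the threshold (δ1) and dual (δ2) decision functions.

NORMAL, CYBERBULLYING, DELAY = 0, 1, 2

def delta1(model, comments, th_pos=0.7, th_neg=0.3):
    """Threshold model: one classifier, two constant probability limits."""
    p = model.predict_proba(comments)  # P(cyberbullying | comments 1..k)
    if p >= th_pos:
        return CYBERBULLYING
    if p <= th_neg:
        return NORMAL
    return DELAY

def delta2(model_pos, model_neg, comments, th_pos=0.7, th_neg=0.7):
    """Dual model: two independent classifiers, one per decision."""
    if model_pos.predict_proba(comments) >= th_pos:
        return CYBERBULLYING
    if model_neg.predict_proba(comments) >= th_neg:
        return NORMAL
    return DELAY

class Constant:
    """Stand-in for a trained probabilistic classifier."""
    def __init__(self, p):
        self.p = p
    def predict_proba(self, comments):
        return self.p

print(delta1(Constant(0.9), []))                  # → 1
print(delta1(Constant(0.5), []))                  # → 2
print(delta2(Constant(0.1), Constant(0.8), []))   # → 0
```

When neither threshold is crossed, both functions emit a delay and wait for the next comment, which is what makes them early detection models rather than plain classifiers.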
3.5. Evaluation metrics

As evaluation metrics we consider two metrics specifically designed for the early detection problem, although previously applied in a different environment: Early Risk Detection Error (ERDE) [63] and latency-weighted F1 [64]. In both cases, they were used to measure performance on the early detection of depression in individuals based on their posts on social networks.

The ERDE metric is measured at a specific time point, o (provided as a parameter), and for a session si after processing k comments (typically, because the model has required k comments to produce a final prediction) it is defined as follows:

ERDE_o(si, k) =
    |{si ∈ S ∧ b_si = true}| / |S|     if False Positive
    1                                   if False Negative
    1 − 1/(1 + e^(k−o))                 if True Positive
    0                                   if True Negative

In the case of wrong predictions (false positive or negative) the error increases, in the former proportionally to the number of positive cases and in the latter by 1. A true negative does not increase the error, but a true positive may impact it negatively if the number of posts required to make the prediction surpasses the measuring point o.

The latency-weighted F1, in short Flatency or F1-latency, was proposed by Sadeque et al. as an alternative to the ERDE metric, combining both latency and accuracy [64]. The F-latency metric is defined as:

Flatency = F1 · (1 − median_{si ∈ S ∧ b_si = true}(−1 + 2/(1 + e^(−p(k−1)))))

where p is a parameter that determines how quickly the penalty should increase, which is set to achieve a 50% latency penalty at the median number of items, and F1 is the standard F-measure, calculated as the harmonic mean of precision and recall.

Note how ERDE is an error measure and, therefore, values closer to 0 are better, while for Flatency values closer to 1 are representative of good results.

Some limitations have been reported for the ERDE metric [65,66] and so we will rely more on Flatency. By default, we report results for ERDE at a low time point, o = 5, and p is set to 0.02288 for Flatency. Also, in the first experiments we provide results for precision and recall, as complementary values.
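The two metrics can be sketched directly from the formulas above (a transcription for illustration, not the evaluation code used in the paper; function names and the per-session interface are assumptions):

```python
# Sketch of ERDE_o for one session and of the latency-weighted F1.
import math
import statistics

def erde(outcome, k, o, positive_rate):
    """ERDE_o for one session decided after k posts.

    `outcome` is one of "TP", "FP", "FN", "TN"; `positive_rate` is the
    fraction of positive sessions in the collection, |{b_s = true}| / |S|.
    """
    if outcome == "FP":
        return positive_rate
    if outcome == "FN":
        return 1.0
    if outcome == "TP":
        return 1.0 - 1.0 / (1.0 + math.exp(k - o))
    return 0.0  # TN

def f_latency(f1, ks, p=0.02288):
    """F1 scaled by the median latency penalty over the delays `ks`
    (number of posts needed) of the true-positive sessions."""
    penalty = statistics.median(-1 + 2 / (1 + math.exp(-p * (k - 1))) for k in ks)
    return f1 * (1 - penalty)

# A true positive decided exactly at the measuring point (k = o = 5)
# already costs 0.5, while a decision at k = 1 carries no latency penalty.
print(round(erde("TP", k=5, o=5, positive_rate=0.2), 3))  # → 0.5
print(f_latency(0.8, [1]))  # → 0.8
```

The sketch makes the asymmetry explicit: a false negative always costs 1, a false positive costs only the positive rate, and true positives decay with the delay k.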
4. Experimental evaluation

For evaluation purposes, we use 80% of the dataset for training and the remaining 20% for testing. Note that the dataset has been divided by social media sessions and each session includes all its posts. The posts are presented to the models chronologically and sequentially in order to make a prediction. When reported, confidence intervals are calculated at the 95% confidence level.

4.1. Baselines

In the first set of experiments, we validate the performance of the baseline models. We start by presenting in Tables 2 and 3 the results for the baseline models after processing 5 comments.

We begin our analysis by examining the behaviour of the features considered in this work in order to determine which can provide a better performance in the early detection of cyberbullying. Table 2 presents the results for the individual features in terms of ERDE and Flatency as early detection metrics; precision and recall are also presented for completeness.

Table 2
Results for baseline models after processing 5 comments using individual groups of features. The best results for each group of features are highlighted for both ERDE and Flatency and underlined for precision and recall.

                 RF5      AB5      ET5      SVC5     LR5
Profile owner features
  ERDE           0.1757   0.1685   0.1617   0.2054   0.1712
  Flatency       0.1414   0.1363   0.2672   0.2759   0.1272
  Precision      0.1905   0.3333   0.4118   0.1805   0.2500
  Recall         0.1212   0.0909   0.2121   0.7273   0.0909
Media session features
  ERDE           0.1609   0.1746   0.1599   0.1967   0.1710
  Flatency       0.2881   0.0465   0.2783   0.2196   0.0000
  Precision      0.4000   0.1250   0.4667   0.1625   0.0000
  Recall         0.2424   0.0303   0.2121   0.3939   0.0000
Comment features
  ERDE           0.1642   0.1694   0.1556   0.1627   0.1575
  Flatency       0.2121   0.1332   0.3368   0.2776   0.3348
  Precision      0.4167   0.3000   0.5000   0.3636   0.4167
  Recall         0.1515   0.0909   0.2727   0.2424   0.3030
LDA features
  ERDE           0.1633   0.1738   0.1668   0.1686   0.1705
  Flatency       0.1909   0.1193   0.1735   0.1660   0.2045
  Precision      0.5714   0.2000   0.3636   0.3077   0.2609
  Recall         0.1212   0.0909   0.1212   0.1212   0.1818
Video features
  ERDE           0.1728   0.1719   0.1728   0.1719   0.1719
  Flatency       0.0489   0.0000   0.0489   0.0000   0.0000
  Precision      0.1667   0.0000   0.1667   0.0000   0.0000
  Recall         0.0303   0.0000   0.0303   0.0000   0.0000
BoW features
  ERDE           0.1659   0.1719   0.1685   0.1710   0.1710
  Flatency       0.1468   0.0502   0.1363   0.0000   0.0000
  Precision      0.5000   0.2000   0.3333   0.0000   0.0000
  Recall         0.0909   0.0303   0.0909   0.0000   0.0000
Time features
  ERDE           0.1730   0.1745   0.1581   0.1740   0.1710
  Flatency       0.1497   0.0000   0.2726   0.1909   0.0000
  Precision      0.2222   0.0000   0.6667   0.2222   0.0000
  Recall         0.1212   0.0000   0.1818   0.1818   0.0000

The first five groups of features correspond to those reported in [47] and we can observe that the best performance is obtained by the comment features, both in terms of Flatency (0.3368 ± 0.0048) and ERDE (0.1556 ± 0.0037). This is motivated by the fact that comments concentrate the information that changes as the session advances, while the other features are shared across the whole session. Also note how the second-best performance is obtained with the media session features (0.2881 ± 0.0046 and 0.1599 ± 0.0037, for Flatency and ERDE, respectively), significantly below the comment features performance.

Regarding the features proposed, we observe that BoW features achieve modest results in terms of Flatency and ERDE, somewhat expected since their capacity to identify similarities is limited to textual equivalence, while time features reach reasonable results in terms of early detection, with no significant

and ERDE (0.1556 ± 0.0037). This is motivated because comments concentrate the information that changes as the session advances, while the other features are shared for the whole session. Also note how the second-best performance is obtained with media session features (0.2881 ± 0.0046 and 0.1599 ± 0.0037, for Flatency and ERDE, respectively), significantly below the comments features performance.

Regarding the features proposed, we observe that BoW features achieve modest results in terms of Flatency and ERDE, somewhat expected, since their capacity to identify similarities would be limited to textual equivalence, while time features reach reasonable results in terms of early detection, with no significant difference in terms of ERDE with the comment features performance.

However, it is interesting to note that BoW and time features obtain good precision scores: the latter obtains the best score and the former is second-best, tied with comment features. In the case of BoW features, we consider that this is due to the fact that there are terms that are repeated on multiple cyberbullying comments and these features will identify them, although a high number of false negatives is generated (note the low recall value) because the same terms will also appear on normal comments. Regarding the time features, the high precision confirms our intuition from Section 3.2 suggesting that the time difference between cyberbullying comments tends to be shorter but, at the same time, some normal comments will present the same characteristic, producing low recall values. In summary, the features proposed do not outperform the baseline features in terms of early detection, but they are expected to be a good complement, as we will discuss later.

Table 3 presents the results obtained when combining features. We start by combining all features from [47], which constitutes our starting point. The best performing model is linear SVC, improving precision and recall over individual features, but not outperforming Flatency nor ERDE from individual comment features (0.3141 ± 0.0047 and 0.1665 ± 0.0038, respectively).

Table 3
Results for baseline models after processing 5 comments using combinations of features. The best results for each combination are highlighted for both ERDE and Flatency and underlined for precision and recall.

                RF5      AB5      ET5      SVC5     LR5
Profile owner, media session, comment, LDA, video features
  ERDE          0.1650   0.1663   0.1625   0.1665   0.1711
  Flatency      0.1507   0.2849   0.2219   0.3141   0.0516
  Precision     0.6000   0.2941   0.5000   0.2826   0.2500
  Recall        0.0909   0.3030   0.1515   0.3939   0.0303
+ BoW features
  ERDE          0.1641   0.1663   0.1608   0.1815   0.1711
  Flatency      0.1547   0.2849   0.2545   0.2386   0.0516
  Precision     0.7500   0.2941   0.5000   0.2000   0.2500
  Recall        0.0909   0.3030   0.1818   0.3333   0.0303
+ Time features
  ERDE          0.1624   0.1593   0.1599   0.1980   0.1719
  Flatency      0.1957   0.3332   0.2603   0.2688   0.0502
  Precision     0.6667   0.3667   0.5455   0.1835   0.2000
  Recall        0.1212   0.3333   0.1818   0.6061   0.0303
+ BoW + Time features
  ERDE          0.1615   0.1593   0.1573   0.1908   0.1719
  Flatency      0.2009   0.3332   0.2969   0.2726   0.0502
  Precision     0.8000   0.3667   0.5833   0.1935   0.2000
  Recall        0.1212   0.3333   0.2121   0.5455   0.0303

When incorporating BoW and time features individually, the performance increases for RF, AB and ET with respect to the baseline, while it decreases for SVC and LR, suggesting that models based on the data space underperform in this problem. Analysing each feature individually, we observe that the bigger improvement is obtained by time features, pointing towards reiteration as an important characteristic in the early detection of cyberbullying.

When adding both features, Flatency and ERDE further improve for RF and ET with respect to individual features. However, the best Flatency score remains in AB5 and, although being close, it does not improve the comment feature performance on Table 2, despite there being no significant difference among them (confidence interval (0.3284, 0.3380) for AB5).

If we focus on the models, Random Forest and Extra Trees obtain consistently the best values in terms of Flatency for individual features (Table 2), with ET5 being the best performing method just using comment features. On the other side, AdaBoost and Logistic Regression are the methods with the lower performance. These results contrast with [47], where AdaBoost was the best performing model, which may be due to its sensitivity to the noise and outliers [67,68] that may arise when dealing with little information. When combining features (Table 3), Extra Tree and AdaBoost are the best performing models for the early detection metrics. We consider that the motivation for the AB performance improvement may be the incorporation of more information by the combination of features.

To further study the time impact in the models' performance, we analyse the performance of the different models with respect to the number of posts processed. Fig. 2 presents one graph for each feature group from Tables 2 and 3 with the Flatency score for all models. For the sake of presentation we have removed the video feature graph from the figure as it provided the lowest performance and did not contribute to the discussion. The graph labelled ''BS'' represents the combination of owner profile, media session, comments, LDA and video features from [47] that constitutes our baseline features.

Fig. 2. Flatency for all models. One graph is included for each feature and their combinations. The BS graph refers to the combination of Profile, Media, Comments, LDA and Video features.

Interestingly, for the owner profile and media session features the negative impact in the performance as time progresses is clear, with the performance for all models decreasing as more posts are processed, since these features do not change as new comments are processed. On the other hand, comment, LDA, BoW and time features, and their combinations, present a more heterogeneous behaviour as time progresses, since there is a dispute between the improvement obtained by providing more information to the models (i.e. more posts) versus the negative impact in the performance since the prediction is being delayed.

From the models' perspective, Logistic Regression is providing the lower performance independently of the features considered and, on the other side, Extra Tree, SVC and Random Forest tend to be on the upper section of the different graphs for all feature combinations. In fact, the best score is obtained combining all features (i.e. BS+BoW+Time) by ET15 with 0.3657 ± 0.0049, performing significantly better than the same model with only comment features (Table 2).

Regarding the ERDE metric, we skip the graphical results since they basically mimic the previous figure from an error metric perspective. ERDE values are more concentrated and differences are more difficult to spot but, again, the Extra Tree model is performing consistently better than other models for most features, closely followed by Random Forest in many cases. Again, the best performance is achieved by ET15, with a score of 0.1477 ± 0.0036.

To provide a better understanding of the models' performance and to compare both metrics employed, we present a box-plot for Flatency and ERDE on Fig. 3. From the figure we confirm that Extra Tree is, for both metrics, the best performing model. Regarding Flatency (Fig. 3(a)), SVC is second-best, with no significant difference with ET; however, it is the worst performing on the ERDE metric (Fig. 3(b)). Also note how the differences for ERDE scores are small, which makes Flatency a better option in terms of sensitivity.

Fig. 3. Box-plots of the performance according to the model ((a) corresponds to Flatency and (b) to ERDE). Performance is computed for all features and all numbers of posts processed. Lower and upper box boundaries are the first and third quartile, respectively. Outliers correspond to data falling outside 1.5 times the interquartile range (IQR).

Therefore, in the remaining experiments we will focus on the best performing model, that is, Extra Tree, and we will report results only on Flatency for the sake of simplicity. We have run all experiments including all models, but the other models kept providing a lower performance. As baseline and value to beat, we consider the best score achieved in these experiments, corresponding to ET15 with 0.3657 ± 0.0049.
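For reference, the two time-aware metrics can be sketched as follows. The formulas follow our reading of the cited metric papers ([63] for ERDE, [64] for the Flatency latency penalty); the default constants (p, o, c_fp) are illustrative choices, not values fixed in this paper:

```python
import math

def latency_penalty(k, p=0.0078):
    # Penalty applied to a correct alert issued after observing k posts
    # (Sadeque et al. [64]): 0 for an immediate alert (k = 1), tending
    # to 1 as k grows; Flatency multiplies F1 by (1 - median penalty).
    return -1 + 2 / (1 + math.exp(-p * (k - 1)))

def erde_cost(alert, truth, k, o=5, c_fp=0.13, c_fn=1.0, c_tp=1.0):
    # Per-session ERDE_o cost (Losada & Crestani [63]): false positives
    # cost c_fp, missed positives cost c_fn, true negatives cost nothing,
    # and correct alerts pay a sigmoid latency cost lc_o(k) * c_tp,
    # with lc_o(k) = 1 - 1/(1 + exp(k - o)).
    if alert and not truth:
        return c_fp
    if not alert and truth:
        return c_fn
    if alert and truth:
        return (1 - 1 / (1 + math.exp(k - o))) * c_tp
    return 0.0
```

Both metrics therefore reward the same classification outcome more when it is produced after fewer posts, which is why the figures above change with the number of posts processed.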

4.2. Early detection models

In these experiments we test the performance of the early detection models. We start with the threshold model for Extra Tree, setting th+() = ℓ+ and th−() = ℓ−, and we test different values for ℓ+ and ℓ−, ranging from 0.9 to 0.5. Regarding the features, we start with the baseline features (i.e. Profile owner, media session, comment, LDA and video features), and we complete the results incorporating the features proposed (i.e. BoW and Time) both individually and combined. Fig. 4 presents the results for the threshold model.

In this case, for each social media session, the number of posts required to produce a final result (i.e. positive or negative) will vary because the class probability must be higher than the threshold and, therefore, the posts processed will vary for each session.

Focusing on the threshold values, we observe that, consistently, the best values are achieved when th+() = 0.5 and, as the negative threshold (i.e. th−()) increases, so does the performance up to 0.8 (decreasing at 0.9). In fact, the top score, 0.4615 ± 0.0051, is obtained by ET with th+() = 0.5 and th−() = 0.8 including all features (BS+BoW+Time), improving the baseline performance from the previous experiment (Fig. 2) by 26%. Focusing on the features, we observe that the group including all features is consistently providing the best scores (i.e. purple line steadily on top), which confirms the importance of the features proposed (i.e. BoW and Time features), along with the basic features.

In the final set of experiments, we study the performance of the dual model. For this purpose, we set initially th+() = 0.5 and th−() = 0.8, and present the results on Table 4 for Extra Tree. Since the dual variant requires two independent models we present a grid, where rows correspond to the features used by the negative model (best value for each row is highlighted) while columns represent the features employed by the positive model (best value for each column is underlined).

Firstly, we observe that the best results are obtained when using all features for the positive model (i.e. last column), confirming the same tendency from the previous experiment. In general terms, when the positive model uses the baseline features in combination with our proposed features (i.e. last four columns on Table 4) the highest concentration of Flatency values is obtained for the dual model based on Extra Tree. This confirms the importance of considering significant features for the positive model in order to discriminate cyberbullying sessions correctly.

It is also interesting to observe how the use of simple features on the negative model (e.g. Profile owner) leads to the best score (0.472 ± 0.0051), something that was already observed in previous works [40]. This may be motivated by the fact that multiple characteristics are required to determine if a social media session corresponds to cyberbullying, while a non-cyberbullying session can be decided in a much simpler way, using less information, since it has already been discarded as cyberbullying.

Finally, we have further explored the performance of the dual model by analysing combinations of the best performing positive model features (i.e. BS, BS+BoW, BS+Time, BS+BoW+Time) with all negative model features, for different values of th+() and th−(). On Fig. 5 we show Flatency scores only for Extra Tree.
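A minimal sketch of the threshold rule, assuming a fitted scikit-learn style classifier whose predict_proba columns are ordered by class label (the toy data and names below are ours, not the paper's implementation):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Toy stand-in for a fitted session classifier (classes_ == [0, 1]).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))
y = (X[:, 0] > 0).astype(int)
model = ExtraTreesClassifier(n_estimators=50, random_state=1).fit(X, y)

def threshold_decision(model, session, th_pos=0.5, th_neg=0.8):
    """After each post, emit a positive decision once P(cyberbullying)
    reaches th_pos, a negative one once P(normal) reaches th_neg;
    otherwise wait for the next post."""
    for k in range(1, len(session) + 1):
        p_neg, p_pos = model.predict_proba(session[k - 1:k])[0]
        if p_pos >= th_pos:
            return "positive", k     # early alert after k posts
        if p_neg >= th_neg:
            return "negative", k
    # No threshold reached within the session: use the last posterior.
    return ("positive" if p_pos >= p_neg else "negative"), len(session)

session = rng.normal(size=(5, 4))
label, k = threshold_decision(model, session)
```

A lower th_pos makes positive decisions fire sooner, which is rewarded by the time-aware metrics; this matches the observation above that th+() = 0.5 consistently performs best.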

Fig. 4. Flatency for the threshold model using Extra Tree. One graph is included for each value of th−() = ℓ−. The X axis represents the values of th+() = ℓ+. BS refers to the combination of Profile, Media, Comments, LDA and Video features.

Table 4
Results for Flatency for dual models based on Extra Tree, with th+ () = 0.5 and th− () = 0.8. The best value for each
row (features negative model) is highlighted. The best value for each column (features positive model) is underlined.
PO: Profile owner, MS: Media session, C: Comment, V: Video, BS: Profile owner + Media session + Comments +
LDA + Video features, All: BS + BoW + Time.
PO MS C LDA V BoW Time BS +BoW +Time All
PO 0.364 0.302 0.392 0.362 0.049 0.342 0.336 0.357 0.400 0.437 0.472
MS 0.364 0.302 0.379 0.360 0.049 0.359 0.336 0.437 0.372 0.442 0.434
C 0.364 0.302 0.200 0.200 0.051 0.257 0.187 0.302 0.241 0.345 0.436
LDA 0.364 0.302 0.211 0.219 0.051 0.250 0.208 0.286 0.273 0.316 0.386
V 0.364 0.302 0.374 0.293 0.049 0.358 0.282 0.333 0.327 0.404 0.404
BoW 0.364 0.302 0.310 0.222 0.051 0.243 0.200 0.339 0.262 0.333 0.400
Time 0.364 0.302 0.262 0.240 0.051 0.274 0.208 0.286 0.254 0.316 0.393
BS 0.364 0.302 0.282 0.260 0.051 0.279 0.227 0.339 0.316 0.343 0.348
+BoW 0.364 0.302 0.329 0.302 0.051 0.306 0.276 0.353 0.273 0.382 0.412
+Time 0.364 0.302 0.222 0.231 0.051 0.260 0.212 0.313 0.215 0.305 0.388
All 0.364 0.302 0.358 0.329 0.051 0.329 0.276 0.448 0.371 0.394 0.462
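The dual variant can be sketched analogously, with two independently trained models over two feature views; both the toy data and the column split below are assumptions of ours for illustration, not the paper's actual feature sets:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(2)

# Two toy feature views: a rich one for the positive model and a simple
# "profile owner" style view for the negative model.
X_rich = rng.normal(size=(60, 6))
X_simple = X_rich[:, :2]
y = (X_rich[:, 0] + X_rich[:, 1] > 0).astype(int)

pos_model = ExtraTreesClassifier(n_estimators=50, random_state=2).fit(X_rich, y)
neg_model = ExtraTreesClassifier(n_estimators=50, random_state=2).fit(X_simple, y)

def dual_decision(rich_posts, simple_posts, th_pos=0.5, th_neg=0.8):
    """Dual rule as we read it: only the positive model may raise a
    cyberbullying alert, only the negative model may discard the session."""
    for k in range(len(rich_posts)):
        p_pos = pos_model.predict_proba(rich_posts[k:k + 1])[0][1]
        p_norm = neg_model.predict_proba(simple_posts[k:k + 1])[0][0]
        if p_pos >= th_pos:
            return "positive", k + 1
        if p_norm >= th_neg:
            return "negative", k + 1
    return "undecided", len(rich_posts)

rich = rng.normal(size=(5, 6))
decision, k = dual_decision(rich, rich[:, :2])
```

Decoupling the two decisions is what allows the positive model to use all features while the negative model relies on the much simpler profile owner features, as in the best configuration of Table 4.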

From the figure we observe that lower values of th+ keep concentrating the highest scores. In fact, the best performance is obtained with th+() = 0.5 and th−() = 0.6 using all features for the positive model and profile owner features for the negative model. This configuration achieves Flatency = 0.5217 (confidence interval (0.5166, 0.5268)), significantly improving the baseline performance from Fig. 2 by 42% and the best threshold model by 13%.

Interestingly, the top five configurations use all features for the positive model and profile owner features for the negative model, with different variations of threshold configurations (note the upper right graph from Fig. 5). This corroborates findings from previous experiments (Table 4), and confirms that a feature independent of the social media session, such as the owner characteristics, is relevant for the identification of non-cyberbullying sessions, while the classification as cyberbullying requires specific session characteristics (e.g. comment features, BoW or time).

Focusing on the threshold values, best performing configurations set th+() always on the low side (i.e. values 0.5 and 0.6), while th−() takes values from the whole range. This, in combination with the use of all features for the positive model, suggests the importance of defining a positive model highly capable of properly detecting cyberbullying cases, with low class probabilities to reduce detection time and, hence, requiring a low threshold. Meanwhile, the negative model relies mainly on the use of simple features to accurately detect negative cases, once they have been discarded as positive.

5. Conclusions

In this paper, we introduced the cyberbullying early detection problem and we proposed two feature groups, specifically designed for this problem: text similarities and time features. Moreover, we have also adapted two specific machine learning models, threshold and dual, and verified their behaviour in our evaluation.

The experimental evaluation was based on a real world dataset from the Vine social network and we used specific time-aware metrics (i.e. ERDE and Flatency). Our results show how the threshold model is able to significantly improve the baseline detection models by 26% and the dual model is able to further increase this improvement up to 42%, in both cases using the Extra Tree model as basis. Moreover, the combination of the proposed features along with the baseline features (i.e. profile owner, media session, comment, LDA and video features) leads to the best performance for both threshold and dual models.

As a main conclusion, the dual model consistently provides the best performance for the early detection of cyberbullying, based on the use of all features for the identification of positive cases along with low thresholds to produce early detections, and simpler features (i.e. profile owner characteristics) for the negative model.

In the near future, we expect to extend this research in several ways. First, we would like to explore heterogeneous combinations of different machine learning models in the dual model; for example, Extra Tree for the positive model and Random Forest for the negative model. Second, we plan to further extend the features regarding comments, as these concentrate most of the information for the early detection. Third, we would like to investigate an evaluation based on time, instead of number of posts, since it may be relevant for the early detection of cyberbullying. Finally, we intend to experiment with other datasets from other social media platforms to validate our approach and generalize the results.

Fig. 5. Flatency for the dual model using Extra Tree. Columns represent positive model features (i.e. BS, BS+BoW, BS+Time, BS+BoW+Time), while rows represent negative model features. The X axis represents the values of th+() = ℓ+, while one line is included for each value of th−() = ℓ−. BS refers to the combination of Profile, Media, Comments, LDA and Video features.

CRediT authorship contribution statement

Manuel F. López-Vizcaíno: Data curation, Methodology, Software, Writing - reviewing and editing. Francisco J. Nóvoa: Data curation, Investigation, Writing - reviewing and editing. Victor Carneiro: Project administration, Writing - reviewing and editing. Fidel Cacheda: Conceptualization, Investigation, Writing - original draft.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This research was supported by the Ministry of Economy and Competitiveness of Spain and FEDER funds of the European Union (Project PID2019-111388GB-I00) and by the Centro de Investigación de Galicia ''CITIC'', funded by Xunta de Galicia (Galicia, Spain) and the European Union (European Regional Development Fund — Galicia 2014–2020 Program), by grant ED431G 2019/01.

References

[1] D. Olweus, Bullying at school, in: Aggressive Behavior, Springer, 1994, pp. 97–130.
[2] R. Slonje, P.K. Smith, Cyberbullying: Another main type of bullying? Scand. J. Psychol. 49 (2) (2008) 147–154.
[3] G.S. O'Keeffe, K. Clarke-Pearson, et al., The impact of social media on children, adolescents, and families, Pediatrics 127 (4) (2011) 800–804.
[4] J.-M. Xu, K.-S. Jun, X. Zhu, A. Bellmore, Learning from bullying traces in social media, in: Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, 2012, pp. 656–666.
[5] M.A. Al-Garadi, M.R. Hussain, N. Khan, G. Murtaza, H.F. Nweke, I. Ali, G. Mujtaba, H. Chiroma, H.A. Khattak, A. Gani, Predicting cyberbullying on social media in the big data era using machine learning algorithms: Review of literature and open challenges, IEEE Access 7 (2019) 70701–70718.
[6] J.W. Patchin, Summary of our cyberbullying research, 2019, accessed March 10, 2020. URL https://cyberbullying.org/summary-of-our-cyberbullying-research.


[7] S. Hinduja, J.W. Patchin, Cyberbullying Fact Sheet: Identification, Prevention, and Response, Cyberbullying Research Center, 2020, pp. 1–9, accessed March 10.
[8] R.S. Tokunaga, Following you home from school: A critical review and synthesis of research on cyberbullying victimization, Comput. Hum. Behav. 26 (3) (2010) 277–287.
[9] I. Aoyama, T.F. Saxon, D.D. Fearon, Internalizing problems among cyberbullying victims and moderator effects of friendship quality, Multicult. Educ. Technol. J. 5 (2) (2011) 92–105.
[10] R.M. Kowalski, S.P. Limber, Psychological, physical, and academic correlates of cyberbullying and traditional bullying, J. Adolesc. Health 53 (1) (2013) S13–S20.
[11] A.T. Khine, Y.M. Saw, Z.Y. Htut, C.T. Khaing, H.Z. Soe, K.K. Swe, T. Thike, H. Htet, T.N. Saw, S.M. Cho, et al., Assessing risk factors and impact of cyberbullying victimization among university students in Myanmar: A cross-sectional study, PLoS One 15 (1) (2020) e0227051.
[12] S. Rathore, P.K. Sharma, V. Loia, Y.-S. Jeong, J.H. Park, Social network security: Issues, challenges, threats, and solutions, Inf. Sci. 421 (2017) 43–69.
[13] H. Sampasa-Kanyinga, P. Roumeliotis, H. Xu, Associations between cyberbullying and school bullying victimization and suicidal ideation, plans and attempts among Canadian schoolchildren, PLoS One 9 (7) (2014) e102145.
[14] S. Hinduja, J.W. Patchin, Bullying, cyberbullying, and suicide, Arch. Suicide Res. 14 (3) (2010) 206–221.
[15] S. Kumar, N. Shah, False information on web and social media: A survey, arXiv preprint arXiv:1804.08559.
[16] K. Shu, A. Sliva, S. Wang, J. Tang, H. Liu, Fake news detection on social media: A data mining perspective, ACM SIGKDD Explor. Newsl. 19 (1) (2017) 22–36.
[17] K. Sharma, F. Qian, H. Jiang, N. Ruchansky, M. Zhang, Y. Liu, Combating fake news: A survey on identification and mitigation techniques, ACM Trans. Intell. Syst. Technol. (TIST) 10 (3) (2019) 1–42.
[18] C. Janze, M. Risius, Automatic detection of fake news on social media platforms, in: PACIS, 2017, p. 261.
[19] C. Buntain, J. Golbeck, Automatically identifying fake news in popular twitter threads, in: 2017 IEEE International Conference on Smart Cloud (SmartCloud), IEEE, 2017, pp. 208–215.
[20] M. Aldwairi, A. Alwahedi, Detecting fake news in social media networks, Procedia Comput. Sci. 141 (2018) 215–222.
[21] C. Andrews, E. Fichet, Y. Ding, E.S. Spiro, K. Starbird, Keeping up with the tweet-dashians: The impact of 'official' accounts on online rumoring, in: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 2016, pp. 452–465.
[22] A. Arif, K. Shanahan, F.-J. Chou, Y. Dosouto, K. Starbird, E.S. Spiro, How information snowballs: Exploring the role of exposure in online rumor propagation, in: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 2016, pp. 466–477.
[23] J. Ma, W. Gao, K.-F. Wong, Rumor Detection on Twitter with Tree-Structured Recursive Neural Networks, Association for Computational Linguistics, 2018.
[24] S.A. Alkhodair, S.H. Ding, B.C. Fung, J. Liu, Detecting breaking news rumors of emerging topics in social media, Inf. Process. Manage. 57 (2) (2020) 102018.
[25] V. Qazvinian, E. Rosengren, D.R. Radev, Q. Mei, Rumor has it: Identifying misinformation in microblogs, in: Proceedings of the Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2011, pp. 1589–1599.
[26] S.D. Bhattacharjee, W.J. Tolone, V.S. Paranjape, Identifying malicious social media contents using multi-view context-aware active learning, Future Gener. Comput. Syst. 100 (2019) 365–379.
[27] C. Castillo, M. Mendoza, B. Poblete, Information credibility on twitter, in: Proceedings of the 20th International Conference on World Wide Web, 2011, pp. 675–684.
[28] C. Cai, L. Li, D. Zeng, Detecting social bots by jointly modeling deep behavior and content information, in: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017, pp. 1995–1998.
[29] J. Cheng, M. Bernstein, C. Danescu-Niculescu-Mizil, J. Leskovec, Anyone can become a troll: Causes of trolling behavior in online discussions, in: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 2017, pp. 1217–1230.
[30] Z. Chu, S. Gianvecchio, H. Wang, S. Jajodia, Detecting automation of twitter accounts: Are you a human, bot, or cyborg? IEEE Trans. Dependable Secure Comput. 9 (6) (2012) 811–824.
[31] X. Zhou, A. Jain, V.V. Phoha, R. Zafarani, Fake news early detection: A theory-driven model, arXiv preprint arXiv:1904.11679.
[32] Y. Qin, D. Wurzer, V. Lavrenko, C. Tang, Spotting rumors via novelty detection, arXiv preprint arXiv:1611.06322.
[33] Z. Zhao, P. Resnick, Q. Mei, Enquiring minds: Early detection of rumors in social media from enquiry posts, in: Proceedings of the 24th International Conference on World Wide Web, 2015, pp. 1395–1405.
[34] D.E. Losada, F. Crestani, J. Parapar, eRisk 2020: Self-harm and depression challenges, in: European Conference on Information Retrieval, Springer, 2020, pp. 557–563.
[35] M. Trotzek, S. Koitka, C.M. Friedrich, Linguistic metadata augmented classifiers at the CLEF 2017 task for early detection of depression, in: CLEF (Working Notes), 2017.
[36] M. Trotzek, S. Koitka, C.M. Friedrich, Utilizing neural networks and linguistic metadata for early detection of depression indications in text sequences, IEEE Trans. Knowl. Data Eng.
[37] M.P. Villegas, D.G. Funez, M.J.G. Ucelay, L.C. Cagnina, M.L. Errecalde, LIDIC-UNSL's participation at eRisk 2017: Pilot task on early detection of depression, in: CLEF (Working Notes), 2017.
[38] S.G. Burdisso, M. Errecalde, M. Montes y Gómez, UNSL at eRisk 2019: a unified approach for anorexia, self-harm and depression detection in social media, in: Working Notes of the Conference and Labs of the Evaluation Forum-CEUR Workshop Proceedings, Vol. 2380, 2019.
[39] F. Cacheda, D.F. Iglesias, F.J. Nóvoa, V. Carneiro, Analysis and experiments on early detection of depression, CLEF (Work. Notes) 2125 (2018) 1–11.
[40] F. Cacheda, D. Fernandez, F.J. Novoa, V. Carneiro, Early detection of depression: Social network analysis and random forest techniques, J. Med. Internet Res. 21 (6) (2019) e12554.
[41] Y. Chen, Y. Zhou, S. Zhu, H. Xu, Detecting offensive language in social media to protect adolescent online safety, in: 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing, IEEE, 2012, pp. 71–80.
[42] M. Dadvar, F.d. Jong, R. Ordelman, D. Trieschnigg, Improved cyberbullying detection using gender information, in: Proceedings of the Twelfth Dutch-Belgian Information Retrieval Workshop (DIR 2012), University of Ghent, 2012.
[43] C. Van Hee, E. Lefever, B. Verhoeven, J. Mennes, B. Desmet, G. De Pauw, W. Daelemans, V. Hoste, Detection and fine-grained classification of cyberbullying events, in: International Conference Recent Advances in Natural Language Processing (RANLP), 2015, pp. 672–680.
[44] D. Soni, V.K. Singh, See no evil, hear no evil: Audio-visual-textual cyberbullying detection, Proc. ACM Hum.-Comput. Interact. 2 (CSCW) (2018) 1–26.
[45] C. Van Hee, G. Jacobs, C. Emmery, B. Desmet, E. Lefever, B. Verhoeven, G. De Pauw, W. Daelemans, V. Hoste, Automatic detection of cyberbullying in social media text, PLoS One 13 (10).
[46] R.I. Rafiq, H. Hosseinmardi, R. Han, Q. Lv, S. Mishra, S.A. Mattson, Careful what you share in six seconds: Detecting cyberbullying instances in Vine, in: 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), IEEE, 2015, pp. 617–622.
[47] R.I. Rafiq, H. Hosseinmardi, S.A. Mattson, R. Han, Q. Lv, S. Mishra, Analysis and detection of labeled cyberbullying instances in Vine, a video-based social network, Soc. Netw. Anal. Min. 6 (1) (2016) 88.
[48] H. Hosseinmardi, S.A. Mattson, R.I. Rafiq, R. Han, Q. Lv, S. Mishra, Detection of cyberbullying incidents on the Instagram social network, 2015, arXiv preprint arXiv:1503.03909.
[49] Q. Huang, V.K. Singh, P.K. Atrey, Cyber bullying detection using social and textual analysis, in: Proceedings of the 3rd International Workshop on Socially-Aware Multimedia, 2014, pp. 3–6.
[50] A. Squicciarini, S. Rajtmajer, Y. Liu, C. Griffin, Identification and characterization of cyberbullying dynamics in an online social network, in: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015, 2015, pp. 280–285.
[51] K.B. Kansara, N.M. Shekokar, A framework for cyberbullying detection in social network, Int. J. Curr. Eng. Technol. 5 (1) (2015) 494–498.
[52] K. Dinakar, R. Reichart, H. Lieberman, Modeling the detection of textual cyberbullying, in: Fifth International AAAI Conference on Weblogs and Social Media, 2011.
[53] K. Reynolds, A. Kontostathis, L. Edwards, Using machine learning to detect cyberbullying, in: 2011 10th International Conference on Machine Learning and Applications and Workshops, Vol. 2, IEEE, 2011, pp. 241–244.
[54] V. Nahar, X. Li, C. Pang, An effective approach for cyberbullying detection, Commun. Inf. Sci. Manage. Eng. 3 (5) (2013) 238.
[55] H. Sanchez, S. Kumar, Twitter bullying detection, Ser. NSDI 12 (2011) 15.
[56] A. Kontostathis, K. Reynolds, A. Garron, L. Edwards, Detecting cyberbullying: query terms and techniques, in: Proceedings of the 5th Annual ACM Web Science Conference, 2013, pp. 195–204.
[57] H. Dani, J. Li, H. Liu, Sentiment informed cyberbullying detection in social media, in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, 2017, pp. 52–67.
[58] P. Galán-García, J.G.d.l. Puerta, C.L. Gómez, I. Santos, P.G. Bringas, Supervised machine learning for the detection of troll profiles in twitter social network: Application to a real case of cyberbullying, Log. J. IGPL 24 (1) (2016) 42–53.


[59] S. Salawu, Y. He, J. Lumsden, Approaches to automated detection of cyberbullying: A survey, IEEE Trans. Affect. Comput.
[60] H. Rosa, N. Pereira, R. Ribeiro, P.C. Ferreira, J.P. Carvalho, S. Oliveira, L. Coheur, P. Paulino, A.V. Simão, I. Trancoso, Automatic cyberbullying detection: A systematic review, Comput. Hum. Behav. 93 (2019) 333–345.
[61] L. Cheng, R. Guo, Y. Silva, D. Hall, H. Liu, Hierarchical attention networks for cyberbullying detection on the Instagram social network, in: Proceedings of the 2019 SIAM International Conference on Data Mining, SIAM, 2019, pp. 235–243.
[62] D. Soni, V. Singh, Time reveals all wounds: Modeling temporal characteristics of cyberbullying, in: Twelfth International AAAI Conference on Web and Social Media, 2018.
[63] D.E. Losada, F. Crestani, A test collection for research on depression and language use, in: International Conference of the Cross-Language Evaluation Forum for European Languages, Springer, 2016, pp. 28–39.
[64] F. Sadeque, D. Xu, S. Bethard, Measuring the latency of depression detection in social media, in: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, ACM, 2018, pp. 495–503.
[65] D.E. Losada, F. Crestani, J. Parapar, Overview of eRisk at CLEF 2019: early risk prediction on the internet (extended overview), in: International Conference of the Cross-Language Evaluation Forum for European Languages, Springer, 2019.
[66] M.F. Lopez-Vizcaino, F.J. Novoa, D. Fernandez, V. Carneiro, F. Cacheda, Early intrusion detection for OS scan attacks, in: 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA), IEEE, 2019, pp. 209–213.
[67] H.-W. Liao, D.-L. Zhou, Review of AdaBoost and its improvement, Jisuanji Xitong Yingyong - Comput. Syst. Appl. 21 (5) (2012) 240–244.
[68] H. Allende-Cid, R. Salas, H. Allende, R. Nanculef, Robust alternating adaboost, in: Iberoamerican Congress on Pattern Recognition, Springer, 2007, pp. 427–436.

Manuel F. López-Vizcaíno was born in Lugo, Spain, in 1990. He received the B.S. degree in computer science from the University of A Coruña, Spain, in 2015. He is currently developing his Ph.D. studies at the same university, where he also works as a teaching assistant. His research focuses on the evaluation and application of early detection methods to anomalies in cybersecurity, although he is also interested in other topics regarding Artificial Intelligence, evaluation metrics and network security.

Francisco J. Nóvoa was born in Ourense, Spain, in 1974. He received the M.S. degree in computer science from the University of Deusto, Spain, in 1998. He obtained his Ph.D. degree in computer science at the University of A Coruña, Spain, in 2007. From 1998 to 2007, he developed his professional career in the business field of Information Technology, obtaining multiple professional certifications such as CCNA, CCNP and MCP, among others. From 2007 to 2018, he was an assistant professor at the Computer Science Department at the University of A Coruña. Since then, he has been an Associate Professor at the same department. He is the author of 12 journal articles, 10 book chapters and more than 30 conference articles. His research interests include network security, intrusion detection, data flow analysis, IoT, medical informatics, biomedical imaging, artificial intelligence and neural networks.

Víctor Carneiro received his Ph.D. and B.S. degrees in Computer Science from the University of A Coruña, A Coruña, Spain, in 1998 and 1993, respectively. He has been an associate professor of the Department of Information and Communication Technologies, University of A Coruña, Spain, since 1995. He has participated in many research projects and professional engagements related to network management, distributed systems, information retrieval over the Internet and recommender systems based on collaborative filtering techniques. Nowadays he is working on technologies based on collective intelligence applied to the detection of anomalies and attacks in TCP/IP networks and IoT protocols.

Fidel Cacheda was born in Poissy, France, in 1973. He received the B.S. degree in computer science from the University of A Coruña, Spain, in 1994 and the M.S. degree in computer science from the same university in 1996. He obtained his Ph.D. degree in computer science at the University of A Coruña, Spain, in 2002. From 1998 to 2006, he was an Assistant Professor at the Computer Science Department at the University of A Coruña. Since then, he has been an Associate Professor at the same department. He is the author of four books, nine book chapters, more than 20 journal articles and more than 60 conference articles. His research interests include information retrieval, recommender systems and early detection of anomalies applied to cybersecurity.

