Fake News Spreaders Detection: Sometimes Attention Is Not All You Need
Abstract
1. Introduction
- Analysing linguistic features of Fake News Spreaders (henceforth, FNS) and non-Fake News Spreaders (henceforth, nFNS) using corpus linguistics techniques;
- Comparing several State-of-the-Art (SotA) models to assess the impact of different architectures on the same dataset;
- Proving, on the basis of our comparative evaluation and our preliminary linguistic analysis, that large pre-trained models are not necessarily the optimal solution for the proposed task;
- Observing and investigating the behaviour of the best performing model (a shallow CNN) through a post-hoc analysis of the model layer outputs.
2. Related Work
2.1. Fake News Detection
2.2. Fake News Spreaders Detection
3. Materials and Methods
3.1. Models Architectures
- BERT. Presented in [43], BERT is one of the first pre-trained language representation models. It is designed to learn deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context. The model is pretrained with two objectives: (1) masked language modeling (MLM): given a sentence, the model randomly masks 15% of the input tokens, runs the entire masked sentence through the network and has to predict the masked tokens; (2) next sentence prediction (NSP): the model receives two masked sentences concatenated as input; sometimes they are adjacent in the original text, sometimes not, and the model has to predict whether the second sentence follows the first. The pretrained model can then be fine-tuned for a downstream task by adding just one output layer (a minimal fine-tuning sketch is given after this list). For the English dataset we implemented the original bert-base presented in [43], while for the Spanish dataset we used BETO, the pretrained Spanish version discussed in [44].
- DistilBERT. Given the interesting results obtained in [45], we implemented a DistilBERT [46] model for our task. DistilBERT is a method to pre-train a general-purpose language representation model that is smaller than BERT: through a distillation process, the size of a BERT model is reduced by 40% while retaining 97% of its language understanding capabilities and being 60% faster. For the English dataset we implemented the original distilbert-base presented in [46], while for the Spanish dataset we used the pretrained version extracted from distilbert-base-multilingual-cased and discussed in [47].
- RoBERTa. In a replication study of BERT pre-training, the authors of [48] improve the performance of BERT by modifying its pretraining phase. These modifications include: (1) training the model longer, with bigger batches; (2) removing the next sentence prediction objective; (3) training on longer sequences; and (4) dynamically changing the masking pattern applied to the training data. For the English dataset we implemented the version of RoBERTa presented in [48], while for the Spanish dataset we used the version of RoBERTa pretrained on a total of 570 GB of clean, deduplicated text, compiled from the web crawls performed by the National Library of Spain (Biblioteca Nacional de España) from 2009 to 2019 and discussed in [49].
- ELECTRA. Presented in [50], ELECTRA replaces some input tokens with plausible alternatives sampled from a small generator network, instead of masking the input as BERT does. Then, instead of training a model that predicts the original identities of the corrupted tokens, a discriminative model is trained to predict whether each token in the corrupted input was replaced by a generator sample or not. For the English dataset we implemented the original version presented in [50]; for the Spanish dataset we used an ELECTRA model trained on the same Large Spanish Corpus used for BETO.
- Longformer. Transformer-based models struggle to process long sequences because their self-attention operation scales quadratically with sequence length. To address this limitation, the authors of [51] introduce the Longformer, whose attention mechanism scales linearly with sequence length and thus makes it easier to process documents of thousands of tokens or more. Longformer combines sliding-window (local) attention with global attention that is configured per task, allowing the model to learn task-specific representations. For the English dataset we used the original pretrained version presented in [51]; for the Spanish dataset we implemented the version pretrained on: (1) encyclopedic articles from the Spanish Wikipedia; (2) news from the Spanish Wikinews; (3) texts from the Spanish corpus AnCora (http://clic.ub.edu/corpus/en, accessed on 15 August 2022), which is a mix of newswire and literary sources.
- XLNet. The approach proposed in [52] is a generalised autoregressive pretraining method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order. On several tasks (including question answering, natural language inference, sentiment analysis, and document ranking), XLNet outperforms BERT, often by a large margin. Since a well-established XLNet pretrained on a Spanish corpus is missing, we used the same English pretrained XLNet for both datasets, evaluating in this case a zero-shot cross-lingual transfer [53].
- CNN. The shallow CNN tested here, shown in Figure 1, builds on the one presented in [54] and was developed and tuned specifically for this work. In the PAN2021 author profiling task, a very similar architecture won the challenge, ranking first among over 60 participating teams (https://pan.webis.de/clef21/pan21-web/author-profiling.html, accessed on 15 August 2022). The CNN consists of a word embedding layer, a single convolutional layer, a max pooling layer, a dense layer, a global average pooling layer and a final single-unit dense layer (a minimal sketch of this architecture is given after this list). As its results on a similar binary classification task prove, the model is able to outperform Transformer-based models and several other deep and non-deep models, as reported in [55].
- Multi-Channel CNN. To further investigate the interesting results obtained by a shallow CNN in [54], we tested a multi-channel CNN similar to the one proposed in [45]. Thanks to parallel channels, each consisting of word embedding, convolutional and max pooling layers, the model can capture windows of n-grams of different sizes, in contrast to the single convolutional layer and single filter size of the shallow CNN tested here.
- SVM. Based on [56], we tested the sklearn SVC implementation (https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html, accessed on 15 August 2022), using a linear kernel and a regularization parameter C of 1.0.
- Naive Bayes. As first discussed in [57], and as empirically confirmed over the years by the interesting results obtained on several text classification tasks, we implemented a Multinomial Naive Bayes classifier using the sklearn MultinomialNB implementation (https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html, accessed on 15 August 2022). MultinomialNB implements the naive Bayes algorithm for multinomially distributed data, where documents are typically represented as word count vectors (a minimal sketch of this and the SVM baseline is given after this list).
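To make the Transformer setups concrete, the following is a minimal sketch of how a pre-trained encoder of this family can be fine-tuned for the binary FNS/nFNS task, e.g., with the Hugging Face Transformers library [106]. The checkpoint name, hyperparameters and toy data below are illustrative assumptions, not the exact configuration used in our experiments.

```python
# Minimal sketch: fine-tuning a pre-trained encoder for binary
# FNS/nFNS author classification. Checkpoint, hyperparameters and
# data are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "bert-base-uncased"  # e.g., a BETO checkpoint for the Spanish dataset
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# One training example per author: the concatenation of their tweets,
# truncated to the encoder's maximum input length.
texts = ["tweet feed of author 1 ...", "tweet feed of author 2 ..."]
labels = torch.tensor([0, 1])  # 0 = nFNS, 1 = FNS

batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few fine-tuning epochs
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # cross-entropy over the two classes
    out.loss.backward()
    optimizer.step()
```

For the other encoders discussed above (DistilBERT, RoBERTa, ELECTRA, XLNet, Longformer, and their Spanish counterparts), only the checkpoint string changes; the classification head and the training loop stay the same.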
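The layer sequence of the shallow CNN described above can be sketched as follows (in Keras, for illustration); the vocabulary size, embedding dimension, number of filters and kernel size are illustrative assumptions, since the exact values are the ones tuned for this work.

```python
# Minimal Keras sketch of the shallow CNN described above:
# embedding -> single convolution -> max pooling -> dense ->
# global average pooling -> single sigmoid unit.
# All sizes below are illustrative assumptions.
from tensorflow.keras import Sequential, layers

model = Sequential([
    layers.Embedding(input_dim=50000, output_dim=100),             # word embedding layer
    layers.Conv1D(filters=128, kernel_size=5, activation="relu"),  # single convolutional layer
    layers.MaxPooling1D(pool_size=2),                              # max pooling layer
    layers.Dense(64, activation="relu"),                           # dense layer
    layers.GlobalAveragePooling1D(),                               # global average pooling layer
    layers.Dense(1, activation="sigmoid"),                         # single output unit (FNS vs. nFNS)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```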
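Similarly, a minimal sketch of the two non-deep baselines follows; the bag-of-words vectorizer settings and the toy data are illustrative assumptions.

```python
# Minimal sketch of the SVM and Multinomial Naive Bayes baselines.
# The word-count vectorizer and the toy data are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

train_texts = ["tweet feed of author 1 ...", "tweet feed of author 2 ..."]
train_labels = [0, 1]  # 0 = nFNS, 1 = FNS

svm = make_pipeline(CountVectorizer(), SVC(kernel="linear", C=1.0))
nb = make_pipeline(CountVectorizer(), MultinomialNB())

for clf in (svm, nb):
    clf.fit(train_texts, train_labels)
    print(clf.predict(["an unseen tweet feed ..."]))
```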
3.2. Dataset Analysis
3.2.1. Compare Corpora
3.2.2. Keywords
3.2.3. Word Sketch Difference
3.3. Experimental Setup
4. Results and Discussion
5. Post-Hoc Model Analysis
5.1. Word Embedding Layer Output
5.2. Convolutional Layer Output
5.3. Global Average Pooling Output
5.4. Qualitative Error Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
FN | Fake News
FNS | Fake News Spreaders
nFNS | non-Fake News Spreaders
PFNSoT | Profiling Fake News Spreaders on Twitter
References
- Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151.
- Pian, W.; Chi, J.; Ma, F. The causes, impacts and countermeasures of COVID-19 “Infodemic”: A systematic review using narrative synthesis. Inf. Process. Manag. 2021, 58, 102713.
- McGonagle, T. ‘Fake news’: False fears or real concerns? Neth. Q. Hum. Rights 2017, 35, 203–209.
- Zhou, X.; Zafarani, R. A survey of fake news: Fundamental theories, detection methods, and opportunities. ACM Comput. Surv. (CSUR) 2020, 53, 1–40.
- Zhang, X.; Ghorbani, A.A. An overview of online fake news: Characterization, detection, and discussion. Inf. Process. Manag. 2020, 57, 102025.
- Lazer, D.M.; Baum, M.A.; Benkler, Y.; Berinsky, A.J.; Greenhill, K.M.; Menczer, F.; Metzger, M.J.; Nyhan, B.; Pennycook, G.; Rothschild, D.; et al. The science of fake news. Science 2018, 359, 1094–1096.
- Guo, Z.; Schlichtkrull, M.; Vlachos, A. A Survey on Automated Fact-Checking. Trans. Assoc. Comput. Linguist. 2022, 10, 178–206.
- Shu, K.; Sliva, A.; Wang, S.; Tang, J.; Liu, H. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explor. Newsl. 2017, 19, 22–36.
- Rubin, V.L. On deception and deception detection: Content analysis of computer-mediated stated beliefs. Proc. Am. Soc. Inf. Sci. Technol. 2010, 47, 1–10.
- Moravec, P.L.; Minas, R.K.; Dennis, A. Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense At All. Manag. Inf. Syst. Q. 2019, 43, 1343–1360.
- Sharma, K.; Qian, F.; Jiang, H.; Ruchansky, N.; Zhang, M.; Liu, Y. Combating fake news: A survey on identification and mitigation techniques. ACM Trans. Intell. Syst. Technol. (TIST) 2019, 10, 1–42.
- Southwell, B.G.; Thorson, E.A.; Sheble, L. The Persistence and Peril of Misinformation: Defining what truth means and deciphering how human brains verify information are some of the challenges to battling widespread falsehoods. Am. Sci. 2017, 105, 372–376.
- Molina, M.D.; Sundar, S.S.; Le, T.; Lee, D. “Fake news” is not simply false information: A concept explication and taxonomy of online content. Am. Behav. Sci. 2021, 65, 180–212.
- Cambria, E.; Li, Y.; Xing, F.Z.; Poria, S.; Kwok, K. SenticNet 6: Ensemble application of symbolic and subsymbolic AI for sentiment analysis. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual, 19–23 October 2020; pp. 105–114.
- Cambria, E.; Kumar, A.; Al-Ayyoub, M.; Howard, N. Guest Editorial: Explainable Artificial Intelligence for Sentiment Analysis. Knowl.-Based Syst. 2021, 238, 1–3.
- Rangel, F.; Giachanou, A.; Ghanem, B.; Rosso, P. Overview of the 8th Author Profiling Task at PAN 2020: Profiling Fake News Spreaders on Twitter. In Proceedings of the CLEF 2020 Conference and Labs of the Evaluation Forum, Thessaloniki, Greece, 22–25 September 2020.
- Derczynski, L.; Bontcheva, K.; Liakata, M.; Procter, R.; Hoi, G.W.S.; Zubiaga, A. SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. arXiv 2017, arXiv:1704.05972.
- Gorrell, G.; Kochkina, E.; Liakata, M.; Aker, A.; Zubiaga, A.; Bontcheva, K.; Derczynski, L. SemEval-2019 Task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation, Minneapolis, MN, USA, 6–7 June 2019; pp. 845–854.
- Chakraborty, A.; Paranjape, B.; Kakarla, S.; Ganguly, N. Stop clickbait: Detecting and preventing clickbaits in online news media. In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Francisco, CA, USA, 18–21 August 2016; pp. 9–16.
- Ghanem, B.; Rosso, P.; Rangel, F. An emotional analysis of false information in social media and news articles. ACM Trans. Internet Technol. (TOIT) 2020, 20, 1–18.
- Ferrara, E.; Varol, O.; Davis, C.; Menczer, F.; Flammini, A. The rise of social bots. Commun. ACM 2016, 59, 96–104.
- Rangel, F.; Rosso, P. Overview of the 7th Author Profiling Task at PAN 2019: Bots and gender profiling in Twitter. In Proceedings of the CLEF 2019 Conference and Labs of the Evaluation Forum, Lugano, Switzerland, 9–12 September 2019.
- Lomonaco, F.; Donabauer, G.; Siino, M. COURAGE at CheckThat! 2022: Harmful Tweet Detection using Graph Neural Networks and ELECTRA. CEUR Workshop Proc. 2022, 3180, 573–583.
- Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. arXiv 2017, arXiv:1707.01926.
- Pradhyumna, P.; Shreya, G.P.; Mohana. Graph Neural Network (GNN) in Image and Video Understanding Using Deep Learning for Computer Vision Applications. In Proceedings of the 2021 Second International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 4–6 August 2021; pp. 1183–1189.
- Siino, M.; La Cascia, M.; Tinnirello, I. WhoSNext: Recommending Twitter Users to Follow Using a Spreading Activation Network Based Approach. In Proceedings of the 2020 International Conference on Data Mining Workshops (ICDMW), Sorrento, Italy, 17–20 November 2020; pp. 62–70.
- Figuerêdo, J.S.L.; Maia, A.L.L.; Calumby, R.T. Early depression detection in social media based on deep learning and underlying emotions. Online Soc. Netw. Media 2022, 31, 100225.
- Siino, M.; Tinnirello, I.; La Cascia, M. T100: A modern classic ensemble to profile irony and stereotype spreaders. CEUR Workshop Proc. 2022, 3180, 2666–2674.
- Croce, D.; Garlisi, D.; Siino, M. An SVM Ensamble Approach to Detect Irony and Stereotype Spreaders on Twitter. CEUR Workshop Proc. 2022, 3180, 2426–2432.
- Mangione, S.; Siino, M.; Garbo, G. Improving Irony and Stereotype Spreaders Detection using Data Augmentation and Convolutional Neural Network. CEUR Workshop Proc. 2022, 3180, 2585–2593.
- Patwa, P.; Bhardwaj, M.; Guptha, V.; Kumari, G.; Sharma, S.; Pykl, S.; Das, A.; Ekbal, A.; Akhtar, M.S.; Chakraborty, T. Overview of CONSTRAINT 2021 shared tasks: Detecting English COVID-19 fake news and Hindi hostile posts. In Combating Online Hostile Posts in Regional Languages during Emergency Situation; Springer: Cham, Switzerland, 2021; pp. 42–53.
- Al-Yahya, M.; Al-Khalifa, H.; Al-Baity, H.; AlSaeed, D.; Essam, A. Arabic Fake News Detection: Comparative Study of Neural Networks and Transformer-Based Approaches. Complexity 2021, 2021, 5516945.
- Mahir, E.M.; Akhter, S.; Huq, M.R.; Abdullah-All-Tanvir. Detecting fake news using machine learning and deep learning algorithms. In Proceedings of the 2019 7th International Conference on Smart Computing & Communications (ICSCC), Sarawak, Malaysia, 28–30 June 2019; pp. 1–5.
- Bali, A.P.S.; Fernandes, M.; Choubey, S.; Goel, M. Comparative performance of machine learning algorithms for fake news detection. In Advances in Computing and Data Sciences; Springer: Singapore, 2019; pp. 420–430.
- Liu, Y.; Wu, Y.F. Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 4–6 February 2018; Volume 32.
- Pérez-Rosas, V.; Kleinberg, B.; Lefevre, A.; Mihalcea, R. Automatic Detection of Fake News. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, NM, USA, 20–26 August 2018; pp. 3391–3401.
- Pizarro, J. Using N-grams to detect Fake News Spreaders on Twitter. In Proceedings of the CLEF 2020 Conference and Labs of the Evaluation Forum, Thessaloniki, Greece, 22–25 September 2020.
- Buda, J.; Bolonyai, F. An Ensemble Model Using N-grams and Statistical Features to Identify Fake News Spreaders on Twitter. In Proceedings of the CLEF 2020 Conference and Labs of the Evaluation Forum, Thessaloniki, Greece, 22–25 September 2020.
- Leonardi, S.; Rizzo, G.; Morisio, M. Automated classification of fake news spreaders to break the misinformation chain. Information 2021, 12, 248.
- Cui, L.; Lee, D. CoAID: COVID-19 healthcare misinformation dataset. arXiv 2020, arXiv:2006.00885.
- Giachanou, A.; Ghanem, B.; Ríssola, E.A.; Rosso, P.; Crestani, F.; Oberski, D. The impact of psycholinguistic patterns in discriminating between fake news spreaders and fact checkers. Data Knowl. Eng. 2022, 138, 101960.
- Cervero, R.; Rosso, P.; Pasi, G. Profiling Fake News Spreaders: Personality and Visual Information Matter. In Applications of Natural Language to Information Systems; Springer: Cham, Switzerland, 2021; pp. 355–363.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186.
- Cañete, J.; Chaperon, G.; Fuentes, R.; Ho, J.H.; Kang, H.; Pérez, J. Spanish Pre-Trained BERT Model and Evaluation Data. PML4DC ICLR 2020, 2020, 1–10.
- Siino, M.; La Cascia, M.; Tinnirello, I. McRock at SemEval-2022 Task 4: Patronizing and Condescending Language Detection using Multi-Channel CNN, Hybrid LSTM, DistilBERT and XLNet. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), Seattle, WA, USA, 14–15 July 2022; pp. 409–417.
- Sanh, V.; Debut, L.; Chaumond, J.; Wolf, T. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv 2019, arXiv:1910.01108.
- Abdaoui, A.; Pradel, C.; Sigel, G. Load What You Need: Smaller Versions of Multilingual BERT. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, Virtual, 20 November 2020; pp. 119–123.
- Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv 2019, arXiv:1907.11692.
- Gutiérrez-Fandiño, A.; Armengol-Estapé, J.; Pàmies, M.; Llop-Palao, J.; Silveira-Ocampo, J.; Carrino, C.P.; Gonzalez-Agirre, A.; Armentano-Oller, C.; Rodriguez-Penagos, C.; Villegas, M. Spanish Language Models. arXiv 2021, arXiv:2107.07253.
- Clark, K.; Luong, M.T.; Le, Q.V.; Manning, C.D. ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv 2020, arXiv:2003.10555.
- Beltagy, I.; Peters, M.E.; Cohan, A. Longformer: The long-document transformer. arXiv 2020, arXiv:2004.05150.
- Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R.R.; Le, Q.V. XLNet: Generalized autoregressive pretraining for language understanding. arXiv 2019, arXiv:1906.08237.
- Chen, G.; Ma, S.; Chen, Y.; Dong, L.; Zhang, D.; Pan, J.; Wang, W.; Wei, F. Zero-shot Cross-lingual Transfer of Neural Machine Translation with Multilingual Pretrained Encoders. arXiv 2021, arXiv:2104.08757.
- Siino, M.; Di Nuovo, E.; Tinnirello, I.; La Cascia, M. Detection of Hate Speech Spreaders using Convolutional Neural Networks. CEUR Workshop Proc. 2021, 2936, 2126–2136.
- Rangel, F.; Sarracén, G.; Chulvi, B.; Fersini, E.; Rosso, P. Profiling Hate Speech Spreaders on Twitter Task at PAN 2021. In Proceedings of the CLEF 2021 Conference and Labs of the Evaluation Forum, Bucharest, Romania, 21–24 September 2021.
- Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2011, 2, 1–27.
- McCallum, A.; Nigam, K. A comparison of event models for naive Bayes text classification. In Proceedings of the AAAI-98 Workshop on Learning for Text Categorization, Madison, WI, USA, 26–27 July 1998.
- Kilgarriff, A.; Baisa, V.; Bušta, J.; Jakubíček, M.; Kovář, V.; Michelfeit, J.; Rychlý, P.; Suchomel, V. The Sketch Engine: Ten years on. Lexicography 2014, 1, 7–36.
- Kilgarriff, A. Comparing corpora. Int. J. Corpus Linguist. 2001, 6, 97–133.
- Demmen, J.E.; Culpeper, J.V. Keywords. In The Cambridge Handbook of English Corpus Linguistics; Biber, D., Reppen, R., Eds.; Cambridge University Press: Cambridge, UK, 2015; pp. 90–105.
- Kilgarriff, A. Getting to know your corpus. In Text, Speech and Dialogue; Springer: Berlin/Heidelberg, Germany, 2012; pp. 3–15.
- Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Virtual, 16–20 November 2020; pp. 38–45.
- Cambria, E.; Schuller, B.; Xia, Y.; Havasi, C. New avenues in opinion mining and sentiment analysis. IEEE Intell. Syst. 2013, 28, 15–21.
- Kenny, E.M.; Ford, C.; Quinn, M.; Keane, M.T. Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies. Artif. Intell. 2021, 294, 103459.
- Bender, E.M.; Koller, A. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 5185–5198.
Subcorpus Name | # Tokens | Percentage | Total
---|---|---|---
es_0 | 832,755 | 53.71% | 1,550,505
es_1 | 717,750 | 46.29% |
en_0 | 669,519 | 50.57% | 1,323,982
en_1 | 654,463 | 49.43% |
es_train_0 | 500,003 | 54.04% | 925,152
es_train_1 | 425,149 | 45.96% |
en_train_0 | 402,788 | 50.92% | 791,024
en_train_1 | 388,236 | 49.08% |
es_test_0 | 332,752 | 53.21% | 625,353
es_test_1 | 292,601 | 46.79% |
en_test_0 | 266,731 | 50.04% | 532,958
en_test_1 | 266,227 | 49.96% |
Spanish corpus: first 50 keywords of nFNS (corpus 0 as focus, corpus 1 as reference)

Rank | Keyword | Rank | Keyword | Rank | Keyword | Rank | Keyword | Rank | Keyword
---|---|---|---|---|---|---|---|---|---
1 | T | 11 | PRECAUCIÓN | 21 | qué | 31 | seguridad | 41 | información
2 | HASHTAG | 12 | tuit | 22 | to | 32 | PodemosCMadrid | 42 | esa
3 | Buenos | 13 | Albacete | 23 | added | 33 | hemos | 43 | Mancha
4 | Android | 14 | bulos | 24 | Castilla-La | 34 | han | 44 | sociales
5 | h | 15 | | 25 | Pues | 35 | usuarios | 45 | Os
6 | he | 16 | artículo | 26 | sí | 36 | servicio | 46 | cómo
7 | sentido | 17 | Xiaomi | 27 | Albedo | 37 | RT | 47 | Nuevos
8 | RECOMENDACIONES | 18 | León | 28 | algo | 38 | datos | 48 | pruebas
9 | Samsung | 19 | móvil | 29 | pantalla | 39 | os | 49 | Gracias
10 | Galaxy | 20 | Cs_Madrid | 30 | disponible | 40 | playlist | 50 | creo

Spanish corpus: first 50 keywords of FNS (corpus 1 as focus, corpus 0 as reference)

Rank | Keyword | Rank | Keyword | Rank | Keyword | Rank | Keyword | Rank | Keyword
---|---|---|---|---|---|---|---|---|---
1 | Unete | 11 | Lapiz | 21 | Dominicana | 31 | OLVIDES | 41 | Concierto
2 | VIDEO | 12 | Vida | 22 | Fuertes | 32 | Joven | 42 | Acaba
3 | Video | 13 | Conciente | 23 | Follow | 33 | Años | 43 | Muere
4 | Clasico | 14 | DESCARGAR | 24 | DE | 34 | COMPÁRTELO | 44 | Hombre
5 | ESTRENO | 15 | Mozart | 25 | Su | 35 | IMAGENES | 45 | Secreto
6 | MINUTO | 16 | De | 26 | Descargar | 36 | Le | 46 | ft
7 | ULTIMO | 17 | Ft | 27 | añadido | 37 | IMPACTANTE | 47 | Preview
8 | Mayor | 18 | Imagenes | 28 | FUERTES | 38 | Accidente | 48 | lista
9 | Alfa | 19 | Official | 29 | Don | 39 | Miguelo | 49 | Republica
10 | Oficial | 20 | reproducción | 30 | Del | 40 | Remedios | 50 | Omega
English corpus: first 50 keywords of nFNS (corpus 0 as focus, corpus 1 as reference)

Rank | Keyword | Rank | Keyword | Rank | Keyword | Rank | Keyword | Rank | Keyword
---|---|---|---|---|---|---|---|---|---
1 | Via | 11 | Synopsis | 21 | Tie | 31 | Check | 41 | isabelle
2 | Promo | 12 | Styles | 22 | qua | 32 | Academy | 42 | AAPL
3 | Review | 13 | Lane | 23 | Bayelsa | 33 | Ankara | 43 | fashion
4 | Episode | 14 | GQMagazine | 24 | du | 34 | rabolas | 44 | Date
5 | PHOTOS | 15 | Mariska | 25 | Robe | 35 | PhD | 45 | esme
6 | Read | 16 | Hargitay | 26 | NYFA | 36 | Spoilers | 46 | isla
7 | Actor | 17 | Nigerian | 27 | Tendance | 37 | DE | 47 | Marketing
8 | TrackBot | 18 | READ | 28 | Supernatural | 38 | story | 48 | Link
9 | RCN | 19 | br | 29 | Film | 39 | Draw | 49 | prinny
10 | AU | 20 | beauty | 30 | Bilson | 40 | University | 50 | your

English corpus: first 50 keywords of FNS (corpus 1 as focus, corpus 0 as reference)

Rank | Keyword | Rank | Keyword | Rank | Keyword | Rank | Keyword | Rank | Keyword
---|---|---|---|---|---|---|---|---|---
1 | Jordyn | 11 | ALERT | 21 | Schiff | 31 | tai | 41 | Price
2 | realDonaldTrump | 12 | Grande | 22 | InStyle | 32 | Him | 42 | Says
3 | Trump | 13 | Biden | 23 | Democrats | 33 | Her | 43 | post
4 | Donald | 14 | Meghan | 24 | Trump’s | 34 | | 44 | About
5 | Hillary | 15 | NEWS | 25 | His | 35 | Markle | 45 | rally
6 | Obama | 16 | published | 26 | After | 36 | Jonas | 46 | BUY
7 | Clinton | 17 | Ariana | 27 | Reveals | 37 | border | 47 | Bernie
8 | FAKE | 18 | Webtalk | 28 | Snoop | 38 | Khloe | 48 | Tristan
9 | Woods | 19 | Viral | 29 | Thrones | 39 | Scandal | 49 | tweet
10 | RelNews | 20 | added | 30 | Border | 40 | Pelosi | 50 | FBI
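The keyword rankings above are obtained by contrasting a focus and a reference subcorpus. As an illustration of how such rankings can be computed, the sketch below implements the "simple maths" keyness score popularised by the Sketch Engine [102]; whether this exact score and smoothing constant were used here is an assumption, and the example word frequencies are invented.

```python
# Sketch of the "simple maths" keyness score used by the Sketch Engine:
#   score(w) = (fpm_focus(w) + k) / (fpm_reference(w) + k)
# where fpm is the normalised frequency per million tokens and k is a
# smoothing constant (k = 1 here). Words are ranked by this score.
def keyness(freq_focus: int, size_focus: int,
            freq_ref: int, size_ref: int, k: float = 1.0) -> float:
    fpm_focus = freq_focus / size_focus * 1_000_000
    fpm_ref = freq_ref / size_ref * 1_000_000
    return (fpm_focus + k) / (fpm_ref + k)

# Toy example using the en_1 (FNS, 654,463 tokens) and en_0
# (nFNS, 669,519 tokens) subcorpus sizes from the token table above;
# the word frequencies themselves are invented.
print(keyness(freq_focus=120, size_focus=654_463,
              freq_ref=3, size_ref=669_519))
```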
Modifier (Spanish Corpus) | nFNS | FNS | Modifier (English Corpus) | nFNS | FNS
---|---|---|---|---|---
vial | 2 | 0 | single-car | 1 | 0
infortunado | 1 | 0 | Dangote | 1 | 0
ferroviario | 1 | 0 | motorcycle | 2 | 0
mortal | 1 | 0 | truck | 1 | 0
aéreo | 1 | 0 | train | 1 | 0
múltiple | 1 | 0 | fatal | 1 | 0
grave | 1 | 0 | car | 0 | 1
laboral | 2 | 2 | theme | 0 | 1
aparatoso | 1 | 5 | Park | 0 | 1
propio | 0 | 2 | tragic | 0 | 1
cerebrovascular | 0 | 1 | snowmobile | 0 | 1
automovilístico | 0 | 2 | N.L. | 0 | 1
trágico | 0 | 8 | | |
terrible | 0 | 19 | | |
Model | Acc (English) | Std. Dev. (English) | Acc (Spanish) | Std. Dev. (Spanish)
---|---|---|---|---
CNN | 0.715 | 0.022 | 0.815 | 0.005
Multi-CNN | 0.545 | 0.004 | 0.670 | 0.013
BERT | 0.625 | 0.036 | 0.735 | 0.018
RoBERTa | 0.695 | 0.014 | 0.735 | 0.024
ELECTRA | 0.630 | 0.016 | 0.760 | 0.015
DistilBERT | 0.645 | 0.016 | 0.725 | 0.014
XLNet | 0.675 | 0.020 | 0.710 | 0.070
Longformer | 0.685 | 0.041 | 0.695 | 0.007
Naive Bayes | 0.695 | - | 0.695 | -
SVM | 0.630 | - | 0.755 | -
Position | Team | English | Spanish | AVG |
---|---|---|---|---
1 | bolonyai20 | 0.750 | 0.805 | 0.777 |
1 | pizarro20 | 0.735 | 0.820 | 0.777 |
- | SYMANTO (LDSE) | 0.745 | 0.790 | 0.767 |
3 | koloski20 | 0.715 | 0.795 | 0.755 |
3 | deborjavalero20 | 0.730 | 0.780 | 0.755 |
3 | vogel20 | 0.725 | 0.785 | 0.755 |
- | SVM + c nGrams | 0.680 | 0.790 | 0.735 |
- | NN + w nGrams | 0.690 | 0.700 | 0.695 |
- | LSTM | 0.560 | 0.600 | 0.580 |
- | RANDOM | 0.510 | 0.500 | 0.505 |
Example | Tweet Text
---|---
1 | RT #USER#: Remedio casero para limpiar las juntas del azulejo. #URL#
2 | La venta de medicamentos con receta bajó todos los años entre 2016 y 2019. Además, en 2018 la mitad de los hogares pobres de CABA y el Conurbano debieron dejar de comprar remedios por problemas económicos. Más info en esta nota #URL# de #USER#. #URL#
3 | Poderoso remedio casero para eliminar el colesterol de los vasos sanguíneos y perder peso -… #URL#