Multimodal Religiously Hateful Social Media Memes Classification Based on Textual and Image Data

Published: 07 August 2024

Abstract

Multimodal hateful social media meme detection is an important and challenging problem in the vision-language domain. Recent studies report high accuracy on such multimodal tasks, driven by datasets that support better joint multimodal embeddings and narrow the semantic gap between modalities. Among published datasets, however, religiously hateful meme detection remains largely unexplored. While higher accuracy on religiously hateful memes is needed, deep learning–based models often suffer from inductive bias. This work addresses these issues with the following contributions. First, a religiously hateful memes dataset is created and released publicly to advance research on hateful religious meme detection: over 2,000 meme images are collected together with their corresponding text. The proposed approach compares and fine-tunes VisualBERT, pre-trained on the Conceptual Captions (CC) dataset, for the downstream classification task. We also extend the dataset with the Facebook hateful memes dataset. For the early fusion model, we extract visual features using a ResNeXt-152 (Aggregated Residual Transformations)–based Mask Region-based Convolutional Neural Network (Mask R-CNN) and encode text with uncased Bidirectional Encoder Representations from Transformers (BERT). We use the Area Under the Receiver Operating Characteristic Curve (AUROC) as the primary evaluation metric to measure model separability. Results show that the proposed approach achieves a higher AUROC score of 78%, indicating strong separability, with an accuracy of 70%. Considering the dataset size, it shows comparatively superior performance, including against ensemble-based machine learning approaches.
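As a concrete illustration of the evaluation metric (this is not the authors' code), AUROC can be computed directly from its pairwise-ranking definition: it is the probability that a randomly chosen positive (hateful) example receives a higher score than a randomly chosen negative one, with ties counting as half. The labels and scores below are invented for the example.

```python
def auroc(labels, scores):
    """AUROC via its pairwise-ranking definition: the fraction of
    (positive, negative) pairs where the positive example is scored
    higher than the negative one (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up ground-truth labels and model scores for four memes:
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

This pairwise formulation is equivalent to the area under the ROC curve and makes clear why AUROC measures separability rather than raw accuracy: it depends only on the ranking of scores, not on any particular decision threshold.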


Cited By

  • (2024) Enhancing Multimodal Understanding With LIUS. Journal of Organizational and End User Computing 36, 1 (2024), 1–17. DOI: 10.4018/JOEUC.336276. Online publication date: 12-Jan-2024.
  • (2024) A Hybrid Deep BiLSTM-CNN for Hate Speech Detection in Multi-social media. ACM Transactions on Asian and Low-Resource Language Information Processing 23, 8 (2024), 1–22. DOI: 10.1145/3657635. Online publication date: 6-May-2024.
  • (2024) Flexible margins and multiple samples learning to enhance lexical semantic similarity. Engineering Applications of Artificial Intelligence 133, Part C (2024). DOI: 10.1016/j.engappai.2024.108275. Online publication date: 1-Jul-2024.

    Published In

    ACM Transactions on Asian and Low-Resource Language Information Processing  Volume 23, Issue 8
    August 2024
    343 pages
    EISSN:2375-4702
    DOI:10.1145/3613611
    Issue’s Table of Contents

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 07 August 2024
    Online AM: 16 September 2023
    Accepted: 23 August 2023
    Revised: 08 July 2023
    Received: 03 May 2023
    Published in TALLIP Volume 23, Issue 8

    Author Tags

    1. Social media mining
    2. multimodal
    3. hateful memes
    4. Transformers-based multimodal bidirectional encoder representations
    5. memes
    6. deep learning
    7. memes classification

    Qualifiers

    • Research-article
