DOI: 10.1145/3477495.3532019
Research article

MET-Meme: A Multimodal Meme Dataset Rich in Metaphors

Published: 07 July 2022

    Abstract

    Memes have become a popular means of communication for Internet users worldwide. Understanding Internet memes is one of the trickiest challenges in natural language processing (NLP) due to their non-standard writing and network vocabulary. Recently, many linguists have suggested that memes contain rich metaphorical information; however, existing research ignores this key feature. Therefore, to incorporate informative metaphors into meme analysis, we introduce a novel multimodal meme dataset called MET-Meme, which is rich in metaphorical features. It contains 10,045 text-image pairs, with manual annotations of metaphor occurrence, sentiment categories, intentions, and degree of offensiveness. Moreover, we propose a range of strong baselines to demonstrate the importance of combining metaphorical features for meme sentiment analysis and semantic understanding tasks, respectively. MET-Meme and its code are released publicly for research at https://github.com/liaolianfoka/MET-Meme-A-Multi-modal-Meme-Dataset-Rich-in-Metaphors.
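    The abstract lists four manually annotated dimensions per text-image pair (metaphor occurrence, sentiment category, intention, offensiveness degree). As a rough illustration only, one such record might be modeled as below; the field names, types, and label values are hypothetical assumptions for this sketch, not the dataset's actual schema (see the GitHub repository for the real data format):

    ```python
    from dataclasses import dataclass

    # Hypothetical record for one MET-Meme annotation.
    # All field names and example values are illustrative, not the released schema.
    @dataclass
    class MemeAnnotation:
        image_path: str      # path to the meme image
        text: str            # text embedded in the meme
        has_metaphor: bool   # metaphor occurrence flag
        sentiment: str       # sentiment category label
        intention: str       # intention label
        offensiveness: int   # degree of offensiveness

    sample = MemeAnnotation(
        image_path="memes/0001.png",
        text="when the WiFi drops mid-meeting",
        has_metaphor=True,
        sentiment="humorous",
        intention="entertainment",
        offensiveness=0,
    )
    print(sample.has_metaphor)
    ```

    A structure along these lines would let the metaphor flag be consumed as an extra input feature by the baselines the paper describes.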


    Cited By

    • (2024) Understanding (Dark) Humour with Internet Meme Analysis. Companion Proceedings of the ACM on Web Conference 2024, 1276-1279. DOI: 10.1145/3589335.3641249
    • (2024) MemeCraft: Contextual and Stance-Driven Multimodal Meme Generation. Proceedings of the ACM on Web Conference 2024, 4642-4652. DOI: 10.1145/3589334.3648151
    • (2024) Capturing the Concept Projection in Metaphorical Memes for Downstream Learning Tasks. IEEE Access, 12, 1250-1265. DOI: 10.1109/ACCESS.2023.3347988
    • (2024) Towards determining perceived audience intent for multimodal social media posts using the theory of reasoned action. Scientific Reports, 14(1). DOI: 10.1038/s41598-024-60299-w
    • (2024) SC-Net: Multimodal metaphor detection using semantic conflicts. Neurocomputing, 594. DOI: 10.1016/j.neucom.2024.127825
    • (2024) What do they "meme"? A metaphor-aware multi-modal multi-task framework for fine-grained meme understanding. Knowledge-Based Systems, 294. DOI: 10.1016/j.knosys.2024.111778
    • (2024) VIEMF. Information Processing and Management, 61(3). DOI: 10.1016/j.ipm.2024.103652
    • (2024) A Comprehensive Overview of CFN From a Commonsense Perspective. Machine Intelligence Research, 21(2), 239-256. DOI: 10.1007/s11633-023-1450-8
    • (2023) Multimodal Deep Learning with Discriminant Descriptors for Offensive Memes Detection. Journal of Data and Information Quality, 15(3), 1-16. DOI: 10.1145/3597308
    • (2023) Comparative Study on Sentiment Analysis in Image-Based Memes. 2023 9th International Conference on Smart Computing and Communications (ICSCC), 518-523. DOI: 10.1109/ICSCC59169.2023.10334945

    Published In

    SIGIR '22: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
    July 2022
    3569 pages
    ISBN:9781450387323
    DOI:10.1145/3477495

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. meme dataset
    2. metaphor
    3. multimodal learning
    4. sentiment analysis

    Qualifiers

    • Research-article

    Conference

    SIGIR '22

    Acceptance Rates

    Overall Acceptance Rate 792 of 3,983 submissions, 20%


