Emotion detection in text: advances in sentiment analysis

  • ORIGINAL ARTICLE
International Journal of System Assurance Engineering and Management

Abstract

In the era of rapid internet expansion, social networking platforms have become indispensable channels for individuals to convey their emotions and opinions to a global audience. People employ various media types, including text, images, audio, and video, to articulate their sentiments. However, the sheer volume of textual content on web-based social media platforms can be overwhelming: these platforms generate an enormous amount of unstructured data every second. To gain insights into human psychology, it is imperative to process this data as quickly as it is produced. This can be achieved through sentiment analysis, here implemented as a Transformer-based model (TBM) that discerns the polarity of text, determining whether the author holds a positive, negative, or neutral stance towards a subject, service, individual, or location. The performance of such a model varies with the dataset used, the specific Transformer variant, the model hyperparameters, and the evaluation metrics employed. Findings from this study show that social media users with depression or anorexia may be identified by the presence and unpredictability of their emotions. The proposed model analyzes text data and makes sentiment predictions effectively, and the TBM strategy outperformed competing methods across the metrics considered. For 50 users, TBM achieved a precision of 94.23%, an accuracy of 89.13%, and a recall of 91.59%. As the user count increased to 100, 150, 200, and 250, TBM consistently outperformed the alternatives, reaching up to 97.03% precision, 92.89% accuracy, and 93.51% recall. These results underscore the effectiveness of the TBM approach over the alternative strategies.
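The core mechanism that Transformer-based sentiment models such as the TBM build on is scaled dot-product self-attention over token embeddings, followed by a pooled classification head that scores the three polarity classes. The sketch below is a minimal illustration of that pipeline using NumPy only; the embeddings and the linear head are random placeholders for illustration, not the authors' trained model or data.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token similarities
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

# Toy "sentence": 4 token embeddings of dimension 8 (random stand-ins).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))

# Self-attention: queries, keys, and values all come from the same tokens.
out, w = scaled_dot_product_attention(X, X, X)

# Mean-pool the contextualized tokens, then apply an (untrained, random)
# linear head over the three polarity classes used in the paper.
logits = out.mean(axis=0) @ rng.normal(size=(8, 3))
polarity = ["negative", "neutral", "positive"][int(np.argmax(logits))]
```

In a real TBM this attention block is stacked in multiple layers with learned query/key/value projections, and the classification head is trained on labeled sentiment data rather than randomly initialized.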



Funding

No funding received.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to R. Tamilkodi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Tamilkodi, R., Sujatha, B. & Leelavathy, N. Emotion detection in text: advances in sentiment analysis. Int J Syst Assur Eng Manag 16, 552–560 (2025). https://doi.org/10.1007/s13198-024-02597-0
