
Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis

Hui Wu, Xiaodong Shi


Abstract
Cross-domain sentiment analysis has achieved promising results with the help of pre-trained language models. Since the emergence of GPT-3, prompt tuning has been widely explored to enable better semantic modeling in many natural language processing tasks. However, directly using a fixed, predefined template for cross-domain research cannot model the different distributions of the [MASK] token across domains, thus underutilizing the prompt tuning technique. In this paper, we propose a novel Adversarial Soft Prompt Tuning method (AdSPT) to better model cross-domain sentiment analysis. On the one hand, AdSPT adopts separate soft prompts instead of hard templates to learn different vectors for different domains, thus alleviating the domain discrepancy of the [MASK] token in the masked language modeling task. On the other hand, AdSPT uses a novel domain adversarial training strategy to learn domain-invariant representations between each source domain and the target domain. Experiments on a publicly available sentiment analysis dataset show that our model achieves new state-of-the-art results for both single-source and multi-source domain adaptation.
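
The abstract describes two mechanisms: learnable per-domain soft prompts in place of a fixed hard template, and domain adversarial training so that the [MASK] representation becomes domain-invariant. Below is a minimal, hypothetical PyTorch sketch of those two ideas, not the authors' implementation; all class and parameter names (DomainSoftPrompts, DomainDiscriminator, GradientReversal, prompt_len, lambd) are illustrative assumptions, and gradient reversal is used here as the standard device for adversarial domain training, which may differ from the paper's exact objective.

```python
# Hypothetical sketch of per-domain soft prompts + domain adversarial training.
# Assumes a BERT-style masked language model provides the [MASK] hidden state.

import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in the
    backward pass. Standard building block for domain adversarial training."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DomainSoftPrompts(nn.Module):
    """One trainable soft prompt (a sequence of embedding vectors) per domain,
    instead of a single fixed hard template shared across all domains."""

    def __init__(self, num_domains, prompt_len, hidden_size):
        super().__init__()
        self.prompts = nn.Parameter(
            torch.randn(num_domains, prompt_len, hidden_size) * 0.02
        )

    def forward(self, domain_ids):
        # domain_ids: (batch,) -> (batch, prompt_len, hidden_size)
        return self.prompts[domain_ids]


class DomainDiscriminator(nn.Module):
    """Predicts the domain of the [MASK] representation; gradients are reversed
    so the encoder is pushed toward domain-invariant features."""

    def __init__(self, hidden_size, num_domains, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_domains),
        )

    def forward(self, mask_hidden):
        reversed_hidden = GradientReversal.apply(mask_hidden, self.lambd)
        return self.classifier(reversed_hidden)


if __name__ == "__main__":
    batch, hidden, prompt_len, num_domains = 4, 768, 10, 2
    prompts = DomainSoftPrompts(num_domains, prompt_len, hidden)
    disc = DomainDiscriminator(hidden, num_domains)

    domain_ids = torch.tensor([0, 0, 1, 1])
    prompt_embeds = prompts(domain_ids)        # prepended to the input embeddings
    mask_hidden = torch.randn(batch, hidden)   # [MASK] hidden state from the encoder
    domain_logits = disc(mask_hidden)          # fed to a domain classification loss
    print(prompt_embeds.shape, domain_logits.shape)
```

In this reading, the sentiment loss comes from the masked language modeling head over the [MASK] position, while the domain loss flows back through the reversal layer, so the two objectives pull the shared encoder toward representations that predict sentiment but not domain.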
Anthology ID:
2022.acl-long.174
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2438–2447
URL:
https://aclanthology.org/2022.acl-long.174
DOI:
10.18653/v1/2022.acl-long.174
Cite (ACL):
Hui Wu and Xiaodong Shi. 2022. Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2438–2447, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis (Wu & Shi, ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.174.pdf