PRCBERT: Prompt Learning for Requirement Classification Using BERT-based Pretrained Language Models
Proceedings of the 37th IEEE/ACM International Conference on Automated …, 2022
Software requirement classification is a longstanding and important problem in requirements engineering. Previous studies have applied various machine learning techniques to this problem, including Support Vector Machines (SVM) and decision trees. With the recent popularity of NLP techniques, the state-of-the-art approach NoRBERT utilizes the pre-trained language model BERT and achieves satisfactory performance. However, the PROMISE dataset used by existing approaches consists of only hundreds of requirements, which are outdated with respect to today's technology and market trends. Moreover, the NLP techniques applied in these approaches may be obsolete. In this paper, we propose PRCBERT, an approach of prompt learning for requirement classification using BERT-based pre-trained language models, which applies flexible prompt templates to achieve accurate requirement classification. Experiments conducted on two existing small-scale requirement datasets (PROMISE and NFR-Review) and our collected large-scale requirement dataset NFR-SO show that PRCBERT exhibits moderately better classification performance than NoRBERT and MLM-BERT (BERT with the standard prompt template). On the de-labeled NFR-Review and NFR-SO datasets, Trans_PRCBERT (the version of PRCBERT fine-tuned on PROMISE) achieves satisfactory zero-shot performance, with F1-scores of 53.27% and 72.96%, when a self-learning strategy is enabled.
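The abstract does not spell out PRCBERT's prompt templates, but prompt learning over a masked language model generally recasts classification as a cloze task: the requirement is wrapped in a template containing a [MASK] token, and the model's score for each class's label word at that position decides the class. The following minimal sketch (not the authors' implementation) illustrates the idea, assuming a Hugging Face bert-base-uncased model, a hypothetical template, and hypothetical label words, none of which are taken from the paper:

```python
# Illustrative sketch of prompt-based (cloze-style) requirement classification
# with a masked language model. Template text, label words, and the class set
# are assumptions for demonstration only, not PRCBERT's actual design.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical verbalizer: each class maps to a single-token label word.
label_words = {"functional": "function", "security": "security", "performance": "performance"}

def classify(requirement: str) -> str:
    # Wrap the requirement in a cloze-style prompt template with a [MASK] slot.
    prompt = f"{requirement} This is a {tokenizer.mask_token} requirement."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Find the [MASK] position and score each label word's vocabulary id there.
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    mask_logits = logits[0, mask_pos.item()]
    scores = {
        label: mask_logits[tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in label_words.items()
    }
    return max(scores, key=scores.get)

print(classify("The system shall encrypt all stored passwords."))
```

In this framing, fine-tuning (as in Trans_PRCBERT on PROMISE) would adjust the MLM's predictions for the label words, while a flexible-template variant could swap or learn the surrounding prompt text rather than fixing it by hand.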
