DOI: 10.1145/3421515.3421519

Combined Method Based on Source Text and Representation for Text Enhancement

Published: 17 December 2020

Abstract

Text classification is a basic and important task in natural language processing (NLP). Existing text classification models are powerful, but training such a model requires a large labeled training set, and in practice sufficient data is often unavailable. Data scarcity falls into two main categories: cold start and low resources. Text enhancement methods are commonly used to address this problem. In this paper, source text enhancement and representation enhancement are combined to improve the enhancement effect. Five sets of experiments are designed to verify that our method is effective on different data sets and with different classifiers. The results show that accuracy is improved and the generalization ability of the classifier is enhanced to some extent. We also find that the enhancement factor and the size of the training data set are not positively correlated with the enhancement effect; the enhancement factor therefore needs to be selected according to the characteristics of the data.
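This page does not detail the paper's exact augmentation pipeline. As an illustrative sketch only, the two combined components could resemble an EDA-style token operation on the source text (reference [12]) and a mixup-style interpolation of representation vectors (reference [15]); the function names and parameters below are hypothetical, not taken from the paper.

```python
import random

def augment_text(tokens, n_swaps=1, seed=0):
    """Source-text enhancement: randomly swap n_swaps token pairs
    (an EDA-style operation on the raw text)."""
    rng = random.Random(seed)
    tokens = list(tokens)
    for _ in range(n_swaps):
        i = rng.randrange(len(tokens))
        j = rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def mix_representations(vec_a, vec_b, lam=0.7):
    """Representation enhancement: element-wise linear interpolation
    of two embedding vectors (mixup-style)."""
    return [lam * a + (1 - lam) * b for a, b in zip(vec_a, vec_b)]

# The "enhancement factor" controls how many augmented copies are produced
# per original example; the abstract notes it must be tuned per data set.
original = ["text", "classification", "needs", "labeled", "data"]
augmented = [augment_text(original, n_swaps=1, seed=s) for s in range(3)]
mixed = mix_representations([1.0, 0.0], [0.0, 1.0], lam=0.7)
```

Swapping only reorders tokens, so each augmented copy keeps the original vocabulary while presenting the classifier with a perturbed word order; the interpolation step instead produces new points between existing examples in embedding space.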

Supplementary Material

p61-li-supplement (p61-li-supplement.ppt)
Presentation slides

References

[1]
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2020. Do Not Have Enough Data? Deep Learning to the Rescue! In AAAI. 7383–7390.
[2]
Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. arXiv preprint arXiv:1705.00440 (2017).
[3]
Zhiting Hu, Bowen Tan, Russ R Salakhutdinov, Tom M Mitchell, and Eric P Xing. 2019. Learning data manipulation for augmentation and weighting. In Advances in Neural Information Processing Systems. 15764–15775.
[4]
Michał Jungiewicz and Aleksander Smywiński-Pohl. 2019. Towards textual data augmentation for neural networks: synonyms and maximum loss. Computer Science 20 (2019).
[5]
Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. arXiv preprint arXiv:1805.06201 (2018).
[6]
Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. 2019. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 3609–3619.
[7]
Nikolaos Malandrakis, Minmin Shen, Anuj Goyal, Shuyang Gao, Abhishek Sethi, and Angeliki Metallinou. 2019. Controlled text generation for data augmentation in intelligent artificial agents. arXiv preprint arXiv:1910.03487 (2019).
[8]
Vukosi Marivate and Tshephisho Sefara. 2020. Improving short text classification through global augmentation methods. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction. Springer, 385–399.
[9]
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition. 2818–2826.
[10]
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).
[11]
Fabio Henrique Kiyoiti dos Santos Tanaka and Claus Aranha. 2019. Data augmentation using GANs. arXiv preprint arXiv:1904.09135 (2019).
[12]
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196 (2019).
[13]
Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional BERT contextual augmentation. In International Conference on Computational Science. Springer, 84–95.
[14]
Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. 2019. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848 (2019).
[15]
Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2017. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 (2017).

Cited By

  • (2024) Research on entity extraction method for rehabilitation medicine knowledge management. 2024 IEEE 9th International Conference on Data Science in Cyberspace (DSC). 10.1109/DSC63484.2024.00046, 294–299. Online publication date: 23-Aug-2024.


Published In

SSPS '20: Proceedings of the 2020 2nd Symposium on Signal Processing Systems
July 2020
125 pages
ISBN:9781450388627
DOI:10.1145/3421515
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. combined enhancement
  2. deep neural networks
  3. natural language processing
  4. text classifier
  5. text enhancement

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SSPS 2020

