
SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters

Shwai He, Liang Ding, Daize Dong, Jeremy Zhang, Dacheng Tao


Abstract
Adapter Tuning, which freezes the pretrained language models (PLMs) and fine-tunes only a few extra modules, has become an appealing efficient alternative to full model fine-tuning. Although computationally efficient, recent Adapters often increase parameters (e.g., the bottleneck dimension) to match the performance of full model fine-tuning, which we argue goes against their original intention. In this work, we re-examine the parameter-efficiency of Adapters through the lens of network pruning (we name this plug-in concept SparseAdapter) and find that SparseAdapter can achieve comparable or better performance than standard Adapters when the sparse ratio reaches up to 80%. Based on our findings, we introduce an easy but effective setting, “Large-Sparse”, to improve the model capacity of Adapters under the same parameter budget. Experiments with five competitive Adapters on three advanced PLMs show that, with a proper sparse method (e.g., SNIP) and ratio (e.g., 40%), SparseAdapters consistently outperform their corresponding counterparts. Encouragingly, with the Large-Sparse setting, we obtain further appealing gains, even outperforming full fine-tuning by a large margin.
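
The abstract describes pruning adapter weights at initialization with a sensitivity score such as SNIP, then fine-tuning only the surviving connections. The sketch below is an illustrative reconstruction of that idea, not the authors' released code: the Adapter module, the calibration loss, and all hyperparameters (hidden size 768, bottleneck size 256, sparse ratio 0.4, batch of random inputs) are assumptions made for the example.

import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Standard bottleneck adapter: down-projection, nonlinearity, up-projection, residual."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))


def snip_prune(adapter: Adapter, calib_x: torch.Tensor, sparse_ratio: float):
    """Zero out the `sparse_ratio` fraction of adapter weights with the smallest
    SNIP-style scores |weight * grad|, computed from one forward/backward pass on a
    calibration batch. The simple reconstruction-style loss is a placeholder assumption."""
    weights = [adapter.down.weight, adapter.up.weight]
    loss = adapter(calib_x).pow(2).mean()  # placeholder objective for scoring only
    grads = torch.autograd.grad(loss, weights)
    scores = torch.cat([(w * g).abs().flatten() for w, g in zip(weights, grads)])
    k = int(sparse_ratio * scores.numel())
    threshold = torch.kthvalue(scores, k).values if k > 0 else scores.min() - 1
    with torch.no_grad():
        for w, g in zip(weights, grads):
            mask = ((w * g).abs() > threshold).float()
            w.mul_(mask)  # a full implementation would keep this mask fixed during fine-tuning


# "Large-Sparse" (illustrative): enlarge the bottleneck while raising the sparse ratio,
# so the number of nonzero adapter parameters stays within the original budget.
adapter = Adapter(hidden_dim=768, bottleneck_dim=256)
snip_prune(adapter, torch.randn(8, 768), sparse_ratio=0.4)

Under this reading, the Large-Sparse setting trades a larger bottleneck dimension against a higher sparse ratio so that the count of trainable nonzero parameters is unchanged while the adapter's representational capacity grows.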
Anthology ID:
2022.findings-emnlp.160
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2184–2190
URL:
https://aclanthology.org/2022.findings-emnlp.160
DOI:
10.18653/v1/2022.findings-emnlp.160
Cite (ACL):
Shwai He, Liang Ding, Daize Dong, Jeremy Zhang, and Dacheng Tao. 2022. SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2184–2190, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters (He et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.160.pdf