
Learning Succinct Models: Pipelined Compression with L1-Regularization, Hashing, Elias-Fano Indices, and Quantization

Hajime Senuma, Akiko Aizawa


Abstract
The recent proliferation of smart devices necessitates methods to learn small-sized models. This paper demonstrates that if there are m features in total but only n = o(√m) features are required to distinguish examples, then with Ω(log m) training examples and reasonable settings, it is possible to obtain a good model in a succinct representation using n log₂(m/n) + o(m) bits, by using a pipeline of existing compression methods: L1-regularized logistic regression, feature hashing, Elias–Fano indices, and randomized quantization. An experiment shows that a model for a noun phrase chunking task, which requires 27 megabytes with an existing library, can be compressed to less than 13 kilobytes without notable loss of accuracy.
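The sketch below illustrates the four-stage pipeline named in the abstract. It is not the authors' implementation: it assumes scikit-learn's FeatureHasher and L1-penalized LogisticRegression for the first two stages, shows the Elias–Fano stage only as a space-bound calculation rather than a full encoder, and substitutes plain deterministic rounding for the paper's randomized quantization. The bucket count m = 2**18 and the toy feature strings are made-up values for illustration.

import numpy as np
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

def elias_fano_size_bits(indices, universe):
    """Space taken by an Elias-Fano index over n sorted positions drawn
    from [0, universe): n * ceil(log2(universe/n)) low bits plus at most
    2n bits of unary-coded high parts, i.e. n log2(m/n) + O(n) overall."""
    n = len(indices)
    if n == 0:
        return 0
    low_bits = max(int(np.ceil(np.log2(universe / n))), 0)
    return n * low_bits + 2 * n

# Stages 1-2: hash string features into m buckets, then fit an
# L1-regularized model so that only a few weights survive as nonzeros.
m = 2 ** 18
hasher = FeatureHasher(n_features=m, input_type="string")
X = hasher.transform([["w=the", "pos=DT"], ["w=dog", "pos=NN"]])
y = [0, 1]
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)

# Stage 3: store only the sorted positions of nonzero weights; an
# Elias-Fano index over these positions yields the n log2(m/n) term.
nonzero = np.flatnonzero(clf.coef_[0])
print(f"{len(nonzero)} nonzero weights out of {m} hashed features")
print(f"Elias-Fano index: ~{elias_fano_size_bits(nonzero, m)} bits")

# Stage 4 (simplified): quantize surviving weights to b bits each; the
# paper uses randomized rounding, plain rounding is shown for brevity.
b = 8
w = clf.coef_[0][nonzero]
scale = np.abs(w).max() / (2 ** (b - 1) - 1)
quantized = np.round(w / scale).astype(np.int8)

With n nonzero weights out of m hashed features, the model then costs roughly the Elias–Fano index bits plus n·b bits for the quantized weights, which is how a multi-megabyte dense weight vector can shrink to kilobytes when n ≪ m.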
Anthology ID:
C16-1261
Volume:
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Month:
December
Year:
2016
Address:
Osaka, Japan
Editors:
Yuji Matsumoto, Rashmi Prasad
Venue:
COLING
Publisher:
The COLING 2016 Organizing Committee
Pages:
2774–2784
URL:
https://aclanthology.org/C16-1261
Cite (ACL):
Hajime Senuma and Akiko Aizawa. 2016. Learning Succinct Models: Pipelined Compression with L1-Regularization, Hashing, Elias-Fano Indices, and Quantization. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2774–2784, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal):
Learning Succinct Models: Pipelined Compression with L1-Regularization, Hashing, Elias-Fano Indices, and Quantization (Senuma & Aizawa, COLING 2016)
PDF:
https://aclanthology.org/C16-1261.pdf