Compressing word embeddings via deep compositional code learning
R Shu, H Nakayama - arXiv preprint arXiv:1711.01068, 2017 - arxiv.org
Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifice in performance. For this purpose, we propose to construct the embeddings from a small number of basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt a multi-codebook quantization approach instead of a binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to learn the discrete codes directly in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show that the compression rate reaches 98% in a sentiment analysis task and 94% to 99% in machine translation tasks without performance loss. In both tasks, the proposed method can even improve model performance if the compression rate is slightly lowered. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.
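The following is a minimal sketch (not the authors' released implementation) of the idea described in the abstract: each word's embedding is composed as the sum of M basis vectors, one selected from each of M codebooks of size K, and the discrete selections are learned end-to-end with the Gumbel-softmax relaxation. The class and parameter names (e.g. `CompositionalCodeEmbedding`, `num_codebooks`, `codebook_size`) are illustrative assumptions, not taken from the paper.

```python
# Sketch of compositional code embeddings with Gumbel-softmax (assumed API names).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalCodeEmbedding(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_codebooks=8, codebook_size=16):
        super().__init__()
        # Per-word logits over the K entries of each of the M codebooks.
        self.code_logits = nn.Parameter(
            torch.zeros(vocab_size, num_codebooks, codebook_size))
        # M codebooks, each holding K basis vectors of dimension embed_dim.
        self.codebooks = nn.Parameter(
            torch.randn(num_codebooks, codebook_size, embed_dim) * 0.01)

    def forward(self, word_ids, tau=1.0, hard=False):
        logits = self.code_logits[word_ids]                     # (B, M, K)
        # Gumbel-softmax yields a differentiable, approximately one-hot
        # selection over codebook entries; hard=True uses the
        # straight-through estimator for fully discrete codes.
        onehot = F.gumbel_softmax(logits, tau=tau, hard=hard)   # (B, M, K)
        # Compose the embedding as the sum of the selected basis vectors.
        return torch.einsum('bmk,mkd->bd', onehot, self.codebooks)

    def discrete_codes(self, word_ids):
        # After training, only these integer codes (M values in [0, K))
        # and the small codebooks need to be stored, replacing the
        # full vocab_size x embed_dim embedding matrix.
        return self.code_logits[word_ids].argmax(dim=-1)        # (B, M)
```

Under these assumptions, the storage cost drops from `vocab_size * embed_dim` floats to `M * K * embed_dim` floats plus `M * log2(K)` bits per word, which is the source of the compression rates reported in the abstract.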