Towards lossless encoding of sentences
arXiv preprint arXiv:1906.01659, 2019
A lot of work has been done in the field of image compression via machine learning, but not much attention has been given to the compression of natural language. Compressing text into lossless representations while keeping features easily retrievable is not a trivial task, yet it has huge benefits. Most methods designed to produce feature-rich sentence embeddings focus solely on performing well on downstream tasks and are unable to properly reconstruct the original sequence from the learned embedding. In this work, we propose a near-lossless method for encoding long sequences of text, as well as all of their sub-sequences, into feature-rich representations. We test our method on sentiment analysis and show good performance across all sub-sentence and sentence embeddings.