
Evaluating the Consistency of Word Embeddings from Small Data

Jelke Bloem, Antske Fokkens, Aurélie Herbelot


Abstract
In this work, we address the evaluation of distributional semantic models trained on smaller, domain-specific texts, in particular philosophical text. Specifically, we inspect the behaviour of models that use a pre-trained background space in learning. We propose a measure of consistency which can be used as an evaluation metric when no in-domain gold-standard data is available. This measure simply computes the ability of a model to learn similar embeddings from different parts of some homogeneous data. We show that in spite of being a simple evaluation, consistency actually depends on various combinations of factors, including the nature of the data itself, the model used to train the semantic space, and the frequency of the learnt terms, both in the background space and in the in-domain data of interest.
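As a rough illustration of the idea described in the abstract, the following minimal Python sketch (not the authors' implementation; all names and the toy vectors are made up) compares embeddings for the same target terms learned from two different partitions of a homogeneous corpus, assuming both partitions were trained into the same fixed background space so that their vectors are directly comparable, and reports the mean pairwise cosine similarity as a consistency score.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def consistency(emb_a, emb_b, terms):
    # Mean cosine similarity over terms present in both partition-specific spaces.
    shared = [t for t in terms if t in emb_a and t in emb_b]
    if not shared:
        return float("nan")
    return float(np.mean([cosine(emb_a[t], emb_b[t]) for t in shared]))

# Toy usage with made-up 3-dimensional vectors standing in for learned embeddings.
emb_part1 = {"concept": np.array([0.2, 0.9, 0.1]), "notion": np.array([0.5, 0.4, 0.7])}
emb_part2 = {"concept": np.array([0.3, 0.8, 0.2]), "notion": np.array([0.4, 0.5, 0.6])}
print(consistency(emb_part1, emb_part2, ["concept", "notion"]))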
Anthology ID:
R19-1016
Volume:
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Month:
September
Year:
2019
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
132–141
URL:
https://aclanthology.org/R19-1016
DOI:
10.26615/978-954-452-056-4_016
Cite (ACL):
Jelke Bloem, Antske Fokkens, and Aurélie Herbelot. 2019. Evaluating the Consistency of Word Embeddings from Small Data. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 132–141, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal):
Evaluating the Consistency of Word Embeddings from Small Data (Bloem et al., RANLP 2019)
PDF:
https://aclanthology.org/R19-1016.pdf