Abstract
Skip-gram (word2vec) is a recent method for creating vector representations of words (“distributed word representations”) using a neural network. The representation has gained popularity in various areas of natural language processing because it seems to capture syntactic and semantic information about words without any explicit supervision in this respect.
We propose SubGram, a refinement of the Skip-gram model that also considers word structure during training, and achieve large gains on the original Skip-gram test set.
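The exact substring features are defined in the paper body, which is not shown on this page; as a rough, non-authoritative illustration of the general idea, the sketch below enumerates bounded character n-grams for a word, in the spirit of substring-aware Skip-gram extensions. The function name, the n-gram range, and the boundary markers are our illustrative assumptions, not the authors' specification.

```python
# Illustrative sketch (not the authors' code): enumerate the character
# substrings that could augment a Skip-gram input for a word.
# Assumptions: substrings of length min_n..max_n taken from the word
# wrapped in boundary markers, plus the full word itself.

def substring_features(word, min_n=2, max_n=5):
    """Return the word plus its boundary-marked character substrings."""
    bounded = "^" + word + "$"   # mark word boundaries
    feats = {word}
    for n in range(min_n, max_n + 1):
        for i in range(len(bounded) - n + 1):
            feats.add(bounded[i:i + n])
    return feats

print(sorted(substring_features("cats", min_n=3, max_n=4)))
# ['^ca', '^cat', 'ats', 'ats$', 'cat', 'cats', 'ts$']
```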
Notes
- 1.
http://radimrehurek.com/gensim Gensim implements the model twice: in Python and as an optimized C version. For our prototype, we opted to modify the Python version, which unfortunately made the code about 100 times slower and forced us to train the model only on the 96M-word corpus, as opposed to the 100,000M words of Mikolov's word2vec training data used to train the released model (see the baseline training sketch after these notes).
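For orientation, here is a minimal, hedged sketch of training the unmodified Skip-gram baseline with Gensim's Python implementation mentioned in note 1; it is not the SubGram prototype. Keyword arguments follow Gensim 4.x (older releases use `size` instead of `vector_size`), and the toy corpus is ours.

```python
# Hedged sketch of the plain Skip-gram baseline via Gensim's Python
# implementation; the SubGram modification itself is not shown here.
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "barked"]]  # toy corpus

model = Word2Vec(
    sentences,
    sg=1,             # 1 = Skip-gram (0 would be CBOW)
    vector_size=100,  # dimensionality of the word vectors
    window=5,         # context window size
    min_count=1,      # keep all words in this toy example
    workers=1,        # single worker for reproducibility
)

vec = model.wv["cat"]                        # the learned vector
print(model.wv.most_similar("cat", topn=2))  # nearest neighbours
```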
References
Lazaridou, A., Pham, N.T., Baroni, M.: Combining language and vision with a multimodal skip-gram model (2015). arXiv preprint arXiv:1501.02598
Weston, J., Bengio, S., Usunier, N.: Wsabie: scaling up to large vocabulary image annotation. In: IJCAI, vol. 11 (2011)
Schwenk, H., Gauvain, J.L.: Neural network language models for conversational speech recognition. In: INTERSPEECH (2004)
Schwenk, H., Dchelotte, D., Gauvain, J.L.: Continuous space language models for statistical machine translation. In: Proceedings of the COLING/ACL on Main Conference Poster Sessions (2006)
Mnih, A., Hinton, G.: Three new graphical models for statistical language modelling. In: Proceedings of the 24th International Conference on Machine Learning (2007)
Soricut, R., Och, F.: Unsupervised morphology induction using word embeddings. In: Proceedings of NAACL (2015)
Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph and text jointly embedding. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics (2014)
Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space (2013). arXiv preprint arXiv:1301.3781
Morin, F., Bengio, Y.: Hierarchical probabilistic neural network language model. In: Proceedings of the International Workshop on AI and Statistics (2005)
Lin, Q., Cao, Y., Nie, Z., Rui, Y.: Learning word representation considering proximity and ambiguity. In: Twenty-Eighth AAAI Conference on Artificial Intelligence (2014)
Kim, Y., Jernite, Y., Sontag, D., Rush, A.M.: Character-aware neural language models (2015). arXiv preprint arXiv:1508.06615
Cui, Q., Gao, B., Bian, J., Qiu, S., Liu, T.Y.: A framework for learning knowledge-powered word embedding (2014)
Bian, J., Gao, B., Liu, T.Y.: Knowledge-powered deep learning for word embedding. In: Machine Learning and Knowledge Discovery in Databases (2014)
Vylomova, E., Rimell, L., Cohn, T., Baldwin, T.: Take and took, gaggle and goose, book and read: evaluating the utility of vector differences for lexical relation learning (2015). arXiv preprint arXiv:1509.01692
Bojar, O., Dušek, O., Kocmi, T., Libovický, J., Novák, M., Popel, M., Sudarikov, R., Variš, D.: CzEng 1.6: enlarged Czech-English parallel corpus with processing tools dockered. In: Sojka, P., et al. (eds.) TSD 2016. LNAI, vol. 9924, pp. 231–238. Springer International Publishing, Heidelberg (2016)
Acknowledgment
This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 645452 (QT21), the grant GAUK 8502/2016, and SVV project number 260 333.
This work has been using language resources developed, stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this paper
Kocmi, T., Bojar, O. (2016). SubGram: Extending Skip-Gram Word Representation with Substrings. In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds) Text, Speech, and Dialogue. TSD 2016. Lecture Notes in Computer Science(), vol 9924. Springer, Cham. https://doi.org/10.1007/978-3-319-45510-5_21
DOI: https://doi.org/10.1007/978-3-319-45510-5_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-45509-9
Online ISBN: 978-3-319-45510-5