
Distributed representations of words and phrases and their compositionality

Published: 05 December 2013

Abstract

The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling frequent words we obtain a significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax, called negative sampling.
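The two extensions above can be sketched in a few lines. Below is a minimal NumPy illustration, not the authors' implementation: `keep_prob` follows the paper's subsampling rule (keep word w with probability sqrt(t/f(w))), and `neg_sampling_step` takes one SGD step on the negative-sampling objective log σ(v_out·v_in) + Σ_k log σ(−v_neg[k]·v_in). The learning rate and threshold t are assumed illustrative defaults.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def keep_prob(freq, t=1e-5):
    # Subsampling of frequent words: keep word w with probability
    # sqrt(t / f(w)), so very frequent words are mostly discarded.
    # t = 1e-5 is an assumed typical threshold.
    return min(1.0, (t / freq) ** 0.5)

def neg_sampling_step(v_in, v_out, v_neg, lr=0.025):
    # One SGD step maximizing
    #   log sigma(v_out . v_in) + sum_k log sigma(-v_neg[k] . v_in),
    # i.e. learn to distinguish the true context word from k noise words.
    g_out = sigmoid(v_out @ v_in) - 1.0   # gradient scale, positive pair
    g_neg = sigmoid(v_neg @ v_in)         # gradient scales, k noise words
    new_in = v_in - lr * (g_out * v_out + g_neg @ v_neg)
    new_out = v_out - lr * g_out * v_in
    new_neg = v_neg - lr * np.outer(g_neg, v_in)
    return new_in, new_out, new_neg
```

Each step pulls the input vector toward the observed context vector and pushes it away from the sampled noise vectors; in the paper, noise words are drawn from the unigram distribution raised to the 3/4 power.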
An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
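The phrase-finding method can be sketched as a simple bigram scorer. This is an illustrative Python version: the score follows the paper's formula score(a, b) = (count(ab) − δ) / (count(a)·count(b)), while the particular δ and threshold values here are assumptions (the paper runs several passes with a decreasing threshold).

```python
from collections import Counter

def find_phrases(tokens, delta=5, threshold=1e-4):
    # Score adjacent word pairs:
    #   score(a, b) = (count(a b) - delta) / (count(a) * count(b))
    # delta discounts pairs made of very rare words; delta and
    # threshold are illustrative values, not the paper's settings.
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    phrases = {}
    for (a, b), n in bi.items():
        score = (n - delta) / (uni[a] * uni[b])
        if score > threshold:
            phrases[(a, b)] = score
    return phrases
```

Pairs that co-occur far more often than chance (like "Air Canada") score high and can then be merged into single tokens for another training pass.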



    Published In

    NIPS'13: Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2
    December 2013
    3236 pages

    Publisher

    Curran Associates Inc.

    Red Hook, NY, United States

    Cited By

    • A Natural Language Processing Model for Automated Organization and Analysis of Intangible Cultural Heritage. Journal of Organizational and End User Computing 36(1):1-27, 2024. doi:10.4018/JOEUC.349736
    • A review on network representation learning with multi-granularity perspective. Intelligent Data Analysis 28(1):3-32, 2024. doi:10.3233/IDA-227328
    • A Spark Optimizer for Adaptive, Fine-Grained Parameter Tuning. Proceedings of the VLDB Endowment 17(11):3565-3579, 2024. doi:10.14778/3681954.3682021
    • Semantic constraints to represent common sense required in household actions for multimodal learning-from-observation robot. International Journal of Robotics Research 43(2):134-170, 2024. doi:10.1177/02783649231212929
    • Node classifications with DjCaNE. Journal of Information Science 50(4):977-990, 2024. doi:10.1177/01655515221111002
    • Coding-PTMs: How to Find Optimal Code Pre-trained Models for Code Embedding in Vulnerability Detection? Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 1732-1744, 2024. doi:10.1145/3691620.3695539
    • RCFG2Vec: Considering Long-Distance Dependency for Binary Code Similarity Detection. Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 770-782, 2024. doi:10.1145/3691620.3695070
    • The Potential of Multimodal Compositionality for Enhanced Recommendations through Sentiment Analysis. Companion Proceedings of the 26th International Conference on Multimodal Interaction, 26-30, 2024. doi:10.1145/3686215.3690145
    • PromptLink: Multi-template prompt learning with adversarial training for issue-commit link recovery. Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 461-467, 2024. doi:10.1145/3674805.3690751
    • Automatic Data Labeling for Software Vulnerability Prediction Models: How Far Are We? Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 131-142, 2024. doi:10.1145/3674805.3686675
