Abstract
The recently introduced Paragraph Vector (PV) is an efficient method for learning high-quality distributed representations of texts. From a probabilistic view, however, PV is an incomplete model: it models the generation of words but not of texts, which leads to two major limitations. First, without a text-level model, PV assumes texts are independent and thus cannot leverage corpus-wide information to help text representation learning. Second, without a generative model of texts, inferring representations for texts outside the training set becomes difficult. Although PV can be cast as an optimization problem so that representations for new texts can still be obtained, it loses its sound probabilistic interpretability in doing so. To tackle these problems, we first introduce the Generative Paragraph Vector (GPV), an extension of the Distributed Bag of Words version of Paragraph Vector (PV-DBOW) with a complete generative process. By defining a generative model over texts, we can further incorporate text labels and obtain a supervised version, the Supervised Generative Paragraph Vector (SGPV). Experiments on five text classification benchmark collections show that both the unsupervised and supervised architectures yield superior classification performance against state-of-the-art counterparts.
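To make the modeling gap concrete, the following is a minimal toy sketch of the PV-DBOW objective that GPV extends (the setup and names are my own illustration, not the authors' code): each document owns a vector that is trained to predict every word in that document through a softmax over the vocabulary. GPV, per the abstract, additionally defines a generative model over the texts themselves, i.e. a prior on the document vectors, so that representations for unseen texts can be inferred probabilistically rather than by ad-hoc optimization.

```python
import math
import random

random.seed(0)

# Tiny corpus; real PV-DBOW would use negative sampling or hierarchical
# softmax instead of a full softmax, which is only feasible at toy scale.
docs = [["good", "movie", "great"], ["bad", "boring", "movie"]]
vocab = sorted({w for d in docs for w in d})
w2i = {w: i for i, w in enumerate(vocab)}
dim, lr = 8, 0.1

# One trainable vector per document, plus softmax output weights.
doc_vecs = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in docs]
softmax_w = [[random.gauss(0, 0.1) for _ in vocab] for _ in range(dim)]

def word_probs(v):
    """P(word | document vector v): softmax over the whole vocabulary."""
    logits = [sum(v[k] * softmax_w[k][j] for k in range(dim))
              for j in range(len(vocab))]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# SGD on the cross-entropy loss -log P(word | v_d) for each observed word.
for _ in range(300):
    for d, doc in enumerate(docs):
        for word in doc:
            p = word_probs(doc_vecs[d])
            grad = [p[j] - (1.0 if j == w2i[word] else 0.0)
                    for j in range(len(vocab))]      # d(loss)/d(logits)
            for k in range(dim):
                dv = sum(softmax_w[k][j] * grad[j] for j in range(len(vocab)))
                for j in range(len(vocab)):
                    softmax_w[k][j] -= lr * doc_vecs[d][k] * grad[j]
                doc_vecs[d][k] -= lr * dv            # update the doc vector
```

After training, each document vector assigns higher probability to its own words than to the other document's words. Note what is missing here, which is exactly the paper's point: `doc_vecs` is a free parameter per training document with no prior, so scoring a brand-new document requires re-running this optimization rather than performing probabilistic inference.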
Notes
- 3. http://nlp.stanford.edu/sentiment/. We train the model on both phrases and sentences but only score on sentences at test time, as in [10].
References
Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3(Jan), 993–1022 (2003)
Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., Harshman, R.: Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 41(6), 391 (1990)
Harris, Z.S.: Distributional structure. Word 10(2–3), 146–162 (1954)
Hill, F., Cho, K., Korhonen, A.: Learning distributed representations of sentences from unlabelled data. arXiv preprint arXiv:1602.03483 (2016)
Hofmann, T.: Probabilistic latent semantic indexing. In: SIGIR, pp. 50–57. ACM (1999)
Irsoy, O., Cardie, C.: Deep recursive neural networks for compositionality in language. In: Advances in Neural Information Processing Systems, pp. 2096–2104 (2014)
Iyyer, M., Manjunatha, V., Boyd-Graber, J., Daumé III, H.: Deep unordered composition rivals syntactic methods for text classification. In: ACL (2015)
Kalchbrenner, N., Grefenstette, E., Blunsom, P.: A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188 (2014)
Kim, Y.: Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 (2014)
Le, Q.V., Mikolov, T.: Distributed representations of sentences and documents. In: ICML, pp. 1188–1196 (2014)
Levy, O., Goldberg, Y., Dagan, I.: Improving distributional similarity with lessons learned from word embeddings. Trans. Assoc. Comput. Linguist. 3, 211–225 (2015)
Li, X., Roth, D.: Learning question classifiers. In: Proceedings of the 19th International Conference on Computational Linguistics, vol. 1, pp. 1–7. Association for Computational Linguistics (2002)
Mcauliffe, J.D., Blei, D.M.: Supervised topic models. In: NIPS, pp. 121–128 (2008)
Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013)
Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, pp. 3111–3119 (2013)
Pang, B., Lee, L.: A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In: ACL, p. 271. Association for Computational Linguistics (2004)
Pang, B., Lee, L.: Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. In: ACL, pp. 115–124. Association for Computational Linguistics (2005)
Socher, R., Huval, B., Manning, C.D., Ng, A.Y.: Semantic compositionality through recursive matrix-vector spaces. In: EMNLP, pp. 1201–1211. Association for Computational Linguistics (2012)
Socher, R., et al.: Recursive deep models for semantic compositionality over a sentiment treebank. In: EMNLP, vol. 1631, p. 1642. Citeseer (2013)
Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: NIPS, pp. 3104–3112 (2014)
Tai, K.S., Socher, R., Manning, C.D.: Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075 (2015)
Tellex, S., Katz, B., Lin, J., Fernandes, A., Marton, G.: Quantitative evaluation of passage retrieval algorithms for question answering. In: SIGIR, pp. 41–47. ACM (2003)
Wang, S., Manning, C.D.: Baselines and bigrams: simple, good sentiment and topic classification. In: ACL, pp. 90–94. Association for Computational Linguistics (2012)
Zhao, H., Lu, Z., Poupart, P.: Self-adaptive hierarchical sentence model. arXiv preprint arXiv:1504.05070 (2015)
Acknowledgements
This work was funded by the 973 Program of China under Grant No. 2014CB340401, the National Natural Science Foundation of China (NSFC) under Grants No. 61425016, 61472401, 61722211, and 20180290, the Youth Innovation Promotion Association CAS under Grants No. 20144310, and 2016102, and the National Key R&D Program of China under Grants No. 2016QY02D0405.
Author information
Authors and Affiliations
Corresponding author
Editor information
Editors and Affiliations
Rights and permissions
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, R., Guo, J., Lan, Y., Xu, J., Cheng, X. (2018). Generative Paragraph Vector. In: Zhang, S., Liu, TY., Li, X., Guo, J., Li, C. (eds) Information Retrieval. CCIR 2018. Lecture Notes in Computer Science(), vol 11168. Springer, Cham. https://doi.org/10.1007/978-3-030-01012-6_9
Download citation
DOI: https://doi.org/10.1007/978-3-030-01012-6_9
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-01011-9
Online ISBN: 978-3-030-01012-6
eBook Packages: Computer Science, Computer Science (R0)