

From Word Embeddings To Document Distances

Presentation transcript:

1 From Word Embeddings To Document Distances
Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. From Word Embeddings To Document Distances. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pages 957–966.

2 Problem: find the most similar document to a given document
Idea:
Represent the words in a document as vectors in a reduced space (word embeddings).
Calculate how much effort it takes to move all the words in the given document to the new document (Earth Mover's Distance: Gaspard Monge, "Mémoire sur la théorie des déblais et des remblais". Histoire de l'Académie Royale des Sciences, Année 1781, avec les Mémoires de Mathématique et de Physique).
Take as "most similar" the new document requiring the least effort.
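To make the idea concrete, here is a minimal sketch of the transportation problem behind the Word Mover's Distance (WMD), solved with SciPy's linear-programming routine. It assumes each document is given as normalized bag-of-words weights plus a matrix of embeddings for its words; the function name and interface are illustrative, not the authors' implementation, which relies on faster specialized solvers.

```python
import numpy as np
from scipy.optimize import linprog

def word_movers_distance(d1, d2, X1, X2):
    """Sketch of the transportation LP behind WMD.

    d1, d2 : normalized bag-of-words weights for each document (each sums to 1)
    X1, X2 : word-embedding matrices, one row per unique word in the document
    """
    n, m = len(d1), len(d2)
    # Cost of moving one unit of mass from word i (doc 1) to word j (doc 2):
    # the Euclidean distance between their embeddings.
    C = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=2)

    # Flow variables T[i, j] are flattened row-major into a vector of length n*m.
    c = C.ravel()

    # Each word in doc 1 ships out exactly d1[i] mass;
    # each word in doc 2 receives exactly d2[j] mass.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([d1, d2])

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun  # minimal total moving effort
```

The candidate document with the smallest returned distance would then be taken as the most similar.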

3

4 Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger
Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. From Word Embeddings To Document Distances. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pages 957–966.

5

6 What is word2vec? word2vec is not a single algorithm
It is a software package for representing words as vectors, containing:
Two distinct models: CBoW and Skip-Gram
Various training methods: Negative Sampling and Hierarchical Softmax
A rich preprocessing pipeline: Dynamic Context Windows, Subsampling, Deleting Rare Words

7 https://code.google.com/p/word2vec/

8 vec(king) - vec(man) + vec(woman) = vec(queen)
vec(Einstein) - vec(scientist) + vec(Picasso) = vec(painter)
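As a toy illustration of how such analogies are usually evaluated: combine the vectors arithmetically, then return the vocabulary word whose embedding is closest to the result by cosine similarity, excluding the three query words. The `embeddings` dictionary below is a stand-in for whatever word vectors have been loaded.

```python
import numpy as np

def analogy(a, b, c, embeddings):
    """Return the word closest (by cosine) to vec(a) - vec(b) + vec(c)."""
    target = embeddings[a] - embeddings[b] + embeddings[c]
    target /= np.linalg.norm(target)
    best_word, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):                  # exclude the query words
            continue
        sim = (vec @ target) / np.linalg.norm(vec)
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# analogy("king", "man", "woman", embeddings)  ->  ideally "queen"
```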

9 What is word2vec? word2vec is not a single algorithm
It is a software package for representing words as vectors, containing:
Two distinct models: CBoW and Skip-Gram (SG)
Various training methods: Negative Sampling (NS) and Hierarchical Softmax
A rich preprocessing pipeline: Dynamic Context Windows, Subsampling, Deleting Rare Words
Now we're going to focus on Skip-Grams with Negative Sampling, which is considered the state of the art.

10 Skip-Grams with Negative Sampling (SGNS)
Marco saw a furry little wampimuk hiding in the tree. So, how does SGNS work? Let's say we have this sentence (this example was shamelessly taken from Marco Baroni). “word2vec Explained…” Goldberg & Levy, arXiv 2014

11 Skip-Grams with Negative Sampling (SGNS)
Marco saw a furry little wampimuk hiding in the tree. And we want to understand what a wampimuk is. “word2vec Explained…” Goldberg & Levy, arXiv 2014

12 Skip-Grams with Negative Sampling (SGNS)
Marco saw a furry little wampimuk hiding in the tree.
words       contexts
wampimuk    furry
wampimuk    little
wampimuk    hiding
wampimuk    in
…           …
→ D (the data)
What SGNS does is take the context words around "wampimuk", and in this way construct a dataset of word-context pairs, D. “word2vec Explained…” Goldberg & Levy, arXiv 2014
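A minimal sketch of how such a dataset D can be collected with a fixed symmetric window (size 2 here, which yields exactly the four contexts shown for "wampimuk"); the helper name is mine, not word2vec's.

```python
def extract_pairs(tokens, window=2):
    """Collect (word, context) pairs D from a tokenized sentence,
    using a symmetric context window around each position."""
    pairs = []
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((word, tokens[j]))
    return pairs

sentence = "Marco saw a furry little wampimuk hiding in the tree .".split()
D = extract_pairs(sentence, window=2)
# pairs for "wampimuk": (wampimuk, furry), (wampimuk, little),
#                       (wampimuk, hiding), (wampimuk, in)
```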

13 Skip-Grams with Negative Sampling (SGNS)
SGNS finds a vector for each word in our vocabulary. Each such vector has d latent dimensions (e.g. d = 100). Effectively, it learns a matrix W, of size |V| × d, whose rows represent words. Key point: it also learns a similar auxiliary matrix C of context vectors. In fact, each word has two embeddings, which are not necessarily similar.
[Figure: the matrices W and C, each |V| × d, with the row for "wampimuk" highlighted in both.]
“word2vec Explained…” Goldberg & Levy, arXiv 2014

14 Skip-Grams with Negative Sampling (SGNS)
So to train these vectors… “word2vec Explained…” Goldberg & Levy, arXiv 2014

15 Skip-Grams with Negative Sampling (SGNS)
Maximize: σ(w · c) for pairs where c was observed with w
words       contexts
wampimuk    furry
wampimuk    little
wampimuk    hiding
wampimuk    in
SGNS maximizes the similarity between word and context vectors that were observed together. “word2vec Explained…” Goldberg & Levy, arXiv 2014

16 Skip-Grams with Negative Sampling (SGNS)
Maximize: σ(w · c) for pairs where c was observed with w
words       contexts
wampimuk    furry
wampimuk    little
wampimuk    hiding
wampimuk    in
Minimize: σ(w · c′) for pairs where c′ was hallucinated with w
words       contexts
wampimuk    Australia
wampimuk    cyber
wampimuk    the
wampimuk    1985
…and minimizes the similarity between word and context vectors hallucinated together. “word2vec Explained…” Goldberg & Levy, arXiv 2014
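Together, the two slides above amount to maximizing log σ(w · c) for observed pairs and log σ(-w · c′) for hallucinated ones. Below is a rough numpy sketch of one stochastic-gradient step on that per-pair objective; W and C are the word and context matrices from slide 13, and the learning rate and function name are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(W, C, w_idx, c_idx, neg_idx, lr=0.025):
    """One gradient-ascent step on the SGNS objective for a single pair:
    log sigmoid(w.c) + sum over negatives of log sigmoid(-w.c_neg)."""
    w = W[w_idx]
    # Observed context: push sigmoid(w.c) toward 1.
    g = 1.0 - sigmoid(w @ C[c_idx])
    grad_w = g * C[c_idx]
    C[c_idx] += lr * g * w
    # Hallucinated contexts: push sigmoid(w.c_neg) toward 0.
    for n in neg_idx:
        g = -sigmoid(w @ C[n])
        grad_w += g * C[n]
        C[n] += lr * g * w
    W[w_idx] += lr * grad_w
```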

17 Skip-Grams with Negative Sampling (SGNS)
For each observed word-context pair, SGNS samples k contexts at random as negative examples; this is where the "Negative Sampling" in the name comes from. "Random" here means the unigram distribution. Spoiler: changing this distribution has quite a significant effect.
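A small sketch of drawing those negatives from the unigram distribution. The `alpha` exponent is the knob the spoiler refers to: the released word2vec code smooths the distribution by raising counts to the 0.75 power (the "context distribution smoothing" discussed later). The function name and interface are mine.

```python
import numpy as np

def make_negative_sampler(vocab_counts, alpha=1.0, seed=0):
    """Return a sampler drawing context ids from the unigram
    distribution raised to the power alpha.
    alpha = 1.0  -> plain unigram ("random" as on the slide)
    alpha = 0.75 -> the smoothed distribution word2vec uses in practice
    """
    probs = np.asarray(vocab_counts, dtype=float) ** alpha
    probs /= probs.sum()
    rng = np.random.default_rng(seed)
    return lambda k: rng.choice(len(probs), size=k, p=probs)

# sample_negatives = make_negative_sampler(counts, alpha=0.75)
# neg_idx = sample_negatives(5)   # 5 negative contexts for one observed pair
```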

18 Identifying New Hyperparameters
<pause>

19 New Hyperparameters
Preprocessing (word2vec): Dynamic Context Windows, Subsampling, Deleting Rare Words
Postprocessing (GloVe): Adding Context Vectors
Association Metric (SGNS): Shifted PMI, Context Distribution Smoothing
So let's make a list of all the new hyperparameters we found. The first group sits in the preprocessing pipeline; all of these were introduced as part of word2vec. The second group contains postprocessing modifications, and the third group affects the association metric between words and contexts. This last group is really interesting, because to make sense of it you have to understand that SGNS is actually factorizing the word-context PMI matrix.

20 New Hyperparameters
Of these, we'll just look at two, Dynamic Context Windows and Adding Context Vectors, and finally…

21 New Hyperparameters
…we'll look at Context Distribution Smoothing, and see how we can adapt it to traditional count-based methods.
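To show how these association-metric hyperparameters transfer to count-based methods, here is a sketch of Shifted PPMI with Context Distribution Smoothing over a dense word-context count matrix: context probabilities are computed from counts raised to a power alpha (0.75, mirroring word2vec's negative-sampling distribution), and the PMI values are shifted down by log k before clipping at zero. The function name and the dense-matrix assumption are mine.

```python
import numpy as np

def shifted_smoothed_ppmi(M, alpha=0.75, k=1):
    """Shifted PPMI with context distribution smoothing.

    M     : |Vw| x |Vc| word-context co-occurrence count matrix
    alpha : context distribution smoothing exponent
    k     : number of negative samples; PMI is shifted by log(k)
    """
    total = M.sum()
    p_wc = M / total
    p_w = M.sum(axis=1, keepdims=True) / total
    # Smoothed context probabilities: raise context counts to alpha.
    smoothed = M.sum(axis=0) ** alpha
    p_c = smoothed / smoothed.sum()
    with np.errstate(divide="ignore"):
        pmi = np.log(p_wc / (p_w * p_c[None, :]))   # unseen pairs give -inf
    return np.maximum(pmi - np.log(k), 0.0)         # shift, then clip at zero
```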

22 Dynamic Context Windows
Marco saw a furry little wampimuk hiding in the tree. So, dynamic context windows: let's say we have our sentence from earlier, and we want to look at a context window of 4 words around "wampimuk".

23 Dynamic Context Windows
Marco saw a furry little wampimuk hiding in the tree. Now, some of these context words are obviously more related to the meaning of wampimuk than others. The intuition behind dynamic context windows is that the closer the context is to the target…

24 Dynamic Context Windows
Marco saw a furry little wampimuk hiding in the tree.
[Figure: for word2vec, GloVe, and a more aggressive scheme, the probability or weight with which each context word is included, by distance from the target.] The Word-Space Model (Sahlgren, 2006)
…the more relevant it is. word2vec does just that, by randomly sampling the size of the context window around each token; what the figure shows are the probabilities that each specific context word will be included in the training data. GloVe does something similar, but in a deterministic manner and with a slightly different distribution. There are of course other ways to do this, and they were all, apparently, applied to traditional algorithms about a decade ago.
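A sketch of the two schemes as they are usually described: word2vec samples an effective window size uniformly between 1 and the maximum for each token, so a context at distance m survives with probability (L - m + 1) / L, while GloVe keeps every context within the window but down-weights its count by 1/distance. Helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_window_pairs(tokens, max_window=4):
    """word2vec-style dynamic context window: sample an effective window
    size uniformly from 1..max_window for every token, so contexts at
    distance m are kept with probability (max_window - m + 1) / max_window."""
    pairs = []
    for i, word in enumerate(tokens):
        b = rng.integers(1, max_window + 1)   # effective window for this token
        for j in range(max(0, i - b), min(len(tokens), i + b + 1)):
            if j != i:
                pairs.append((word, tokens[j]))
    return pairs

def glove_weight(distance):
    """GloVe's deterministic counterpart: weight a co-occurrence by 1/distance."""
    return 1.0 / distance
```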

25 Hyperparameter Settings
Classic Vanilla Setting (commonly used for distributional baselines)
Preprocessing: <None>
Postprocessing: <None>
Association Metric: Vanilla PMI/PPMI

Recommended word2vec Setting (tuned for SGNS)
Preprocessing: Dynamic Context Window, Subsampling
Postprocessing: <None>
Association Metric: Shifted PMI/PPMI, Context Distribution Smoothing

On one side we have the classic vanilla setting; on the other, the recommended setting of word2vec, which was tuned for SGNS and does quite a bit more. When we look at these two settings side by side, it's easier to see that part of word2vec's contribution is this collection of hyperparameters.

26 Experiments <pause>
So for our example, we’ll use the wordsim dataset to compare PPMI vectors, a traditional distributional representation, with SGNS embeddings.
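A sketch of how such a comparison is typically scored: compute the cosine similarity each model assigns to every word pair in the dataset and report the Spearman correlation with the human judgements. The data format and function name are assumptions, not the exact evaluation code behind these slides.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_wordsim(embeddings, pairs):
    """Score a word representation on a similarity dataset.

    embeddings : dict mapping word -> vector (a PPMI row or an SGNS embedding)
    pairs      : iterable of (word1, word2, human_score) triples
    Returns the Spearman correlation between the model's cosine similarities
    and the human judgements (pairs with out-of-vocabulary words are skipped).
    """
    model_sims, human_sims = [], []
    for w1, w2, score in pairs:
        if w1 not in embeddings or w2 not in embeddings:
            continue
        v1, v2 = embeddings[w1], embeddings[w2]
        model_sims.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
        human_sims.append(score)
    return spearmanr(model_sims, human_sims).correlation
```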

27 Experiments: Prior Art
Experiments: “Oranges to Oranges” Experiments: “Apples to Apples”
In previous comparisons, PPMI vectors were generated using the vanilla setting, while SGNS used word2vec's hyperparameter setting; from that experiment alone, SGNS has a significant advantage. One thing we can do is isolate SGNS from its hyperparameters and run it in the vanilla setting: SGNS is now slightly worse, but still better than PPMI by nearly 5 points. But here comes the interesting part: because we found out how to adapt hyperparameters across different algorithms, we can now apply the word2vec setting to train better PPMI vectors. And this is how we improve distributional similarity.

28 Experiments: Hyperparameter Tuning
Experiments: “Oranges to Oranges”
But this isn't the end of the story. What if the word2vec setting isn't the best setting for this task? After all, it was tuned for a certain type of analogies. The right thing to do is to allow each method to tune every hyperparameter. As you can see, tuning hyperparameters can take us well beyond conventional settings, and it also gives a more elaborate and realistic comparison of different algorithms. It's important to stress that these two settings can be, and usually are, very different in practice. In fact, each task and each algorithm usually require different settings, which is why we used cross-validation to tune hyperparameters when we compared the different algorithms.

29 Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger
Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. From Word Embeddings To Document Distances. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pages 957–966.

30 Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger
Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. From Word Embeddings To Document Distances. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pages 957–966.

31 WMD metric leads to low error rates
Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. From Word Embeddings To Document Distances. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pages 957–966.
The WMD metric leads to low error rates, thanks to the high quality of the word2vec embedding trained on billions of words. An attractive feature of WMD is its interpretability.

