Scalable out-of-sample extension of graph embeddings using deep neural networks
arXiv preprint arXiv:1508.04422, 2015
Several popular graph embedding techniques for representation learning and dimensionality reduction rely on performing computationally expensive eigendecompositions to derive a nonlinear transformation of the input data space. The resulting eigenvectors encode the embedding coordinates for the training samples only, and so the embedding of novel data samples requires further costly computation. In this paper, we present a method for the out-of-sample extension of graph embeddings using deep neural networks (DNN) to parametrically approximate these nonlinear maps. Compared with traditional nonparametric out-of-sample extension methods, we demonstrate that the DNNs can generalize with equal or better fidelity and require orders of magnitude less computation at test time. Moreover, we find that unsupervised pretraining of the DNNs improves optimization for larger network sizes, thus removing sensitivity to model selection.
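A minimal sketch of the workflow the abstract describes, written in Python with scikit-learn. The paper's specific graph embedding, network architecture, and pretraining scheme are not given here, so SpectralEmbedding, MLPRegressor, the layer sizes, and the placeholder data below are all illustrative assumptions rather than the authors' implementation; the point is only that a parametric network trained to reproduce the training-set embedding lets novel samples be embedded with a forward pass instead of a new eigendecomposition.

import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 50))   # training data (placeholder)
X_new = rng.normal(size=(10, 50))       # novel samples to embed later

# Step 1: eigendecomposition-based graph embedding, defined only for the
# training samples (a spectral embedding is used here as a stand-in).
embedder = SpectralEmbedding(n_components=2, affinity="nearest_neighbors")
Y_train = embedder.fit_transform(X_train)

# Step 2: fit a neural network that parametrically approximates the
# nonlinear map from input space to embedding coordinates.
net = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0)
net.fit(X_train, Y_train)

# Step 3: out-of-sample extension is now a single forward pass, with no
# further eigendecomposition required for the new samples.
Y_new = net.predict(X_new)
print(Y_new.shape)  # (10, 2)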