Papers by Meihong Wang

We start by noting that conditional independence X ⊥ Y | BᵀX does not necessarily imply that the correlation between BᵀX and Y is maximized. To see this, let X be a Gaussian random variable with zero mean and diagonal covariance matrix. Assume B is the identity matrix and Y = X² = (BᵀX)² (elementwise square for a vectorial X). The conditional independence is obviously satisfied, yet the correlation between BᵀX and Y is zero, since every entry of Cov(BᵀX, Y) is a third moment of a zero-mean Gaussian and therefore vanishes.
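A quick numerical check makes the counterexample concrete. The snippet below is a minimal sketch (not from the paper): it draws X ~ N(0, 1), sets Y = X², and estimates the Pearson correlation, which goes to zero because E[X³] = 0 for a zero-mean Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # X ~ N(0, 1); with B = I, B^T X = X
y = x ** 2                        # Y = (B^T X)^2

# corr(X, Y) is driven by E[X^3] - E[X] E[X^2] = 0 for a
# zero-mean Gaussian, so the sample estimate is ~0.
print(np.corrcoef(x, y)[0, 1])
```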
We apply the framework of kernel dimension reduction, originally designed for supervised problems, to unsupervised dimensionality reduction. In this framework, kernel-based measures of independence are used to derive low-dimensional representations that maximally capture information in covariates in order to predict responses. We extend this idea and develop similarly motivated measures for unsupervised problems where covariates and responses are the same.
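The abstract does not specify which kernel independence measure is used, so the sketch below is an assumption: it implements the biased estimator of the Hilbert-Schmidt Independence Criterion (HSIC), one standard kernel-based dependence measure in this literature, with Gaussian kernels. The bandwidths and the candidate projection B are illustrative only; a method in this spirit would search for a B whose projection BᵀX retains maximal dependence with X itself.

```python
import numpy as np

def rbf_gram(Z, sigma):
    """Gaussian (RBF) Gram matrix for the rows of Z."""
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased HSIC estimator: tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K = rbf_gram(X, sigma_x)
    L = rbf_gram(Y, sigma_y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Toy usage: dependence between X and a 1-D projection B^T X.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
B = rng.standard_normal((5, 1))          # hypothetical candidate projection
print(hsic(X, X @ B))                    # large when B^T X captures X well
```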
In this paper, we describe our experiments using Latent Dirichlet Allocation (LDA) to model images containing both perceptual features and words. To build a large-scale image tagging system, we distribute the computation of LDA parameters using MapReduce. An empirical study shows that our scalable LDA supports image annotation both effectively and efficiently.
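The distributed MapReduce implementation is not reproduced here; the sketch below is a minimal single-machine collapsed Gibbs sampler for LDA over a bag-of-words corpus (visual words and text tags would simply share one vocabulary). The hyperparameters alpha and beta and the toy corpus are illustrative assumptions; a MapReduce version would partition documents across mappers and merge topic-word counts in the reducers.

```python
import numpy as np

def lda_gibbs(docs, V, K=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA.

    docs: list of token-id lists; V: vocabulary size; K: topics.
    Returns the topic-word count matrix (K x V)."""
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), K))      # document-topic counts
    n_kw = np.zeros((K, V))              # topic-word counts
    n_k = np.zeros(K)                    # topic totals
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):       # initialize counts
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]              # remove current assignment
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # full conditional p(z = k | everything else)
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k              # record new assignment
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1
    return n_kw

# Hypothetical toy corpus: ids 0-3 act as "visual words", 4-5 as tags.
docs = [[0, 1, 0, 4], [1, 0, 1, 4], [2, 3, 3, 5], [3, 2, 2, 5]]
print(lda_gibbs(docs, V=6))
```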