Replies: 1 comment 1 reply
-
I haven't read the paper in detail, but looking at the local likelihood definition, it appears they're computing likelihoods for GPs fitted independently across multiple spatial dimensions. I couldn't quite unpick whether it's the posterior likelihood they're looking at (in which case you want the posterior mean and covariance, which GPy will provide, then compute the likelihood of the corresponding multivariate Gaussian), or whether they're training a set of shared parameters on the prior likelihoods (in which case you want the standard log_likelihood that GPy returns, setting the model parameters to those learnt on the training data but fixing the inputs and targets to be the test data). It's multivariate, with independence across dimensions (as far as I can tell), and that's what GPy assumes you want if you provide a matrix for the response variable. So that bit should drop through without modification.
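Roughly, the posterior route would look like this. This is a numpy sketch rather than GPy itself: `mu` and `var` stand in for the per-point posterior mean and variance that GPy's `predict` would return, and the function names are mine, not the paper's.

```python
import numpy as np

def local_log_likelihood(y, mu, var):
    """Per-point Gaussian log-density log N(y; mu, var).

    mu and var play the role of the GP posterior mean and
    variance at the test input (what GPy's predict returns).
    """
    return -0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)

def global_similarity(y, mu, var):
    """Sum of per-point local log-likelihoods over a trajectory."""
    return np.sum(local_log_likelihood(np.asarray(y),
                                       np.asarray(mu),
                                       np.asarray(var)))
```

With a full posterior covariance instead of per-point variances, you would evaluate the corresponding multivariate Gaussian log-density instead of summing independent terms.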
-
Hello!
I'm trying to apply the technique in this paper: https://doi.org/10.1109/MLSP.2013.6661947
In that paper, GPs are fit to trajectories, and the trajectories are then clustered by measuring the similarity between each fitted GP and each 'test' trajectory. The similarity measure is based on what the authors call the local likelihood (equation 8), which I think is the likelihood of a given trajectory point being generated by a given GP, if I understood the paper correctly. They then form a global similarity from the sum of the local similarities over all points in the trajectory, and cluster from that.
Is it possible to use GPy to do something like this? I feel like it should be obvious, so I'm probably missing something or simply don't understand the terminology correctly. Can I instead use the log predictive density as my similarity metric? It seems to do something similar.
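To make sure I'm reading equation 8 right, here's a toy numpy sketch of what I mean by summing local similarities into a global one. The per-point log-densities are made-up numbers; in practice they would come from the fitted GPs (e.g. something like GPy's log predictive density evaluated at each trajectory point).

```python
import numpy as np

# Hypothetical per-point log-densities of one test trajectory
# under two previously fitted GPs (values invented for illustration).
logdens_gp1 = np.array([-0.9, -1.1, -0.8, -1.0])
logdens_gp2 = np.array([-2.0, -1.8, -2.2, -1.9])

# Global similarity = sum of local log-likelihoods over the trajectory.
global_sim = np.array([logdens_gp1.sum(), logdens_gp2.sum()])

# Assign the trajectory to the GP (cluster) with the highest similarity.
best_cluster = int(np.argmax(global_sim))
```

Is that the right mental model for going from equation 8 to the clustering step?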
Thanks for your help!