MGP #60
Closed
Changes from all commits (46 commits)
62febc7
Approximate marginalisation during prediction time using Laplace appr…
gpfins 2f554cc
Adding docstring
gpfins 267fdad
Small example
gpfins be294ee
Adding predictions for multiple points
gpfins 56e0811
Add notebook for MGP
gpfins 8cb9e83
Update notebook MGP
gpfins 5eff800
Make use of the free_vars instead of collecting the variables manually
gpfins 4a54add
Remove unnecessary functions
gpfins 1e6d81d
Can handle predict_f and predict_y
gpfins 850df94
Can handle multi-output GP
gpfins 8c4c239
Add predict_density
gpfins 7869161
Models tests
gpfins 2242178
Models tests
gpfins cf672a5
Approximate marginalisation during prediction time using Laplace appr…
gpfins 8acfad6
Adding docstring
gpfins 66ff92c
Small example
gpfins 2316ac9
Adding predictions for multiple points
gpfins 8f609cd
Add notebook for MGP
gpfins d884538
Update notebook MGP
gpfins c0a41e7
Make use of the free_vars instead of collecting the variables manually
gpfins c83eb73
Remove unnecessary functions
gpfins c0f5c00
Can handle predict_f and predict_y
gpfins 958eb49
Can handle multi-output GP
gpfins 0657328
Add predict_density
gpfins 8f79822
Models tests
gpfins 2c940bc
Models tests
gpfins 3b3bd35
Approximate marginalisation during prediction time using Laplace appr…
gpfins e9d3fd2
Adding docstring
gpfins 908cadd
Small example
gpfins 451f3dd
Adding predictions for multiple points
gpfins 673fbd4
Add notebook for MGP
gpfins d24da18
Update notebook MGP
gpfins a31dfe7
Make use of the free_vars instead of collecting the variables manually
gpfins 43daf62
Remove unnecessary functions
gpfins 4e69a7f
Can handle predict_f and predict_y
gpfins 0b14297
Can handle multi-output GP
gpfins c85fc6e
Add predict_density
gpfins 96c6307
Models tests
gpfins 01a67d1
Models tests
gpfins 9630095
Merge remote-tracking branch 'origin/mgp' into mgp
gpfins cd6e969
Add objective to __init__
gpfins 34ca364
Delete testmgp.py
gpfins 19a6937
Update .travis.yml
gpfins 94e1028
Update test_models.py
gpfins f2ac9e8
Merge branch 'master' into mgp
javdrher b4c3615
Bugfix
gpfins File filter
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
@@ -0,0 +1,122 @@

```python
from GPflow.param import Parameterized, AutoFlow, Param
from GPflow.model import Model, GPModel
from GPflow.likelihoods import Gaussian
import GPflow
import tensorflow as tf

float_type = GPflow.settings.dtypes.float_type


def rowwise_gradients(Y, X):
    """
    For a 2D Tensor Y, compute the derivative of each row w.r.t. a 2D tensor X.

    This is done with a while_loop, because of a known incompatibility between
    map_fn and gradients.
    """
    num_rows = tf.shape(Y)[0]
    num_feat = tf.shape(X)[0]

    def body(old_grads, row):
        g = tf.expand_dims(tf.gradients(Y[row], X)[0], axis=0)
        new_grads = tf.concat([old_grads, g], axis=0)
        return new_grads, row + 1

    def cond(_, row):
        return tf.less(row, num_rows)

    shape_invariants = [tf.TensorShape([None, None]), tf.TensorShape([])]
    grads, _ = tf.while_loop(cond, body, [tf.zeros([0, num_feat], float_type), tf.constant(0)],
                             shape_invariants=shape_invariants)

    return grads


class MGP(Model):
    """
    Marginalisation of the hyperparameters during evaluation time using a Laplace approximation.
    Key reference:

    ::

        @article{garnett2013active,
            title={Active learning of linear embeddings for Gaussian processes},
            author={Garnett, Roman and Osborne, Michael A and Hennig, Philipp},
            journal={arXiv preprint arXiv:1310.6740},
            year={2013}
        }
    """

    def __init__(self, obj):
        assert isinstance(obj, GPModel), "Class has to be a GP model"
        assert isinstance(obj.likelihood, Gaussian), "Likelihood has to be Gaussian"
        self.wrapped = obj
        super(MGP, self).__init__(name=obj.name + "_MGP")

    def __getattr__(self, item):
        """
        If an attribute is not found in this class, it is searched for in the wrapped model.
        """
        return self.wrapped.__getattribute__(item)

    def __setattr__(self, key, value):
        """
        When setting the :attr:`wrapped` attribute, point its parent to this object.
        """
        if key == 'wrapped':
            object.__setattr__(self, key, value)
            value.__setattr__('_parent', self)
            return

        super(MGP, self).__setattr__(key, value)

    def build_predict(self, fmean, fvar, theta):
        h = tf.hessians(self.build_likelihood() + self.build_prior(), theta)[0]
        L = tf.cholesky(-h)

        N = tf.shape(fmean)[0]
        D = tf.shape(fmean)[1]

        fmeanf = tf.reshape(fmean, [N * D, 1])  # N*D x 1
        fvarf = tf.reshape(fvar, [N * D, 1])    # N*D x 1

        Dfmean = rowwise_gradients(fmeanf, theta)  # N*D x k
        Dfvar = rowwise_gradients(fvarf, theta)    # N*D x k

        tmp1 = tf.transpose(tf.matrix_triangular_solve(L, tf.transpose(Dfmean)))  # N*D x k
        tmp2 = tf.transpose(tf.matrix_triangular_solve(L, tf.transpose(Dfvar)))   # N*D x k
        return fmean, 4 / 3 * fvar + tf.reshape(tf.reduce_sum(tf.square(tmp1), axis=1), [N, D]) \
               + 1 / 3 / (fvar + 1E-3) * tf.reshape(tf.reduce_sum(tf.square(tmp2), axis=1), [N, D])

    @AutoFlow((float_type, [None, None]))
    def predict_f(self, Xnew):
        """
        Compute the mean and variance of the latent function(s) at the points Xnew.
        """
        theta = self._predict_f_AF_storage['free_vars']
        fmean, fvar = self.wrapped.build_predict(Xnew)
        return self.build_predict(fmean, fvar, theta)

    @AutoFlow((float_type, [None, None]))
    def predict_y(self, Xnew):
        """
        Compute the mean and variance of held-out data at the points Xnew.
        """
        theta = self._predict_y_AF_storage['free_vars']
        pred_f_mean, pred_f_var = self.wrapped.build_predict(Xnew)
        fmean, fvar = self.wrapped.likelihood.predict_mean_and_var(pred_f_mean, pred_f_var)
        return self.build_predict(fmean, fvar, theta)

    @AutoFlow((float_type, [None, None]), (float_type, [None, None]))
    def predict_density(self, Xnew, Ynew):
        """
        Compute the (log) density of the data Ynew at the points Xnew.

        Note that this computes the log density of the data individually,
        ignoring correlations between them. The result is a matrix the same
        shape as Ynew containing the log densities.
        """
        theta = self._predict_density_AF_storage['free_vars']
        pred_f_mean, pred_f_var = self.wrapped.build_predict(Xnew)
        pred_f_mean, pred_f_var = self.build_predict(pred_f_mean, pred_f_var, theta)
        return self.likelihood.predict_density(pred_f_mean, pred_f_var, Ynew)
```

Review comment (Member), on `predict_density`: Still missing a test for predict density :)
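The variance inflation performed by `build_predict` can be sketched numerically. In the sketch below (a hypothetical standalone NumPy illustration, not the GPflowOpt API; all names are made up), `H` plays the role of the negative Hessian of the log posterior at the MAP hyperparameters, and the corrected variance is 4/3·fvar plus two quadratic forms g·H⁻¹·g built from the gradients of the predictive mean and variance:

```python
import numpy as np

# Hypothetical sketch of the MGP variance inflation (illustrative names only).
rng = np.random.default_rng(0)
n, k = 5, 3                        # prediction points, hyperparameters

A = rng.normal(size=(k, k))
H = A @ A.T + k * np.eye(k)        # SPD stand-in for the negative Hessian

fvar = np.abs(rng.normal(size=n)) + 0.1   # predictive variances
Dfmean = rng.normal(size=(n, k))          # d fmean / d theta, one row per point
Dfvar = rng.normal(size=(n, k))           # d fvar  / d theta, one row per point

# Row-wise quadratic forms g^T H^{-1} g; this is what the triangular solves
# against the Cholesky factor compute in build_predict.
qm = np.einsum('nk,nk->n', Dfmean, np.linalg.solve(H, Dfmean.T).T)
qv = np.einsum('nk,nk->n', Dfvar, np.linalg.solve(H, Dfvar.T).T)

var_mgp = 4.0 / 3.0 * fvar + qm + 1.0 / (3.0 * (fvar + 1e-3)) * qv
print(var_mgp.shape)  # (5,)
```

Since `H` is positive definite, both quadratic forms are non-negative, so the marginalised variance never falls below the plug-in variance — the Laplace correction only widens the predictive distribution.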
Large diffs are not rendered by default.
@@ -0,0 +1,56 @@

```python
import GPflowOpt
import GPflow
import numpy as np
import unittest
from GPflowOpt.models import MGP


def parabola2d(X):
    return np.atleast_2d(np.sum(X ** 2, axis=1)).T


class TestMGP(unittest.TestCase):
    @property
    def domain(self):
        return np.sum([GPflowOpt.domain.ContinuousParameter("x{0}".format(i), -1, 1) for i in range(1, 3)])

    def create_parabola_model(self, design=None):
        if design is None:
            design = GPflowOpt.design.LatinHyperCube(16, self.domain)
        X, Y = design.generate(), parabola2d(design.generate())
        m = GPflow.gpr.GPR(X, Y, GPflow.kernels.RBF(2, ARD=True))
        return m

    def test_object_integrity(self):
        m = self.create_parabola_model()
        Xs, Ys = m.X.value, m.Y.value
        n = MGP(m)

        self.assertEqual(n.wrapped, m)
        self.assertEqual(m._parent, n)
        self.assertTrue(np.allclose(Xs, n.X.value))
        self.assertTrue(np.allclose(Ys, n.Y.value))

    def test_predict(self):
        m = self.create_parabola_model()
        n = MGP(self.create_parabola_model())
        m.optimize()
        n.optimize()

        Xt = GPflowOpt.design.RandomDesign(20, self.domain).generate()
        fr, vr = m.predict_f(Xt)
        fs, vs = n.predict_f(Xt)
        self.assertTrue(np.shape(fr) == np.shape(fs))
        self.assertTrue(np.shape(vr) == np.shape(vs))
        self.assertTrue(np.allclose(fr, fs, atol=1e-3))

        fr, vr = m.predict_y(Xt)
        fs, vs = n.predict_y(Xt)
        self.assertTrue(np.shape(fr) == np.shape(fs))
        self.assertTrue(np.shape(vr) == np.shape(vs))
        self.assertTrue(np.allclose(fr, fs, atol=1e-3))

        Yt = parabola2d(Xt)
        fr = m.predict_density(Xt, Yt)
        fs = n.predict_density(Xt, Yt)
        self.assertTrue(np.shape(fr) == np.shape(fs))
```
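`test_object_integrity` above exercises the wrapper's attribute delegation: the MGP object forwards unknown attribute lookups to the wrapped model while intercepting assignment of `wrapped` itself to link the child back to its parent. The pattern can be sketched in plain Python (hypothetical names, not the GPflowOpt classes):

```python
class Wrapper(object):
    """Minimal sketch of MGP-style delegation (illustrative only)."""

    def __init__(self, obj):
        self.wrapped = obj  # routed through __setattr__ below

    def __getattr__(self, item):
        # Only called when normal lookup fails: fall back to the wrapped object.
        return getattr(self.wrapped, item)

    def __setattr__(self, key, value):
        if key == 'wrapped':
            # Bypass delegation for the wrapped object itself, and point the
            # child back at this wrapper (mirrors MGP setting `_parent`).
            object.__setattr__(self, key, value)
            value._parent = self
            return
        super(Wrapper, self).__setattr__(key, value)


class Inner(object):
    def __init__(self):
        self.data = [1, 2, 3]


inner = Inner()
w = Wrapper(inner)
print(w.data)  # [1, 2, 3] -- lookup delegated to the wrapped object
```

Because `__getattr__` is only invoked after normal lookup fails, attributes set directly on the wrapper (such as `wrapped`) never recurse into the delegation path.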
Can we later specify this without a version number, so we always test against the latest stable release? That is the goal anyway: work with the latest stable GPflow, which hopefully tracks the latest stable TensorFlow closely.
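A hypothetical `.travis.yml` fragment along those lines (unpinned, so CI always picks up the latest stable release from PyPI; the package name and steps are illustrative, not the actual config in this PR):

```yaml
install:
  # unpinned on purpose: always test against the latest stable GPflow
  - pip install gpflow
  - pip install -e .
script:
  - nosetests testing
```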