As mentioned in #4, tests often failed (mostly on Python 2.7) due to Cholesky decomposition errors. At first I thought this was mostly caused by updating the data and calling optimize() again, but resetting the hyperparameters didn't work reliably either. Increasing the likelihood variance sometimes helps slightly, but isn't very robust.
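To make the failure mode concrete, here is a minimal NumPy-only illustration (not GPflowOpt code) of why the decomposition breaks and why extra diagonal noise, which is what a larger likelihood variance amounts to, can help; the 1e-6 value is arbitrary:

```python
import numpy as np

# A kernel matrix built from (nearly) duplicated inputs is numerically
# singular, which makes the Cholesky decomposition fail.
X = np.array([[0.0], [0.0], [1.0]])   # the same point evaluated twice
K = np.exp(-0.5 * (X - X.T) ** 2)     # RBF kernel, unit variance/lengthscale

try:
    np.linalg.cholesky(K)
except np.linalg.LinAlgError:
    print("Cholesky failed on the raw kernel matrix")

# Adding noise on the diagonal (the jitter trick) usually restores
# positive definiteness, at the cost of a less exact fit.
np.linalg.cholesky(K + 1e-6 * np.eye(3))
print("Cholesky succeeded with extra diagonal noise")
```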
Right now the tests specify lengthscales for the initial model and apply a hyperprior on the kernel variance. Each BO iteration, the hyperparameters supplied with the initial model are applied as a starting point, and restarts are performed by randomizing the Params. This approach made things a lot more stable but isn't perfect yet: especially in longer BO runs, reverting to the supplied lengthscales every iteration ultimately causes crashes.
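For reference, the current restart strategy roughly amounts to the sketch below; `apply_initial_hypers` and `randomize_hypers` are placeholders for the actual Param manipulation, and the exception type to catch depends on how the Cholesky failure surfaces:

```python
def optimize_with_restarts(model, apply_initial_hypers, randomize_hypers, n_restarts=5):
    """Sketch of the restart strategy described above.

    apply_initial_hypers(model): reset to the user-supplied hyperparameters.
    randomize_hypers(model): draw fresh starting values for the Params.
    """
    apply_initial_hypers(model)        # start from the supplied values
    for _ in range(n_restarts):
        try:
            model.optimize()           # may raise on a Cholesky failure
            return model
        except Exception:              # e.g. an InvalidArgumentError from TF
            randomize_hypers(model)    # retry from a randomized state
    raise RuntimeError("Hyperparameter optimization failed after %d restarts" % n_restarts)
```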
Some things we may consider:
- Normalizing the input/output data. I tested this a bit and it didn't solve the issue, and the model hyperparameters lose some interpretability. Note that I think we will ultimately need this for PES anyway (see the first sketch after this list).
- Add a callback for re-configuring hyperparameters. Instead of reverting to the initially supplied hypers each iteration, this function is called and sets up the initial state (see the second sketch after this list). I think this is ultimately required for more complex modeling approaches, but for simple scenarios with GPR it has to work automatically.
- Applying hyperpriors is going to be important.
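On the first point, normalization would look roughly like this (plain NumPy; the model would then be trained on `Xn`, `Yn`, and predictions mapped back to the original scale):

```python
import numpy as np

def normalize(X, Y):
    """Standardize inputs and outputs; also return the stats to undo it."""
    X_mean, X_std = X.mean(axis=0), X.std(axis=0)
    Y_mean, Y_std = Y.mean(axis=0), Y.std(axis=0)
    Xn = (X - X_mean) / X_std
    Yn = (Y - Y_mean) / Y_std
    return Xn, Yn, (X_mean, X_std, Y_mean, Y_std)

def unnormalize_prediction(mean, var, Y_mean, Y_std):
    """Map a predictive mean/variance back to the original output scale."""
    return mean * Y_std + Y_mean, var * Y_std ** 2
```

This is also where the interpretability concern comes from: the optimized lengthscales and variances then live in the normalized space rather than the original units.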
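On the second point, the callback could be as simple as the following hypothetical interface (the names and the lengthscale value are illustrative, not existing API):

```python
def configure_hypers(model, iteration):
    """Called by the BO loop before each hyperparameter optimization."""
    if iteration == 0:
        model.kern.lengthscales = 0.5  # user-supplied initial guess
    # later iterations: leave the previously optimized values in place,
    # or apply any problem-specific re-initialization here

# sketch of how the BO loop would use it each iteration:
#   configure_hypers(model, i)
#   optimize_with_restarts(model, ...)
```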
I'd love to hear thoughts on how to improve this.