Hello everyone, and thank you for this amazing library—I'm genuinely grateful for the great work! I've been using the JS library model for a few days now, and it performs exceptionally well.
However, I noticed that the adapt() method isn't exposed, and the JS demo doesn't include an initial personalisation or calibration procedure either. I've been using the automatic clickstream to calibrate the model, but accuracy seems to degrade when too many clicks accumulate in one screen region.
Is there a particular reason for omitting the adapt() method from the JS implementation? Also, this might warrant a separate issue, but I noticed that while the research paper describes MAML meta-learning, the library itself relies on an affine transformation and a single Adam optimiser step. Could you share the reasoning behind these decisions?
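For context, here is a minimal sketch of what I understand the calibration mechanism to be: a 2D affine transform (y = A·x + b) applied to raw predictions, with its six parameters updated by one Adam step per click event. The class and method names (`AffineCalibrator`, `adapt`) are my own assumptions for illustration, not the library's actual API:

```typescript
type Vec2 = [number, number];

// Hypothetical sketch: affine post-hoc calibration of 2D gaze predictions,
// updated with a single Adam step per (prediction, click-target) pair.
class AffineCalibrator {
  // params = [a00, a01, a10, a11, b0, b1], initialised to the identity map.
  params = [1, 0, 0, 1, 0, 0];
  m = new Array(6).fill(0); // Adam first-moment estimates
  v = new Array(6).fill(0); // Adam second-moment estimates
  t = 0;                    // Adam time step

  predict(x: Vec2): Vec2 {
    const [a00, a01, a10, a11, b0, b1] = this.params;
    return [a00 * x[0] + a01 * x[1] + b0,
            a10 * x[0] + a11 * x[1] + b1];
  }

  // One Adam step on the squared error for a single sample, analogous to
  // a single click-triggered calibration update.
  adapt(x: Vec2, target: Vec2, lr = 0.01,
        beta1 = 0.9, beta2 = 0.999, eps = 1e-8): void {
    const pred = this.predict(x);
    const err: Vec2 = [pred[0] - target[0], pred[1] - target[1]];
    // Gradient of ||pred - target||^2 w.r.t. [a00, a01, a10, a11, b0, b1].
    const g = [2 * err[0] * x[0], 2 * err[0] * x[1],
               2 * err[1] * x[0], 2 * err[1] * x[1],
               2 * err[0],        2 * err[1]];
    this.t += 1;
    for (let i = 0; i < 6; i++) {
      this.m[i] = beta1 * this.m[i] + (1 - beta1) * g[i];
      this.v[i] = beta2 * this.v[i] + (1 - beta2) * g[i] * g[i];
      const mHat = this.m[i] / (1 - Math.pow(beta1, this.t));
      const vHat = this.v[i] / (1 - Math.pow(beta2, this.t));
      this.params[i] -= (lr * mHat) / (Math.sqrt(vHat) + eps);
    }
  }
}
```

If this roughly matches the real mechanism, it would also explain the degradation I'm seeing: repeated updates from clicks clustered in one region would pull the affine fit toward that region at the expense of the rest of the screen.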
Thanks again for your hard work!