Currently, setting the KernelLimit of the model applies an identical KernelLimit to each of the IEstimation objects the model creates. It may be more desirable to support per-estimation kernel limits, since the probability distributions underlying different estimators can have very different structures. For instance, estimators applied to individual units may be well represented with only a few kernels, while the ground process distribution may require a much larger number.
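As a minimal sketch of what per-estimation limits could look like (in C#, matching the interface naming used here), the snippet below makes KernelLimit a settable property on each IEstimation. The KernelDensityEstimation class, the unit count, and the specific limit values are hypothetical placeholders, not part of the existing API:

```csharp
using System;
using System.Linq;

// IEstimation and KernelLimit come from the discussion above; the rest is
// a hypothetical illustration of configuring limits per estimation.
public interface IEstimation
{
    // Maximum number of kernels this particular estimation may hold.
    int KernelLimit { get; set; }
}

public class KernelDensityEstimation : IEstimation // hypothetical implementation
{
    public int KernelLimit { get; set; }
}

public static class Example
{
    public static void Main()
    {
        // Ground process: richer distribution, so allow many kernels.
        var ground = new KernelDensityEstimation { KernelLimit = 10_000 };

        // Per-unit estimations: simpler distributions, so a few kernels suffice.
        var units = Enumerable.Range(0, 32)
            .Select(_ => new KernelDensityEstimation { KernelLimit = 500 })
            .ToList();

        Console.WriteLine($"Ground limit: {ground.KernelLimit}, unit limit: {units[0].KernelLimit}");
    }
}
```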
Additionally, it may be useful to have a global KernelLimit parameter, where the kernel limit is not enforced by each IEstimation object individually but is instead managed by the Encoder, which monitors the total number of kernels across all IEstimations. This would allow individual estimation objects to consume more memory when their underlying data distributions require it, while still capping the total number of kernels to guarantee an upper bound on memory usage.
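A sketch of such an Encoder-managed global budget follows, again under heavy assumptions: the KernelCount property, the Add and Prune methods, and the prune-the-largest policy are all hypothetical, meant only to illustrate how a shared cap could let estimations grow unevenly while bounding the total:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical surface the Encoder would need to monitor kernel usage.
public interface IEstimation
{
    int KernelCount { get; }      // current number of kernels held (assumed)
    void Add(double[] sample);    // add a kernel for a data point (assumed)
    void Prune(int targetCount);  // merge/drop kernels down to a target (assumed)
}

public class Encoder
{
    private readonly List<IEstimation> _estimations;
    private readonly int _globalKernelLimit;

    public Encoder(IEnumerable<IEstimation> estimations, int globalKernelLimit)
    {
        _estimations = estimations.ToList();
        _globalKernelLimit = globalKernelLimit;
    }

    private int TotalKernels => _estimations.Sum(e => e.KernelCount);

    // Estimations are free to grow as their data demand, but the shared
    // budget caps the total memory footprint across all of them.
    public void Encode(IEstimation target, double[] sample)
    {
        target.Add(sample);
        var overage = TotalKernels - _globalKernelLimit;
        if (overage > 0)
        {
            // One possible policy: prune the largest estimation back under budget.
            var largest = _estimations.OrderByDescending(e => e.KernelCount).First();
            largest.Prune(Math.Max(0, largest.KernelCount - overage));
        }
    }
}
```

The prune-the-largest policy shown here is only one option; the budget could equally be enforced by rejecting new kernels or by merging the pair of nearest kernels anywhere in the model.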