Improve performance of the normalization factor estimation for constraint priors #835

@JasperMartins

Description

Currently, the estimation/integration of the normalization factor for constraint priors performs a fairly simple Monte Carlo integration that stops once a target number of accepted samples has been produced. This has two issues:

  1. If the constraint only removes a small part of the unconstrained volume, the target number of accepted samples is reached comparatively fast. The total number of proposed samples will therefore be small, leading to a larger variance in the integral estimate compared to constraints that remove, say, half of the prior volume.
  2. On the flip side, if the constraint removes almost all of the prior volume, the integration routine will take a long time to converge to the target number of samples. This case is somewhat artificial since, for such priors, a different parametrization should probably be used to improve sampling efficiency anyway, but especially in very high dimensions, the prior volume removed by a constraint might be significantly larger than naively expected.
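To illustrate both issues, here is a minimal sketch (not bilby's actual implementation) of an accept-until-target Monte Carlo estimate. For a mild constraint the target is hit after very few proposals, so the estimate rests on a small total sample; for a severe constraint the loop runs far longer:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_keep_fraction(constraint, n_accept_target=1000, batch=1000):
    """Estimate the fraction of a 1-D uniform prior volume satisfying
    `constraint`, stopping once n_accept_target samples are accepted.
    Illustrative sketch only."""
    n_proposed = n_accepted = 0
    while n_accepted < n_accept_target:
        x = rng.uniform(0.0, 1.0, size=batch)
        n_proposed += x.size
        n_accepted += int(constraint(x).sum())
    return n_accepted / n_proposed, n_proposed

# Mild constraint (keeps ~90% of the volume): target reached quickly,
# so the estimate is based on few total proposals.
frac_mild, n_mild = estimate_keep_fraction(lambda x: x < 0.9)

# Severe constraint (keeps ~1%): many more proposals are required
# before the same number of acceptances accumulates.
frac_severe, n_severe = estimate_keep_fraction(lambda x: x < 0.01)
```

The stopping rule ties the total number of proposals to the acceptance fraction, which is exactly the coupling between constraint severity, runtime, and estimator variance described above.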

For these reasons, I propose switching to an off-the-shelf stochastic integration routine that also reports the integration error, for instance the qmc_quad routine implemented by scipy. Alternatively, one could add a max_iter argument, or similar, to cap excessive runtimes in case 2.
If such changes are up for consideration, I would go ahead with an implementation.
