Filed as a separate issue, but could be handled as part of #323.
At present, and as also observed by a drive-by contributor, we do not always make fresh observations of the Helm storage, but rather piggy-back on past observations. This is not in line with the Kubernetes API conventions, and it is also more prone to vague errors, because the world as observed may (unexpectedly) change between observations.
An example of a state determination based on a previous observation can be found here: https://github.com/fluxcd/helm-controller/blob/main/controllers/helmrelease_controller.go#L313
The controller should instead confirm that the Helm storage matches the expected state, which probably requires the introduction of a "storage observer" to properly capture all errors that may happen during a release. The reason this observer is required is that Helm actions do not consistently return the (failed) release object, or a revision number, on an error.
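To illustrate the idea, here is a minimal sketch of such a storage observer. All type and interface names below are hypothetical stand-ins (the real storage driver interface lives in helm.sh/helm/v3/pkg/storage/driver); the point is only that by intercepting writes to storage, the release record remains observable even when the Helm action errors out without returning it:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Release is a hypothetical stand-in for Helm's release record
// (the real type is helm.sh/helm/v3/pkg/release.Release).
type Release struct {
	Name    string
	Version int
	Status  string
}

// Driver is a hypothetical, minimal stand-in for the write methods of
// Helm's storage driver interface.
type Driver interface {
	Create(key string, rls *Release) error
	Update(key string, rls *Release) error
}

// ObservingDriver wraps a Driver and records every release written
// through it, so the caller can still observe the release (and its
// revision number) even when the Helm action returns an error without one.
type ObservingDriver struct {
	Driver

	mu       sync.Mutex
	observed []*Release
}

func (o *ObservingDriver) Create(key string, rls *Release) error {
	o.observe(rls)
	return o.Driver.Create(key, rls)
}

func (o *ObservingDriver) Update(key string, rls *Release) error {
	o.observe(rls)
	return o.Driver.Update(key, rls)
}

func (o *ObservingDriver) observe(rls *Release) {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.observed = append(o.observed, rls)
}

// LastObserved returns the most recent release seen passing through
// storage, or nil if nothing was observed.
func (o *ObservingDriver) LastObserved() *Release {
	o.mu.Lock()
	defer o.mu.Unlock()
	if len(o.observed) == 0 {
		return nil
	}
	return o.observed[len(o.observed)-1]
}

// failingDriver simulates an action that writes the release to storage
// but then fails without returning the release object.
type failingDriver struct{}

func (failingDriver) Create(key string, rls *Release) error { return nil }
func (failingDriver) Update(key string, rls *Release) error {
	return errors.New("upgrade failed")
}

func main() {
	obs := &ObservingDriver{Driver: failingDriver{}}
	err := obs.Update("sh.helm.release.v1.demo.v2",
		&Release{Name: "demo", Version: 2, Status: "failed"})
	// The action errored, but the observer still captured the record.
	fmt.Println(err != nil, obs.LastObserved().Version)
}
```

Wrapping the driver this way means state determinations can be made against what was actually persisted to storage, rather than against a previously cached observation.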