Tighten some assertCMLApproxData rtols. #2224
Conversation
We discovered in the past that numpy 1.8 had some serious problems with summations over large arrays, which impacts average and standard-deviation calculations. This was mostly fixed in 1.9, I think by adopting "pairwise" summation. From my recent experiments with numpy 1.10 and 1.11, it seems that when an array is non-contiguous you can still get the "old" (bad) behaviour. What I've observed:
So it does look like we are now in the clear after all, in that the "newest" version 1.11 produces definitely "better" results. _However..._ it seems this only works with contiguous arrays. It is worth re-iterating that the differences are in most cases very small, and only really show up with 32-bit floats. The problem testcase here is a particularly tricky example.
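The pairwise-vs-naive difference is easy to reproduce. The following sketch is not from this PR; it uses numpy's `cumsum` as a stand-in for the old naive left-to-right accumulation, to show float32 ones saturating at 2**24 while the pairwise `sum` stays exact:

```python
import numpy as np

# 20 million float32 ones: the true sum (20_000_000) exceeds 2**24,
# beyond which adding 1.0 to a float32 running total no longer changes it.
data = np.ones(20_000_000, dtype=np.float32)

# numpy >= 1.9 uses pairwise summation for sum(), which stays exact here
# because every partial sum remains exactly representable in float32.
pairwise = float(data.sum())

# cumsum accumulates naively, left to right, so the running total
# saturates at 2**24 = 16777216 and every later addition is lost.
naive = float(np.cumsum(data)[-1])

print("pairwise:", pairwise)   # 20000000.0
print("naive:   ", naive)      # 16777216.0
```

The observation above is that in numpy 1.10/1.11 a non-contiguous (strided) array could still take the naive path, so results depended on array layout, not just numpy version.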
This looks sensible and I would like to merge it. The analysis on the ticket is really relevant and I would like to keep it. @pp-mo
I've considered this, but I don't know quite what to do about it. I tried a few other approaches, but I couldn't construct a testcase that Iris trips up on. I'm tempted to let this drop now unless we can show some relevance.
The really big 'rtol' values bothered me, so here I've reduced them, but only as far as still allows the tests to succeed with both numpy 1.10 and 1.11.
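For context, here is a minimal sketch of what tightening an rtol means in practice. It uses numpy's `assert_allclose` as a stand-in; it is not Iris's actual `assertCMLApproxData` implementation, and the data values are invented:

```python
import numpy as np

# Hypothetical float32 result arrays differing by a tiny relative drift,
# of the kind seen between numpy versions' summation routines.
expected = np.array([273.15, 274.0, 275.5], dtype=np.float32)
actual = expected * (1 + 1e-6)

# A very loose rtol hides real regressions; the aim is to tighten it
# only as far as the observed cross-version differences allow.
np.testing.assert_allclose(actual, expected, rtol=1e-5)   # still passes

try:
    np.testing.assert_allclose(actual, expected, rtol=1e-7)
except AssertionError:
    print("rtol=1e-7 is tighter than this float32 noise floor")
```

The rtol chosen for each test is then the smallest value that passes under all supported numpy versions.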
I think we _still_ have a problem with the collapse calculations (see below).
However, I don't know yet if that is relevant to these result differences between numpy 1.10 and 1.11.
I did also try to retest with numpy 1.8, but dependencies don't resolve so easily for the latest Iris, so I gave up.
I think we really don't care about 1.8 any more.