Conversation
lib/iris/tests/__init__.py
Outdated
| bits[0] = os.path.splitext(file_name)[0]
| folder, location = os.path.split(path)
| bits = [location] + bits
| while location != 'iris':
@marqh 💣 This won't work for documentation example_tests, iris isn't the only root namespace module in the testing context.
You'll need to account for example_tests also being a terminal module.
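A minimal sketch of what the reviewer is asking for: treat both `iris` and `example_tests` as terminal (root) modules when walking the path upwards, rather than hard-coding `'iris'`. The helper name `qualified_bits` and the exact loop shape are illustrative assumptions, not the PR's actual implementation.

```python
import os

# Hypothetical sketch of the reviewer's suggestion: both 'iris' and
# 'example_tests' can be the root namespace module in the testing
# context, so stop walking upwards at either of them.
_ROOT_MODULES = ('iris', 'example_tests')

def qualified_bits(path, file_name):
    """Collect path components from the file up to the nearest root module."""
    bits = [os.path.splitext(file_name)[0]]
    folder, location = os.path.split(path)
    while location and location not in _ROOT_MODULES:
        bits = [location] + bits
        folder, location = os.path.split(folder)
    if location:
        bits = [location] + bits
    return bits
```

The guard on an empty `location` also stops the loop from spinning forever if neither root module appears in the path.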
| sha1 = hashlib.sha1(res_file.read())
| if unique_id not in repo:
|     msg = 'Image comparison failed: Created image {} for test {}.'
|     raise ValueError(msg.format(result_fname, unique_id))
@marqh You've changed the logic here from the original, which I kinda disagree with.
As it now stands any developer who adds a new graphics test, has to manually craft the changes to the imagerepo.json and manage the hash renaming of the png ... this doesn't scale very well.
I'll propose an alternative approach on your rebase_hash branch as a PR that (I believe) improves this work-flow. We can discuss that further there ... so hang tight for that.
@marqh Okay, I'm working on that rebase_hash branch PR ... in-bound soon, thanks
I understand your concerns, and I am keen to see a workflow for updating lots of images.
However, in your previous implementation the effect was that tests with no checksum to check would pass, both for developers and for Travis.
That is a more serious problem. I simplified the logic to allow me to fix it, which I deemed critical.
If you are working with respect to this branch, please make sure that any implementation does not allow tests to pass where there is no entry in the imagerepo or where the image checksum comparison fails.
Any tooling to help with updating must (IMHO) treat these cases as test failures and then help the developer fix them, not enable them to pass.
I hope that point of view is clear and coherent.
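The requirement marqh states above can be sketched as a small check: a graphics test must fail both when the image has no entry in the repository and when the checksum does not match. The function name `check_image` and the repo-as-dict shape are illustrative assumptions, not the branch's actual code.

```python
# Minimal sketch of the required behaviour: missing repo entries and
# checksum mismatches are both hard failures, never silent passes.
# 'check_image' and the dict layout are invented for illustration.
def check_image(unique_id, actual_sha1, repo):
    if unique_id not in repo:
        msg = 'Image comparison failed: no reference entry for test {}.'
        raise ValueError(msg.format(unique_id))
    if actual_sha1 not in repo[unique_id]:
        msg = 'Image comparison failed: checksum mismatch for test {}.'
        raise ValueError(msg.format(unique_id))
```

Any tooling that updates reference images would then catch these exceptions and help the developer add or rename hashes, rather than suppressing the failure.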
@marqh I completely understand your concerns, and I think that there is a suitable compromise to be had here to meet all our needs.
I don't think it would be wise of me to mix this into this PR - let's do that in a separate PR 👍
| import logging
| import os
| from six.moves.queue import Queue
| import requests
@marqh Move this import above six.moves.queue, thanks.
lib/iris/tests/test_image_json.py
Outdated
| repo = json.load(codecs.getreader('utf-8')(fi))
| uris = []
| for k, v in six.iteritems(repo):
|     uris = uris + v
@marqh Drop the usage of k ... you don't need it; use six.itervalues instead.
Another neat pattern from itertools to flatten a list of lists is ...
from itertools import chain
uris = list(chain.from_iterable(six.itervalues(repo)))
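A runnable demonstration of the flattening pattern the reviewer suggests, using a plain dict in place of the JSON image repository (the data here is made up, and `dict.values()` stands in for `six.itervalues` so the sketch runs on Python 3 without six installed):

```python
from itertools import chain

# Stand-in for the imagerepo.json mapping of test id -> list of URIs.
repo = {
    'test_a': ['uri1', 'uri2'],
    'test_b': ['uri3'],
}

# chain.from_iterable flattens the list-of-lists in one pass, with no
# intermediate 'uris = uris + v' copies.
uris = list(chain.from_iterable(repo.values()))
```

Unlike repeated list concatenation, which copies the accumulator on every iteration, `chain.from_iterable` is linear in the total number of URIs.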
#2163 raises a concern about testing stability; I suggest we move forward and not treat this as a showstopper.

@marqh Not yet ...

@marqh Now that you're happy, LGTM.
QuLogic
left a comment
I have some qualms with this, though it's been merged already.
| try:
|     result = requests.head(resource)
|     if (result.status_code == 200 and
|             resource.startswith('https://scitools.github.io')):
Also, why not check the startswith beforehand and save a request?
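The reordering the reviewer suggests can be sketched as below. The function name `check_uri` is invented, and the HEAD call is injected as a parameter so the sketch stays testable without network access (in the real test it would be `requests.head`):

```python
# Sketch of the reviewer's suggestion: test the cheap string condition
# first, so URIs outside scitools.github.io never trigger a network
# request at all.
def check_uri(resource, head):
    if not resource.startswith('https://scitools.github.io'):
        return False
    result = head(resource)
    return result.status_code == 200
```

Short-circuiting on the prefix saves one HTTP round trip per non-matching URI, which adds up across a large image repository.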
| from threading import Thread
|
| # maximum number of threads for multi-threading code
| MAXTHREADS = 128
Do you really need to run 128 parallel HEAD requests? Browsers limit themselves to something like 2-3 connections per host; this seems like overkill.
When I run it with 128, I get a bunch of ConnectionError(MaxRetryError... in the exception deque and it takes 22.277s. Running with 1 thread takes 33.248s, 2 threads is 16.276s, 4 threads is 8.292s and 8 threads is 5.938s. The delta is already diminishing there, so I doubt there is any benefit to bumping it all the way to 128.
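Given those measurements, a modest pool captures nearly all of the speed-up. A sketch using `concurrent.futures.ThreadPoolExecutor` with a worker count near the plateau the reviewer measured; the `fetch` body is a placeholder for the real `requests.head` call, and the URIs are made up:

```python
from concurrent.futures import ThreadPoolExecutor

# The reviewer's timings plateau around 8 workers, far below 128.
MAXTHREADS = 8

def fetch(uri):
    # Placeholder for requests.head(uri).status_code; no real network
    # access is performed in this sketch.
    return uri, 200

uris = ['https://example.invalid/{}'.format(i) for i in range(20)]

# The executor bounds concurrency, avoiding the MaxRetryError pile-up
# seen when 128 HEAD requests hit one host at once.
with ThreadPoolExecutor(max_workers=MAXTHREADS) as pool:
    results = dict(pool.map(fetch, uris))
```

Beyond the diminishing returns, a smaller pool is also kinder to the server: most HTTP clients cap per-host connections in the single digits for exactly this reason.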
New PR to replace #2161 with a cleaner history (perhaps).