Conversation
Current coverage is 76.86%

```
@@            master   #1957    diff @@
=======================================
  Files           62      62
  Lines        12900   12919     +19
  Methods          0       0
  Messages        0       0
  Branches        0       0
=======================================
+ Hits          9913    9929     +16
- Misses        2987    2990      +3
  Partials         0       0
```
Well, that's new. Could they have made the graph any bigger? 😆

Oh, that's terrible. I definitely didn't turn that on; I find those code coverage notifications uselessly noisy. Let me see if there's some setting I need to turn off again…

OK, I hope I've turned that off. It looks like Codecov changed the way they do configuration without notifying us. 😢 I had to create a new …
```python
for thread in self.all_threads:
    thread.abort()

def run(self):
```

This (and `_run` in the subclasses) could use some docstrings to explain their new relationship.
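For illustration, here is one shape that relationship (and its docstrings) might take. This is a hypothetical sketch, not the actual beets pipeline code: the class names, the abort-on-exception behavior, and everything beyond the `run`/`_run` split are invented.

```python
class PipelineThread:
    """Hypothetical sketch of a run()/_run() split; not the beets classes."""

    def run(self):
        """Thread entry point. Delegates the real work to the
        subclass's _run() and aborts the pipeline if it raises."""
        try:
            self._run()
        except BaseException:
            self.abort()
            raise

    def _run(self):
        """The subclass's actual work loop; must be overridden."""
        raise NotImplementedError

    def abort(self):
        self.aborted = True


class Worker(PipelineThread):
    def _run(self):
        self.result = "done"


w = Worker()
w.run()
print(w.result)  # "done"
```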
Interesting! Yes, the inability to ^C is a frustrating aspect of the multithreaded importer. We should look at this carefully.

Of course, the best thing would be some magical way to raise …
```python
for thread in threads[:-1]:
    thread.join()
try:
    for thread in threads[:-1]:
```

@sampsyo: I guess this should rather be `for thread in threads:`? For now I didn't change the slice, but to my understanding, `threads[-1]` is only guaranteed to have finished upon normal shutdown; when aborting, it might very well still be alive.

Hmm, maybe so. Perhaps this is a case where "try it and see" would be a good idea?

I'm currently importing some music with that patch; it seems fine so far. As long as the threads are non-daemonized, this might not make any difference. But since the last thread (I think) does the actual writing, it is possibly one of the more important ones to wait for.
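As a sketch of one "try it and see" option (not the beets code; `join_all` and its parameters are invented names): joining every thread with a short timeout in a loop both waits for all of them, including the last one, and keeps the main thread responsive to `KeyboardInterrupt`, since an untimed `join()` can block signal handling for a long stretch.

```python
import threading
import time

def join_all(threads, poll_interval=0.1):
    """Wait for every thread, polling with a timeout so the main
    thread still gets regular chances to handle KeyboardInterrupt."""
    while any(t.is_alive() for t in threads):
        for t in threads:
            t.join(timeout=poll_interval)

threads = [threading.Thread(target=time.sleep, args=(0.2,)) for _ in range(3)]
for t in threads:
    t.start()
join_all(threads)
print(all(not t.is_alive() for t in threads))  # True
```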
Seems sensible. I'll do the cherry-pick, and leave the remaining code here without merging.

There's https://github.com/elijahandrews/flake8-blind-except; flake8 does not seem to include a checker like this by default.
Some searching seems to confirm this, although the docs on threading in general are not that good. Thanks for the heads-up on finally handlers, didn't think about them. But the better way should be using the proposals from #803, anyway.

```python
import threading

def check_abort():
    t = threading.current_thread()
    if t.abort_flag:
        raise PipelineAbort()
```

and

```python
from contextlib import contextmanager

# Read t.abort_stack[-1] when aborting the pipeline to see whether
# thread t needs to be waited for.
@contextmanager
def can_abort(allow_abort):
    t = threading.current_thread()
    t.abort_stack.append(allow_abort)
    try:
        yield
    finally:
        t.abort_stack.pop()
```

whatever is better suited for the specific thread. Then make the threads use it. Maybe add some profiling code and check on some real import tasks what actually takes a long time.
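Put together into a runnable form (everything here — `PipelineAbort`, `abort_flag`, `abort_stack` — is the proposal's hypothetical naming, not existing beets API), a worker thread using these helpers might look like:

```python
import threading
from contextlib import contextmanager

class PipelineAbort(Exception):
    """Hypothetical exception from the proposal above."""

def check_abort():
    t = threading.current_thread()
    if getattr(t, "abort_flag", False):
        raise PipelineAbort()

@contextmanager
def can_abort(allow_abort):
    t = threading.current_thread()
    t.abort_stack.append(allow_abort)
    try:
        yield
    finally:
        t.abort_stack.pop()

aborted = []

def worker():
    t = threading.current_thread()
    t.abort_stack = []
    try:
        with can_abort(True):
            # Simulate the pipeline setting the flag mid-work.
            t.abort_flag = True
            check_abort()
    except PipelineAbort:
        aborted.append(True)

t = threading.Thread(target=worker)
t.start()
t.join()
print(aborted)  # [True]
```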
An optional profiling mode, to see where the queues are backed up and such, would be incredibly cool for many reasons.
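A minimal sketch of such a profiling hook (hypothetical, not beets code — `TimedQueue` and `wait_time` are invented names): wrap the queue's `get()` to accumulate how long each consumer stage spends blocked, so a starved stage stands out.

```python
import queue
import time

class TimedQueue(queue.Queue):
    """Queue that records how long consumers spend blocked in get().
    Hypothetical profiling aid, not part of beets."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.wait_time = 0.0  # total seconds spent blocked in get()

    def get(self, *args, **kwargs):
        start = time.monotonic()
        try:
            return super().get(*args, **kwargs)
        finally:
            self.wait_time += time.monotonic() - start

q = TimedQueue()
q.put("item")
item = q.get()
print(item, q.wait_time >= 0.0)  # a large wait_time means the stage was starved
```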
For reference, beets probably did not really lock up at all, but ran into this ntfs-3g issue when mutagen resizes the metadata block using mmap: http://tuxera.com/forum/viewtopic.php?f=2&t=31065. This is rather unfortunate, as it causes a huge performance drop.
Hello @wisp3rwind, and thank you for your contribution. I know it's been some time, but are you still intending to continue work on this?
I still care about improved pipeline aborting; however, this PR can be closed: a part of it (blind …
The situation: I was running an import task that apparently locked up, i.e. it didn't advance for hours, even though not all user queries had been run. ReplayGain was enabled, but for 20-ish albums, even that should not take as long. Worse, I couldn't quit beets via `KeyboardInterrupt`, and as a result didn't end up with a traceback when I eventually `kill`-ed beets. Not very satisfying, because I couldn't find out where it actually got stuck. (The actual culprit may have been related to a flaky USB connection to the external drive with the music.)

Therefore, this is my attempt at allowing to quit at any time, safeguarded by having to press `Ctrl-C` rapidly several times (and a warning about data loss). What do you think? Worth merging, or better off as a developer tool in a separate branch?

I think at least the first commit should go in, as it fixes a few catch-all `except:`s. According to `grep`, the only remaining catch-alls are now in `beets/util/bluelet.py`, which I dared not touch. Also, it is only used by the `bpd` plugin (again, says `grep`), and might already be properly handled (I did not read the `bluelet` code).

This tackles #803, but does not actually solve the problem (maybe I'll revive that issue).
except:s. According togrep, the only remaining catch-alls are now inbeets/util/bluelet.py, which I dared not touch. Also, it is only used by thebpdplugin (again, saysgrep), and might already be properly handled (I did not read theblueletcode).This tackles #803 , but does not actually solve the problem (Maybe I'll revive that issue).