borg extract: add --continue flag #1665
Conversation
```python
if pi:
    pi.show(increase=prefix_length)
ids = [c.id for c in chunks]
for _, data in self.pipeline.fetch_many(ids, is_preloaded=True):
```
Not sure if this causes an issue with the preloaded chunks: IIRC it preloads all chunks of a file when processing its item, but then fetches either none of them (existing, full file) or only some (existing, partial file) before it goes into fetch-all mode (non-existing files).
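To make those three cases concrete, here is a toy model of the preload bookkeeping (hypothetical class and attribute names; only the preload-everything-then-fetch-none/some/all behaviour is taken from the comment above):

```python
class ToyRemoteRepository:
    """Toy stand-in for the preload bookkeeping, not borg's real class."""

    def __init__(self):
        self.preload_ids = []   # chunk ids queued for background fetching
        self.cache = {}         # chunk id -> response that already arrived

    def preload(self, ids):
        # all chunks of a file are preloaded when its item is processed
        self.preload_ids += ids

    def get_many(self, ids):
        # a preloaded id is only consumed when it is actually fetched
        for id in ids:
            if id in self.preload_ids:
                self.preload_ids.remove(id)
            yield self.cache.pop(id, b"chunk %d" % id)


repo = ToyRemoteRepository()
repo.preload([1, 2, 3, 4])         # item processed: preload all its chunks
list(repo.get_many([3, 4]))        # existing, partial file: fetch only the tail
assert repo.preload_ids == [1, 2]  # the never-fetched ids stay queued forever
```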
Good catch, I didn't consider that. I made up `RemoteRepository.discard_preload` for that, which looks about right to me, but for some reason it makes things hang.
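Continuing the toy model above, a minimal sketch of what such a `discard_preload` could look like (the method was this commenter's experiment, not an existing borg API; the list/dict shapes are assumptions):

```python
def discard_preload(self, ids):
    # hypothetical: forget preloaded chunks that will never be fetched
    for id in ids:
        if id in self.preload_ids:
            # not requested yet: just drop it from the queue
            self.preload_ids.remove(id)
        else:
            # possibly already answered: drop the stale response, if any
            self.cache.pop(id, None)
```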
This is kinda strange. If I set a breakpoint in `RR.discard_ids`, I see that neither `cache` nor `responses` contain anything, but still: removing the IDs from `preload_ids` makes it hang in `call_many`, in the `while not self.to_send and (calls or self.preload_ids) and len(waiting_for) < MAX_INFLIGHT:` loop.
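For readability, that loop condition unpacked (the comments are my reading of the terms, not taken from borg's sources):

```python
def may_build_next_request(to_send, calls, preload_ids, waiting_for, MAX_INFLIGHT):
    return (not to_send                           # nothing queued to write yet
            and (calls or preload_ids)            # work remains: calls or preloaded ids
            and len(waiting_for) < MAX_INFLIGHT)  # room for another in-flight request
```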
Current coverage is 84.67% (diff: 85.96%)

```
@@             master      #1665   diff @@
==========================================
  Files            20         20
  Lines          6548       6589    +41
  Methods           0          0
  Messages          0          0
  Branches       1112       1123    +11
==========================================
+ Hits           5547       5579    +32
- Misses          734        739     +5
- Partials        267        271     +4
```
FYI, the reason this was moved to b3 is that with larger files it still hangs in `RemoteRepository.call_many`, in the same spot as earlier. I haven't looked at it again yet.
```python
fd.seek(prefix_length)
fd.truncate()
discarded_count = len(item.chunks) - len(chunks)
discarded_chunks_ids = [c.id for c in item.chunks[:discarded_count]]
```
nitpick: `discard_count` / `discard_chunk_ids` (not past tense)
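I.e., the last two quoted lines with the suggested names applied:

```python
discard_count = len(item.chunks) - len(chunks)
discard_chunk_ids = [c.id for c in item.chunks[:discard_count]]
```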
"later" and no activity = close. |
Fixes #1356