idea from DiVOC borg session: speed-optimize tar pipe
Problem: for big repos, the tar pipe can take rather long:
- tar-export would read content chunks again and again over a relatively slow connection from a remote repo
  - maybe use `borg.remote.cache_if_remote` and a big local persistent cache to avoid repeated remote transfer of the same chunks
- the tar pipe should be between 2 borg processes on the same machine, lots of data flowing here!
- tar-import would do lots of chunking and hashing, but would be faster for the content of 2nd+ archives due to dedup
  - could additionally use a files cache like `create` does (for already "seen" and unmodified tarstream items)
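The local persistent cache idea for tar-export could look roughly like this. This is a hypothetical sketch, not borg's actual `cache_if_remote` implementation: the cache directory layout, the `fetch_remote` callable, and the class name are all illustrative assumptions.

```python
# Hypothetical sketch of a persistent on-disk chunk cache (NOT borg's actual
# cache_if_remote): fetched chunks are stored under their chunk id, so a
# repeated tar-export reads them from local disk instead of the remote repo.
import hashlib
import os
import tempfile

class PersistentChunkCache:
    def __init__(self, cache_dir, fetch_remote):
        self.cache_dir = cache_dir
        self.fetch_remote = fetch_remote  # callable: chunk_id (hex str) -> bytes
        os.makedirs(cache_dir, exist_ok=True)

    def get(self, chunk_id):
        path = os.path.join(self.cache_dir, chunk_id)
        if os.path.exists(path):              # cache hit: no remote round-trip
            with open(path, "rb") as f:
                return f.read()
        data = self.fetch_remote(chunk_id)    # cache miss: fetch once, keep
        with open(path, "wb") as f:
            f.write(data)
        return data

# usage: count remote fetches to show repeated transfers are avoided
calls = []
def fake_remote(chunk_id):
    calls.append(chunk_id)
    return b"payload-" + chunk_id.encode()

cache = PersistentChunkCache(tempfile.mkdtemp(), fake_remote)
cid = hashlib.sha256(b"some chunk").hexdigest()
first = cache.get(cid)
second = cache.get(cid)                       # served locally this time
assert first == second and len(calls) == 1
```

Since the cache is keyed by chunk id and chunks are immutable, no invalidation logic is needed; "big" here just means sized so that a whole archive's working set fits.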
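The files-cache idea for tar-import could be sketched as below. All names here are illustrative assumptions (this is not borg's files cache code): the point is only that an item whose size and mtime are unchanged since the last import can reuse its recorded chunk-id list and skip re-chunking/re-hashing entirely.

```python
# Hypothetical files-cache sketch for tarstream items, in the spirit of what
# borg `create` does: key on path metadata, reuse chunk ids when unchanged.
import hashlib

files_cache = {}  # path -> (size, mtime_ns, chunk_ids)

def chunk_and_hash(data, chunk_size=4):
    # toy fixed-size chunker standing in for the real (expensive) chunker
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def process_item(path, size, mtime_ns, data):
    cached = files_cache.get(path)
    if cached and cached[0] == size and cached[1] == mtime_ns:
        return cached[2], True                # "seen" and unmodified: reuse
    chunk_ids = chunk_and_hash(data)          # expensive path
    files_cache[path] = (size, mtime_ns, chunk_ids)
    return chunk_ids, False

ids1, hit1 = process_item("etc/hosts", 12, 1000, b"hello world!")
ids2, hit2 = process_item("etc/hosts", 12, 1000, b"hello world!")
assert not hit1 and hit2 and ids1 == ids2
```

One caveat versus `create`: a tarstream only carries whatever metadata the tar format preserves, so the cache key can only be as precise as those fields (e.g. no inode numbers).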
Note: an optimisation sending chunk-id lists over the tar pipe requires the id hash algorithm+secret and the chunker secret to be identical in both repos.
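Why the secrets must match can be shown with a toy keyed hash (illustrative only; HMAC-SHA256 here stands in for borg's actual keyed id hash): the same plaintext chunk gets a different id under a different secret, so chunk-id lists from one repo are meaningless to a repo with another secret. The same goes for the chunker secret, since differently seeded chunkers split identical files into different chunks in the first place.

```python
# Illustrative: a chunk id is a keyed MAC over the chunk plaintext, so ids
# are comparable across repos only if algorithm AND secret are identical.
import hashlib
import hmac

def chunk_id(secret, data):
    return hmac.new(secret, data, hashlib.sha256).hexdigest()

data = b"same chunk content"
assert chunk_id(b"key-A", data) == chunk_id(b"key-A", data)  # same secret: ids match
assert chunk_id(b"key-A", data) != chunk_id(b"key-B", data)  # different secret: no match
```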