
Restore remote data in-place with deduplication or rsync algorithm #95

@oderwat

Description


I copied this over from the Attic Issue jborg/attic#322

Using Attic to snapshot multiple GB of data to a remote server leaves me with the problem that restoring the data to a former state requires a full extraction, with all of the data traveling over the network.

I could extract on the remote server and then rsync the data back to the source, but that would still require a full extraction (costing time and space) and would also decrypt the backup data on the remote server, which largely defeats the purpose of the AES encryption.

My current best option seems to be mounting the data on the remote server and running rsync against that. But this requires the secret key on the backup server, which I would like to avoid. It also introduces multiple points of failure and extra complexity.

What about having an extract mode that uses the already existing local data for an in-place diff recovery? This could make it pretty cheap to restore to an older state!

I guess this could be done with the existing de-duplication algorithm, as it is kind of a "reverse backup".

It would also allow snapshots on a remote server to be extracted to another system that "mimics" the original. The latter is my use case: we could let the servers create backups every few hours and easily restore that state onto a developer machine, even if the snapshot is multiple gigabytes and the developer has no access to the original server. It would be fast because most of the data is already on the developer machine. Basically it would just restore a "diff" relative to the last extracted snapshot.
