This repository was archived by the owner on Nov 15, 2023. It is now read-only.
The node crashes when restarted after a warp sync. This only happens with ParityDB; RocksDB works fine. cc @cheme
Reproduce

1. Start a Polkadot node v0.9.31 or later: `polkadot --sync warp --db paritydb -d new-data-dir`
2. CTRL-C after the warp sync, once the old-block download starts (`⏩ Block history ....`)
3. Restart the node with the same command
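The steps above can be sketched as a script. This is only an illustration of the reproduction, assuming a `polkadot` binary (v0.9.31+) on `PATH`; the `sleep` duration is a placeholder for however long the warp sync takes on your connection, and `SIGINT` stands in for the manual CTRL-C:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Fresh data directory, as in the report above.
DB_DIR=new-data-dir

# 1. Warp-sync into a fresh ParityDB database, then interrupt the node
#    once the "Block history" download has started.
polkadot --sync warp --db paritydb -d "$DB_DIR" &
NODE_PID=$!
sleep 600                    # placeholder: wait until warp sync finishes
kill -INT "$NODE_PID"        # equivalent to CTRL-C
wait "$NODE_PID" || true     # node may exit non-zero on interrupt

# 2. Restart with the same flags; on affected versions this run dies with
#    "Block record is missing from the pruning window" errors.
polkadot --sync warp --db paritydb -d "$DB_DIR"
```

The script cannot be run without a real `polkadot` binary and a long sync window; it is just the manual steps written down.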
It prints this and dies:
# Many of these lines, seemingly depending on how long you let it download "Block history" after the warp sync:
ERROR tokio-runtime-worker state-db: Block record is missing from the pruning window, block number 0
…
ERROR tokio-runtime-worker state-db: Block record is missing from the pruning window, block number 0
ERROR tokio-runtime-worker afg: GRANDPA voter error: could not complete a round on disk: State Database error: Block record is missing from the pruning window
ERROR tokio-runtime-worker sc_service::task_manager: Essential task `grandpa-voter` failed. Shutting down service.
ERROR tokio-runtime-worker state-db: Block record is missing from the pruning window, block number 0
Error:
0: Other: Essential task failed.
My suspicion is that there is some data corruption in the DB. It is also strange that with parity-db, sending a SIGINT (i.e. CTRL-C) takes ~15 s to stop the process (has this always been the case?).
I also noticed that for a few versions now the node takes much longer to shut down. I am not sure whether that is because I am using ParityDB or something else.