[fix](partial update) mishandling of exceptions in the publish phase may result in data loss #30366
Conversation
run buildall

clang-tidy review says "All clean, LGTM! 👍"
TPC-H: Total hot run time: 39198 ms
TPC-DS: Total hot run time: 186667 ms
ClickBench: Total hot run time: 31.01 s

Load test result on machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
liaoxin01 left a comment:
LGTM
PR approved by anyone and no changes requested.

run buildall

clang-tidy review says "All clean, LGTM! 👍"
TPC-H: Total hot run time: 38858 ms
TPC-DS: Total hot run time: 186209 ms
ClickBench: Total hot run time: 30.7 s

Load test result on machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
run buildall

clang-tidy review says "All clean, LGTM! 👍"

TPC-H: Total hot run time: 38613 ms
TPC-DS: Total hot run time: 186518 ms
ClickBench: Total hot run time: 31.21 s

Load test result on machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
run buildall

clang-tidy review says "All clean, LGTM! 👍"

TPC-H: Total hot run time: 38768 ms
TPC-DS: Total hot run time: 186064 ms
ClickBench: Total hot run time: 31.14 s

Load test result on machine: 'aliyun_ecs.c7a.8xlarge_32C64G'
dataroaring left a comment:
LGTM
PR approved by at least one committer and no changes requested.
…ict concurrent partial update (#35739)

## Proposed changes

Issue Number: close #xxx

1. In #30366, to avoid leaving an incomplete delete bitmap in `txn_info->delete_bitmap` when publish fails, a copy of `txn_info->delete_bitmap` is made before the delete bitmap is computed.
2. This copy is not written back to `txn_info->delete_bitmap` after `rowset->rowset_meta()->merge_rowset_meta()` succeeds.
3. `TxnManager::publish_txn()` saves the contents of `txn_info->delete_bitmap` to RocksDB after the call to `update_delete_bitmap()`. Because of step 2, the bitmap generated during publish is never saved to RocksDB, so if the BE restarts at this point, that part of the incremental delete bitmap is lost.
4. The lost bitmap results in duplicate keys when querying.
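The four steps above can be modeled with a tiny sketch. The names here (`Bitmap`, `publish_and_persist`, the `merge_copy_back` flag) are hypothetical stand-ins, not Doris APIs; the point is only that whatever gets persisted is the transaction's own bitmap, so deletes computed on a copy are lost unless the copy is assigned back before the persist step:

```cpp
#include <set>

using Bitmap = std::set<int>;  // stand-in for the delete bitmap's row ids

// Models the publish flow: compute deletes on a copy of the txn bitmap,
// then persist the txn bitmap (RocksDB in the real code). Without the
// write-back, the persisted bitmap is the stale original.
Bitmap publish_and_persist(bool merge_copy_back) {
    Bitmap txn_bitmap;         // stand-in for txn_info->delete_bitmap
    Bitmap copy = txn_bitmap;  // step 1: copy made before computing
    copy.insert(7);            // delete produced during publish
    if (merge_copy_back) {
        txn_bitmap = copy;     // the fix: update txn state on success
    }
    return txn_bitmap;         // step 3: what gets saved to RocksDB
}
```

With `merge_copy_back == false` (the bug), the returned bitmap is empty even though a delete was computed, mirroring the incremental bitmap that vanished on BE restart.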
…ase may result in data loss (apache#30366) (apache#33503) cherry-pick apache#30366
Proposed changes
Issue Number: close #xxx
We found in our stress-test environment that, during concurrent partial-column updates, data was lost or became inconsistent because write exceptions for the newly generated segment were not handled correctly in the publish phase.
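The exception-safety pattern this PR adopts can be sketched as copy-compute-commit: compute the new delete bitmap on a working copy so that a mid-publish failure cannot leave a half-built bitmap in the transaction state. This is a minimal illustration with hypothetical names (`DeleteBitmap`, `publish_with_copy`, `fail_midway`), not the actual Doris BE code:

```cpp
#include <map>
#include <set>
#include <utility>

// Hypothetical stand-in for a delete bitmap keyed by rowset id -> row ids.
using DeleteBitmap = std::map<int, std::set<int>>;

// Copy-compute-commit: mutate only the working copy; the caller's bitmap
// is replaced atomically (from this function's point of view) on success.
bool publish_with_copy(DeleteBitmap& txn_bitmap, bool fail_midway) {
    DeleteBitmap working = txn_bitmap;  // copy before computing
    working[1].insert(42);              // mark one row as deleted
    if (fail_midway) {
        return false;                   // failure: txn_bitmap untouched
    }
    txn_bitmap = std::move(working);    // success: commit the copy back
    return true;
}
```

On the failure path the caller's bitmap is exactly as it was, so a retried publish starts from a clean state instead of a partially updated one.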
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...