Merged

46 commits
eadcbc7
Backport #69274 to 24.8: fix `metadata_version` in ZooKeeper
robot-clickhouse Nov 15, 2024
56d7140
Backport #71966 to 24.8: Fix partition pruning with binary monotonic …
robot-clickhouse Nov 18, 2024
3df9e19
Update autogenerated version to 24.8.7.41 and contributors
robot-clickhouse Nov 18, 2024
c91bc0a
Backport #71849 to 24.8: Fix: add monotonic estimation for DateTime64…
robot-clickhouse Nov 18, 2024
09906a1
Merge pull request #72032 from ClickHouse/backport/24.8/71966
robot-clickhouse-ci-1 Nov 18, 2024
d0d1002
Merge pull request #72038 from ClickHouse/backport/24.8/71849
yakov-olkhovskiy Nov 18, 2024
248477f
Backport #72051 to 24.8: Correct permissions for dictionaries
robot-clickhouse Nov 19, 2024
1bddc62
Backport #72049 to 24.8: Another fix for client syntax highlighting
robot-clickhouse Nov 19, 2024
89f7572
Merge pull request #72060 from ClickHouse/backport/24.8/72051
robot-clickhouse-ci-2 Nov 19, 2024
b1a405c
Merge pull request #72067 from ClickHouse/backport/24.8/72049
robot-clickhouse Nov 19, 2024
aa758b3
Merge pull request #71981 from ClickHouse/backport/24.8/69274
tavplubix Nov 19, 2024
218e28f
Backport #72080 to 24.8: Fix formatting of `MOVE PARTITION ... TO TAB…
robot-clickhouse Nov 20, 2024
9a7e25c
Merge pull request #72114 from ClickHouse/backport/24.8/72080
robot-ch-test-poll3 Nov 20, 2024
37d0ca9
Backport #71845 to 24.8: Acquire zero-copy shared lock before moving …
robot-clickhouse Nov 20, 2024
aa86a78
Merge pull request #72142 from ClickHouse/backport/24.8/71845
robot-ch-test-poll4 Nov 20, 2024
87db88e
Backport #71982 to 24.8: Allow only SELECT queries in EXPLAIN AST use…
robot-clickhouse Nov 20, 2024
81036bd
Merge pull request #72155 from ClickHouse/backport/24.8/71982
robot-clickhouse-ci-1 Nov 20, 2024
edddd08
Merge remote-tracking branch 'altinity/customizations/24.8.7' into cu…
Enmk Nov 27, 2024
1dad069
Disabled pushing to slack
Enmk Nov 28, 2024
934fbde
Merge pull request #538 from Altinity/24.8_ci_buddy-do-not-attempt-to…
Enmk Nov 28, 2024
745dd69
Disabled getting AZURE_CONNECTION_STRING from SSM
Enmk Nov 28, 2024
479c3a1
Also for stress test
Enmk Nov 28, 2024
2c31b57
Fixed CH startup
Enmk Nov 29, 2024
adec549
Revert "Merge pull request #62565 from ClickHouse/ci_add_azure_tests"
Enmk Dec 5, 2024
51522b8
Updated azurite version
Enmk Dec 7, 2024
9883f88
Pushing events to proper database
Enmk Dec 9, 2024
4373bb9
Enable zram
MyroTk Dec 9, 2024
cea8bed
Fix stress test
MyroTk Dec 9, 2024
070a352
Update reusable_build.yml
MyroTk Dec 9, 2024
bbfcd10
Merge pull request #539 from Altinity/24.8_fix_stateless_and_stateful…
Enmk Dec 9, 2024
4c205ec
Merge pull request #545 from Altinity/24.8_fix_stress
Enmk Dec 9, 2024
3e3b6d7
Merge pull request #546 from Altinity/24.8_fix_stateless_and_stateful…
Enmk Dec 9, 2024
b069bfb
Fix Build Report and move FinishCheck to standby runner
MyroTk Dec 10, 2024
3d29e79
Merge pull request #549 from Altinity/24.8_pipeline_patch
Enmk Dec 11, 2024
a8f5e7b
Attempt to make version management sane
Enmk Nov 27, 2024
1555ef7
Fixed minor hiccup
Enmk Nov 28, 2024
2499a02
Merge pull request #537 from Altinity/24.8_easier_versioning
Enmk Dec 11, 2024
e508f82
Testing if creating a tag actually sets the proper version
Enmk Dec 11, 2024
e365b2d
Fix unit tests for commits with no PRs
Enmk Dec 11, 2024
d4fb1be
Getting tweak and flavour from the tag
Enmk Dec 12, 2024
48fbf99
Update release_branches.yml
Enmk Dec 12, 2024
7f5173a
Updated tests to match new logic of generating version
Enmk Dec 12, 2024
6005d93
Merge pull request #551 from Altinity/24.8_easier_versioning
Enmk Dec 12, 2024
d45f9d7
Updating version tweak based on previous tag and number of commits si…
Enmk Dec 12, 2024
2198c54
Merge pull request #555 from Altinity/24.8_easier_versioning
Enmk Dec 13, 2024
017480c
Merge branch 'project-antalya' into project-antalya-24.8.8
Enmk Dec 13, 2024
10 changes: 10 additions & 0 deletions .github/actions/common_setup/action.yml
@@ -28,6 +28,16 @@ runs:
run: |
# to remove every leftovers
sudo rm -fr "$TEMP_PATH" && mkdir -p "$TEMP_PATH"
- name: Setup zram
shell: bash
run: |
sudo modprobe zram
MemTotal=$(grep -Po "(?<=MemTotal:)\s+\d+" /proc/meminfo) # KiB
Percent=200
ZRAM_SIZE=$(($MemTotal / 1024 / 1024 * $Percent / 100)) # Convert to GiB
.github/retry.sh 30 2 sudo zramctl --size ${ZRAM_SIZE}GiB --algorithm zstd /dev/zram0
sudo mkswap /dev/zram0 && sudo swapon -p 100 /dev/zram0
sudo sysctl vm.swappiness=200
- name: Tune vm.mmap_rnd_bits for sanitizers
shell: bash
run: |
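The step above sizes the zstd-compressed zram swap device at 200 % of physical RAM (MemTotal is reported in KiB, so it is divided down to whole GiB first). A tiny worked version of that arithmetic — the sample MemTotal value is made up for illustration:

```cpp
#include <cstdint>
#include <iostream>

// Same arithmetic as the workflow step: MemTotal comes from /proc/meminfo in KiB;
// the zram device is sized at Percent (here 200) of it, truncated to whole GiB.
uint64_t zramSizeGiB(uint64_t mem_total_kib, uint64_t percent)
{
    return mem_total_kib / 1024 / 1024 * percent / 100;
}

int main()
{
    // A runner reporting MemTotal of ~65847760 KiB (a 64 GiB machine) gets a 124 GiB zram device.
    std::cout << zramSizeGiB(65847760, 200) << " GiB\n";
}
```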
4 changes: 2 additions & 2 deletions .github/workflows/release_branches.yml
@@ -190,7 +190,7 @@ jobs:
- name: Builds report
run: |
cd "$GITHUB_WORKSPACE/tests/ci"
python3 ./build_report_check.py --reports package_release package_aarch64 package_asan package_msan package_ubsan package_tsan package_debug binary_darwin binary_darwin_aarch64
python3 ./build_report_check.py --reports package_release package_aarch64 package_asan package_msan package_ubsan package_tsan package_debug
- name: Set status
# NOTE(vnemkov): generate and upload the report even if previous step failed
if: success() || failure()
@@ -547,7 +547,7 @@ jobs:
- RegressionTestsRelease
- RegressionTestsAarch64
- SignRelease
runs-on: [self-hosted, altinity-on-demand, altinity-type-cax11, altinity-image-arm-system-ubuntu-22.04]
runs-on: [self-hosted, altinity-on-demand, altinity-type-cax11, altinity-image-arm-snapshot-22.04-arm, altinity-startup-snapshot, altinity-setup-none]
steps:
- name: Check out repository code
uses: Altinity/checkout@19599efdf36c4f3f30eb55d5bb388896faea69f6
3 changes: 1 addition & 2 deletions .github/workflows/reusable_build.yml
@@ -4,7 +4,6 @@
env:
# Force the stdout and stderr streams to be unbuffered
PYTHONUNBUFFERED: 1
CLICKHOUSE_STABLE_VERSION_SUFFIX: altinityedge
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
@@ -54,7 +53,7 @@ jobs:
if: ${{ contains(fromJson(inputs.data).jobs_data.jobs_to_do, inputs.build_name) || inputs.force }}
env:
GITHUB_JOB_OVERRIDDEN: Build-${{inputs.build_name}}
runs-on: [self-hosted, altinity-setup-builder, altinity-type-ccx53, altinity-on-demand, altinity-in-ash, altinity-image-x86-system-ubuntu-22.04]
runs-on: [self-hosted, altinity-type-ccx53, altinity-on-demand, altinity-image-x86-snapshot-22.04-amd, altinity-startup-snapshot, altinity-setup-none]
steps:
- name: Check out repository code
uses: Altinity/checkout@19599efdf36c4f3f30eb55d5bb388896faea69f6
14 changes: 8 additions & 6 deletions cmake/autogenerated_versions.txt
@@ -2,16 +2,18 @@

# NOTE: VERSION_REVISION has nothing common with DBMS_TCP_PROTOCOL_VERSION,
# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
SET(VERSION_REVISION 54495)
SET(VERSION_REVISION 54496)
SET(VERSION_MAJOR 24)
SET(VERSION_MINOR 8)
SET(VERSION_PATCH 7)
SET(VERSION_GITHASH ddb8c2197719757fcc7ecee79079b00ebd8a7487)
SET(VERSION_PATCH 8)
SET(VERSION_GITHASH e28553d4f2ba78643f9ef47b698954a2c54e6bcc)

SET(VERSION_TWEAK 43)
#1000 for altinitystable candidates
#2000 for altinityedge candidates
SET(VERSION_TWEAK 182000)
SET(VERSION_FLAVOUR altinityedge)

SET(VERSION_DESCRIBE v24.8.7.43.altinityedge)
SET(VERSION_STRING 24.8.7.43)
SET(VERSION_DESCRIBE v24.8.8.182000.altinityedge)
SET(VERSION_STRING 24.8.8.182000)

# end of autochange
7 changes: 4 additions & 3 deletions cmake/version.cmake
@@ -3,9 +3,10 @@ include(${PROJECT_SOURCE_DIR}/cmake/autogenerated_versions.txt)
set(VERSION_EXTRA "" CACHE STRING "")
set(VERSION_TWEAK "" CACHE STRING "")

if (VERSION_TWEAK)
string(CONCAT VERSION_STRING ${VERSION_STRING} "." ${VERSION_TWEAK})
endif ()
# NOTE(vnemkov): we rely on VERSION_TWEAK portion to be already present in VERSION_STRING
# if (VERSION_TWEAK)
# string(CONCAT VERSION_STRING ${VERSION_STRING} "." ${VERSION_TWEAK})
# endif ()

if (VERSION_EXTRA)
string(CONCAT VERSION_STRING ${VERSION_STRING} "." ${VERSION_EXTRA})
3 changes: 1 addition & 2 deletions docker/test/stateful/run.sh
@@ -22,8 +22,7 @@ source /utils.lib
# install test configs
/usr/share/clickhouse-test/config/install.sh

azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --silent --inMemoryPersistence &

azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --debug /azurite_log &
./setup_minio.sh stateful
./mc admin trace clickminio > /test_output/minio.log &
MC_ADMIN_PID=$!
2 changes: 1 addition & 1 deletion docker/test/stateless/Dockerfile
@@ -87,7 +87,7 @@ ENV MINIO_ROOT_PASSWORD="clickhouse"
ENV EXPORT_S3_STORAGE_POLICIES=1
ENV CLICKHOUSE_GRPC_CLIENT="/usr/share/clickhouse-utils/grpc-client/clickhouse-grpc-client.py"

RUN npm install -g azurite@3.30.0 \
RUN npm install -g azurite@^3.33.0 \
&& npm install -g tslib && npm install -g node

COPY run.sh /
6 changes: 6 additions & 0 deletions docker/test/stateless/run.sh
@@ -50,6 +50,12 @@ source /utils.lib
# install test configs
/usr/share/clickhouse-test/config/install.sh

if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
echo "Azure is disabled"
else
azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --debug /azurite_log &
fi

./setup_minio.sh stateless

./setup_hdfs_minicluster.sh
1 change: 1 addition & 0 deletions docker/test/stress/run.sh
@@ -55,6 +55,7 @@ export ZOOKEEPER_FAULT_INJECTION=1
# available for dump via clickhouse-local
configure

azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --debug /azurite_log &
./setup_minio.sh stateless # to have a proper environment

config_logs_export_cluster /etc/clickhouse-server/config.d/system_logs_export.yaml
4 changes: 4 additions & 0 deletions src/Common/FailPoint.cpp
@@ -64,6 +64,10 @@ static struct InitFiu
REGULAR(lazy_pipe_fds_fail_close) \
PAUSEABLE(infinite_sleep) \
PAUSEABLE(stop_moving_part_before_swap_with_active) \
REGULAR(slowdown_index_analysis) \
REGULAR(replicated_merge_tree_all_replicas_stale) \
REGULAR(zero_copy_lock_zk_fail_before_op) \
REGULAR(zero_copy_lock_zk_fail_after_op) \


namespace FailPoints
1 change: 1 addition & 0 deletions src/Functions/DateTimeTransforms.h
@@ -63,6 +63,7 @@ constexpr time_t MAX_DATETIME_DAY_NUM = 49710; // 2106-02-07
/// This factor transformation will say that the function is monotone everywhere.
struct ZeroTransform
{
static constexpr auto name = "Zero";
static UInt16 execute(Int64, const DateLUTImpl &) { return 0; }
static UInt16 execute(UInt32, const DateLUTImpl &) { return 0; }
static UInt16 execute(Int32, const DateLUTImpl &) { return 0; }
19 changes: 16 additions & 3 deletions src/Functions/IFunctionCustomWeek.h
@@ -55,13 +55,26 @@ class IFunctionCustomWeek : public IFunction
? is_monotonic
: is_not_monotonic;
}
else

if (checkAndGetDataType<DataTypeDateTime64>(&type))
{
return Transform::FactorTransform::execute(UInt32(left.safeGet<UInt64>()), date_lut)
== Transform::FactorTransform::execute(UInt32(right.safeGet<UInt64>()), date_lut)

const auto & left_date_time = left.safeGet<DateTime64>();
TransformDateTime64<typename Transform::FactorTransform> transformer_left(left_date_time.getScale());

const auto & right_date_time = right.safeGet<DateTime64>();
TransformDateTime64<typename Transform::FactorTransform> transformer_right(right_date_time.getScale());

return transformer_left.execute(left_date_time.getValue(), date_lut)
== transformer_right.execute(right_date_time.getValue(), date_lut)
? is_monotonic
: is_not_monotonic;
}

return Transform::FactorTransform::execute(UInt32(left.safeGet<UInt64>()), date_lut)
== Transform::FactorTransform::execute(UInt32(right.safeGet<UInt64>()), date_lut)
? is_monotonic
: is_not_monotonic;
}

protected:
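The backport above makes the week-function monotonicity check treat DateTime64 bounds with their native scale (via TransformDateTime64) instead of squeezing them through a UInt32 cast, which would mangle sub-second tick counts. A rough standalone sketch of why the scale matters — the types and transform below are simplified stand-ins, not the ClickHouse classes:

```cpp
#include <cstdint>
#include <iostream>

// Toy stand-in for a DateTime64 value: an Int64 tick count at a decimal scale,
// e.g. scale 3 means milliseconds since the Unix epoch.
struct DateTime64Value
{
    int64_t value;
    uint32_t scale;
};

// A factor transform defined on whole seconds (toStartOfDay-style).
int64_t toStartOfDaySeconds(int64_t seconds)
{
    return seconds - (seconds % 86400);
}

// What wrapping the transform with the value's scale effectively does:
// strip the fractional ticks before applying the seconds-based transform.
int64_t applyWithScale(const DateTime64Value & dt)
{
    int64_t divider = 1;
    for (uint32_t i = 0; i < dt.scale; ++i)
        divider *= 10;
    return toStartOfDaySeconds(dt.value / divider);
}

int main()
{
    // 2024-11-18 12:34:56.789 UTC stored at scale 3 (milliseconds).
    DateTime64Value dt{1731933296789, 3};

    // Scale-aware: prints 1731888000, i.e. the start of that day.
    std::cout << applyWithScale(dt) << '\n';

    // The old UInt32(...) path fed raw ticks into the seconds-based transform,
    // so the two bounds being compared were essentially garbage.
    std::cout << toStartOfDaySeconds(static_cast<uint32_t>(dt.value)) << '\n';
}
```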
7 changes: 4 additions & 3 deletions src/Parsers/ASTAlterQuery.cpp
@@ -70,8 +70,12 @@ ASTPtr ASTAlterCommand::clone() const

void ASTAlterCommand::formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
scope_guard closing_bracket_guard;
if (format_alter_commands_with_parentheses)
{
settings.ostr << "(";
closing_bracket_guard = make_scope_guard(std::function<void(void)>([&settings]() { settings.ostr << ")"; }));
}

if (type == ASTAlterCommand::ADD_COLUMN)
{
@@ -498,9 +502,6 @@ void ASTAlterCommand::formatImpl(const FormatSettings & settings, FormatState &
}
else
throw Exception(ErrorCodes::UNEXPECTED_AST_STRUCTURE, "Unexpected type of ALTER");

if (format_alter_commands_with_parentheses)
settings.ostr << ")";
}

void ASTAlterCommand::forEachPointerToChild(std::function<void(void**)> f)
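The ASTAlterQuery change replaces the manual `settings.ostr << ")"` at the end of `formatImpl` with a scope guard created right after the opening parenthesis is printed, so the bracket is closed on every exit path, including the `throw` for an unexpected ALTER type. A minimal sketch of that pattern with a hand-rolled guard (ClickHouse's real `scope_guard`/`make_scope_guard` helpers differ in detail):

```cpp
#include <functional>
#include <iostream>
#include <stdexcept>
#include <utility>

// Hand-rolled stand-in for scope_guard: runs its callback when destroyed.
class ScopeGuard
{
public:
    ScopeGuard() = default;
    explicit ScopeGuard(std::function<void()> fn) : callback(std::move(fn)) {}
    ScopeGuard(const ScopeGuard &) = delete;
    ScopeGuard & operator=(ScopeGuard && other) noexcept
    {
        callback = std::move(other.callback);
        other.callback = nullptr;
        return *this;
    }
    ~ScopeGuard() { if (callback) callback(); }

private:
    std::function<void()> callback;
};

void formatCommand(std::ostream & out, bool with_parentheses, bool unknown_type)
{
    ScopeGuard closing_bracket_guard;
    if (with_parentheses)
    {
        out << "(";
        // ")" is now emitted when the guard leaves scope, even on early returns or throws.
        closing_bracket_guard = ScopeGuard([&out] { out << ")"; });
    }

    if (unknown_type)
        throw std::runtime_error("Unexpected type of ALTER");  // guard still closes the bracket

    out << "ADD COLUMN c Int32";
}

int main()
{
    formatCommand(std::cout, /*with_parentheses=*/ true, /*unknown_type=*/ false);
    std::cout << '\n';  // prints: (ADD COLUMN c Int32)
}
```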
3 changes: 3 additions & 0 deletions src/Parsers/ExpressionElementParsers.cpp
@@ -140,6 +140,9 @@ bool ParserSubquery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
const ASTPtr & explained_ast = explain_query.getExplainedQuery();
if (explained_ast)
{
if (!explained_ast->as<ASTSelectWithUnionQuery>())
throw Exception(ErrorCodes::BAD_ARGUMENTS, "EXPLAIN inside subquery supports only SELECT queries");

auto view_explain = makeASTFunction("viewExplain",
std::make_shared<ASTLiteral>(kind_str),
std::make_shared<ASTLiteral>(settings_str),
7 changes: 6 additions & 1 deletion src/Parsers/IParser.cpp
@@ -53,7 +53,12 @@ void Expected::highlight(HighlightedRange range)
/// for each highlight x and the next one y: x.end <= y.begin, thus preventing any overlap.

if (it != highlights.begin())
it = std::prev(it);
{
auto prev_it = std::prev(it);

if (range.begin < prev_it->end)
it = prev_it;
}

while (it != highlights.end() && range.begin < it->end)
{
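The IParser fix only steps back to the previous highlight when the new range actually overlaps it (`range.begin < prev_it->end`); the old unconditional `std::prev` could pull in, and then erase, a neighbour that did not overlap at all. A self-contained sketch of the intended invariant — a sorted set of non-overlapping half-open ranges (the container and types here are simplified, not ClickHouse's `Expected`/`HighlightedRange`):

```cpp
#include <iostream>
#include <iterator>
#include <set>

// Keep a set of non-overlapping [begin, end) ranges ordered by begin;
// when a new range arrives, erase exactly the ranges it overlaps.
struct Range
{
    int begin;
    int end;
    bool operator<(const Range & other) const { return begin < other.begin; }
};

void addHighlight(std::set<Range> & highlights, Range range)
{
    auto it = highlights.lower_bound(range);  // first range with begin >= range.begin

    // Step back only if the new range really overlaps its left neighbour;
    // stepping back unconditionally would let a non-overlapping neighbour be erased below.
    if (it != highlights.begin())
    {
        auto prev_it = std::prev(it);
        if (range.begin < prev_it->end)
            it = prev_it;
    }

    // Erase every existing range the new one overlaps, then insert it.
    while (it != highlights.end() && range.begin < it->end && it->begin < range.end)
        it = highlights.erase(it);

    highlights.insert(range);
}

int main()
{
    std::set<Range> highlights;
    addHighlight(highlights, {0, 5});
    addHighlight(highlights, {10, 15});
    addHighlight(highlights, {4, 12});  // overlaps both existing ranges, so they are replaced
    for (const auto & r : highlights)
        std::cout << "[" << r.begin << ", " << r.end << ") ";
    std::cout << '\n';  // prints: [4, 12)
}
```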
13 changes: 9 additions & 4 deletions src/Storages/MergeTree/IMergeTreeDataPart.cpp
@@ -524,6 +524,14 @@ SerializationPtr IMergeTreeDataPart::tryGetSerialization(const String & column_n
return it == serializations.end() ? nullptr : it->second;
}

bool IMergeTreeDataPart::isMovingPart() const
{
fs::path part_directory_path = getDataPartStorage().getRelativePath();
if (part_directory_path.filename().empty())
part_directory_path = part_directory_path.parent_path();
return part_directory_path.parent_path().filename() == "moving";
}

void IMergeTreeDataPart::removeIfNeeded()
{
assert(assertHasValidVersionMetadata());
@@ -548,10 +556,7 @@ void IMergeTreeDataPart::removeIfNeeded()
throw Exception(ErrorCodes::LOGICAL_ERROR, "relative_path {} of part {} is invalid or not set",
getDataPartStorage().getPartDirectory(), name);

fs::path part_directory_path = getDataPartStorage().getRelativePath();
if (part_directory_path.filename().empty())
part_directory_path = part_directory_path.parent_path();
bool is_moving_part = part_directory_path.parent_path().filename() == "moving";
bool is_moving_part = isMovingPart();
if (!startsWith(file_name, "tmp") && !endsWith(file_name, ".tmp_proj") && !is_moving_part)
{
LOG_ERROR(
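The refactor above extracts the "is this part under a `moving/` directory?" check into `isMovingPart()`, which has to cope with relative paths that end in a slash (where `filename()` is empty). A small standalone illustration of that path handling — the sample paths are invented:

```cpp
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// Mirrors the idea of IMergeTreeDataPart::isMovingPart(): a part is "moving"
// when its directory lives directly under a `moving/` directory.
bool isMovingPart(fs::path part_directory_path)
{
    // A trailing slash makes filename() empty ("a/b/" -> ""), so drop it first.
    if (part_directory_path.filename().empty())
        part_directory_path = part_directory_path.parent_path();
    return part_directory_path.parent_path().filename() == "moving";
}

int main()
{
    std::cout << std::boolalpha;
    std::cout << isMovingPart("store/abc/moving/all_1_1_0/") << '\n';  // true
    std::cout << isMovingPart("store/abc/moving/all_1_1_0") << '\n';   // true
    std::cout << isMovingPart("store/abc/detached/all_1_1_0") << '\n'; // false
}
```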
3 changes: 3 additions & 0 deletions src/Storages/MergeTree/IMergeTreeDataPart.h
@@ -429,6 +429,9 @@ class IMergeTreeDataPart : public std::enable_shared_from_this<IMergeTreeDataPar

bool isProjectionPart() const { return parent_part != nullptr; }

/// Check if the part is in the `/moving` directory
bool isMovingPart() const;

const IMergeTreeDataPart * getParentPart() const { return parent_part; }
String getParentPartName() const { return parent_part_name; }

18 changes: 16 additions & 2 deletions src/Storages/MergeTree/KeyCondition.cpp
@@ -914,6 +914,8 @@ static FieldRef applyFunction(const FunctionBasePtr & func, const DataTypePtr &
return {field.columns, field.row_idx, result_idx};
}

DataTypePtr getArgumentTypeOfMonotonicFunction(const IFunctionBase & func);

/// Sequentially applies functions to the column, returns `true`
/// if all function arguments are compatible with functions
/// signatures, and none of the functions produce `NULL` output.
@@ -945,7 +947,7 @@ bool applyFunctionChainToColumn(
}

// And cast it to the argument type of the first function in the chain
auto in_argument_type = functions[0]->getArgumentTypes()[0];
auto in_argument_type = getArgumentTypeOfMonotonicFunction(*functions[0]);
if (canBeSafelyCasted(result_type, in_argument_type))
{
result_column = castColumnAccurate({result_column, result_type, ""}, in_argument_type);
@@ -974,7 +976,7 @@ bool applyFunctionChainToColumn(
if (func->getArgumentTypes().empty())
return false;

auto argument_type = func->getArgumentTypes()[0];
auto argument_type = getArgumentTypeOfMonotonicFunction(*func);
if (!canBeSafelyCasted(result_type, argument_type))
return false;

@@ -1384,6 +1386,18 @@ class FunctionWithOptionalConstArg : public IFunctionBase
Kind kind = Kind::NO_CONST;
};

DataTypePtr getArgumentTypeOfMonotonicFunction(const IFunctionBase & func)
{
const auto & arg_types = func.getArgumentTypes();
if (const auto * func_ptr = typeid_cast<const FunctionWithOptionalConstArg *>(&func))
{
if (func_ptr->getKind() == FunctionWithOptionalConstArg::Kind::LEFT_CONST)
return arg_types.at(1);
}

return arg_types.at(0);
}


bool KeyCondition::isKeyPossiblyWrappedByMonotonicFunctions(
const RPNBuilderTreeNode & node,
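The new `getArgumentTypeOfMonotonicFunction` helper picks the cast target for range bounds: when the monotonic function was captured as a `FunctionWithOptionalConstArg` with the constant on the left, the key column is the second argument, so its type (`arg_types.at(1)`) must be used instead of always taking the first. A simplified stand-in for that selection logic (the types below are illustrative, not the ClickHouse classes):

```cpp
#include <iostream>
#include <string>
#include <vector>

// A monotonic function that was captured together with a baked-in constant
// remembers which side the constant sits on; range bounds must be cast to the
// type of the non-constant (key column) argument.
enum class ConstArgKind { NoConst, LeftConst, RightConst };

struct MonotonicFunctionInfo
{
    std::vector<std::string> argument_types;
    ConstArgKind kind = ConstArgKind::NoConst;
};

std::string getArgumentTypeOfMonotonicFunction(const MonotonicFunctionInfo & func)
{
    if (func.kind == ConstArgKind::LeftConst)
        return func.argument_types.at(1);  // argument 0 is the constant, argument 1 is the key column
    return func.argument_types.at(0);      // previous behaviour: always the first argument
}

int main()
{
    MonotonicFunctionInfo with_left_const{{"String", "DateTime64(3)"}, ConstArgKind::LeftConst};
    MonotonicFunctionInfo plain{{"DateTime"}, ConstArgKind::NoConst};
    std::cout << getArgumentTypeOfMonotonicFunction(with_left_const) << '\n';  // DateTime64(3)
    std::cout << getArgumentTypeOfMonotonicFunction(plain) << '\n';            // DateTime
}
```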
48 changes: 32 additions & 16 deletions src/Storages/MergeTree/MergeTreeData.cpp
@@ -7998,33 +7998,49 @@ MovePartsOutcome MergeTreeData::moveParts(const CurrentlyMovingPartsTaggerPtr &
/// replica will actually move the part from disk to some
/// zero-copy storage other replicas will just fetch
/// metainformation.
if (auto lock = tryCreateZeroCopyExclusiveLock(moving_part.part->name, disk); lock)
auto lock = tryCreateZeroCopyExclusiveLock(moving_part.part->name, disk);
if (!lock)
{
/// Move will be retried but with backoff.
LOG_DEBUG(
log,
"Move of part {} postponed, because zero copy mode enabled and zero-copy lock was not acquired",
moving_part.part->name);
result = MovePartsOutcome::MoveWasPostponedBecauseOfZeroCopy;
break;
}

if (lock->isLocked())
{
cloned_part = parts_mover.clonePart(moving_part, read_settings, write_settings);
/// Cloning part can take a long time.
/// Recheck if the lock (and keeper session expirity) is OK
if (lock->isLocked())
{
cloned_part = parts_mover.clonePart(moving_part, read_settings, write_settings);
parts_mover.swapClonedPart(cloned_part);
break; /// Successfully moved
}
else
{
LOG_DEBUG(
log,
"Move of part {} postponed, because zero copy mode enabled and zero-copy lock was lost during cloning the part",
moving_part.part->name);
result = MovePartsOutcome::MoveWasPostponedBecauseOfZeroCopy;
break;
}
else if (wait_for_move_if_zero_copy)
}
if (wait_for_move_if_zero_copy)
{
LOG_DEBUG(log, "Other replica is working on move of {}, will wait until lock disappear", moving_part.part->name);
/// Wait and checks not only for timeout but also for shutdown and so on.
while (!waitZeroCopyLockToDisappear(*lock, 3000))
{
LOG_DEBUG(log, "Other replica is working on move of {}, will wait until lock disappear", moving_part.part->name);
/// Wait and checks not only for timeout but also for shutdown and so on.
while (!waitZeroCopyLockToDisappear(*lock, 3000))
{
LOG_DEBUG(log, "Waiting until some replica will move {} and zero copy lock disappear", moving_part.part->name);
}
LOG_DEBUG(log, "Waiting until some replica will move {} and zero copy lock disappear", moving_part.part->name);
}
else
break;
}
else
{
/// Move will be retried but with backoff.
LOG_DEBUG(log, "Move of part {} postponed, because zero copy mode enabled and someone other moving this part right now", moving_part.part->name);
result = MovePartsOutcome::MoveWasPostponedBecauseOfZeroCopy;
break;
}
}
}
else /// Ordinary move as it should be
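The MergeTreeData hunk restructures the zero-copy move so that the exclusive lock is re-checked after `clonePart`, which can run long enough for the Keeper session (and thus the lock) to expire; only if the lock is still held is the cloned part swapped in, otherwise the move is postponed. A heavily reduced sketch of that decision flow — all names below are hypothetical stand-ins, not the ClickHouse API:

```cpp
#include <iostream>
#include <optional>

enum class MoveOutcome { Moved, PostponedZeroCopy };

// Stand-in for the zero-copy exclusive lock: records whether it was held when
// acquired and whether it is still held after the (potentially long) clone.
struct ZeroCopyLock
{
    bool locked_at_start;
    bool still_locked_after_clone;
};

MoveOutcome movePartZeroCopy(const std::optional<ZeroCopyLock> & lock)
{
    if (!lock)
        return MoveOutcome::PostponedZeroCopy;   // the lock node could not even be created

    if (lock->locked_at_start)
    {
        // ... clonePart() runs here and may take a long time ...
        if (lock->still_locked_after_clone)
            return MoveOutcome::Moved;           // swapClonedPart(): move succeeded
        return MoveOutcome::PostponedZeroCopy;   // lock was lost while cloning
    }

    // Another replica holds the lock: either wait for it to disappear
    // (wait_for_move_if_zero_copy) or postpone with backoff.
    return MoveOutcome::PostponedZeroCopy;
}

int main()
{
    std::cout << (movePartZeroCopy(ZeroCopyLock{true, true}) == MoveOutcome::Moved) << '\n';   // 1
    std::cout << (movePartZeroCopy(ZeroCopyLock{true, false}) == MoveOutcome::Moved) << '\n';  // 0
}
```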