Merged
4 changes: 3 additions & 1 deletion docs/configuration/index.md
@@ -806,7 +806,8 @@ A sample Coordinator dynamic config JSON object is shown below:
"decommissioningNodes": ["localhost:8182", "localhost:8282"],
"decommissioningMaxPercentOfMaxSegmentsToMove": 70,
"pauseCoordination": false,
"replicateAfterLoadTimeout": false
"replicateAfterLoadTimeout": false,
"maxNonPrimaryReplicantsToLoad": 2147483647
}
```

@@ -831,6 +832,7 @@ Issuing a GET request at the same URL will return the spec that is currently in
|`decommissioningMaxPercentOfMaxSegmentsToMove`| The maximum number of segments that may be moved away from 'decommissioning' servers to non-decommissioning (that is, active) servers during one Coordinator run. This value is relative to the total maximum segment movements allowed during one run which is determined by `maxSegmentsToMove`. If `decommissioningMaxPercentOfMaxSegmentsToMove` is 0, segments will be moved neither _from nor to_ 'decommissioning' servers, effectively putting them in a sort of "maintenance" mode that will not participate in balancing or assignment by load rules. Decommissioning can also become stalled if there are no available active servers to place the segments. By leveraging the maximum percent of decommissioning segment movements, an operator can prevent active servers from overloading by prioritizing balancing, or decrease decommissioning time instead. The value should be between 0 and 100.|70|
|`pauseCoordination`| Boolean flag for whether or not the coordinator should execute its various duties of coordinating the cluster. Setting this to true essentially pauses all coordination work while allowing the API to remain up. Duties that are paused include all classes that implement the `CoordinatorDuty` Interface. Such duties include: Segment balancing, Segment compaction, Emission of metrics controlled by the dynamic coordinator config `emitBalancingStats`, Submitting kill tasks for unused segments (if enabled), Logging of used segments in the cluster, Marking of newly unused or overshadowed segments, Matching and execution of load/drop rules for used segments, Unloading segments that are no longer marked as used from Historical servers. An example of when an admin may want to pause coordination would be if they are doing deep storage maintenance on HDFS Name Nodes with downtime and don't want the coordinator to be directing Historical Nodes to hit the Name Node with API requests until maintenance is done and the deep store is declared healthy for use again. |false|
|`replicateAfterLoadTimeout`| Boolean flag for whether or not additional replication is needed for segments that have failed to load due to the expiry of `druid.coordinator.load.timeout`. If this is set to true, the coordinator will attempt to replicate the failed segment on a different historical server. This helps improve the segment availability if there are a few slow historicals in the cluster. However, the slow historical may still load the segment later and the coordinator may issue drop requests if the segment is over-replicated.|false|
|`maxNonPrimaryReplicantsToLoad`|This is the maximum number of non-primary segment replicants to load per Coordination run. This number can be set to put a hard upper limit on the number of replicants loaded. It is a tool that can help prevent long delays in new data being available for query after events that require many non-primary replicants to be loaded by the cluster, such as a Historical node disconnecting from the cluster. The default value essentially means there is no limit on the number of replicants loaded per coordination cycle. If you want to use a non-default value for this config, you may want to start with a value that is roughly 20% of the number of segments found on the Historical server with the most segments. You can use the Druid metric `coordinator/time` with the filter `duty=org.apache.druid.server.coordinator.duty.RunRules` to see how different values of this config impact your Coordinator execution time.|`Integer.MAX_VALUE`|
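The table's sizing guidance (start at roughly 20% of the segment count on the most-loaded Historical) can be sketched as a trivial helper. This is a hypothetical illustration of the rule of thumb, not part of the PR; the class and method names are invented.

```java
public class ReplicantLimitEstimate
{
  // Hypothetical helper: a starting value for maxNonPrimaryReplicantsToLoad,
  // following the docs' guidance of ~20% of the segment count on the
  // Historical server holding the most segments.
  public static int suggestedLimit(int segmentsOnLargestHistorical)
  {
    return (int) Math.ceil(segmentsOnLargestHistorical * 0.20);
  }

  public static void main(String[] args)
  {
    // e.g. a Historical holding 50,000 segments suggests starting near 10,000
    System.out.println(suggestedLimit(50_000));
  }
}
```

From there, the docs suggest tuning up or down while watching the `coordinator/time` metric for the `RunRules` duty.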


To view the audit history of Coordinator dynamic config, issue a GET request to the URL -
@@ -96,6 +96,13 @@ public class CoordinatorDynamicConfig
*/
private final boolean replicateAfterLoadTimeout;

/**
* This is the maximum number of non-primary segment replicants to load per Coordination run. This number can
* be set to put a hard upper limit on the number of replicants loaded. It is a tool that can help prevent
* long delays in new data loads after events such as a Historical server leaving the cluster.
*/
private final int maxNonPrimaryReplicantsToLoad;

private static final Logger log = new Logger(CoordinatorDynamicConfig.class);

@JsonCreator
@@ -129,7 +136,8 @@ public CoordinatorDynamicConfig(
@JsonProperty("decommissioningNodes") Object decommissioningNodes,
@JsonProperty("decommissioningMaxPercentOfMaxSegmentsToMove") int decommissioningMaxPercentOfMaxSegmentsToMove,
@JsonProperty("pauseCoordination") boolean pauseCoordination,
@JsonProperty("replicateAfterLoadTimeout") boolean replicateAfterLoadTimeout
@JsonProperty("replicateAfterLoadTimeout") boolean replicateAfterLoadTimeout,
@JsonProperty("maxNonPrimaryReplicantsToLoad") @Nullable Integer maxNonPrimaryReplicantsToLoad
)
{
this.leadingTimeMillisBeforeCanMarkAsUnusedOvershadowedSegments =
@@ -176,6 +184,22 @@ public CoordinatorDynamicConfig(
}
this.pauseCoordination = pauseCoordination;
this.replicateAfterLoadTimeout = replicateAfterLoadTimeout;

if (maxNonPrimaryReplicantsToLoad == null) {
Contributor: Should we consider using 0 as a non-configured value and change the check here? That would avoid the primitive type change.

Contributor Author: Hmm, I think I would be ok with that. I don't see any valid use case where a value of 0 would be required by the user. At that point they would want to disable replication via load rules.

Contributor Author: Although doing this would kind of hide a user error. If they submit 0 but we change 0 to the default and log it, they wouldn't know 0 is invalid.

Contributor: Yeah, I'm fine with leaving it as Integer until we have a better solution in place to fix the dynamic config behavior during upgrade. It would be useful to log an issue for that behavior in case somebody would like to work on it.

log.debug(
"maxNonPrimaryReplicantsToLoad was null! This is likely because your metastore does not "
+ "reflect this configuration being added to Druid in a recent release. Druid is defaulting the value "
+ "to the Druid default of %d. It is recommended that you re-submit your dynamic config with your "
+ "desired value for maxNonPrimaryReplicantsToLoad",
Builder.DEFAULT_MAX_NON_PRIMARY_REPLICANTS_TO_LOAD
);
maxNonPrimaryReplicantsToLoad = Builder.DEFAULT_MAX_NON_PRIMARY_REPLICANTS_TO_LOAD;
}
Preconditions.checkArgument(
maxNonPrimaryReplicantsToLoad >= 0,
"maxNonPrimaryReplicantsToLoad must be greater than or equal to 0."
);
this.maxNonPrimaryReplicantsToLoad = maxNonPrimaryReplicantsToLoad;
}
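The constructor's null handling above has a simple shape: a missing value (e.g. a config snapshot persisted by an older Druid release) falls back to the default, while an explicit negative value is rejected. A minimal standalone sketch of that pattern, with invented names and without the Guava `Preconditions` dependency:

```java
public class NullableConfigDefault
{
  // Mirrors Builder.DEFAULT_MAX_NON_PRIMARY_REPLICANTS_TO_LOAD in the PR.
  static final int DEFAULT_MAX = Integer.MAX_VALUE;

  // null (old metastore snapshot) falls back to the default;
  // an explicit negative value is rejected outright.
  static int resolve(Integer maxNonPrimaryReplicantsToLoad)
  {
    if (maxNonPrimaryReplicantsToLoad == null) {
      return DEFAULT_MAX;
    }
    if (maxNonPrimaryReplicantsToLoad < 0) {
      throw new IllegalArgumentException(
          "maxNonPrimaryReplicantsToLoad must be greater than or equal to 0.");
    }
    return maxNonPrimaryReplicantsToLoad;
  }
}
```

This is also why the review thread below debates `Integer` versus `int`: only a boxed type can represent "not present in the stored config."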

private static Set<String> parseJsonStringOrArray(Object jsonStringOrArray)
@@ -336,6 +360,13 @@ public boolean getReplicateAfterLoadTimeout()
return replicateAfterLoadTimeout;
}

@Min(0)
@JsonProperty
public int getMaxNonPrimaryReplicantsToLoad()
{
return maxNonPrimaryReplicantsToLoad;
}

@Override
public String toString()
{
@@ -358,6 +389,7 @@ public String toString()
", decommissioningMaxPercentOfMaxSegmentsToMove=" + decommissioningMaxPercentOfMaxSegmentsToMove +
", pauseCoordination=" + pauseCoordination +
", replicateAfterLoadTimeout=" + replicateAfterLoadTimeout +
", maxNonPrimaryReplicantsToLoad=" + maxNonPrimaryReplicantsToLoad +
'}';
}

@@ -422,6 +454,9 @@ public boolean equals(Object o)
if (replicateAfterLoadTimeout != that.replicateAfterLoadTimeout) {
return false;
}
if (maxNonPrimaryReplicantsToLoad != that.maxNonPrimaryReplicantsToLoad) {
return false;
}
return decommissioningMaxPercentOfMaxSegmentsToMove == that.decommissioningMaxPercentOfMaxSegmentsToMove;
}

@@ -444,7 +479,8 @@ public int hashCode()
dataSourcesToNotKillStalePendingSegmentsIn,
decommissioningNodes,
decommissioningMaxPercentOfMaxSegmentsToMove,
pauseCoordination
pauseCoordination,
maxNonPrimaryReplicantsToLoad
);
}

@@ -470,6 +506,7 @@ public static class Builder
private static final int DEFAULT_DECOMMISSIONING_MAX_SEGMENTS_TO_MOVE_PERCENT = 70;
private static final boolean DEFAULT_PAUSE_COORDINATION = false;
private static final boolean DEFAULT_REPLICATE_AFTER_LOAD_TIMEOUT = false;
private static final int DEFAULT_MAX_NON_PRIMARY_REPLICANTS_TO_LOAD = Integer.MAX_VALUE;

private Long leadingTimeMillisBeforeCanMarkAsUnusedOvershadowedSegments;
private Long mergeBytesLimit;
@@ -488,6 +525,7 @@ public static class Builder
private Integer decommissioningMaxPercentOfMaxSegmentsToMove;
private Boolean pauseCoordination;
private Boolean replicateAfterLoadTimeout;
private Integer maxNonPrimaryReplicantsToLoad;

public Builder()
{
@@ -513,7 +551,8 @@ public Builder(
@JsonProperty("decommissioningMaxPercentOfMaxSegmentsToMove")
@Nullable Integer decommissioningMaxPercentOfMaxSegmentsToMove,
@JsonProperty("pauseCoordination") @Nullable Boolean pauseCoordination,
@JsonProperty("replicateAfterLoadTimeout") @Nullable Boolean replicateAfterLoadTimeout
@JsonProperty("replicateAfterLoadTimeout") @Nullable Boolean replicateAfterLoadTimeout,
@JsonProperty("maxNonPrimaryReplicantsToLoad") @Nullable Integer maxNonPrimaryReplicantsToLoad
)
{
this.leadingTimeMillisBeforeCanMarkAsUnusedOvershadowedSegments =
@@ -534,6 +573,7 @@ public Builder(
this.decommissioningMaxPercentOfMaxSegmentsToMove = decommissioningMaxPercentOfMaxSegmentsToMove;
this.pauseCoordination = pauseCoordination;
this.replicateAfterLoadTimeout = replicateAfterLoadTimeout;
this.maxNonPrimaryReplicantsToLoad = maxNonPrimaryReplicantsToLoad;
}

public Builder withLeadingTimeMillisBeforeCanMarkAsUnusedOvershadowedSegments(long leadingTimeMillis)
@@ -632,6 +672,12 @@ public Builder withReplicateAfterLoadTimeout(boolean replicateAfterLoadTimeout)
return this;
}

public Builder withMaxNonPrimaryReplicantsToLoad(int maxNonPrimaryReplicantsToLoad)
{
this.maxNonPrimaryReplicantsToLoad = maxNonPrimaryReplicantsToLoad;
return this;
}

public CoordinatorDynamicConfig build()
{
return new CoordinatorDynamicConfig(
@@ -660,7 +706,8 @@ public CoordinatorDynamicConfig build()
? DEFAULT_DECOMMISSIONING_MAX_SEGMENTS_TO_MOVE_PERCENT
: decommissioningMaxPercentOfMaxSegmentsToMove,
pauseCoordination == null ? DEFAULT_PAUSE_COORDINATION : pauseCoordination,
replicateAfterLoadTimeout == null ? DEFAULT_REPLICATE_AFTER_LOAD_TIMEOUT : replicateAfterLoadTimeout
replicateAfterLoadTimeout == null ? DEFAULT_REPLICATE_AFTER_LOAD_TIMEOUT : replicateAfterLoadTimeout,
maxNonPrimaryReplicantsToLoad == null ? DEFAULT_MAX_NON_PRIMARY_REPLICANTS_TO_LOAD : maxNonPrimaryReplicantsToLoad
);
}

@@ -695,7 +742,8 @@ public CoordinatorDynamicConfig build(CoordinatorDynamicConfig defaults)
? defaults.getDecommissioningMaxPercentOfMaxSegmentsToMove()
: decommissioningMaxPercentOfMaxSegmentsToMove,
pauseCoordination == null ? defaults.getPauseCoordination() : pauseCoordination,
replicateAfterLoadTimeout == null ? defaults.getReplicateAfterLoadTimeout() : replicateAfterLoadTimeout
replicateAfterLoadTimeout == null ? defaults.getReplicateAfterLoadTimeout() : replicateAfterLoadTimeout,
maxNonPrimaryReplicantsToLoad == null ? defaults.getMaxNonPrimaryReplicantsToLoad() : maxNonPrimaryReplicantsToLoad
);
}
}
@@ -41,23 +41,35 @@ public class ReplicationThrottler

private volatile int maxReplicants;
private volatile int maxLifetime;
private volatile boolean loadPrimaryReplicantsOnly;

public ReplicationThrottler(int maxReplicants, int maxLifetime)
public ReplicationThrottler(int maxReplicants, int maxLifetime, boolean loadPrimaryReplicantsOnly)
{
updateParams(maxReplicants, maxLifetime);
updateParams(maxReplicants, maxLifetime, loadPrimaryReplicantsOnly);
}

public void updateParams(int maxReplicants, int maxLifetime)
public void updateParams(int maxReplicants, int maxLifetime, boolean loadPrimaryReplicantsOnly)
{
this.maxReplicants = maxReplicants;
this.maxLifetime = maxLifetime;
this.loadPrimaryReplicantsOnly = loadPrimaryReplicantsOnly;
}

public void updateReplicationState(String tier)
{
update(tier, currentlyReplicating, replicatingLookup, "create");
}

public boolean isLoadPrimaryReplicantsOnly()
{
return loadPrimaryReplicantsOnly;
}

public void setLoadPrimaryReplicantsOnly(boolean loadPrimaryReplicantsOnly)
{
this.loadPrimaryReplicantsOnly = loadPrimaryReplicantsOnly;
}

private void update(String tier, ReplicatorSegmentHolder holder, Map<String, Boolean> lookup, String type)
{
int size = holder.getNumProcessing(tier);
@@ -87,7 +99,7 @@ private void update(String tier, ReplicatorSegmentHolder holder, Map<String, Boo

public boolean canCreateReplicant(String tier)
{
return replicatingLookup.get(tier) && !currentlyReplicating.isAtMaxReplicants(tier);
return !loadPrimaryReplicantsOnly && replicatingLookup.get(tier) && !currentlyReplicating.isAtMaxReplicants(tier);
}
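The change to `canCreateReplicant` above is the enforcement point: once the throttler is in primary-only mode, no further replica loads are created regardless of per-tier capacity. A condensed, hypothetical sketch of that gate (names invented, per-tier bookkeeping collapsed to a single counter):

```java
// Condensed sketch of the ReplicationThrottler gate added by this PR:
// replica creation is allowed only while the coordinator has not been
// switched to primary-only mode and replication is below its limit.
public class ThrottleGateSketch
{
  private volatile boolean loadPrimaryReplicantsOnly;
  private final int maxReplicants;
  private int currentlyReplicating;

  public ThrottleGateSketch(int maxReplicants, boolean loadPrimaryReplicantsOnly)
  {
    this.maxReplicants = maxReplicants;
    this.loadPrimaryReplicantsOnly = loadPrimaryReplicantsOnly;
  }

  public void setLoadPrimaryReplicantsOnly(boolean loadPrimaryReplicantsOnly)
  {
    this.loadPrimaryReplicantsOnly = loadPrimaryReplicantsOnly;
  }

  public boolean canCreateReplicant()
  {
    return !loadPrimaryReplicantsOnly && currentlyReplicating < maxReplicants;
  }
}
```

Note that the flag short-circuits first, so flipping it mid-run stops replica creation immediately even when tiers still have throttle headroom.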

public void registerReplicantCreation(String tier, SegmentId segmentId, String serverId)
@@ -55,7 +55,8 @@ public RunRules(DruidCoordinator coordinator)
this(
new ReplicationThrottler(
coordinator.getDynamicConfigs().getReplicationThrottleLimit(),
coordinator.getDynamicConfigs().getReplicantLifetime()
coordinator.getDynamicConfigs().getReplicantLifetime(),
false
),
coordinator
);
@@ -72,7 +73,8 @@ public DruidCoordinatorRuntimeParams run(DruidCoordinatorRuntimeParams params)
{
replicatorThrottler.updateParams(
coordinator.getDynamicConfigs().getReplicationThrottleLimit(),
coordinator.getDynamicConfigs().getReplicantLifetime()
coordinator.getDynamicConfigs().getReplicantLifetime(),
false
);

CoordinatorStats stats = new CoordinatorStats();
@@ -128,6 +130,18 @@ public DruidCoordinatorRuntimeParams run(DruidCoordinatorRuntimeParams params)
boolean foundMatchingRule = false;
for (Rule rule : rules) {
if (rule.appliesTo(segment, now)) {
if (
stats.getGlobalStat(
"totalNonPrimaryReplicantsLoaded") >= paramsWithReplicationManager.getCoordinatorDynamicConfig()
.getMaxNonPrimaryReplicantsToLoad()
&& !paramsWithReplicationManager.getReplicationManager().isLoadPrimaryReplicantsOnly()
) {
log.info(
"Maximum number of non-primary replicants [%d] have been loaded for the current RunRules execution. Only loading primary replicants from here on for this coordinator run cycle.",
paramsWithReplicationManager.getCoordinatorDynamicConfig().getMaxNonPrimaryReplicantsToLoad()
);
paramsWithReplicationManager.getReplicationManager().setLoadPrimaryReplicantsOnly(true);
}
stats.accumulate(rule.run(coordinator, paramsWithReplicationManager, segment));
foundMatchingRule = true;
break;
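The RunRules check above reduces to a single condition: flip the throttler into primary-only mode the first time the global non-primary counter reaches the configured cap. A hypothetical condensation of just that predicate (names invented):

```java
// Condensed form of the RunRules guard in this PR: returns true exactly when
// the cap has been reached and the throttler has not yet been switched to
// primary-only mode, so the switch (and its log line) happens at most once
// per coordinator run cycle.
public class ReplicantCapCheck
{
  public static boolean shouldSwitchToPrimaryOnly(
      long nonPrimaryLoadedSoFar,
      int cap,
      boolean alreadyPrimaryOnly
  )
  {
    return nonPrimaryLoadedSoFar >= cap && !alreadyPrimaryOnly;
  }
}
```

Checking `alreadyPrimaryOnly` keeps the guard idempotent, which is why the INFO message is emitted only on the first crossing rather than for every remaining segment in the run.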
@@ -57,6 +57,7 @@ public abstract class LoadRule implements Rule
private static final EmittingLogger log = new EmittingLogger(LoadRule.class);
static final String ASSIGNED_COUNT = "assignedCount";
static final String DROPPED_COUNT = "droppedCount";
public final String NON_PRIMARY_ASSIGNED_COUNT = "totalNonPrimaryReplicantsLoaded";
public static final String REQUIRED_CAPACITY = "requiredCapacity";

private final Object2IntMap<String> targetReplicants = new Object2IntOpenHashMap<>();
@@ -180,6 +181,10 @@ private void assign(
createLoadQueueSizeLimitingPredicate(params).and(holder -> !holder.equals(primaryHolderToLoad)),
segment
);

// numAssigned - 1 because we don't want to count the primary assignment
stats.addToGlobalStat(NON_PRIMARY_ASSIGNED_COUNT, numAssigned - 1);

stats.addToTieredStat(ASSIGNED_COUNT, tier, numAssigned);

// do assign replicas for the other tiers.
@@ -305,6 +310,7 @@ private void assignReplicas(
createLoadQueueSizeLimitingPredicate(params),
segment
);
stats.addToGlobalStat(NON_PRIMARY_ASSIGNED_COUNT, numAssigned);
stats.addToTieredStat(ASSIGNED_COUNT, tier, numAssigned);
}
}
@@ -148,7 +148,7 @@ public void bigProfiler()
)
.withEmitter(emitter)
.withDatabaseRuleManager(manager)
.withReplicationManager(new ReplicationThrottler(2, 500))
.withReplicationManager(new ReplicationThrottler(2, 500, false))
.build();

BalanceSegmentsTester tester = new BalanceSegmentsTester(coordinator);