Create dynamic config that can limit number of non-primary replicants loaded per coordination cycle #11135
Conversation
Thanks for the PR! This config should come in handy to reduce coordinator churn in case historicals fall out of the cluster. Have you thought about configuring this at a per-tier level? Also, it looks like the docs for this config are missing.

I added the missing docs. I had not thought about making this a per-tier setting. I'm coming at it from the angle of an operator not caring whether the non-primary replicants are in tier X, Y, or Z, but rather just wanting to make sure the coordinator never spends too much time loading these segments instead of doing its other jobs, mainly discovering and loading newly ingested segments.
|Property|Description|Default|
|--------|-----------|-------|
|`decommissioningMaxPercentOfMaxSegmentsToMove`|The maximum number of segments that may be moved away from 'decommissioning' servers to non-decommissioning (that is, active) servers during one Coordinator run. This value is relative to the total maximum segment movements allowed during one run, which is determined by `maxSegmentsToMove`. If `decommissioningMaxPercentOfMaxSegmentsToMove` is 0, segments will neither be moved from nor to 'decommissioning' servers, effectively putting them in a sort of "maintenance" mode in which they do not participate in balancing or assignment by load rules. Decommissioning can also become stalled if there are no available active servers to place the segments on. By leveraging the maximum percent of decommissioning segment movements, an operator can prevent active servers from becoming overloaded by prioritizing balancing, or decrease decommissioning time instead. The value should be between 0 and 100.|70|
|`pauseCoordination`|Boolean flag for whether or not the coordinator should execute its various duties of coordinating the cluster. Setting this to true essentially pauses all coordination work while allowing the API to remain up. Duties that are paused include all classes that implement the `CoordinatorDuty` interface, such as: segment balancing, segment compaction, emission of metrics controlled by the dynamic coordinator config `emitBalancingStats`, submitting kill tasks for unused segments (if enabled), logging of used segments in the cluster, marking of newly unused or overshadowed segments, matching and execution of load/drop rules for used segments, and unloading segments that are no longer marked as used from Historical servers. An example of when an admin may want to pause coordination would be if they are doing deep storage maintenance on HDFS Name Nodes with downtime and don't want the coordinator to be directing Historical Nodes to hit the Name Node with API requests until maintenance is done and the deep store is declared healthy for use again.|false|
|`replicateAfterLoadTimeout`|Boolean flag for whether or not additional replication is needed for segments that have failed to load due to the expiry of `druid.coordinator.load.timeout`. If this is set to true, the coordinator will attempt to replicate the failed segment on a different historical server. This helps improve segment availability if there are a few slow historicals in the cluster. However, the slow historical may still load the segment later, and the coordinator may issue drop requests if the segment is over-replicated.|false|
|`maxNonPrimaryReplicantsToLoad`|The maximum number of non-primary segment replicants to load per Coordination run. This number can be set to put a hard upper limit on the number of replicants loaded. It is a tool that can help prevent long delays in new data being available for query after events that require many non-primary replicants to be loaded by the cluster, such as a Historical node disconnecting from the cluster. The default value essentially means there is no limit on the number of replicants loaded per coordination cycle.|`Integer.MAX_VALUE`|
It would be useful if we added some info regarding what could be a good starting value to set this to.
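As a concrete illustration of a non-default value (a sketch only; the number is a placeholder, not a recommendation), an update of this kind is submitted like any other coordinator dynamic config, by POSTing JSON to `/druid/coordinator/v1/config`. A minimal payload touching just this property might look like:

```json
{
  "maxNonPrimaryReplicantsToLoad": 2000
}
```

Other dynamic-config fields are omitted here for brevity; a real update would typically carry the full config object.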
```java
    && !paramsWithReplicationManager.getReplicationManager().isLoadPrimaryReplicantsOnly()
) {
  log.info(
      "Maximum number of non-primary replicants [%d] have been loaded for the current RunRules execution. Only loading primary replicants from here on.",
```
Since this behavior is valid only for the present coordinator run, the log message might be clearer with something like "Only loading primary replicants from here on for this coordinator run period"
This PR has a similar issue that resulted in this block of code. I think I will do the same solution for now, but long term it would be cool if this had a more elegant solution.
```java
this.pauseCoordination = pauseCoordination;
this.replicateAfterLoadTimeout = replicateAfterLoadTimeout;
```

```java
if (maxNonPrimaryReplicantsToLoad == null) {
```
Should we consider using 0 as a non-configured value and change the check here? That would avoid the primitive type change.
Hmm, I think I would be ok with that. I don't see any valid use case where a value of 0 would be required by the user. At that point they would want to disable replication via load rules.
Although, doing this would kind of hide a user error. If they submit 0 but we change 0 to the default and log it, they wouldn't know 0 is invalid.
Yeah I'm fine with leaving it as Integer until we have a better solution in place to fix the dynamic config behavior during upgrade. It would be useful to log an issue for that behavior in case somebody would like to work on it.
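For reference, here is a minimal sketch of the null-vs-default handling being discussed; the helper itself is hypothetical, and the PR's actual constructor wiring may differ:

```java
// Hypothetical helper illustrating the discussion above; only the field
// semantics come from the PR itself.
private static int resolveMaxNonPrimaryReplicantsToLoad(Integer configuredValue)
{
  // A boxed Integer keeps "not configured" (null) distinct from an explicit,
  // arguably invalid, 0. That distinction is why the primitive-type change
  // was avoided.
  return configuredValue == null ? Integer.MAX_VALUE : configuredValue;
}
```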
@a2l007 are you okay with merging this week now that the issue for pursuing a cleaner configuration strategy has been created?
@capistrant Yup, LGTM. Thanks!
@capistrant, I was taking a look at the `replicationThrottleLimit` config, and I see that you have made a similar observation here. Could you please help me understand the difference between the two? In which case would we want to tune this config rather than tuning the `replicationThrottleLimit`? My observation is that the two configs seem to serve a very similar purpose.
You'd want to tune `maxNonPrimaryReplicantsToLoad` if you want a hard cap on the total number of non-primary replicants assigned in a single coordinator run: `replicationThrottleLimit` only bounds how many replicas can be loading at any one time in a tier, so over the course of a long run far more than that limit can still be assigned. I'm not aware of reasons for using `maxNonPrimaryReplicantsToLoad` as a day-to-day tuning knob in place of `replicationThrottleLimit`; it is more of a hard guard rail for extreme events. Let me know if that clarification is still not making sense.
Thanks for the explanation, @capistrant! And thanks for adding this config, as I am sure it must come in handy for proper coordinator management.
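To make the distinction concrete, a hypothetical dynamic-config payload tuning both knobs together might look like the following, where `replicationThrottleLimit` bounds how many replicas may be loading at once in a tier while `maxNonPrimaryReplicantsToLoad` hard-caps the run's total (both numbers are illustrative only):

```json
{
  "replicationThrottleLimit": 500,
  "maxNonPrimaryReplicantsToLoad": 2000
}
```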
Changes:

[A] Remove config `decommissioningMaxPercentOfMaxSegmentsToMove`
- It is a complicated config 😅
- It is always desirable to prioritize moves from decommissioning servers so that they can be terminated quickly, so this should always be 100%
- It is already handled by `smartSegmentLoading` (enabled by default)

[B] Remove config `maxNonPrimaryReplicantsToLoad`

This was added in #11135 to address two requirements:
- Prevent coordinator runs from getting stuck assigning too many segments to historicals
- Prevent load of replicas from competing with load of unavailable segments

Both of these requirements are now already met thanks to:
- Round-robin segment assignment
- Prioritization in the new coordinator
- Modifications to `replicationThrottleLimit`
- `smartSegmentLoading` (enabled by default)
Start Release Notes
Adds new Dynamic Coordinator Config `maxNonPrimaryReplicantsToLoad` with default value of `Integer.MAX_VALUE`. This configuration can be used to set a hard upper limit on the number of non-primary replicants that will be loaded in a single Druid Coordinator execution cycle. The default value mimics the behavior that exists today.

Example usage: If you set this configuration to 1000, the Coordinator duty `RunRules` will load a maximum of 1000 non-primary replicants in each `RunRules` execution. This means that if you ingested 2000 segments with a replication factor of 2, the coordinator would load 2000 primary replicants and 1000 non-primary replicants on the first `RunRules` execution. On the next `RunRules` execution, the last 1000 non-primary replicants would be loaded.

End Release Notes
Description
Add a new dynamic configuration to the coordinator that gives an operator the power to set a hard limit on the number of non-primary segment replicas that are loaded during a single execution of `RunRules#run`. This allows the operator to limit the amount of non-primary replica loading work that `RunRules` will perform in a single run. One reason to use a non-default value for this config is to ensure that major events, such as historical service(s) leaving the cluster or large ingestion jobs, do not cause an abnormally long `RunRules` execution compared to the cluster's baseline runtime.

Example
Cluster: 3 historical servers in _default_tier with 18k segments per server. Each segment belongs to a datasource that has the load rule "LoadForever 2 replicas on _default_tier". The cluster load status is 100% loaded.

Event: 1 historical drops out of the cluster.

Today: The coordinator will load all 18k segments that are now under-replicated in a single execution of `RunRules` (as long as throttling limits are not hit and there is capacity).

My change: The coordinator can load a limited number of these under-replicated segments if the operator has tuned the new dynamic config down from its default. For instance, the operator could set it to 2k, meaning it would take at least 9 coordination cycles to fully replicate the segments that were on the recently downed host.
Why
Operators need to balance lots of competing needs. Having the cluster fully replicated is great for HA. But if an event causes the coordinator to take 20 minutes to fully replicate because it has to load thousands of replicas, we sacrifice the timeliness of loading newly ingested segments that were inserted into the metastore after this long coordination cycle started. Maybe the operator cares more about that fresh-data timeliness than the replication status, so they change the new config to a value that causes `RunRules` to take less time but require more execution cycles to bring the data back to full replication.

Really, what the change aims to do is give an operator more flexibility. As written, the default gives the operator the exact same functionality that they see today.
Design
I folded this new configuration and feature into `ReplicationThrottler`. That is essentially what the feature is doing, just in a new way compared to the existing `ReplicationThrottler` functionality.
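To sketch what folding the cap into the throttler can look like, here is a minimal, self-contained illustration; only the class name and `isLoadPrimaryReplicantsOnly()` come from the snippet quoted earlier in this thread, while the fields and method bodies are assumptions:

```java
// Illustrative sketch, not the actual diff.
public class ReplicationThrottler
{
  private final int maxNonPrimaryReplicantsToLoad; // from CoordinatorDynamicConfig
  private int numNonPrimaryReplicantsLoaded = 0;   // reset for each coordinator run
  private boolean loadPrimaryReplicantsOnly = false;

  public ReplicationThrottler(int maxNonPrimaryReplicantsToLoad)
  {
    this.maxNonPrimaryReplicantsToLoad = maxNonPrimaryReplicantsToLoad;
  }

  // Called each time RunRules assigns a non-primary replicant.
  public void registerNonPrimaryReplicantLoad()
  {
    numNonPrimaryReplicantsLoaded++;
    if (numNonPrimaryReplicantsLoaded >= maxNonPrimaryReplicantsToLoad) {
      // For the remainder of this run, only primary replicants get assigned.
      loadPrimaryReplicantsOnly = true;
    }
  }

  public boolean isLoadPrimaryReplicantsOnly()
  {
    return loadPrimaryReplicantsOnly;
  }
}
```

Because the counter and flag live for a single run, the cap naturally resets on the next coordination cycle, which matches the per-run semantics described in the release notes above.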
Key changed/added classes in this PR
- `CoordinatorDynamicConfig`
- `ReplicationThrottler`
- `RunRules`
- `LoadRule`

This PR has: