Add coordinator dynamic config to limit the number of segments loaded per RunRules execution #12504
capistrant wants to merge 21 commits into apache:master
Conversation
```java
);
this.maxNonPrimaryReplicantsToLoad = maxNonPrimaryReplicantsToLoad;

if (maxSegmentsToLoad == null) {
```
See the discussion at #11135 (comment) for the reasoning behind this handling.
paul-rogers left a comment:
Great real-world improvement! A few minor comments.
```java
this.maxNonPrimaryReplicantsToLoad = maxNonPrimaryReplicantsToLoad;

if (maxSegmentsToLoad == null) {
  log.debug(
```
The debug choice seems a bit odd. On the one hand, it would seem reasonable to have defaults for values which are not provided. Ideally, if the user provides no value, then any existing value remains unchanged (though such a change is probably out of scope for this PR).
The idea is, as we add dynamic configs, the user should not have to first download all the existing settings, change the one of interest, and upload all of them. Just upload the one that needs to change and let Druid do the merge.
If we were to support the "values are optional" approach, then the user would need no warning when using it: doing so would be expected.
On the other hand, if we do require that the user specify all settings, including those added in the most recent release, then we should encourage people to update their dynamic config scripts with the new parameter, deciding on the default value they want. In that case, this error should be more than DEBUG since debug is often turned off. Maybe WARN?
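A minimal sketch of what that merge-on-upload could look like, assuming a hypothetical `withMaxSegmentsToLoad` setter on the existing builder and using the builder's `build(defaults)` shape for the fallback:

```java
// Minimal sketch of merge-on-upload, assuming a hypothetical
// withMaxSegmentsToLoad() setter; unset builder fields would fall back to
// the config passed to build().
CoordinatorDynamicConfig merged =
    CoordinatorDynamicConfig.builder()
        .withMaxSegmentsToLoad(10_000) // the single setting the operator uploads
        .build(currentConfig);         // every other field keeps its stored value
```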
I agree that the log is useless; I'm not sure why I keep including it. In a former PR I introduced a new configuration and initially logged this at WARN to alert the operator. However, in consultation with the reviewer, we decided that replacing a missing value with the default is "normal behavior" and thus shouldn't be logged as a warning. I don't recall exactly why I flipped to debug instead of just deleting the log.
IMO Druid should gracefully handle newly introduced configs on upgrade by quietly slipping in the new default. If the operator wants a non-default value for a new config, they can POST their desired config spec to the API directly or use the Druid console to make the update. Otherwise, Druid should just handle everything quietly on deserialization.
I created #11161 a while back when we identified this check-for-null-and-set-default pattern as clunky. I'm not sure why we don't leverage `CoordinatorDynamicConfig.Builder` to handle deserialization today. I could be missing something that influenced the current serde, but nevertheless I am prepping a separate PR to use the Builder instead of the actual CoordinatorDynamicConfig class for deserialization. If that gets accepted, this whole block of code can be tossed in the dumpster.
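Roughly, the builder-based serde would look like this. A trimmed sketch with a single field, not the real class:

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
import com.fasterxml.jackson.databind.annotation.JsonPOJOBuilder;

// Trimmed sketch, not the actual Druid class. Jackson invokes a builder
// setter only when the property is present in the JSON, so defaults live in
// one place (the builder's field initializers) and the
// check-for-null-and-set-default block disappears.
@JsonDeserialize(builder = CoordinatorDynamicConfig.Builder.class)
public class CoordinatorDynamicConfig
{
  private final int maxSegmentsToLoad;

  private CoordinatorDynamicConfig(Builder builder)
  {
    this.maxSegmentsToLoad = builder.maxSegmentsToLoad;
  }

  @JsonPOJOBuilder(withPrefix = "with")
  public static class Builder
  {
    // Default applied whenever the JSON payload omits the field.
    private int maxSegmentsToLoad = Integer.MAX_VALUE;

    @JsonProperty("maxSegmentsToLoad")
    public Builder withMaxSegmentsToLoad(int maxSegmentsToLoad)
    {
      this.maxSegmentsToLoad = maxSegmentsToLoad;
      return this;
    }

    public CoordinatorDynamicConfig build()
    {
      return new CoordinatorDynamicConfig(this);
    }
  }
}
```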
| "replicateAfterLoadTimeout": false, | ||
| "maxNonPrimaryReplicantsToLoad": 2147483647 | ||
| "maxNonPrimaryReplicantsToLoad": 2147483647, | ||
| "maxSegmentsToLoad": 2147483647 |
Should the name be more specific? "Max segments to load", by itself, sounds like "the maximum number of segments which the coordinator will load ... period" -- an upper limit on the number of loaded segments overall. This then raises the question, "but, what happens to the others?"
From the description, it sounds like this is "per Coordination run". So, should the name be something like maxSegmentsPerCoordination?
+1 for a more descriptive name, thanks for calling it out. Will have a change there in a forthcoming commit.
```java
9,
false,
false,
Integer.MAX_VALUE,
```
There are a pile of these. Should we introduce a builder or some other way to reduce redundancy?
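For example, a builder would let each test name only the values it cares about instead of the positional pile (sketch; the setter names are hypothetical):

```java
// Sketch with hypothetical setter names: each value is self-describing, and
// unrelated fields fall back to defaults instead of being listed positionally.
CoordinatorDynamicConfig config =
    CoordinatorDynamicConfig.builder()
        .withMaxSegmentsToMove(9)
        .withMaxNonPrimaryReplicantsToLoad(Integer.MAX_VALUE)
        .withMaxSegmentsToLoad(Integer.MAX_VALUE) // new field under test
        .build();
```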
I am starting to wonder, in general, what value this test class is providing. I see value in testing some things:
- that invalid values blow up serde of `CoordinatorDynamicConfig` where expected
- testing `CoordinatorDynamicConfig#isKillUnusedSegmentsInAllDataSources`
- testing handling of nullable fields for `CoordinatorDynamicConfig`
- testing `CoordinatorDynamicConfig.Builder#build(CoordinatorDynamicConfig)`
But as is, the current class is a fairly unorganized set of tests. I even see that some new dynamic config values are completely untested; `useBatchedSegmentSampler` is an example. Since it is a primitive with a standard default applied when missing from the deserialized payload, maybe it makes sense not to test it. But based on the standard in the tests today, omitting it breaks the existing pattern.
In my follow-on commit responding to review, I will re-organize the tests into what I see value in. This will be opinionated and should be discussed further at that time. Perhaps there are standard testing patterns for a class like this that I am unaware of and could follow instead?
This pull request has been marked as stale due to 60 days of inactivity.

This pull request/issue has been closed due to lack of activity.
Description
Added a new coordinator dynamic configuration item, `maxSegmentsToLoadPerCoordinationCycle` (renamed from `maxSegmentsToLoad` during review).

Added logic in RunRules to short-circuit if the number of segments loaded reaches the value of the new dynamic config.
We track the aggregate number of segments loaded during the execution of RunRules in a global statistic.
`LoadRule.java` is the class that updates this stat as it loads segments. `RunRules.java` checks the value of this stat before matching each segment against load rules, making sure we haven't already loaded the maximum number of segments. If the limit is reached, RunRules won't address any more segments and the coordinator moves on to the next duty.
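In rough pseudo-Java, the short-circuit looks like this (the stat key, accessors, and `applyRules()` helper are illustrative, not the exact identifiers in the patch):

```java
// Illustrative sketch of the RunRules short-circuit; names are assumptions.
for (DataSegment segment : params.getUsedSegments()) {
  if (stats.getGlobalStat("segmentsLoaded")
      >= dynamicConfig.getMaxSegmentsToLoadPerCoordinationCycle()) {
    // Limit reached: stop matching segments against rules; the coordinator
    // moves on to its next duty and picks the rest up on a later cycle.
    break;
  }
  // Matching a segment against its rules may trigger LoadRule, which bumps
  // the "segmentsLoaded" global stat for each load it queues.
  applyRules(segment, params, stats);
}
```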
Why?

The coordinator refreshes its map of the segments loaded in the cluster at the start of the list of duties it is about to execute; it does not dynamically respond to segments being announced while the duties are in flight. This means that if, for some reason, 10% of Druid segments were unavailable at the start of coordination but became available while the coordinator was executing run rules, the coordinator would still proceed with loading all of the segments that were unavailable or under-replicated. This can lead to lots of wasted time and work! If those 10% of unavailable segments take 20 minutes to load, that is 20 minutes the coordinator could have spent doing other work, such as loading segments that are genuinely unavailable because they were newly ingested.
An example of when this could happen: a network issue temporarily forces a number of historical nodes offline, and they reconnect shortly thereafter, but too late to prevent the coordinator from concluding that all of their data is gone.
Another example would be some type of negative event where multiple historical servers had to be restarted. Think OOM or GC issues due to an unusual workload. If those historicals restart after the coordinator has decided they are no longer serving their segments, we end up trying to load all of those segments even though the historicals re-announce them after starting up.
At the end of the day this is a niche configuration that operators may desire access to in some circumstances. I come from a background of operating a large multi-tenant cluster and wrote this patch out of necessity due to an issue we are currently facing. Our cluster has faced some instability recently due to unexpected workloads causing historicals to wedge in a suboptimal state due to GC. While we work on a solution to the underlying problem, this is a mitigation for the occasions when we have to restart multiple historical servers at once to get out of the bad state. I configure the coordinator to load only as many segments as we typically see during peak ingest times. That way, we operate as normal at all times, and in the case of an unexpected issue, the coordinator will not load thousands of segments that come back online after historical restarts.
How is this different from `maxNonPrimaryReplicantsToLoad`?

That dynamic configuration was also introduced by me, again as a reaction to experience managing a large cluster. When we took servers out for maintenance, we did not want the coordinator to block while replicating all of the segments needed to get back to full replication. Instead, we wanted the coordinator to eat away at finite-sized chunks of replicas, so that other duties, such as loading primary replicas, were not blocked from running.
That configuration still serves that purpose if it is set lower than this new configuration. For instance, I may want to limit my coordinator to loading at most 10,000 segments per RunRules cycle while also limiting non-primary replicas to 4,000.
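Conceptually, the two caps combine like this when deciding whether to queue a load (a sketch with illustrative names; `queueLoad` is a hypothetical helper standing in for LoadRule's assignment):

```java
// The per-cycle total cap gates every load, while the non-primary cap
// additionally gates replica loads. Names here are illustrative.
boolean underTotalCap =
    stats.getGlobalStat("segmentsLoaded")
        < config.getMaxSegmentsToLoadPerCoordinationCycle();
boolean underReplicaCap =
    stats.getGlobalStat("nonPrimaryReplicantsLoaded")
        < config.getMaxNonPrimaryReplicantsToLoad();

if (underTotalCap && (isPrimaryAssignment || underReplicaCap)) {
  queueLoad(segment, server); // hypothetical helper
}
```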
Key changed/added classes in this PR

- `RunRules`
- `LoadRule`