I tried to solve #9998 today and found that this parameter seems redundant.
According to the docs:

> This is not a limit that Historical processes actually enforce, just a value published to the Coordinator process so it can plan accordingly.
There are two problems here:
- **Confusion**

  Why don't we use the sum of all the `maxSize` values in `druid.segmentCache.locations`? What happens if this value is less than that sum? What happens if it is greater? Apparently, its value should be the same as the total max size of all disks.
- **Redundant configuration**

  If several disks are configured, the user has to sum the `maxSize` values manually to fill in this parameter, which is tedious.
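As a concrete illustration of the duplication (the property names are real Druid configuration keys; the paths and sizes are made up), a Historical with two cache disks currently has to be configured like this, where `druid.server.maxSize` is just the sum of the two `maxSize` values:

```properties
# Two segment cache locations, 300 GB and 200 GB (hypothetical values)
druid.segmentCache.locations=[{"path":"/mnt/disk1/segments","maxSize":300000000000},{"path":"/mnt/disk2/segments","maxSize":200000000000}]
# Must be kept in sync with the sum above by hand: 300 GB + 200 GB
druid.server.maxSize=500000000000
```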
I checked the relevant code and found no other special purpose for this parameter, so there seems to be no need to keep it. The Historical node could instead calculate the total max size from the user's configuration during startup.
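A minimal sketch of what that could look like (this is not actual Druid code; `StorageLocationConfig` here is a simplified stand-in for the real segment cache location config class):

```java
import java.util.List;

public class MaxSizeExample {
    // Simplified stand-in for a segment cache location entry
    // (in Druid this would come from druid.segmentCache.locations).
    static class StorageLocationConfig {
        final long maxSize;

        StorageLocationConfig(long maxSize) {
            this.maxSize = maxSize;
        }

        long getMaxSize() {
            return maxSize;
        }
    }

    // Derive the server max size by summing the maxSize of every
    // configured location, instead of requiring druid.server.maxSize.
    static long computeServerMaxSize(List<StorageLocationConfig> locations) {
        return locations.stream()
                .mapToLong(StorageLocationConfig::getMaxSize)
                .sum();
    }

    public static void main(String[] args) {
        List<StorageLocationConfig> locations = List.of(
                new StorageLocationConfig(300_000_000_000L), // disk 1: 300 GB
                new StorageLocationConfig(200_000_000_000L)  // disk 2: 200 GB
        );
        System.out.println(computeServerMaxSize(locations)); // prints 500000000000
    }
}
```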
Can we delete this parameter?