Get task location should be stored on the lifecycle object #14649
Merged
suneet-s merged 7 commits into apache:master on Jul 25, 2023
Conversation
added 7 commits on July 19, 2023 at 14:33
FrankChen021 pushed a commit that referenced this pull request on Feb 3, 2025:
* Fix issue with long data source names
* Use the regular library
* Save location and tls enabled
* Null out before running
* add another comment
Description
This is a performance optimization for the KubernetesTaskRunner. With larger numbers of tasks (especially streaming tasks), the task runner receives a lot of getTaskLocation calls (e.g. supervisors make this call periodically). Currently the k8s task runner has to find the pod by querying Kubernetes and read the pod IP on every call, even though the IP doesn't change unless the pod dies (in which case the task would have already failed).
I have tested this with on the order of 200 supervisor tasks, at which point the Druid console starts getting really slow. After this change I am able to run at least 1,000 tasks without an issue on the Overlord (I haven't tried more than that).
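The change amounts to caching the location on the task's lifecycle object once it is known. Here is a minimal sketch of the idea, assuming a simplified lifecycle class; the names `PeonLifecycle`, `TaskLocation`, and `queryPodLocation` below are illustrative stand-ins, not the actual Druid code:

```java
// Sketch only: caches the pod location after the first successful lookup,
// since a running pod's IP is fixed for the pod's lifetime.
public class PeonLifecycle
{
  // Minimal immutable location value: host, port, and whether TLS is enabled.
  public record TaskLocation(String host, int port, boolean tlsEnabled)
  {
    static final TaskLocation UNKNOWN = new TaskLocation(null, -1, false);

    boolean isUnknown()
    {
      return host == null;
    }
  }

  private volatile TaskLocation cachedLocation = TaskLocation.UNKNOWN;

  public TaskLocation getTaskLocation()
  {
    // If the pod dies, the task has already failed, so the first successful
    // lookup can be cached and reused. A benign race is possible here: two
    // concurrent callers may both query, but they resolve the same value.
    if (cachedLocation.isUnknown()) {
      TaskLocation located = queryPodLocation(); // the expensive k8s API call
      if (located != null) {
        cachedLocation = located;
      }
    }
    return cachedLocation;
  }

  // Nulled out before each run so a new pod never inherits a stale IP
  // from a previous one.
  public void reset()
  {
    cachedLocation = TaskLocation.UNKNOWN;
  }

  private TaskLocation queryPodLocation()
  {
    // In the real runner this asks the Kubernetes API for the peon pod's IP;
    // stubbed out in this sketch.
    return null;
  }
}
```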
Release note
Performance optimization for the k8s task runner.
Key changed/added classes in this PR
I considered adding a watcher on the pod instead, to be safe, but I think that would also cause issues; we can revisit that in the future if we decide we need it. (A sketch of what that alternative would look like follows.)
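For context, the rejected alternative would register a watch on the pod so the API server pushes IP changes instead of the runner polling for them. A rough sketch using the fabric8 Kubernetes client (the namespace and pod name here are hypothetical, and this is not code from this PR):

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;

public class PodLocationWatcher
{
  public static void main(String[] args) throws InterruptedException
  {
    // Hypothetical names, for illustration only.
    String namespace = "druid";
    String podName = "peon-task-pod";

    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      client.pods().inNamespace(namespace).withName(podName).watch(new Watcher<Pod>()
      {
        @Override
        public void eventReceived(Action action, Pod pod)
        {
          // The API server pushes updates, so no polling is needed, but each
          // watch holds a connection open, which is the scaling concern with
          // this approach at high task counts.
          if (action == Action.MODIFIED || action == Action.DELETED) {
            System.out.println(action + ": pod IP = " + pod.getStatus().getPodIP());
          }
        }

        @Override
        public void onClose(WatcherException cause)
        {
          System.out.println("Watch closed: " + cause);
        }
      });

      // Keep the process alive briefly so events can arrive (demo only).
      Thread.sleep(60_000);
    }
  }
}
```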
This PR has: