KubernetesHook should try incluster first when not otherwise configured #23126
Note: this is part of the path to getting the K8s hook into KPO, but I've separated it out for easier review.
Currently, when the K8s hook receives no configuration (i.e. neither in-cluster, nor config file content, nor a config file path), the default client generation process tries to load the kube config from the default location. This is inconsistent with Airflow core's behavior in the Kubernetes executor and the Kubernetes pod operator (in_cluster=True is the default for those).
To make the K8s hook's behavior consistent, we can try in-cluster first and, if that fails, fall back to the default kubeconfig. This should be safe to do: the Kubernetes client checks for two environment variables that an in-cluster environment should have, and if it doesn't find them it raises ConfigException (see here: https://github.com/kubernetes-client/python/blob/1271465acdb80bf174c50564a384fd6898635ea6/kubernetes/base/config/incluster_config.py#L60-L62). If ConfigException is raised, the K8s hook falls back to looking for the default config.
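The fallback described above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the actual PR diff: `load_incluster_config` and `load_kube_config` here are stand-ins for the real `kubernetes.config` functions, and the env-var check mirrors what the linked incluster_config.py code does (the real client checks `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT`).

```python
import os

class ConfigException(Exception):
    """Stand-in for kubernetes.config.ConfigException."""

SERVICE_HOST_ENV = "KUBERNETES_SERVICE_HOST"
SERVICE_PORT_ENV = "KUBERNETES_SERVICE_PORT"

def load_incluster_config():
    # The real client raises ConfigException when either variable is absent,
    # which is how we detect that we are not running inside a cluster.
    if SERVICE_HOST_ENV not in os.environ or SERVICE_PORT_ENV not in os.environ:
        raise ConfigException("Service host/port is not set.")
    return "incluster"

def load_kube_config():
    # Stand-in for loading the default kubeconfig (e.g. ~/.kube/config).
    return "kubeconfig"

def get_client_config():
    """Try in-cluster config first; fall back to the default kubeconfig."""
    try:
        return load_incluster_config()
    except ConfigException:
        return load_kube_config()
```

Outside a cluster the env vars are missing, so `load_incluster_config` raises and the hook falls through to the default kubeconfig; inside a pod both vars are injected and in-cluster config wins.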