support alternate controller-manager flags in kubecertagent controller (e.g. for RKE2) #2043
The controller responsible for creating the kube-cert-agent pod parses the CLI flags of the controller-manager pod to discover the file paths for the cert and key files. However, the controller-manager binary offers two different sets of CLI flags that can be used to specify these paths. Previously, Pinniped only looked for the original flags `--cluster-signing-cert-file` and `--cluster-signing-key-file`. Now it also looks for the alternate flags `--cluster-signing-kube-apiserver-client-key-file` and `--cluster-signing-kube-apiserver-client-cert-file`. According to https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/ you can specify either of these flag sets, but not both.

This change will hopefully allow the Pinniped Concierge to support more Kubernetes distributions without needing to resort to using the impersonation proxy. For example, RKE2 uses these alternate controller-manager flags.
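To see which of these flag sets a given cluster uses, one rough check (assuming the controller-manager runs as a pod in `kube-system`, as it does on RKE2) is:

```sh
# Sketch: show any cluster-signing flags passed to pods in kube-system,
# which on most distributions includes the kube-controller-manager static pod.
kubectl -n kube-system get pods -o yaml | grep -- '--cluster-signing'
```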
Note: Instead of using the Pinniped Concierge on RKE2, you could configure the API server flags to use the Pinniped Supervisor as its OIDC provider for authentication. See https://pinniped.dev/docs/tutorials/supervisor-without-concierge-demo/ for details. This does not require installing the Concierge onto the cluster, and therefore would not require the workarounds documented below. Alternatively, you could install the Concierge but enable the impersonation proxy and use that instead. However, the impersonation proxy has the downside of requiring a means of ingress to be configured for incoming HTTPS network traffic.
Note that RKE2 clusters do not have the standard `cluster-info` ConfigMap. The Pinniped Concierge wants to read this ConfigMap to get the cluster's endpoint and CA bundle. As a workaround, the administrator can create it with a shell script like the sketch below. Note that if you have made the cluster accessible over the network (i.e. not localhost), then the `server` should be the URL that you use to access the cluster's API server over the network.
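A minimal sketch of such a script, where the server URL and the RKE2 CA file path are assumptions to substitute with your own values:

```sh
# Sketch: create the cluster-info ConfigMap in kube-public for the Concierge to read.
# SERVER and CA_FILE are assumptions: use the API server URL that you actually reach
# the cluster on, and the CA bundle that serves that endpoint.
SERVER="https://127.0.0.1:6443"
CA_FILE="/var/lib/rancher/rke2/server/tls/server-ca.crt"

kubectl -n kube-public create configmap cluster-info \
  --from-literal=kubeconfig="$(cat <<EOF
apiVersion: v1
kind: Config
clusters:
- name: ""
  cluster:
    server: ${SERVER}
    certificate-authority-data: $(base64 -w0 < "${CA_FILE}")
EOF
)"
```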
This allows you to get further into the authentication flow, but unfortunately it still fails. Now it fails because, by default in RKE2, the kube-apiserver pod passes the `--anonymous-auth=false` CLI flag to the `kube-apiserver` command. This prevents the user from being able to make calls to the Concierge's `TokenCredentialRequest` API to complete their authentication using the Concierge. This Concierge authentication endpoint receives the user's proof of identity and returns a credential that they can use to authenticate directly to the Kube API server as that identity, so it must be callable without prior authentication.

To allow anonymous auth for the Kubernetes API in RKE2, edit (or create) the config file:
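RKE2's server configuration file normally lives at `/etc/rancher/rke2/config.yaml` (an assumed default path; adjust for your installation):

```sh
# Create the directory if needed, then open RKE2's server config file (assumed path).
sudo mkdir -p /etc/rancher/rke2
sudo vi /etc/rancher/rke2/config.yaml
```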
Add this line to the file. Then save the file and exit the editor.
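A sketch of the setting, assuming RKE2's `kube-apiserver-arg` config key for passing extra flags to the kube-apiserver (check the RKE2 docs for the exact syntax your version expects):

```yaml
# Assumed RKE2 config.yaml syntax for passing a flag to the kube-apiserver.
kube-apiserver-arg:
  - "anonymous-auth=true"
```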
Restart the cluster:
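For a systemd-based RKE2 install, restarting the server service on each control-plane node should pick up the change (the service name is an assumption based on a standard RKE2 install):

```sh
# Restart RKE2 on each server (control-plane) node so the kube-apiserver
# static pod is regenerated with the new flag.
sudo systemctl restart rke2-server
```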
After the cluster restarts, make sure that the API server pod has the `--anonymous-auth=true` CLI flag (one way to check is sketched below). Now you will be able to use the Pinniped Concierge on your RKE2 cluster without being required to enable the Concierge impersonation proxy.
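A quick check, assuming the kube-apiserver runs as a static pod in `kube-system`:

```sh
# Sketch: confirm the kube-apiserver pod now carries --anonymous-auth=true.
kubectl -n kube-system get pods -o yaml | grep -- '--anonymous-auth'
```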
Note: The full implications of enabling anonymous auth for the Kube API server (not discussed here) should be considered before doing this on a production cluster.
Thanks to @Oğuz Yarımtepe on Kubernetes Slack for helping us investigate this on RKE2 clusters.

Release note: