Rework DockercfgTokenDeletedController #13579
Signed-off-by: Monis Khan <mkhan@redhat.com>
```go
	policyTemplate.Objects = append(policyTemplate.Objects, versionedObject)
}

controllerRoles := bootstrappolicy.GetBootstrapControllerRoles()
```
Is this method used anywhere? I seem to recall thinking that it should die.
@liggitt I like the upstream approach: reconcile on start, combined with system:masters authorized separately, which obviated the need for an external policy.json. Do you see any reason to keep creating the "BootstrapPolicyFile"?
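For reference, a minimal sketch of the reconcile-on-start idea (the `clusterRoleInterface` type is a stand-in for a typed client, and the Get/Create signatures are assumed to match the 1.6-era client; error handling is abbreviated):

```go
import (
	kapierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ensureBootstrapControllerRoles is a sketch only: create any missing
// bootstrap controller roles at startup instead of serializing them into
// an external policy file.
func ensureBootstrapControllerRoles(clusterRoles clusterRoleInterface) error {
	for _, role := range bootstrappolicy.GetBootstrapControllerRoles() {
		_, err := clusterRoles.Get(role.Name, metav1.GetOptions{})
		if kapierrors.IsNotFound(err) {
			_, err = clusterRoles.Create(&role)
		}
		if err != nil && !kapierrors.IsAlreadyExists(err) {
			return err
		}
	}
	return nil
}
```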
```go
var (
	// controllerRoles is a slice of roles used for controllers
	controllerRoles = []authorizationapi.ClusterRole{}
```
You'll want a clusterrole vs. local role distinction; found that out upstream (see the sketch after this snippet).
```go
	controllerRoleBindings = []authorizationapi.ClusterRoleBinding{}
)
```
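A sketch of the distinction being suggested, loosely after the upstream bootstrap policy layout (the slice and helper names here are made up):

```go
var (
	// controllerClusterRoles are cluster-scoped roles for controllers.
	controllerClusterRoles = []authorizationapi.ClusterRole{}
	// controllerNamespacedRoles holds namespace-scoped (local) roles,
	// keyed by the namespace they belong in.
	controllerNamespacedRoles = map[string][]authorizationapi.Role{}
)

// addControllerNamespacedRole registers a role that should exist only in
// the given namespace, rather than at cluster scope.
func addControllerNamespacedRole(namespace string, role authorizationapi.Role) {
	controllerNamespacedRoles[namespace] = append(controllerNamespacedRoles[namespace], role)
}
```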
```go
func addControllerRole(role authorizationapi.ClusterRole) {
```
No new OpenShift roles in our code here. Let's use the rbac roles and our converter; it will make future unification easier.
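A hedged sketch of what that could look like: define the role with the upstream rbac types and convert at the edge. The `convertClusterRole` call below is illustrative; origin's actual rbac-to-origin conversion helper may be named differently.

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/kubernetes/pkg/apis/rbac"
)

func exampleControllerRole() (*authorizationapi.ClusterRole, error) {
	// Build the role with upstream rbac types...
	role := rbac.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "system:openshift:controller:example"},
		Rules: []rbac.PolicyRule{
			rbac.NewRule("get", "list", "watch").Groups("").Resources("secrets").RuleOrDie(),
		},
	}
	// ...and convert to the origin authorization type only at the boundary.
	// convertClusterRole stands in for origin's actual converter.
	return convertClusterRole(&role)
}
```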
```go
	CoreClient: c.PrivilegedLoopbackKubernetesClientset.Core(),
	Namespace:  bootstrappolicy.DefaultOpenShiftInfraNamespace,
},
Stop: make(chan struct{}),
```
This doesn't look right. You didn't anonymously include the kube one, did you?
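(For anyone following along, "anonymously include" refers to Go struct embedding; the two shapes below are purely illustrative, with `kubecontroller` standing in for wherever the upstream context lives:)

```go
// Embedded anonymously: the kube context's fields (including its own stop
// channel) are promoted into the origin context and can shadow or duplicate
// the fields declared alongside it.
type ControllerContext struct {
	kubecontroller.ControllerContext
	Stop chan struct{}
}

// Included as a named field: nothing is promoted, and every reference to the
// kube context stays explicit.
type NamedControllerContext struct {
	KubeContext kubecontroller.ControllerContext
	Stop        chan struct{}
}
```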
```go
serviceaccountcontrollers.NewDockercfgDeletedController(c.KubeClientset(), serviceaccountcontrollers.DockercfgDeletedControllerOptions{}).Run()
serviceaccountcontrollers.NewDockercfgTokenDeletedController(c.KubeClientset(), serviceaccountcontrollers.DockercfgTokenDeletedControllerOptions{}).Run()

ctx := origincontroller.ControllerContext{
```
This block has the right idea. Please pull it out into a method that constructs the context and the initializers and starts them all. Having only one in the map (as you have now) is perfect.
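A rough sketch of the shape being requested, modeled on the upstream controller-manager start-up loop (the function names and the `startDockercfgTokenDeletedController` initializer are illustrative, not the final API):

```go
import (
	"fmt"

	"github.com/golang/glog"
)

// InitFunc boots one controller from the shared context; it returns whether
// the controller was enabled, plus any startup error.
type InitFunc func(ctx ControllerContext) (bool, error)

func startControllers(ctx ControllerContext) error {
	// Exactly one entry for now; future controllers get added here.
	controllers := map[string]InitFunc{
		DockercfgTokenDeletedControllerName: startDockercfgTokenDeletedController,
	}
	for name, start := range controllers {
		started, err := start(ctx)
		if err != nil {
			return fmt.Errorf("error starting %q: %v", name, err)
		}
		if !started {
			glog.Warningf("skipping %q", name)
		}
	}
	return nil
}
```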
```go
	DockercfgTokenDeletedControllerName = "dockercfg-token-deleted-controller"
)

type ControllerContext struct {
```
Add some comments explaining why we have a different context. I think I see why you've done it now: different informer factory, no options, and no use for availableResources?
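Something along these lines, as a sketch of the doc comments being asked for (the field set is guessed from this thread, not the final struct):

```go
// ControllerContext is origin's analogue of the upstream controller context.
// It is a separate type on purpose: origin controllers consume a different
// (origin-specific) informer factory, take no per-controller options struct,
// and have no need for upstream's availableResources discovery map.
type ControllerContext struct {
	// ClientBuilder gives each controller a client running as its own
	// service account.
	ClientBuilder controller.ControllerClientBuilder

	// InformerFactory is the origin shared informer factory.
	InformerFactory shared.InformerFactory

	// Stop is closed when all controllers should shut down.
	Stop <-chan struct{}
}
```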
Relatively minor comments. This is a good start. I suspect that we'll end up swizzling packages around to adjust for this.
Signed-off-by: Monis Khan <mkhan@redhat.com>

[test]

Evaluated for origin test up to 2d48afa

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/524/) (Base Commit: 645cac4)

@deads2k the integration test I wrote seems to flake every so often locally. My best guess is kubernetes/kubernetes#41661, which we should get in the rebase?

kubernetes/kubernetes#41661 doesn't fix flakes; it fixes DOA clusters started after changing token signing keys while leaving invalid tokens in etcd.

Here is what I have gathered from the logs:
@deads2k @liggitt some questions:

It's cleaning the secret; otherwise you might end up with some old crap that's not being linked anywhere.

My guess is yes. To mitigate the problem that you're watching just secrets, you need to double-check whether that secret is actually used/linked in the SA and act on it only then.
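To make that "double check" concrete, a sketch of the kind of guard being described (the helper name is made up, and the core v1 import path is assumed for this era of kube):

```go
import v1 "k8s.io/kubernetes/pkg/api/v1"

// secretReferencedBySA reports whether the service account still links the
// named secret, either as a mountable secret or as an image pull secret.
// The deleted-dockercfg handling should only fire when this returns true.
func secretReferencedBySA(sa *v1.ServiceAccount, secretName string) bool {
	for _, ref := range sa.Secrets {
		if ref.Name == secretName {
			return true
		}
	}
	for _, ref := range sa.ImagePullSecrets {
		if ref.Name == secretName {
			return true
		}
	}
	return false
}
```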

Origin Action Required: Pull request cannot be automatically merged, please rebase your branch from latest HEAD and push again

Superseded by #14293
This works for me locally but needs an integration test.
[test]
Signed-off-by: Monis Khan <mkhan@redhat.com>
Trello xref: https://trello.com/c/ZtOfFpFz