[RELEASE-1.25][SRVCOM-1857] Upgrade unavailable/deprecated K8s APIs#1710
Conversation
	}
}

// CheckMinimumVersion checks if the version in the arg meets the requirement or not.
The requirement is a little hard to understand. Can we perhaps link to its definition?

Yeah, will do. It is a relic of the past.
-	fs.StringVar(&c.Kubeconfig, "kubeconfig", os.Getenv("KUBECONFIG"),
-		"Path to a kubeconfig. Only required if out-of-cluster.")
-
+	if fs.Lookup("kubeconfig") == nil {
This is required due to double vendoring of controller-runtime and client-go.
  capabilities:
    drop:
-     - all
+     - ALL
Using `all` will not work; warnings still appear. Capabilities need capital letters.
-     - all
+     - ALL
      seccompProfile:
AFAIK this is available since 1.19, so we should not have issues on previous versions.

Unfortunately on 4.6 you need special handling: https://docs.openshift.com/container-platform/4.6/security/seccomp-profiles.html
// The policy/v1beta1 API version of PodDisruptionBudget will no longer be served in v1.25.
// The autoscaling/v2beta2 API version of HorizontalPodAutoscaler will no longer be served in v1.26.
// TODO: When we move away from releases that bring v1beta1 we can remove this part.
if err := CheckMinimumVersion(d, "1.24.0"); err == nil {
Picking 1.24 instead of 1.25 here to avoid warnings on 4.11 and a scenario where the restricted policy is enforced.
case "Deployment":
	deployment := &appsv1.Deployment{}
	if err := scheme.Scheme.Convert(u, deployment, nil); err != nil {
		return fmt.Errorf("failed to convert Unstructured to Deployment: %w", err)
	}
	obj := deployment
	podSpec := &deployment.Spec.Template.Spec
	containers := podSpec.Containers
	for i := range containers {
		setPodSecurityContext(&containers[i])
	}
	if err := scheme.Scheme.Convert(obj, u, nil); err != nil {
		return err
	}
Eventing needs StatefulSet support as well.
I can log an issue and follow up if you don't want to do it in this PR.

Btw, I don't see any manifests with StatefulSet. Could you point me to them? I didn't see them in the audit reports.

They are coming in a future release; I thought this would land on main first.

Oh OK, I wanted to unblock testing; that is why I did it on 1.25 first.
// TODO: When we move away from releases that bring v1beta1 we can remove this part.
if err := CheckMinimumVersion(d, "1.24.0"); err == nil {
	transformers = append(transformers, UpgradePodDisruptionBudget(), UpgradeHorizontalPodAutoscaler(), SetSecurityContextForAdmissionController())
}
In the other case, should we try to do the opposite (downgrading)?

I mentioned in the description that this is an alternative. I see upgrading as easier, because otherwise we will have to patch the 1.4 midstream branches (more work). It does not make sense to only patch the downloaded manifests in the S-O repo; you have to fix the midstream ones too to be consistent. We can downgrade on main if that works better, but I don't see any benefits.

When a manifest is at v1 (the upgraded version) but the K8s version doesn't understand v1, because it was introduced in a future release.

Yes, if we have midstream manifests that are already upgraded. Do we have any? For Serving we might in future branches, e.g. 1.5+. But that is a concern for the main patch, I guess.
Two timeouts for not getting a claim, another infra failure :(

/retest

Infra issues.

/retest

OK, I need to avoid setting it for 4.6; I accidentally set it for kube-rbac-proxy always by default. Will fix shortly.

/retest

/retest

/retest

/retest
/retest

/test 4.6-upstream-e2e-mesh-aws-ocp-46
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: matzew, skonto

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
securityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  capabilities:
    drop:
      - ALL
  seccompProfile:
    type: RuntimeDefault
Should we see if we can land those upstream, or directly on midstream first?

For 1.26, yes. That is also the goal; for 1.25 we want to move fast. That is why I didn't start with the main branch.
Upstream there is a ticket for fixing stuff for 1.8; let's see what we can port back.
Is this known, or infra (b/c of

It is infra AFAIK; cluster access times out occasionally.

@skonto: The following tests failed, say

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

/retest
Proposed Changes
Similar to "Add DowngradePodDisruptionBudget to modify API version of PodDisruptionBudget" #1571, but extending it further, with no need to modify existing manifests (just an alternative).
If we are on 1.24+ (we want to avoid warnings on 4.11 too):
a) upgrade PodDisruptionBudget and HPA (the real problem for the latter will be on 1.26, but let's update anyway to avoid warnings)
b) in Eventing tests, use the latest CronJob
This will allow 1.25 to install on 4.12, and then users can upgrade to 1.26 to move further to 4.13 (1.26). In 1.26 we will need to fix HPA also in the Serving branches 1.5+, otherwise autoscaler-hpa will fail too.
Without this, when we run on 1.25 (4.12) we will get the following (tested with the upstream operator on minikube, since we don't yet have OCP 4.12 with K8s 1.25; see discussion):
We don't cover:
"jaeger"
"opentelemetry-operator-controller-manager"
"strimzi-cluster-operator"