
Bump to 1.25 #1647

Merged
openshift-merge-robot merged 3 commits into openshift-knative:main from skonto:bump_to_1.25 on Jul 20, 2022

Conversation

@skonto (Collaborator) commented Jul 15, 2022

Proposed Changes

@skonto requested review from mgencur and nak3 on Jul 15, 2022 12:54
@openshift-ci bot requested a review from jcrossley3 on Jul 15, 2022 12:54
@skonto removed the request for review from jcrossley3 on Jul 15, 2022 12:55
@nak3 (Contributor) commented Jul 17, 2022

/test 4.10-upgrade-tests-aws-ocp-410

@nak3 (Contributor) commented Jul 17, 2022

It seems TestServerlessUpgrade/DowngradeWith/DowngradeServerless is failing?

@skonto (Collaborator, Author) commented Jul 18, 2022

@nak3 Serving is not coming up after downgrade due to:

Warning InternalError 22m knativeserving-controller failed to apply non rbac manifest: Job.batch "storage-version-migration-serving-serving-1.3.0" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"storage-version-migration-serving", "app.kubernetes.io/component":"storage-version-migration-job", "app.kubernetes.io/name":"knative-serving", "app.kubernetes.io/version":"1.3.0", "controller-uid":"8ae4be88-c739-4c0c-88a3-22bf5decbbc8", "job-name":"storage-version-migration-serving-serving-1.3.0"}, Annotations:map[string]string{"sidecar.istio.io/inject":"false"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"migrate", Image:"quay.io/openshift-knative/knative-serving-storage-version-migration:v1.3.0", Command:[]string(nil), Args:[]string{"services.serving.knative.dev", "configurations.serving.knative.dev", "revisions.serving.knative.dev", "routes.serving.knative.dev"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1048576000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:core.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(0xc032009d40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc014b7bab0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"controller", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc03f7faa80), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), 
TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil)}}: field is immutable

I think we faced that in the past with empty names. Will check.

@skonto (Collaborator, Author) commented Jul 18, 2022

@mgencur I guess the bug checker here (#1605) signals that we have the same job name but different images during the downgrade. I suppose this should go away when we also bump the Serving version on main, and then it is safe to merge. This PR is an intermediate step anyway that uses the same Serving version. Correct?

@mgencur (Contributor) commented Jul 18, 2022

Yes. I think this problem will go away when you bump the Serving and Eventing versions, because the job has the Serving/Eventing version as a suffix (storage-version-migration-serving-serving-1.3.0), and it's not possible to modify the same job with a different image (that's where the field is immutable error comes from).
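
For illustration, the collision looks roughly like this: both operator versions bundle Serving 1.3.0, so both compute the same Job name but reference different migration images, and a Job's spec.template cannot be changed after creation. A minimal sketch of the manifest, reduced from the error above (the namespace and any field not visible in the error are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  # The name is suffixed with the bundled Serving version; since 1.24 and
  # 1.25 both ship Serving 1.3.0, both operators produce this exact name.
  name: storage-version-migration-serving-serving-1.3.0
  namespace: knative-serving  # assumed; not visible in the error output
spec:
  # spec.template is immutable once the Job exists, so re-applying this
  # manifest with a different image is rejected with "field is immutable".
  template:
    spec:
      containers:
        - name: migrate
          image: quay.io/openshift-knative/knative-serving-storage-version-migration:v1.3.0
      restartPolicy: OnFailure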

@skonto (Collaborator, Author) commented Jul 18, 2022

@nak3 we can merge or wait for #1648.

@nak3 (Contributor) commented Jul 18, 2022

/lgtm
/hold

I prefer merging #1648 first, but please feel free to unhold if this should be merged before #1648.

@openshift-ci bot removed the lgtm label on Jul 19, 2022
@skonto (Collaborator, Author) commented Jul 19, 2022

@nak3 #1648 got merged; let's see.

@nak3 (Contributor) commented Jul 19, 2022

/retest

@nak3 (Contributor) commented Jul 20, 2022

It seems TestServerlessUpgrade/DowngradeWith/DowngradeServerless is still failing?

@skonto (Collaborator, Author) commented Jul 20, 2022

This is unrelated to the previous issue, as the storage jobs completed. This happens because we check the Serving status for the required version here after the upgrade.
We launch tests with:

{{end -}} --catalogsource=serverless-operator --upgradechannel=stable --csv=serverless-operator.v1.25.0 --csvprevious=serverless-operator.v1.24.0 --servingversion=v1.3.0 --eventingversion=v1.3.2 --kafkaversion=v1.3.2 --servingversionprevious=v1.2.0 --eventingversionprevious=v1.2.1 --kafkaversionprevious=v1.2.3 --skipdowngrade=false --resolvabledomain --https

However, Serving will be set up with 1.3.0, so we are not getting the 1.2.0 version in the CR. The reason is that we are downgrading from 1.25 (which does not exist yet) to 1.24. I will bump the components' previous versions in project.yaml and give it a try, or we can just merge, since this PR only updates the metadata. When Serving and Eventing are bumped this should pass. /cc @mgencur
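
For context, the downgrade check compares the version reported in the KnativeServing CR status against --servingversionprevious. Roughly what the test observes after the downgrade (a sketch; the apiVersion and exact status layout are assumptions, not taken from this PR):

apiVersion: operator.knative.dev/v1beta1  # may differ by operator release
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
status:
  # Both serverless-operator 1.24 and 1.25 bundle Serving 1.3.0, so the CR
  # still reports 1.3.0 after the downgrade, while the test expects the
  # --servingversionprevious value (1.2.0).
  version: 1.3.0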

Review thread on project.yaml

previous:
  serving: 1.2.0
  eventing: 1.2.1
serving: 1.3.0

@skonto (Collaborator, Author) commented:

This is temporary as we are going to bump the versions anyway. Otherwise downgrade tests will fail.

@pierDipi (Member) commented:

Sounds good. This middle step is a new step for every bump moving forward, right? Maybe we can add a comment like this (at line 4):

# For minor and major version bumps, bump all `dependencies.previous` to whatever `dependencies` has set.
version: 1.25.0

@skonto (Collaborator, Author) replied:

Added
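
Putting the diff and the suggested comment together, the relevant project.yaml stanza presumably ends up looking something like this (a sketch; the surrounding keys and indentation are assumptions, and the eventing values are inferred from the test flags above):

# For minor and major version bumps, bump all `dependencies.previous`
# to whatever `dependencies` has set.
version: 1.25.0
dependencies:
  serving: 1.3.0
  eventing: 1.3.2
  previous:
    serving: 1.3.0   # was 1.2.0
    eventing: 1.3.2  # was 1.2.1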

@skonto (Collaborator, Author) commented Jul 20, 2022

/retest

@skonto (Collaborator, Author) commented Jul 20, 2022

The 4.9 image job failed again:

Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again
error: build error: error building at STEP "RUN yum install -y kubectl httpd-tools": error while running runtime: exit status 1

@skonto (Collaborator, Author) commented Jul 20, 2022

Flaky:

--- FAIL: TestAutoscaleSustaining (255.90s)
--- FAIL: TestAutoscaleSustaining (309.92s)

We need to check how to fix this one; it comes up too often. The upgrade tests passed; I will re-test shortly.

@skonto (Collaborator, Author) commented Jul 20, 2022

/retest

@openshift-ci bot commented Jul 20, 2022

@skonto: The /retest command does not accept any targets.
The following commands are available to trigger required jobs:

  • /test 4.10-aws-ovn-images
  • /test 4.10-azure-images
  • /test 4.10-gcp-images
  • /test 4.10-images
  • /test 4.10-operator-e2e-aws-ocp-410
  • /test 4.10-osd-images
  • /test 4.10-vsphere-images
  • /test 4.6-images
  • /test 4.7-images
  • /test 4.8-images
  • /test 4.9-images
  • /test unit-test

The following commands are available to trigger optional jobs:

  • /test 4.10-upgrade-tests-aws-ocp-410
  • /test 4.10-upstream-e2e-aws-ocp-410
  • /test 4.10-upstream-e2e-mesh-aws-ocp-410

Use /test all to run all jobs.


In response to this:

/retest 🙏

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@skonto (Collaborator, Author) commented Jul 20, 2022

/retest

@pierDipi (Member) commented:

/lgtm

@openshift-ci bot commented Jul 20, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nak3, pierDipi, skonto

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [nak3,pierDipi,skonto]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@skonto (Collaborator, Author) commented Jul 20, 2022

/unhold
