Add ImageDigests for multi container support#7373
knative-prow-robot merged 13 commits into knative:master from
Conversation
knative-prow-robot
left a comment
@savitaashture: 0 warnings.
In response to this:
Proposed Changes
- Added new field ImageDigests for multi container
- Modified reconcileDigest to support multiple containers
Pending:
Need to document the new ImageDigests field after the PR merges
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@@ -136,6 +136,13 @@ type RevisionStatus struct {
	// may be empty if the image comes from a registry listed to skip resolution.
	// +optional
	ImageDigest string `json:"imageDigest,omitempty"`
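For context, the change under discussion adds a map of digests next to the existing field. A minimal sketch of the shape, assuming a name-keyed map as proposed in this PR (this is a stand-in, not the full knative.dev/serving type):

```go
package main

import "fmt"

// RevisionStatus is a trimmed-down stand-in for the real API type,
// which carries many more fields and conditions.
type RevisionStatus struct {
	// ImageDigest holds the resolved digest for the serving container
	// image; kept so existing clients keep working.
	ImageDigest string `json:"imageDigest,omitempty"`

	// ImageDigests holds the resolved digest for every container,
	// keyed by container name (the new field proposed here).
	ImageDigests map[string]string `json:"imageDigests,omitempty"`
}

func main() {
	st := RevisionStatus{
		ImageDigest: "docker.io/library/busybox@sha256:aaaa",
		ImageDigests: map[string]string{
			"user-container": "docker.io/library/busybox@sha256:aaaa",
			"sidecar":        "docker.io/library/envoy@sha256:bbbb",
		},
	}
	fmt.Println(len(st.ImageDigests)) // number of containers with a resolved digest
}
```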
Mark this as deprecated, add a todo to remove in 2 releases?
We don't remove stuff from the API, I think?
While discussing during the POC, we decided to keep the existing field so as not to break functionality, and alongside ImageDigest to add a new field called ImageDigests to support multiple containers.
I am not suggesting we remove it now. But in N Knative releases (N=2, I think) this stops being of interest. So if we rename it DeprecatedImageDigest and add a TODO to remove it in a future release (0.16, I guess), it should be alright.
Please note that this also forces clients that link against the API types to be updated. This immediately breaks any code (not only in two versions). E.g. see the kn client failure in https://storage.googleapis.com/knative-prow/pr-logs/pull/knative_client/801/pull-knative-client-unit-tests/1255135293493743616/build-log.txt:
Running tests with 'go test -v -race -mod=vendor ./... '
# knative.dev/client/pkg/serving
pkg/serving/config_changes.go:305:24: baseRevision.Status.ImageDigest undefined (type "knative.dev/serving/pkg/apis/serving/v1".RevisionStatus has no field or method ImageDigest)
pkg/serving/config_changes.go:306:51: baseRevision.Status.ImageDigest undefined (type "knative.dev/serving/pkg/apis/serving/v1".RevisionStatus has no field or method ImageDigest)
# knative.dev/client/pkg/serving [knative.dev/client/pkg/serving.test]
pkg/serving/config_changes.go:305:24: baseRevision.Status.ImageDigest undefined (type "knative.dev/serving/pkg/apis/serving/v1".RevisionStatus has no field or method ImageDigest)
pkg/serving/config_changes.go:306:51: baseRevision.Status.ImageDigest undefined (type "knative.dev/serving/pkg/apis/serving/v1".RevisionStatus has no field or method ImageDigest)
pkg/serving/config_changes_test.go:128:17: revision.Status.ImageDigest undefined (type "knative.dev/serving/pkg/apis/serving/v1".RevisionStatus has no field or method ImageDigest)
This raises the question of what "deprecation" should mean. Only on the API (YAML) level, or also on the code level? Currently, renaming fields to a Deprecated prefix is just a sledgehammer for any client using a code dependency (also, it is not necessarily needed, is it?) and is, despite the naming, not a "deprecation" but a breaking change.
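To make the breakage concrete: any client that links against the API types and reads the old field fails to compile the moment the field is renamed. A hypothetical simplification of the kind of comparison the kn client does in config_changes.go (the local RevisionStatus and digestChanged are stand-ins, not the real kn code):

```go
package main

import "fmt"

// RevisionStatus stands in for the real API type; the actual failure
// above is in code importing knative.dev/serving/pkg/apis/serving/v1.
type RevisionStatus struct {
	ImageDigest string // renaming this to DeprecatedImageDigest breaks every caller below
}

// digestChanged mimics the kind of field access that stops compiling
// when the API struct field is renamed.
func digestChanged(base, current RevisionStatus) bool {
	return base.ImageDigest != current.ImageDigest
}

func main() {
	a := RevisionStatus{ImageDigest: "sha256:aaaa"}
	b := RevisionStatus{ImageDigest: "sha256:bbbb"}
	fmt.Println(digestChanged(a, b))
}
```

This is why renaming is a compile-time break for dependents, regardless of how the YAML surface is handled.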
	// +optional
	ImageDigest string `json:"imageDigest,omitempty"`

	// ImageDigests holds the resolved digest for the image specified
The comment needs pluralization.
	return rev
}

func testRevisionForMultipleContainer() *v1.Revision {
Can we call the regular testRevision here and just annotate the pod spec? That would remove duplicate boilerplate.
	// the revision status should not contain the imageDigests field in these scenarios:
	// 1. the flag is disabled
	// 2. the flag is enabled but the applied revision has a single container
	if cfgs.Defaults.EnableMultiContainer && len(rev.Spec.Containers) > 1 {
Why wouldn't we switch to the map basically always? Even if the MC is disabled. The map will just have a single entry.
Yes, if MC is disabled the map will have just one entry.
But per the current behavior, the revision status doesn't have an ImageDigests field. In that case, if we don't add this condition, the revision will have an ImageDigests field even when MC is disabled. So, to keep consistency with the current behavior of the revision status, I added this logic.
Let me know if allowing the ImageDigests field even when MC is disabled is okay, and I will update the code.
Yeah, I think it is totally fine to have this map populated as source of truth always going forward.
I agree, let's add it always.
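The behavior the thread converges on — populate the map unconditionally, so a single-container revision simply has one entry — might be sketched like this (populateDigests and the resolve callback are hypothetical stand-ins for the reconciler logic, not the merged code):

```go
package main

import "fmt"

// Container is a minimal stand-in for corev1.Container.
type Container struct {
	Name  string
	Image string
}

// populateDigests fills a name→digest map for every container,
// regardless of whether multi-container support is enabled; with a
// single container the map just has a single entry.
func populateDigests(containers []Container, resolve func(image string) string) map[string]string {
	digests := make(map[string]string, len(containers))
	for _, c := range containers {
		digests[c.Name] = resolve(c.Image)
	}
	return digests
}

func main() {
	// Fake resolver for illustration; the controller would call its
	// real digest resolver here.
	fakeResolve := func(image string) string { return image + "@sha256:deadbeef" }
	m := populateDigests([]Container{{Name: "user-container", Image: "busybox"}}, fakeResolve)
	fmt.Println(len(m), m["user-container"])
}
```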
mattmoor
left a comment
Produced via:
gofmt -s -w $(find -path './vendor' -prune -o -path './third_party' -prune -o -type f -name '*.go' -print)
Force-pushed from 0f0daff to 65ca1a3
	rev.Status.ImageDigests = make(map[string]string, len(rev.Spec.Containers))
}
for _, container := range rev.Spec.Containers {
	digest, err := c.resolver.Resolve(container.Image,
Do we want to do this asynchronously?
I remember @markusthoemmes having a problem with each resolution taking 10s?
Nah, my issue was entirely separate. This is usually fine. I suppose we could run them in parallel though?
yeah, that's what I meant in go routines.
Modified
Thank you @markusthoemmes @vagababov
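The parallel approach discussed here — one goroutine per container, results gathered over a channel that is closed once all resolutions finish — can be sketched as follows (resolveAll, containerDigest, and the resolve callback are hypothetical names, not the merged code):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// containerDigest carries one container's resolution result.
type containerDigest struct {
	name   string
	digest string
	err    error
}

// resolveAll resolves every container image concurrently and collects
// the results over a buffered channel; resolve stands in for the
// controller's digest resolver.
func resolveAll(images map[string]string, resolve func(string) (string, error)) []containerDigest {
	digests := make(chan containerDigest, len(images)) // buffered: goroutines never block
	var wg sync.WaitGroup
	for name, image := range images {
		wg.Add(1)
		go func(name, image string) {
			defer wg.Done()
			d, err := resolve(image)
			digests <- containerDigest{name: name, digest: d, err: err}
		}(name, image)
	}
	wg.Wait()
	close(digests) // lets the range below terminate

	var out []containerDigest
	for v := range digests {
		out = append(out, v)
	}
	// Sort by name so callers see a deterministic order.
	sort.Slice(out, func(i, j int) bool { return out[i].name < out[j].name })
	return out
}

func main() {
	res := resolveAll(
		map[string]string{"user-container": "busybox", "sidecar": "envoy"},
		func(img string) (string, error) { return img + "@sha256:0", nil })
	fmt.Println(len(res))
}
```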
}
close(digests)
for v := range digests {
	if len(rev.Spec.Containers) == 1 || v.isServingContainer {
I could see the len==1 case being special-cased, but 🤷‍♂️
Sorry, I don't think I fully understand your comment.
Do you mean len(rev.Spec.Containers) == 1 should be handled as a special case?
Or do you mean it should be handled the same way as v.isServingContainer?
In theory, since we don't need to do any of the parallel stuff. But in practice probably doesn't matter.
/retest
markusthoemmes
left a comment
One more comment on the upgrade path
Thank you @markusthoemmes @vagababov
mattmoor
left a comment
Produced via:
gofmt -s -w $(find -path './vendor' -prune -o -path './third_party' -prune -o -type f -name '*.go' -print)
Force-pushed from f93ad7d to 9a2a047
dprotaso
left a comment
It's probably worth discussing the ContainerStatus at the WG meeting or on Slack to get alignment.
if v.digestError != nil {
	rev.Status.MarkContainerHealthyFalse(v1.ReasonContainerMissing,
		v1.RevisionContainerMissingMessage(
			v.image, v.digestError.Error()))
Now I wonder if individual digest resolution errors should appear in each ContainerStatus
I would think so. Instead of giving a single failure message covering more than one image, it would be better to report the failure for each respective container.
let's leave it for a separate PR/issue
mattmoor
left a comment
Produced via:
gofmt -s -w $(find -path './vendor' -prune -o -path './third_party' -prune -o -type f -name '*.go' -print)
Force-pushed from 6fd05bf to 4b21cc1
@dprotaso @mattmoor @vagababov @markusthoemmes PR is ready for another round of review
vagababov
left a comment
It's mostly good by me as before.
The following is the coverage report on the affected files.
dprotaso
left a comment
/lgtm
/approve
/hold
Holding because of the nit; if you want to do it in a separate PR, feel free to drop the hold.
if v.isServingContainer {
	rev.Status.DeprecatedImageDigest = v.digestValue
}
rev.Status.ContainerStatuses = append(rev.Status.ContainerStatuses, v1.ContainerStatuses{
nit: can we have the order of the containers in the status be the same as the spec?
Hi @dprotaso, I feel that will be a little messy, because we don't have any element by which to compare spec and status in order to keep the status ordering the same as the spec.
I was thinking of using the container name - so the first container in the spec would be the first container in the status etc.
If you want to do it in another PR or create a good first issue (for someone else) feel free to unblock this PR with an /unhold
Okay, I have created an issue to handle that in another PR: #7658
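The name-based ordering dprotaso suggests can be sketched as follows (orderBySpec is a hypothetical helper illustrating the idea, not code from this PR):

```go
package main

import "fmt"

// ContainerStatus is a minimal stand-in for the status entry type.
type ContainerStatus struct {
	Name   string
	Digest string
}

// orderBySpec reorders statuses to follow the container order of the
// spec, matching entries by container name — the comparison element
// proposed in the review thread above.
func orderBySpec(specNames []string, statuses []ContainerStatus) []ContainerStatus {
	byName := make(map[string]ContainerStatus, len(statuses))
	for _, s := range statuses {
		byName[s.Name] = s
	}
	out := make([]ContainerStatus, 0, len(specNames))
	for _, n := range specNames {
		if s, ok := byName[n]; ok {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	got := orderBySpec(
		[]string{"user-container", "sidecar"},
		[]ContainerStatus{{Name: "sidecar"}, {Name: "user-container"}})
	fmt.Println(got[0].Name)
}
```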
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dprotaso, savitaashture

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/unhold
The following jobs failed:
Failed non-flaky tests preventing automatic retry of pull-knative-serving-unit-tests:
/test pull-knative-serving-unit-tests
// may be empty if the image comes from a registry listed to skip resolution.
// If multiple containers are specified then DeprecatedImageDigest holds the digest
// for the serving container.
// DEPRECATED: Use ImageDigests instead.
Where is that "ImageDigests" field ?
Fixes:
Parts of #5822 #3384

Proposed Changes
- Added new field ImageDigests for multi container

Pending:
Need to update the new ImageDigests field in the docs after the PR merges