
Use GenerateName for Channel's Service and VirtualService #674

Merged

knative-prow-robot merged 10 commits into knative:master from Harwayne:svc-generated on Dec 18, 2018
Conversation

@Harwayne (Contributor) commented Dec 5, 2018

Fixes #648

Proposed Changes

  • Use GenerateName for Channel's K8s Service and VirtualService.
    • Existing Services and VirtualServices are retained. If they are deleted, they will be replaced with new objects using a generated name.
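For readers unfamiliar with the mechanism: with GenerateName, the object spec supplies only a name prefix, and the API server appends a random suffix at creation time, guaranteeing a unique name. A minimal sketch of the idea (the prefix and the suffix logic here are illustrative assumptions, not the actual apiserver or PR code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// Alphabet loosely modeled on what Kubernetes uses for generated suffixes;
// this is a sketch, not the real apiserver implementation.
const alphanums = "bcdfghjklmnpqrstvwxz2456789"

// generateName mimics how metadata.generateName is expanded server-side:
// the supplied prefix gets a short random suffix.
func generateName(prefix string) string {
	suffix := make([]byte, 5)
	for i := range suffix {
		suffix[i] = alphanums[rand.Intn(len(alphanums))]
	}
	return prefix + string(suffix)
}

func main() {
	// A Channel reconciler would set GenerateName instead of Name;
	// the "mychannel-channel-" prefix here is hypothetical.
	fmt.Println(generateName("mychannel-channel-"))
}
```

The practical consequence is exactly the release note below: since the suffix is random, clients can no longer predict the Service name and must read it from the Channel's status.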

Release Note

You now _must_ use the Channel's status.address, rather than 'guessing' what it will be.

@knative-prow-robot knative-prow-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 5, 2018
@googlebot googlebot added the cla: yes Indicates the PR's author has signed the CLA. label Dec 5, 2018
@knative-prow-robot knative-prow-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 5, 2018
@Harwayne Harwayne changed the title [WIP] Use GenerateName for Channel's Service and VirtualService Use GenerateName for Channel's Service and VirtualService Dec 5, 2018
@knative-prow-robot knative-prow-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 5, 2018
@Harwayne (Contributor, Author) commented Dec 5, 2018

/assign @grantr
/cc @neosab

Comment thread: pkg/provisioners/channel_util.go

func k8sServiceLabels(c *eventingv1alpha1.Channel) map[string]string {
	return map[string]string{
		"channel": c.Name,
Contributor:

in the serving stack, labels like these would be:
serving.knative.dev/route: my-route-name
serving.knative.dev/service: my-service-name

So, maybe while you're here change these to conform?
eventing.knative.dev/channel: c.Name
eventing.knative.dev/provisioner: c.Spec.Provisioner.Name

I'm fine with a follow-on issue to track this work, but figured while you're doing this... :)
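The suggested scheme can be sketched as a plain Go helper (the function name is hypothetical; the real helper is k8sServiceLabels in pkg/provisioners/channel_util.go):

```go
package main

import "fmt"

// channelLabels sketches the domain-qualified label scheme proposed above,
// mirroring the serving-stack convention (serving.knative.dev/route etc.).
func channelLabels(channelName, provisionerName string) map[string]string {
	return map[string]string{
		"eventing.knative.dev/channel":     channelName,
		"eventing.knative.dev/provisioner": provisionerName,
	}
}

func main() {
	fmt.Println(channelLabels("my-channel", "in-memory-channel"))
}
```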

Contributor Author:

The downside of this change is that all the existing Services and VirtualServices would immediately be ignored. New generated name versions would be created immediately (although the existing ones would continue to exist until the Channel was deleted).

Hmm, maybe that isn't so bad. Calling the old Service should still work. Calling the new one would work.

One thing I would need to verify is what happens if there are multiple VirtualServices that attempt to rewrite the same host, as both the old and new VirtualService would attempt to rewrite channel-name.channel-namespace.channels.cluster.local. They would both try to rewrite it in the same way, but I don't know if multiple definitions would be problematic.

I'll test out what happens with the multiple VirtualServices. Assuming that works, I'll make the change. Are these the labels you want?

eventing.knative.dev/channel: c.Name
eventing.knative.dev/provisioner: c.Spec.Provisioner.Name

Contributor:

Yes. One additional option would be to keep them both for one release?

Contributor Author:

It looks like having multiple VirtualServices use the same host with a 'catch-all' rule is unsupported. The sidecar also doesn't appear to support multiple VirtualServices with the same host. From the docs:

A VirtualService can only be fragmented this way if it is bound to a gateway. Host merging is not supported in sidecars.

In fact, I can see the following logs in a sidecar when I tried to make the change:

[2018-12-10 21:26:47.448][21][warning][config] bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70] gRPC config for type.googleapis.com/envoy.api.v2.RouteConfiguration rejected: Only unique values for domains are permitted. Duplicate entry of domain gcp.default.channels.cluster.local

So, if we change the labels, things may happen to work, but Istio could choose to break that compatibility at any time. How they choose to break compatibility is undefined, and may result in non-working Channels (e.g. they reject the entire VirtualService rather than just the problematic portion). If we still choose to change the labels, then I would suggest we tell users to either manually bounce all Channels or to manually delete old VirtualServices.

We can make this backwards compatible. In this release, search based on the old labels and have the controller add both the new and old labels to all Services and VirtualServices. In the next release, recognize only the new labels (and stop writing the old ones to existing Services and VirtualServices). This means that anyone upgrading 0.2 -> 0.3 -> 0.4 works; 0.2 -> 0.4 would have the same possible problems discussed above.

Do you still want to change them? If so, breaking or backwards compatible?
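The transitional scheme described above can be sketched as follows (function names and the exact label keys written during the transition are illustrative assumptions, not code from this PR):

```go
package main

import "fmt"

// transitionalLabels writes both label sets during the transition release,
// so objects created now are recognizable under either scheme.
func transitionalLabels(channelName string) map[string]string {
	return map[string]string{
		"channel":                      channelName, // old scheme, kept for one release
		"eventing.knative.dev/channel": channelName, // new scheme
	}
}

// matchesChannel reports whether an object's labels identify it as belonging
// to the given Channel under either the old or the new scheme.
func matchesChannel(labels map[string]string, channelName string) bool {
	return labels["eventing.knative.dev/channel"] == channelName ||
		labels["channel"] == channelName
}

func main() {
	l := transitionalLabels("my-channel")
	fmt.Println(matchesChannel(l, "my-channel")) // true

	// Objects created before the transition carry only the old label,
	// but still match during the transition release.
	old := map[string]string{"channel": "my-channel"}
	fmt.Println(matchesChannel(old, "my-channel")) // true
}
```

Dropping the old-label check in the following release is what makes the 0.2 -> 0.4 jump risky, as the comment above explains.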

@vaikas (Contributor) commented Dec 17, 2018

Ok, thanks for the investigation. I've opened this to track the work for unifying the labels so we catch them all:
#690

@vaikas (Contributor) commented Dec 17, 2018

/test pull-knative-eventing-go-coverage

@knative-metrics-robot commented

The following is the coverage report on pkg/.
Say /test pull-knative-eventing-go-coverage to re-run this coverage report

| File | Old Coverage | New Coverage | Delta |
| ---- | ------------ | ------------ | ----- |
| pkg/controller/eventing/inmemory/channel/reconcile.go | 97.9% | 97.8% | -0.1 |
| pkg/provisioners/channel_util.go | 92.8% | 94.0% | 1.2 |
| pkg/provisioners/gcppubsub/controller/channel/reconcile.go | 97.9% | 97.9% | -0.1 |
| pkg/provisioners/kafka/controller/channel/reconcile.go | 76.2% | 76.1% | -0.1 |
| pkg/provisioners/natss/controller/channel/reconcile.go | 74.1% | 75.9% | 1.8 |
| pkg/provisioners/provisioner_util.go | 75.0% | 78.9% | 3.9 |

@vaikas (Contributor) commented Dec 18, 2018

/lgtm
/approve

Please make sure to update the Release Notes with the appropriate comments (for upgrade-related concerns).

@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Dec 18, 2018
@knative-prow-robot commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Harwayne, vaikas-google

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow-robot knative-prow-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 18, 2018
@knative-prow-robot knative-prow-robot merged commit e8e6dd2 into knative:master Dec 18, 2018
@Harwayne Harwayne deleted the svc-generated branch January 29, 2019 22:26