
Remove Istio dependency from Eventing (Part - 1)#1044

Merged
knative-prow-robot merged 16 commits into knative:master from akashrv:noistio2
Apr 15, 2019

Conversation

@akashrv (Contributor) commented Apr 10, 2019

Part of a series of PRs for #294

Proposed Changes

This is part of a series of PRs that I will send to keep each PR small.
Overall change:

  1. Create a unique ExternalName K8s Service for each channel in the user namespace.
  2. Set the ExternalName to the dispatcher service in the eventing namespace.
  3. The dispatcher needs to know how to map the Host header (the ExternalName K8s Service FQDN) to the Channel.
  4. Have a watch on Channels in each dispatcher and update an in-memory map of hostname to channel name.
  5. As part of this change, remove the dependency on ConfigMaps that were created and shared by the controller and the dispatcher. The controller now updates the Channel, and the dispatcher watches the Channel to get updates instead of the ConfigMap.
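Steps 3 and 4 boil down to a host-header lookup table that the channel watch keeps in sync and the data path reads. A minimal sketch of that map, with all names hypothetical and none of the actual eventing types, might look like:

```go
package main

import (
	"fmt"
	"sync"
)

// hostToChannel maps an ExternalName Service FQDN (seen as the Host header)
// to a namespace/name channel key. The watch handler writes it; the request
// path only reads it, so an RWMutex is enough.
type hostToChannel struct {
	mu sync.RWMutex
	m  map[string]string
}

func newHostToChannel() *hostToChannel {
	return &hostToChannel{m: make(map[string]string)}
}

// Set is called from the watch handler when a Channel is added or updated.
func (h *hostToChannel) Set(host, channelKey string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.m[host] = channelKey
}

// Lookup is called on the request path with the incoming Host header.
func (h *hostToChannel) Lookup(host string) (string, bool) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	key, ok := h.m[host]
	return key, ok
}

func main() {
	hc := newHostToChannel()
	hc.Set("my-channel.user-ns.svc.cluster.local", "user-ns/my-channel")
	key, ok := hc.Lookup("my-channel.user-ns.svc.cluster.local")
	fmt.Println(key, ok)
}
```

The real dispatcher populates this from a Channel informer rather than explicit Set calls, but the read/write split is the same.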

What is included in this PR:

  1. Removed the Istio VirtualService dependency from the in-memory channel.
  2. The sidecar is still deployed; it will be removed in the last PR, once all VirtualService dependencies are removed.

I identified an issue when working with Istio and ExternalName K8s Services. I am following up with the Istio team separately, and there is a workaround for it in the code.

Release Note

Knative Eventing will create a unique K8s Service of type ExternalName for each channel when using the in-memory or in-memory-channel ClusterChannelProvisioner.
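For illustration, the kind of Service this creates might look like the following sketch. The Service name, namespace, and the dispatcher service FQDN are all hypothetical assumptions, not the names the controller actually generates:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-channel-channel   # hypothetical; derived from the Channel name
  namespace: user-ns         # the Channel's (user) namespace
spec:
  type: ExternalName
  # Points at the dispatcher service in the eventing namespace (name assumed here).
  externalName: in-memory-channel-dispatcher.knative-eventing.svc.cluster.local
  ports:
    - protocol: TCP  # the port is deliberately left unnamed; see the Istio ExternalName issue discussed in the review
      port: 80
```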

@googlebot googlebot added the cla: yes Indicates the PR's author has signed the CLA. label Apr 10, 2019
@knative-prow-robot knative-prow-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Apr 10, 2019
@akashrv (Contributor, Author) commented Apr 10, 2019

/assign @evankanderson

@Harwayne (Contributor):

/assign

@akashrv (Contributor, Author) commented Apr 10, 2019

/assign @bbrowning

Comment thread pkg/provisioners/inmemory/channel/reconcile.go
@Harwayne (Contributor):
I don't think this fixes #294; I want this PR to reference it rather than close it.

Comment thread cmd/fanoutsidecar/main.go
Comment thread cmd/fanoutsidecar/main.go Outdated
Comment thread cmd/fanoutsidecar/main.go
Comment thread pkg/channelwatcher/channel_watcher.go Outdated
Comment thread pkg/channelwatcher/channel_watcher.go Outdated
Comment thread pkg/provisioners/inmemory/controller/main.go Outdated
Comment thread pkg/provisioners/provisioner_util.go Outdated
Comment thread pkg/provisioners/provisioner_util.go Outdated
Comment thread pkg/reconciler/v1alpha1/broker/resources/ingress.go
Comment thread pkg/sidecar/multichannelfanout/multi_channel_fanout_handler.go
Comment thread pkg/provisioners/inmemory/channel/reconcile.go Outdated
Comment thread pkg/provisioners/inmemory/controller/main.go Outdated
@knative-metrics-robot:

The following is the coverage report on pkg/. Say /test pull-knative-eventing-go-coverage to re-run this coverage report.

| File | Old Coverage | New Coverage | Delta |
| --- | --- | --- | --- |
| pkg/provisioners/channel_util.go | 92.4% | 90.8% | -1.5 |
| pkg/provisioners/inmemory/channel/controller.go | 50.0% | 64.3% | 14.3 |
| pkg/provisioners/inmemory/channel/reconcile.go | 94.4% | 78.6% | -15.8 |
| pkg/provisioners/inmemory/clusterchannelprovisioner/reconcile.go | 93.4% | 93.8% | 0.3 |
| pkg/provisioners/provisioner_util.go | 78.9% | 73.1% | -5.9 |

Comment thread pkg/provisioners/channel_util.go
@Harwayne (Contributor):

/lgtm
/approve

@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 15, 2019
@knative-prow-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: akashrv, Harwayne

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow-robot knative-prow-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 15, 2019
@knative-prow-robot knative-prow-robot merged commit f8317dd into knative:master Apr 15, 2019
@evankanderson (Member) left a comment:

Some comments now, since you asked for them anyway. :-D

```go
	"sigs.k8s.io/controller-runtime/pkg/manager"
	crlog "sigs.k8s.io/controller-runtime/pkg/runtime/log"
	// Uncomment the following line to load the gcp plugin (only required to authenticate against GKE clusters).
	// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
```
@evankanderson (Member):

This doesn't seem right -- why not just remove it?

@akashrv (Contributor, Author):

I kept it around because it helps with debugging. When I started working on controllers, or anything that connects to GKE from my local machine, I would get a config-related error. After searching online I found threads attributing it to this missing package. So I added it here and commented it out as a nicety that could help others working on the code base.

Comment thread cmd/controller/main.go
```go
	"sigs.k8s.io/controller-runtime/pkg/manager"
	logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
	// Uncomment the following line to load the gcp plugin (only required to authenticate against GKE clusters).
	// _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
```
@evankanderson (Member):

Again, why not remove this here? It's clearly not needed, and seems like it would be simpler to just remove it.

@akashrv (Contributor, Author):

I kept it around because it helps with debugging. When I started working on controllers, or anything that connects to GKE from my local machine, I would get a config-related error. After searching online I found threads attributing it to this missing package. So I added it here and commented it out as a nicety that could help others working on the code base.

Comment thread cmd/fanoutsidecar/main.go
```go
}
if err != nil {
if err = v1alpha1.AddToScheme(mgr.GetScheme()); err != nil {
	logger.Error("Error while adding eventing scheme to manager.", zap.Error(err))
```
@evankanderson (Member):

Should this include the scheme?

@akashrv (Contributor, Author):

Is the question whether the scheme is needed, or whether it should be in this function?

  • Adding this scheme is necessary; otherwise the watch through the manager won't work.

  • I considered adding it here versus in the channelwatcher.New function. Looking at the rest of the code where controllers are created, I found that schemes are added in cmd/main rather than inside the controller, so I kept it this way to be consistent with the rest of the code.

Comment thread cmd/fanoutsidecar/main.go
```go
return err
func listAllChannels(ctx context.Context, c client.Client) ([]v1alpha1.Channel, error) {
channels := make([]v1alpha1.Channel, 0)
for {
```
@evankanderson (Member):

It might be worth a comment here indicating that this is a do... while loop on opts.Raw.Continue.

@evankanderson (Member):

In particular, I'm surprised that the client.Client interface doesn't perform the full listing.

A reading of the ListOptions struct suggests that pagination may only be provided if limit is set (and I suspect that many clients would have failures if this wasn't the case). Do you have evidence that the more complicated code with continue tokens is needed?

Contributor:

This is probably copied from other code that added pagination support defensively (@Harwayne may be able to provide context). Based on my reading of the original design for apiserver pagination, it appears @evankanderson is correct: pagination is opt-in by specifying the limit parameter.

Contributor:

I am paranoid. I don't think it gets used today; I believe @akashrv looked into the controller-runtime code in particular and confirmed that it isn't used. But the fact that it is in the interface makes me wary of leaving it out. I agree that ListOptions.Limit clearly talks about Continue with regard to having a limit set, but ListOptions.Continue doesn't specify that that is the only time Continue will be used.

I agree with your reading of the original design that this is purely an opt-in feature, and that not providing a limit means get everything (even if that is 500 MB). Having been shown this, I am OK with removing it.

@akashrv (Contributor, Author):

Sounds good. I'll simplify it and revert to how it was in the first iteration.
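The do...while shape being debated here (and ultimately removed in favor of a single List call) can be illustrated with a plain stub in place of client.Client. All names below are hypothetical, not the actual eventing or controller-runtime APIs:

```go
package main

import "fmt"

// page is a stand-in for one List response: a batch of items plus an opaque
// continue token that is empty on the last page.
type page struct {
	items []string
	next  string
}

// listPage is a stub for a paginated List call keyed by continue token.
func listPage(token string) page {
	pages := map[string]page{
		"":   {items: []string{"ch-a", "ch-b"}, next: "t1"},
		"t1": {items: []string{"ch-c"}, next: ""},
	}
	return pages[token]
}

// listAll drains every page: a do...while loop that always runs at least once
// and stops when the server returns an empty continue token.
func listAll() []string {
	var all []string
	token := ""
	for {
		p := listPage(token)
		all = append(all, p.items...)
		token = p.next
		if token == "" {
			break
		}
	}
	return all
}

func main() {
	fmt.Println(listAll())
}
```

Since apiserver pagination is opt-in via the limit parameter, the conclusion in this thread is that the loop can collapse to a single unpaginated call.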

Comment thread cmd/fanoutsidecar/main.go
```go
return err
func shouldWatch(ch *v1alpha1.Channel) bool {
if ch.Spec.Provisioner != nil && ch.Spec.Provisioner.Namespace == "" {
for _, v := range channelProvisioners {
```
@evankanderson (Member):

Would it make more sense to make channelProvisioners a map at init time, so you can use a map membership check?

@akashrv (Contributor, Author):

We don't expect to have many provisioners; in most cases this is a single check without a for loop. Moreover, when we rename fanoutsidecar to be the in-memory provisioner and deprecate the old in-memory-channel, this code will change to a single-value check. Hence I left it this way.
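The trade-off in this exchange, a linear slice scan versus a map built once at init, can be sketched as follows. The provisioner names are taken as illustrative only:

```go
package main

import "fmt"

// Slice scan: fine while the list of provisioner names stays tiny.
var channelProvisioners = []string{"in-memory", "in-memory-channel"}

func shouldWatchSlice(name string) bool {
	for _, v := range channelProvisioners {
		if v == name {
			return true
		}
	}
	return false
}

// Map built once at package init: O(1) membership checks, and clearer
// intent if the set ever grows.
var provisionerSet = func() map[string]bool {
	s := make(map[string]bool, len(channelProvisioners))
	for _, v := range channelProvisioners {
		s[v] = true
	}
	return s
}()

func shouldWatchMap(name string) bool {
	return provisionerSet[name]
}

func main() {
	fmt.Println(shouldWatchSlice("in-memory"), shouldWatchMap("kafka"))
}
```

With one or two entries the two are equivalent in practice, which is the author's point; the map version mainly buys readability.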

Comment thread cmd/fanoutsidecar/main.go
```go
if err != nil {
return err
func shouldWatch(ch *v1alpha1.Channel) bool {
if ch.Spec.Provisioner != nil && ch.Spec.Provisioner.Namespace == "" {
```
@evankanderson (Member):

Maybe make namespace non-empty an early exit with a comment that we only support cluster-scoped provisioners (so it's easier to extend to namespace-scoped provisioners later)?

@akashrv (Contributor, Author):

I didn't understand the comment. This function is used to filter out channels in the handler inside the watch; it is not used in the request path.

@evankanderson (Member):

I was suggesting writing this as:

```go
if ch.Spec.Provisioner == nil || ch.Spec.Provisioner.Namespace != "" {
	// Only support cluster-level provisioners right now.
	return false
}
...
```

@evankanderson (Member) left a comment:

Last cleanup comments; the rest of the logic seems fine, though it might be better to split cleanups (like gofmt) from meaningful changes, and possibly even to break up the PR into smaller chunks based on API boundaries.

```go
Ports: []corev1.ServicePort{
	{
		Name: "http",
		Protocol: corev1.ProtocolTCP,
```
@evankanderson (Member):

Why remove the name?

If Istio is running sidecar injection, wouldn't you want to mark it for Istio?

@akashrv (Contributor, Author):

Check this one: istio/istio#13193.
Istio has a bug where a named port won't work here, because this is an ExternalName service. The change makes it work when the pod that calls the channel has an Istio sidecar in it; without the sidecar, it works regardless of whether the port is named.

```go
}

func (r *reconciler) Reconcile(req reconcile.Request) (reconcile.Result, error) {
	ctx := logging.WithLogger(context.TODO(), r.logger.With(zap.Any("request", req)))
```
@evankanderson (Member):

I think this is reasonably context.Background(). (This makes it easier to find actual TODO contexts where the context should be flowed through later.)

@akashrv (Contributor, Author):

Will change it in one of the follow-up PRs.


```go
Name: "http",
// There is a bug in Istio where a named port doesn't work when connecting using an ExternalName service.
// Refer to https://github.com/istio/istio/issues/13193 for more details.
// TODO: Uncomment Name: "http" when Istio fixes the issue.
```
@evankanderson (Member):

Does this need to be copied to the comment I made above?

@akashrv (Contributor, Author):

Yes, this is the reason I removed it. I didn't put the comment in the UT because, when we revert this, the UTs will fail anyway and we will have to update them.

creydr pushed a commit to creydr/knative-eventing that referenced this pull request Feb 5, 2025
Co-authored-by: serverless-qe <serverless-support@redhat.com>

Labels

approved: Indicates a PR has been approved by an approver from all required OWNERS files.
cla: yes: Indicates the PR's author has signed the CLA.
lgtm: Indicates that a PR is ready to be merged.
size/XXL: Denotes a PR that changes 1000+ lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

8 participants