Decouple eventing from Istio #378
/test pull-knative-eventing-integration-tests
The e2e tests are failing because Istio doesn't recognize that the request from the receive adapter to the channel should be made on the mesh. This could be solved by adding a ServiceEntry like: ...but this would need to be specified for every namespace that will host a channel that also originates a request to that channel from a pod that is Istio injected, creating a less-than-ideal user experience. Another option would be to follow the pattern in serving by exposing buses via Ingress. I'm not a fan of this approach as buses should generally be an internal-facing concept and not be exposed to the world. The better approach may be to hold this PR and file issues with Istio to:

cc @tcnghia
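A per-namespace ServiceEntry of the kind described above might look roughly like this sketch. All names (the `default` namespace, the wildcard host, the port) are illustrative assumptions, not the exact manifest from the PR:

```yaml
# Hypothetical sketch: tell Istio that traffic to in-namespace channel
# hosts belongs on the mesh. Names and port are assumptions.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: channels
  namespace: default
spec:
  hosts:
  - "*.default.svc.cluster.local"
  location: MESH_INTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: NONE
```

The drawback noted above is that something like this would have to be created in every namespace that hosts a Channel and an Istio-injected producer.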
General bus enhancements lifted from knative#378, since that PR is blocked.

- avoid unnecessary status updates if the status has not changed
- avoid suppressing errors when updating status
- avoid suppressing updates for unchanged resources as the local copy may not be fully synced
- addressed all govet and golint reports from pkg/buses
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: scothis

If they are not already assigned, you can assign the PR to them by writing the appropriate command. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing an approval comment.
The following is the coverage report on pkg/.
/assign @mattmoor This PR is ready for discussion, but not complete. I'm not happy with where it ended up in terms of requiring a ServiceEntry to be created in every namespace, but it seems like the best path for the moment. We can either:
Serving is able to work around these issues by using Ingress. Channels, however, should not be exposed publicly. Perhaps there is a way to use Ingress but limit it to traffic already in the cluster?
Istio has a Gateway of type internal load balancer, which is turned off by default (https://github.com/istio/istio/blob/8daa232df6ef7cdbb2987de1187255b14e3c3989/install/kubernetes/helm/istio/values.yaml#L271). I am looking into ways we can use something similar to expose internal Routes. Another way is using RBAC to restrict ingress traffic, but I believe that would require mTLS, which is a deeper Istio integration. @ZhiminXiang
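If memory serves the linked chart, enabling that internal load-balancer gateway would be a small Helm values override along these lines; the exact key name is an assumption from the chart of that era:

```yaml
# Hypothetical Helm values override to turn on Istio's internal
# load-balancer gateway (disabled by default in the linked values.yaml).
gateways:
  istio-ilbgateway:
    enabled: true
```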
/test pull-knative-eventing-integration-tests
Replace the Istio VirtualService used by Channels to send traffic to a Bus. The VirtualService did two things for us:

1. connected the Channel's Service to the Bus's Service
2. set the request's Host header to indicate the channel and namespace

An ExternalName Service can forward traffic effectively to the Bus's Service, which handles item 1. Item 2 is addressed by adding a host name cache that indexes channels known to a bus by the Channel's Service's host name. A significant drawback to using an ExternalName Service is for users who do use Istio. Traffic sent from Istio-injected pods to a Channel will need to have an Istio ServiceEntry defined within the namespace that says traffic to the Channel should be handled by the service mesh. Otherwise the pod will return a 404 for the request to the Channel. Fixes knative#294
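The ExternalName Service that replaces the VirtualService might look something like this sketch. The channel name `aloha-channel` follows the naming used later in this thread; the bus Service name and namespace are illustrative assumptions:

```yaml
# Hypothetical sketch: the Channel's Service forwards to the Bus's
# Service via ExternalName. Bus name/namespace are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: aloha-channel
  namespace: default
spec:
  type: ExternalName
  externalName: stub-bus.knative-eventing.svc.cluster.local
```

This handles item 1 (routing); item 2 (identifying the channel) then falls to the bus inspecting the request's Host header.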
Rebased on top of master after the bus refactor
I don't know much about Istio, but from what I understand, this looks reasonable and also in line with what serving is going to be moving to with the serving-owned virtual service. It looks like the mesh is opening internally for any traffic on :80. The downside to this PR, I think, is that we are giving up on the mesh helping with auth control between inner-cluster components (which we are not using at the moment, but might in the future; though we could also enable it via sidecars?).

/lgtm

But someone who really understands our usage of Istio needs to review and approve.
New changes are detected. LGTM label has been removed.
/close

There isn't enough here to be worthy since it's going away.
@scothis: Closing this PR.

Details: In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Replace the Istio VirtualService used by Channels to send traffic to a
Bus.
The VirtualService did two things for us:

1. connected the Channel's Service to the Bus's Service
2. set the request's Host header to indicate the channel and namespace
An ExternalName Service can forward traffic effectively to the Bus's
Service, which handles item 1. Item 2 is addressed by dropping support
for short hostnames that do not include the namespace.
For a Channel named 'aloha' in the default namespace, an event producer
can make a request using any of these host names:
- aloha-channel.default
- aloha-channel.default.svc.cluster.local

While short names like aloha-channel will resolve to the correct bus,
ClusterBuses must be able to resolve the namespace in addition to the
channel since different namespaces can have channels with the same name.
A 404 will be returned if the Host header in the request does not
contain the namespace. Namespaced buses do not strictly require the
namespace to be included in the Host to resolve the correct Channel, but
will fail with a 404 for runtime consistency with ClusterBuses. We may
relax this requirement in the future.
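The Host-to-Channel resolution described above can be sketched roughly as follows. This is a hypothetical helper, not the actual pkg/buses API; the `-channel` Service-name suffix matches the `aloha-channel` example above:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveChannel maps a request's Host header to a channel name and
// namespace. Short host names that omit the namespace are rejected,
// mirroring the 404 behavior described above. Illustrative sketch only.
func resolveChannel(host string) (channel, namespace string, ok bool) {
	// Strip the cluster suffix if present, e.g.
	// "aloha-channel.default.svc.cluster.local" -> "aloha-channel.default".
	host = strings.TrimSuffix(host, ".svc.cluster.local")
	parts := strings.SplitN(host, ".", 2)
	if len(parts) < 2 {
		// Short name like "aloha-channel": the namespace is ambiguous.
		return "", "", false
	}
	return strings.TrimSuffix(parts[0], "-channel"), parts[1], true
}

func main() {
	for _, h := range []string{
		"aloha-channel.default.svc.cluster.local",
		"aloha-channel.default",
		"aloha-channel",
	} {
		c, ns, ok := resolveChannel(h)
		fmt.Println(h, "->", c, ns, ok)
	}
}
```

A real implementation would then consult the host name cache mentioned earlier and return a 404 when `ok` is false.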
Fixes #294