Channel spec: iteration 1 (#1420)
Conversation
> A channel logically receives events on its input domain and forwards them to its subscribers. Below is a specification for the generic parts of each _Channel_.
> A typical channel consists of a _Controller_ and a _Dispatcher_ pod.
@nachocano what about the webhook that we have in Kafka?
ping @Abd4llA
> Each _Channel Controller_ ensures the required tasks on the backing technology are applied. In this case a Kafka topic with the desired configuration is created, backing all messages from the channel.
Do you want to define the behaviour of the status field to indicate that the channel has created the backing resources and is ready to receive messages?
Yes, please. There's a bug open to address the fact that the Channelable duck type is not currently very useful for deciphering the Status of the Channel CRDs: #1375
> Different Channels handle failures in different ways:
> * gcp-pubsub - Exponential backoff, up to five minutes. Will continue retrying forever.
> * kafka - Immediate retry, no backoff. Will continue retrying forever.
Is this right? I don't see it retrying: https://github.com/knative/eventing/blob/master/contrib/kafka/pkg/dispatcher/dispatcher.go#L292
The error is always ignored.
This might be something we want to configure generically as part of the "endpoint" configuration for a Channel or Source.
A related issue is reliability: for any form of reliable delivery we will need to be able to configure QoS settings on endpoints.
Should retry and the other QoS behaviour then be specced for 0.8+? And tested?
Thanks for putting this together. I focused mainly on the 0.7 scope and jotted down some thoughts / questions.
> ```
> apiVersion: messaging.knative.dev/v1alpha1
> kind: InMemoryChannel
> metadata:
>   name: my-channel
> ```
> #### Aggregated Channelable ClusterRole
nit: maybe "Aggregated Channelable Manipulator ClusterRole"?
> Every CRD must create a corresponding ClusterRole that will be aggregated into the `channelable-manipulator` ClusterRole. This ClusterRole must include permissions to create, read, patch, and update the CRD's custom objects. Below is an example for the `KafkaChannel`:
Maybe read => list, watch? Also maybe: "... update the CRD's custom objects and their status."
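For reference, such an aggregated ClusterRole might look like the sketch below. The aggregation label, resource names, and verb set (including the list/watch/status additions suggested above) are assumptions for illustration, not final spec text:

```yaml
# Hypothetical example; the label selector and verbs are assumptions based on
# the suggestions in this thread, not the final spec.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kafka-channelable-manipulator
  labels:
    duck.knative.dev/channelable: "true"  # matched by the aggregationRule on channelable-manipulator
rules:
  - apiGroups:
      - messaging.knative.dev
    resources:
      - kafkachannels
      - kafkachannels/status
    verbs:
      - create
      - get
      - list
      - watch
      - patch
      - update
```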
> Channels MUST NOT alter an event that goes through them. All CloudEvent attributes, including the data attribute, MUST be received at the subscriber identical to how they were received by the Channel.
> Channels MUST attach a bearer token to all outgoing requests, likely in the form of a JWT. This bearer token MUST use an identity associated with the Channel, not the individual Subscription.
How does this jibe with the security requirements that @mikehelmick outlined in #705? That's geared towards Eventing constructs, and I just want to make sure that this is compatible with that approach.
Would this be something that we rely on the (optional) mesh to handle for us instead?
Just curious who handles the auth and how the creds get distributed, etc.
> TODO
> ### Data Plane
Should we say something (even if a TODO for the first draft) about the metrics that a channel should expose (failed deliveries, malformed incoming events, etc.), as well as some sort of perf metrics?
Looks like there's a dangling link?
> ### The ClusterChannelProvisioner
> Describes an abstract configuration of a Source system which produces events or a Channel system that receives and delivers events.
Should "Source" be "Importer" or is that change still pending?
Are we folding configuration of Source and Channel together? If so, it seems like it might be time to identify some common "endpoint" configuration type that describes either the client or server end of some protocol connection, where the direction of event flow is not assumed to be client->server (it will be for HTTP, but not for other protocols).
That's the old spec ... matching old terms?
Maybe clarify what the provisioner does - it's not just descriptive, it creates new channel instances does it not?
> ##### Generic

> If a Channel receives an event queueing request and is unable to parse a valid CloudEvent, then it MUST reject the request.
Last time I checked, there was no validation done (at least for the InMemoryChannel).
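For illustration, this is roughly the kind of minimal check a dispatcher could apply to a structured-mode request body before accepting it. The attribute names follow CloudEvents v0.2; the struct is a simplification for the sketch, not the SDK type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// event models only the required CloudEvents v0.2 attributes.
type event struct {
	SpecVersion string `json:"specversion"`
	Type        string `json:"type"`
	Source      string `json:"source"`
	ID          string `json:"id"`
}

// validate returns an error (i.e. "reject the request") when the body is not
// parseable JSON or is missing a required CloudEvent attribute.
func validate(body []byte) error {
	var e event
	if err := json.Unmarshal(body, &e); err != nil {
		return fmt.Errorf("not valid JSON: %w", err)
	}
	if e.SpecVersion == "" || e.Type == "" || e.Source == "" || e.ID == "" {
		return fmt.Errorf("missing required CloudEvent attribute")
	}
	return nil
}

func main() {
	ok := []byte(`{"specversion":"0.2","type":"dev.example","source":"/src","id":"1"}`)
	bad := []byte(`{"type":"dev.example"}`)
	fmt.Println(validate(ok), validate(bad))
}
```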
> Every event queueing request to the Channel will come with a bearer token, likely a JWT. The bearer token MUST be validated before any other work is done on the request. The specifics of how and what to validate will be identical to Broker ingress verification, which is being [defined](https://github.com/knative/eventing/issues/705#issuecomment-496722527).
> The Channel MUST pass through all tracing information as CloudEvents attributes. In particular, it MUST translate any incoming OpenTracing or B3 headers to the [Distributed Tracing Extension](https://github.com/cloudevents/spec/blob/v0.2/extensions/distributed-tracing.md). The Channel SHOULD sample and write traces to the location specified in [`config-tracing`](https://github.com/cloudevents/spec/blob/v0.2/extensions/distributed-tracing.md).
config-tracing link is incorrect, although I don't know the correct one for now.
This is still a bit speculative. We'll have to think about how to craft correct policies in this area.
There is overlap in that the channel egress will need to be able to craft a new JWT with an audience of the subscriber.
We might want to mark that this will be defined for 0.8?
> #### Output
> Channels MUST output CloudEvents. The output MUST be via a binding specified in the [CloudEvents specification](https://github.com/cloudevents/spec/tree/v0.2#cloudevents-documents). Every Channel MUST support sending events via the Structured Content Mode HTTP Transport Binding.
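As a sketch of what Structured Content Mode means on the wire: the whole event, attributes and data together, is serialized as JSON and sent with the `application/cloudevents+json` content type. Field names follow CloudEvents v0.2; this is not the SDK, just the shape of the request:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// structuredRequest builds an HTTP request carrying the event in structured
// content mode: the entire event in the body, with the content type
// identifying the event format rather than the data format.
func structuredRequest(url string, event map[string]interface{}) (*http.Request, error) {
	body, err := json.Marshal(event)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/cloudevents+json")
	return req, nil
}

func main() {
	// The subscriber URL and event contents are placeholders.
	req, err := structuredRequest("http://subscriber.example.com/", map[string]interface{}{
		"specversion": "0.2",
		"type":        "dev.example",
		"source":      "/src",
		"id":          "1",
		"data":        map[string]string{"hello": "world"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Content-Type"))
}
```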
When will we update the channel spec to CloudEvents 0.3?
Should version_070.md be renamed to 071 or 080 as changes are made, or should we just keep channel/spec.md with a list of changes?
Should there be some kind of migration guide from 0.6? Especially about default channels?
/test pull-knative-eventing-build-tests
/lgtm Let's submit this as-is and continue improving it here, rather than in @matzew's personal repo. It is marked as in-progress, so it should not be used by others for now.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: Harwayne, matzew. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Signed-off-by: Matthias Wessendorf <mwessend@redhat.com>
Fixes #1213
Proposed Changes
Conformance Question
What do we want in the conformance test section?
Let's mention it requires:
Some of this is already in the spec doc... but perhaps it's worth explicitly adding it to some conformance section?
I'd think we word these out and have a clear, human-readable section. From there we can easily create some TCK / test-conformance-kit?
Please have a look:
@lberk @aslom @alanconway @n3wscott @Harwayne @nachocano