This is the producer side of #815
When sending events into Knative Eventing today, those events are usually sent to a `Channel`, specified by a `sink` object reference in the source custom resource. An example:
```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob-source
spec:
  schedule: '* * * * *'
  data: '{"message": "Hello world!"}'
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: testchannel
```
This requires making a decision every time a new source is created: which `Channel` should this send events to, and do I need to create that `Channel`?
What's lacking is a way to get events into Knative Eventing without having to make a decision about how that event gets routed or delivered. Ideally, there'd be a general catch-all sink that every event could get sent to. And perhaps that's a configured default in the system, so that the `sink` property of event sources becomes optional.
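One way such a configured default could look is a per-namespace (or installation-wide) ConfigMap that source controllers fall back to when `spec.sink` is omitted. The following is only a sketch; the ConfigMap name and keys are hypothetical, not an existing Knative API:

```yaml
# Hypothetical default-sink ConfigMap; the name and data keys are
# illustrative only, not part of any current Knative release.
apiVersion: v1
kind: ConfigMap
metadata:
  name: default-sink
  namespace: knative-eventing
data:
  # Sources created without spec.sink would resolve to this object reference.
  default-sink.apiVersion: eventing.knative.dev/v1alpha1
  default-sink.kind: Channel
  default-sink.name: default
```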
Using the example from above, I'd like to be able to modify it perhaps as below:
```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob-source
spec:
  schedule: '* * * * *'
  data: '{"message": "Hello world!"}'
```
This would dump cron job events into Knative Eventing, once per minute, with no worry about how those events are being routed or what is consuming them.
Or, if we still prefer a sink to be specified, it should be a well-known sink that's static across an entire Knative installation or at least static within a single namespace. Something like:
```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob-source
spec:
  schedule: '* * * * *'
  data: '{"message": "Hello world!"}'
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Bucket
    name: default
```
`Bucket` here is just a made-up name. Other proposals have referred to this as `Broker` or `Router`. The point is to not require any kind of decision around routing or delivery at the time the event source is created.
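To illustrate how the routing decision could move to consumption time instead, a consumer-side resource in the style of the `Broker`/`Trigger` proposals might look like the sketch below. The `Trigger` kind and all of its fields here are hypothetical, drawn from the proposal discussions rather than a finalized API:

```yaml
# Hypothetical Trigger; kind, filter, and subscriber fields follow the
# Broker/Trigger proposal sketches, not a finalized API.
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: cron-consumer
spec:
  # Select events out of the shared catch-all sink by attribute,
  # so the producer never had to pick a route.
  filter:
    sourceAndType:
      type: dev.knative.cronjob.event
  # Deliver matching events to a subscriber service.
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: event-display
```

With something like this, the `CronJobSource` above stays sink-free, and each consumer declares its own interest independently.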
See Knative Eventing use cases and product requirements - 2019 and Broker and Trigger for some prior discussion and ideas around these topics. Those docs should be visible to anyone in the knative-users Google Group.