Kafka dispatcher: no need for a StatefulSet (#972)
Conversation
evankanderson left a comment:
It looks like there is no explicit Service here to address the dispatcher. I'm assuming that is okay?
/lgtm
/approve
/hold
(Remove the hold with /hold cancel assuming that you don't need to add a Service here.)
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: dubee, evankanderson, matzew.
@matzew:

@syedriko ah, right - thanks! Will add this...
@evankanderson right. I also see no specific need. I tested it, and also killed the Pod, while I had a source sinking to a Kafka channel and a subscription for it. The Pod got recreated, and eventually the "dumper" service, subscribed to the channel, continued receiving messages. Perhaps @neosab has some thoughts?
@nak3, mind looking too?
LGTM 👍 I have tested locally with the following env and operations:
This seems fine to me given my knowledge of the behavior and guarantees of Deployments and Kafka consumers, but I'd like @neosab to weigh in on whether there was any reason to choose a StatefulSet in the first place.
/lgtm
Can you change the title to include Kafka?
sure thing
/hold cancel
In case of an upgrade from 0.5 to 0.6, I noticed that the old Kafka dispatcher stays alive, so we end up with two dispatchers. The old dispatcher should be deleted, or notes must be added telling users to delete it.
@matzew, can you add a release note section to this PR? I'd also like to know what happens when both dispatchers exist. Do subscriptions get double deliveries?
From what I observed, both dispatchers will dispatch the event, because the corresponding Service selector and the Pod labels are the same.
OK, that needs to be very clearly stated in the release notes then.
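For the release notes, the cleanup an upgrading user would run might look like the following sketch. The StatefulSet name (`kafka-ch-dispatcher`) and namespace (`knative-eventing`) are assumptions for illustration and should be checked against the actual 0.5 installation:

```shell
# Assumption: the old 0.5 dispatcher StatefulSet is named
# "kafka-ch-dispatcher" and lives in the "knative-eventing" namespace.
# Verify the real name first:
kubectl get statefulsets -n knative-eventing

# Then delete the leftover StatefulSet so that only the new
# Deployment-based dispatcher serves subscriptions (avoiding the
# double-delivery issue caused by both Pods matching the same
# Service selector):
kubectl delete statefulset kafka-ch-dispatcher -n knative-eventing
```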
The broker itself needs to be a StatefulSet, but the dispatcher is just a client/consumer, hence a Deployment is fine.
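For context, a minimal sketch of the dispatcher as a plain Deployment. The names, labels, and image placeholder below are illustrative assumptions, not the actual manifest in this PR:

```yaml
# Hypothetical sketch: dispatcher as a Deployment instead of a StatefulSet.
# All names, labels, and the image below are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-ch-dispatcher        # assumed name
  namespace: knative-eventing      # assumed namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      messaging.knative.dev/role: dispatcher   # assumed label
  template:
    metadata:
      labels:
        messaging.knative.dev/role: dispatcher
    spec:
      containers:
        - name: dispatcher
          image: <kafka-channel-dispatcher-image>   # placeholder
```

Because a Deployment's Pods are fungible and Kafka consumer groups rebalance partitions when a consumer disappears, a killed dispatcher Pod can be recreated and resume consuming, which matches the behavior @matzew observed when testing.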
@grantr I guess we need some release notes 😄
Locally tested with a 0.4.0 installation and this branch (and the Strimzi 0.11 release). Messages are distributed from the dispatcher.