# [WIP] Add NATS Streaming as a new bus implementation #491
**`config/buses/natss/100-natss.yaml`** (new file, 101 lines):

```yaml
apiVersion: v1
```
> **Contributor:** The general model other buses have followed is to separate the infrastructure from the bus implementation. For example, with the Kafka bus, deploying the Kafka broker is independent of deploying the Kafka bus. It's good to have a default NATS install that the bus can use, but we should move this config into a subdirectory, update the bus to use a configured NATS install, and update the readme for the new install step.
> **Contributor:** It should actually be embedded into the controller for the provisioner for the natss channels, and then published into Kubernetes when a channel is provisioned with the natss cluster channel provisioner.
```yaml
kind: ConfigMap
```
> **Contributor:** Bit of a nit: could you organize this? It makes it easier to parse mentally :)

> **Author:** Done.
```yaml
metadata:
  labels:
    app: nats-streaming
  name: nats-streaming
  namespace: knative-eventing
data:
  gnatsd.conf: |
    # configuration file used to override default NATS server settings
  stan.conf: |
    # content of configuration file used to override default NATS Streaming server settings
---
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming
  namespace: knative-eventing
  labels:
    app: nats-streaming
spec:
  type: ClusterIP
  ports:
  - name: client
    port: 4222
    protocol: TCP
    targetPort: client
  selector:
    app: nats-streaming
  sessionAffinity: None
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nats-streaming
  namespace: knative-eventing
  labels:
    app: nats-streaming
spec:
  serviceName: nats-streaming
  replicas: 1
  selector:
    matchLabels:
      app: nats-streaming
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: nats-streaming
    spec:
      containers:
      - name: nats-streaming
        image: nats-streaming:0.11.0
        imagePullPolicy: IfNotPresent
        args:
        - -D
        - -SD
        - --cluster_id=knative-nats-streaming
```
> **Contributor:** Is this something that needs to be possibly updated when deploying?

> **Author:** Yes, this is a kind of shared "security context" between all NATS Streaming servers running in a cluster and all their clients. For now it's only a simple deployment without clustering.
||
| - --http_port=8222 | ||
| - --max_age=24h | ||
| - --store=FILE | ||
| - --dir=/var/lib/nats-streaming/core-nats-streaming/$(POD_NAME) | ||
| - --port=4222 | ||
| - --config=/etc/nats-streaming/core-nats-streaming/gnatsd.conf | ||
| - --stan_config=/etc/nats-streaming/core-nats-streaming/stan.conf | ||
| env: | ||
| - name: POD_NAME | ||
| valueFrom: | ||
| fieldRef: | ||
| fieldPath: metadata.name | ||
| ports: | ||
| - containerPort: 4222 | ||
| name: client | ||
| protocol: TCP | ||
| - containerPort: 8222 | ||
| name: monitoring | ||
| protocol: TCP | ||
| volumeMounts: | ||
| - mountPath: /var/lib/nats-streaming/core-nats-streaming | ||
| name: datadir | ||
| - mountPath: /etc/nats-streaming/core-nats-streaming | ||
| name: config-volume | ||
| resources: | ||
| requests: | ||
| cpu: "100m" | ||
| limits: | ||
| memory: "32M" | ||
|
> **Contributor:** This seems low, is it enough?

> **Author:** It's enough for this simple, not-production-ready "bus" implementation. We will come later with a NATS Streaming cluster that is integrated more deeply into Eventing.
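For reference on the quantity suffixes used in the `requests`/`limits` above, a quick sketch of the Kubernetes unit semantics (the numbers below are just unit arithmetic, not measured usage):

```python
# Kubernetes resource quantity suffixes, as used in the manifest above.
# "100m" CPU means 100 millicores, i.e. a tenth of one core.
cpu_cores = 100 / 1000

# "32M" is decimal megabytes; "32Mi" (mebibytes) would be slightly larger.
mem_32M = 32 * 10**6    # 32,000,000 bytes
mem_32Mi = 32 * 2**20   # 33,554,432 bytes

print(cpu_cores)           # 0.1
print(mem_32Mi - mem_32M)  # 1554432 (bytes of difference between M and Mi)
```

Worth double-checking which of `32M` or `32Mi` was intended; the difference is about 1.5 MB.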
```yaml
      volumes:
      - configMap:
          name: nats-streaming
        name: config-volume
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: "1Gi"
```
> **Contributor:** Is this for the persistence of the messages? If so, is there a way to get a rough idea of how this might be tweaked depending on message rates, sizes, etc.?

> **Author:** Yes, this is for persistence of all messages. We have to improve the strategy for the persistence layer in the future, but it's enough for this simple example.
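To put a rough number on the reviewer's sizing question: with `--store=FILE`, retained data is approximately publish rate × average message size × the `--max_age` window (24h here). A back-of-the-envelope sketch with hypothetical inputs:

```python
# All inputs are hypothetical placeholders, not measured values.
msg_rate_per_sec = 50          # assumed average publish rate
avg_msg_bytes = 2 * 1024       # assumed 2 KiB average message
retention_sec = 24 * 60 * 60   # matches --max_age=24h above

bytes_retained = msg_rate_per_sec * avg_msg_bytes * retention_sec
print(round(bytes_retained / 2**30, 1), "GiB")  # prints: 8.2 GiB
```

At these assumed rates the retained data would far exceed the `1Gi` claim above, so the claim size (or `--max_age`) would need tuning per workload.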
**New file** (20 lines) — the `natss` Bus definition:

```yaml
apiVersion: channels.knative.dev/v1alpha1
kind: Bus
metadata:
  name: natss
spec:
  provisioner:
    name: provisioner
    image: github.com/knative/eventing/pkg/buses/natss/provisioner
    args: [
      "-logtostderr",
      "-stderrthreshold", "INFO",
    ]
  dispatcher:
    name: dispatcher
    image: github.com/knative/eventing/pkg/buses/natss/dispatcher
    args: [
      "-logtostderr",
      "-stderrthreshold", "INFO",
    ]
```
**New file** (9 lines) — README for the bus:

```markdown
# NATS Streaming - Knative Bus
```
> **Contributor:** The current model does not have buses, and sorry to say we will have to pivot this implementation to conform to the ideas of ClusterChannelProvisioners.
````markdown
Deployment steps:
1. Setup [Knative Eventing](../../../DEVELOPMENT.md)
1. Apply the 'natss' bus:
   ```ko apply -f config/buses/natss/```
1. Create Channels that reference the 'natss' bus

The NATS Streaming bus uses a simple NATS Streaming setup; see [100-natss.yaml](./100-natss.yaml).
````
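For the last step above, a Channel referencing this bus might look like the following sketch (the `spec.bus` field and the `my-channel` name are assumptions based on the bus-era channel model, not taken from this PR):

```yaml
apiVersion: channels.knative.dev/v1alpha1
kind: Channel
metadata:
  name: my-channel
spec:
  bus: natss
```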
> I would expect these new dependencies to also update the third_party/VENDOR-LICENSE file. Did you run hack/update-deps.sh?