122 changes: 121 additions & 1 deletion Gopkg.lock

Some generated files are not rendered by default.

12 changes: 12 additions & 0 deletions Gopkg.toml
@@ -96,3 +96,15 @@ required = [
name = "sigs.k8s.io/controller-runtime"
# HEAD as of 2018-09-19
revision = "5373e8e1f3188ff4266902a6fc86372bc14b3815"

[[override]]
Contributor:

I would expect these new dependencies to also update the third_party/VENDOR-LICENSE file. Did you run hack/update-deps.sh?

name = "github.com/nats-io/go-nats"
version = "1.6.0"

[[override]]
name = "github.com/nats-io/go-nats-streaming"
version = "0.4.0"

[[override]]
name = "github.com/nats-io/nats-streaming-server"
version = "0.11.0"
101 changes: 101 additions & 0 deletions config/buses/natss/100-natss.yaml
@@ -0,0 +1,101 @@
apiVersion: v1
Contributor:

The general model other buses have followed is to separate the infrastructure from the bus implementation. For example, with the Kafka bus, deploying the Kafka broker is independent of deploying the Kafka bus.

It's good to have a default NATS install that the bus can use, but we should move this config into a subdirectory, update the bus to use a configured NATS install, and update the README with the new install step.

Contributor:

It should actually be embedded into the controller for the natss channel provisioner, and then published into Kubernetes when a channel is provisioned with the natss cluster channel provisioner.
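
Roughly, that would mean the provisioner controller carries these manifests (or their Go equivalents) and creates them on demand. A minimal sketch under that assumption, using the controller-runtime client already vendored in Gopkg.toml; `ensureNatssInfra` and `makeNatssStatefulSet` are hypothetical helpers, not code from this PR:

```go
package provisioner

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ensureNatssInfra is a hypothetical helper: when a channel is provisioned
// with the natss cluster channel provisioner, the controller would publish
// the embedded NATS Streaming objects into the cluster instead of relying
// on a separate `ko apply` of 100-natss.yaml.
func ensureNatssInfra(ctx context.Context, c client.Client) error {
	ss := makeNatssStatefulSet() // assumed to mirror the StatefulSet defined later in this file
	if err := c.Create(ctx, ss); err != nil && !apierrors.IsAlreadyExists(err) {
		return err
	}
	return nil
}

// makeNatssStatefulSet would build the nats-streaming StatefulSet in Go;
// the body is elided here.
func makeNatssStatefulSet() *appsv1.StatefulSet {
	return &appsv1.StatefulSet{}
}
```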

kind: ConfigMap
Contributor:

Bit of a nit, could you organize this as:
apiVersion
kind
metadata
data

Makes it easier to parse mentally :)

Author:

Done

metadata:
  labels:
    app: nats-streaming
  name: nats-streaming
  namespace: knative-eventing
data:
  gnatsd.conf: |
    # configuration file used to override default NATS server settings
  stan.conf: |
    # content of configuration file used to override default NATS Streaming server settings
---
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming
  namespace: knative-eventing
  labels:
    app: nats-streaming
spec:
  type: ClusterIP
  ports:
  - name: client
    port: 4222
    protocol: TCP
    targetPort: client
  selector:
    app: nats-streaming
  sessionAffinity: None
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nats-streaming
  namespace: knative-eventing
  labels:
    app: nats-streaming
spec:
  serviceName: nats-streaming
  replicas: 1
  selector:
    matchLabels:
      app: nats-streaming
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: nats-streaming
    spec:
      containers:
      - name: nats-streaming
        image: nats-streaming:0.11.0
        imagePullPolicy: IfNotPresent
        args:
        - -D
        - -SD
        - --cluster_id=knative-nats-streaming
Contributor:

Is this something that might need to be updated when deploying?

Author:

Yes, this is a kind of shared "security context" between all NATS Streaming servers running in a cluster and all their clients. For now it's only a simple deployment without clustering.
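
For context, every client has to pass that same cluster ID to `stan.Connect`. A minimal hedged sketch using the vendored go-nats-streaming package; the NATS URL and client ID below are illustrative, not taken from this PR:

```go
package main

import (
	"log"

	stan "github.com/nats-io/go-nats-streaming"
)

func main() {
	// The first argument must match --cluster_id=knative-nats-streaming;
	// the URL points at the nats-streaming Service in knative-eventing.
	sc, err := stan.Connect(
		"knative-nats-streaming",
		"example-client", // client IDs must be unique per connection
		stan.NatsURL("nats://nats-streaming.knative-eventing.svc.cluster.local:4222"),
	)
	if err != nil {
		log.Fatalf("failed to connect to NATS Streaming: %v", err)
	}
	defer sc.Close()

	if err := sc.Publish("example-subject", []byte("hello")); err != nil {
		log.Fatalf("publish failed: %v", err)
	}
}
```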

        - --http_port=8222
        - --max_age=24h
        - --store=FILE
        - --dir=/var/lib/nats-streaming/core-nats-streaming/$(POD_NAME)
        - --port=4222
        - --config=/etc/nats-streaming/core-nats-streaming/gnatsd.conf
        - --stan_config=/etc/nats-streaming/core-nats-streaming/stan.conf
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 4222
          name: client
          protocol: TCP
        - containerPort: 8222
          name: monitoring
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/nats-streaming/core-nats-streaming
          name: datadir
        - mountPath: /etc/nats-streaming/core-nats-streaming
          name: config-volume
        resources:
          requests:
            cpu: "100m"
          limits:
            memory: "32M"
Contributor:

This seems low, is it enough?

Author:

It's enough for this simple, not-production-ready "bus" implementation. We will follow up later with a NATS Streaming cluster that is more deeply integrated into Eventing.

      volumes:
      - configMap:
          name: nats-streaming
        name: config-volume
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: "1Gi"
Contributor:

Is this for the persistence of the messages? If so, is there a way to get a rough idea how this might be tweaked depending on message rates / sizes, etc.?

Author (@ghost, Oct 5, 2018):

Yes, this is for persistence of all messages. We have to improve the strategy for the persistence layer in the future, but it's enough for this simple example.
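
As a rough, back-of-envelope guide (not from this PR): with `--store=FILE`, disk usage grows with message rate × average message size × `--max_age`, so the 1Gi claim above only covers fairly low traffic. For example:

```go
package main

import "fmt"

// Illustrative numbers only; substitute your own rate and message size.
const (
	msgsPerSecond = 10
	avgMsgBytes   = 1024
	maxAgeSeconds = 24 * 60 * 60 // matches --max_age=24h
)

func main() {
	required := int64(msgsPerSecond) * avgMsgBytes * maxAgeSeconds
	fmt.Printf("~%.2f GiB needed before messages age out\n",
		float64(required)/(1<<30))
	// Prints ~0.82 GiB, already close to the 1Gi volumeClaimTemplate above.
}
```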

20 changes: 20 additions & 0 deletions config/buses/natss/200-natss-bus.yaml
@@ -0,0 +1,20 @@
apiVersion: channels.knative.dev/v1alpha1
kind: Bus
metadata:
  name: natss
spec:
  provisioner:
    name: provisioner
    image: github.com/knative/eventing/pkg/buses/natss/provisioner
    args: [
      "-logtostderr",
      "-stderrthreshold", "INFO",
    ]
  dispatcher:
    name: dispatcher
    image: github.com/knative/eventing/pkg/buses/natss/dispatcher
    args: [
      "-logtostderr",
      "-stderrthreshold", "INFO",
    ]
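
Not part of this diff, but for orientation: the dispatcher image referenced above would, roughly, subscribe to a NATS Streaming subject per channel and forward each message to the channel's subscribers. A hedged sketch of that subscribe side with go-nats-streaming; the subject naming and the `forwardMessage` helper are assumptions, not the actual bus code:

```go
package dispatcher

import (
	"log"

	stan "github.com/nats-io/go-nats-streaming"
)

// forwardMessage is a hypothetical stand-in for the dispatcher's HTTP
// delivery to a channel's subscribers.
func forwardMessage(data []byte) error {
	log.Printf("forwarding %d bytes", len(data))
	return nil
}

// subscribeChannel sets up a durable subscription for one channel, so the
// dispatcher can resume where it left off after a restart.
func subscribeChannel(sc stan.Conn, subject string) (stan.Subscription, error) {
	return sc.Subscribe(subject, func(m *stan.Msg) {
		if err := forwardMessage(m.Data); err != nil {
			// The real dispatcher would handle retries; here we only log.
			log.Printf("delivery of message %d failed: %v", m.Sequence, err)
		}
	}, stan.DurableName("natss-dispatcher-"+subject))
}
```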

9 changes: 9 additions & 0 deletions config/buses/natss/README.md
@@ -0,0 +1,9 @@
# NATS Streaming - Knative Bus
Contributor:

The current model does not have buses, and sorry to say, we will have to pivot this implementation to conform to the ideas of ClusterChannelProvisioners.


Deployment steps:
1. Set up [Knative Eventing](../../../DEVELOPMENT.md)
1. Apply the 'natss' bus:
```ko apply -f config/buses/natss/```
1. Create Channels that reference the 'natss' bus

The natss bus uses a simple NATS Streaming setup without clustering; see [100-natss.yaml](./100-natss.yaml).