Adds Kafka Channel Provisioner Controllers #468
@@ -0,0 +1,78 @@

# Apache Kafka Channels

Deployment steps:
1. Setup [Knative Eventing](../../../DEVELOPMENT.md).
1. If not done already, install an Apache Kafka cluster. There are two choices:
   * A simple installation of [Apache Kafka](broker).
   * A production-grade installation using the [Strimzi Kafka Operator](strimzi).
     Installation [guides](http://strimzi.io/quickstarts/) are provided for
     Kubernetes and OpenShift.
1. Now that Apache Kafka is installed, configure the `bootstrap_servers` value
   in the `kafka-channel-controller-config` ConfigMap, located inside the
   `config/provisioners/kafka/kafka-provisioner.yaml` file:
   ```yaml
   ...
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: kafka-channel-controller-config
     namespace: knative-eventing
   data:
     # Broker URLs for the provisioner
     bootstrap_servers: kafkabroker.kafka:9092
   ...
   ```
   > Note: `bootstrap_servers` must contain the address of at least one broker
   > of your Apache Kafka cluster. If you are using Strimzi, update the value
   > to `my-cluster-kafka-bootstrap.mynamespace:9092`.
1. Apply the `kafka-channel` ClusterChannelProvisioner, Controller, and Dispatcher:
   ```shell
   ko apply -f config/provisioners/kafka/kafka-provisioner.yaml
   ```
1. Create Channels that reference the `kafka-channel` provisioner:
   ```yaml
   apiVersion: eventing.knative.dev/v1alpha1
   kind: Channel
   metadata:
     name: my-kafka-channel
   spec:
     provisioner:
       apiVersion: eventing.knative.dev/v1alpha1
       kind: ClusterChannelProvisioner
       name: kafka-channel
   ```
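Such a Channel is typically consumed through a Subscription. The following is only a sketch (not part of this PR): the subscriber `Service` named `my-service` is assumed to exist, and field names may differ slightly across eventing versions.

```yaml
# Hypothetical Subscription wiring my-kafka-channel to a subscriber.
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: my-kafka-subscription
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: my-kafka-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: my-service   # assumed to exist; not defined in this PR
```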
## Components

The major components are:
* ClusterChannelProvisioner Controller
* Channel Controller
* Channel Controller ConfigMap
* Channel Dispatcher
* Channel Dispatcher ConfigMap

The ClusterChannelProvisioner Controller and the Channel Controller are
colocated in one Pod:
```shell
kubectl get deployment -n knative-eventing kafka-channel-controller
```

The Channel Controller ConfigMap is used to configure the `bootstrap_servers`
of your Apache Kafka installation:
```shell
kubectl get configmap -n knative-eventing kafka-channel-controller-config
```

The Channel Dispatcher receives and distributes all events:
```shell
kubectl get statefulset -n knative-eventing kafka-channel-dispatcher
```

The Channel Dispatcher ConfigMap is used to send information about Channels and
Subscriptions from the Channel Controller to the Channel Dispatcher:
```shell
kubectl get configmap -n knative-eventing kafka-channel-dispatcher-config-map
```
@@ -0,0 +1,13 @@

# Apache Kafka - simple installation

1. For an installation of a simple (**non-production**) Apache Kafka cluster, a setup is provided:
   ```shell
   kubectl create namespace kafka
   kubectl apply -n kafka -f kafka-broker.yaml
   ```
   > Note: If you are running Knative on OpenShift, you will need to run the
   > following command first to allow the Kafka broker to run as root:
   ```shell
   oc adm policy add-scc-to-user anyuid -z default -n kafka
   ```
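Once the broker and ZooKeeper Pods are running, one way to smoke-test the installation (a sketch, not part of this PR) is a throwaway client Pod that lists topics. The image and the ZooKeeper address are taken from the broker manifest; the Pod name is hypothetical, and the script path assumes the Kafka binaries are on the image's `PATH`.

```yaml
# Hypothetical client Pod for smoke-testing the broker.
apiVersion: v1
kind: Pod
metadata:
  name: kafka-client        # hypothetical name
  namespace: kafka
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: wurstmeister/kafka:1.1.0   # same image as the broker
    # Lists topics against the ZooKeeper instance from kafka-broker.yaml.
    command: ["kafka-topics.sh", "--list", "--zookeeper", "zookeeper.kafka:2181"]
```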
Continue the configuration of Knative Eventing with [step `3`](../).
@@ -0,0 +1,87 @@

########################################## KAFKA BROKER ######################################
# The following does not need to live in the same namespace as the bus.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka-broker
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka-broker
    spec:
      containers:
      - name: kafka-broker
        image: wurstmeister/kafka:1.1.0
        ports:
        - containerPort: 9092
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KAFKA_BROKER_ID
          value: "0"
        - name: KAFKA_LISTENERS
          value: "INTERNAL://:9093,EXTERNAL://:9092"
        - name: KAFKA_ADVERTISED_LISTENERS
          value: "INTERNAL://:9093,EXTERNAL://kafkabroker.$(MY_POD_NAMESPACE):9092"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
        - name: KAFKA_INTER_BROKER_LISTENER_NAME
          value: "INTERNAL"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zookeeper.$(MY_POD_NAMESPACE):2181"
        - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
          value: "false"
---
apiVersion: v1
kind: Service
metadata:
  name: kafkabroker
spec:
  type: NodePort
  selector:
    app: kafka-broker
  ports:
  - port: 9092
    name: kafka
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zookeeper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: wurstmeister/zookeeper:3.4.6
        ports:
        - containerPort: 2181
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  selector:
    app: zookeeper
  ports:
  - port: 2181
    name: zookeeper
    protocol: TCP
@@ -0,0 +1,85 @@

# Copyright 2018 The Knative Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: eventing.knative.dev/v1alpha1
kind: ClusterChannelProvisioner
metadata:
  name: kafka-channel
spec: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka-channel-controller
  namespace: knative-eventing
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kafka-channel-controller
rules:
- apiGroups: ["eventing.knative.dev"]
  resources: ["clusterchannelprovisioners", "channels"]
  verbs: ["get", "watch", "list", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kafka-channel-controller-manage
subjects:
- kind: ServiceAccount
  name: kafka-channel-controller
  namespace: knative-eventing
roleRef:
  kind: ClusterRole
  name: kafka-channel-controller
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel-controller-config
  namespace: knative-eventing
data:
  # Broker URLs for the provisioner
  bootstrap_servers: kafkabroker.kafka:9092
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kafka-channel-controller
  namespace: knative-eventing
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka-channel-controller
    spec:
      serviceAccountName: kafka-channel-controller
      containers:
      - name: kafka-channel-controller-controller
        image: github.com/knative/eventing/pkg/provisioners/kafka
        volumeMounts:
        - name: kafka-channel-controller-config
          mountPath: /etc/config-provisioner
      volumes:
      - name: kafka-channel-controller-config
        configMap:
          name: kafka-channel-controller-config

---

Review discussion (attached to the `volumes:` section above):

**Member:** this would tie the … Would it make sense to have a more generic approach, that the …

**Contributor (author):** You are right, the broker info is read from the ConfigMap. I tried to retain the old Kafka bus' behavior for this initial work. I am fine with having …

**Member:** It sounds like this is resolved, but I'd want to be able to use a "default Kafka" as a developer without needing to carry around credentials & endpoint addresses on each object. One possible middle ground would be to optionally reference a profile in the Channel, and then have the ConfigMap define the acceptable profiles. Let's start with the simple one-Kafka-per-cluster approach, and then see what customer scenarios actually apply. (For example, the default might be a per-namespace one rather than a global one.)

**Contributor (author):** Sounds good. For this initial implementation, we can keep the endpoint address on the ConfigMap.
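The "profile" middle ground raised in the review could look roughly like the sketch below. This is purely illustrative: none of these fields or annotation keys exist in the PR, and the profile format is invented for the example.

```yaml
# Hypothetical: the ConfigMap defines named broker profiles.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel-controller-config
  namespace: knative-eventing
data:
  profiles: |
    default:
      bootstrap_servers: kafkabroker.kafka:9092
    strimzi:
      bootstrap_servers: my-cluster-kafka-bootstrap.mynamespace:9092
---
# Hypothetical: a Channel opts into a profile by name.
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: my-kafka-channel
  annotations:
    eventing.knative.dev/kafka-profile: strimzi   # hypothetical annotation key
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: kafka-channel
```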