adds chart for kafka#144
Conversation

Can a kubernetes member verify that this patch is reasonable to test? If so, please reply with "@k8s-bot ok to test" on its own line. Regular contributors should join the org to skip this step.

@k8s-bot ok to test

@faraazkhan can you sign the Linux Foundation CLA?

@prydonius I am trying. I added my GitHub account and everything; it says I am supposed to get an email with the CLA, but nothing so far.

Also, it looks like we are running an old version of Helm in Jenkins right now. Update: verified Jenkins is running a newer version. Not sure why the error then 😞

@faraazkhan Thanks for signing the CLAs! The namespace shouldn't be set in the manifests, since Helm manages which namespace a chart's resources get installed into using the
Can you add https://github.com/solsson/dockerfiles/tree/master/kafka?
Helm now supports specifying dependencies using a requirements.yaml file. Can you update this to remove this vendored version of zookeeper, and instead just add a requirements.yaml that can be used to bring in the dependency?
See https://github.com/kubernetes/helm/blob/master/docs/charts.md#managing-dependencies-with-requirementsyaml and https://github.com/kubernetes/charts/tree/master/incubator/patroni for an example.
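For reference, a requirements.yaml along those lines could look like the sketch below. The version mirrors the zookeeper-0.1.1 chart label seen later in this thread, but the repository URL is an assumption for illustration, not something specified in this PR:

```yaml
# requirements.yaml -- declares the zookeeper chart as a dependency
# instead of keeping a vendored copy under charts/.
# NOTE: repository URL is an assumption; version mirrors
# the zookeeper-0.1.1 label that appears elsewhere in this thread.
dependencies:
  - name: zookeeper
    version: 0.1.1
    repository: https://kubernetes-charts-incubator.storage.googleapis.com/
```

Running `helm dependency update` would then fetch the dependency into charts/ rather than vendoring it by hand.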
Can you remove these trailing lines?

@prydonius Jenkins is currently failing with

@k8s-bot e2e test this
chrislovecnm
left a comment
Multiple tweaks ... looks good

| ## Pre Requisites:

| * Kubernetes 1.3 with alpha APIs enabled
You are also using storage classes. You should probably mention both of those.

| ### Connecting to Kafka

| You can connect to Kafka by running a simple pod in the K8s cluster with a configuration like this:
You may want to add a notes.txt file with these instructions.
| - containerPort: 9092
|   name: kafka
| resources:
|   requests:
We need a limits section in here as well.
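For example, a resources section with both requests and limits might look like this (the CPU and memory values below are placeholders, not tuned recommendations for Kafka):

```yaml
# Placeholder sizing -- tune requests/limits for your workload.
resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 200m
    memory: 1Gi
```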
|   mountPath: /tmp/zookeeper
| volumes:
| - name: workdir
|   emptyDir: {}
Dumb question, but does this need persistence?
| - metadata:
|     name: datadir
|     annotations:
|       volume.alpha.kubernetes.io/storage-class: anything
Should we have this set up to do either a PVC or an emptyDir? Looking for an example ...
I think the emptyDir can be removed. I want to leave the PVC in. I will push that out shortly.
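For what it's worth, one way to support both would be a template switch like the sketch below. The value keys (persistence.enabled, persistence.size) are illustrative names, not values from this PR:

```yaml
# Sketch: keep the PVC when persistence is enabled, otherwise fall
# back to a throwaway emptyDir volume. Value names are assumptions.
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: {{ .Values.persistence.size }}
{{- else }}
volumes:
  - name: datadir
    emptyDir: {}
{{- end }}
```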
| pod.alpha.kubernetes.io/initialized: "true"
| pod.alpha.kubernetes.io/init-containers: '[
| ]'
| spec:
We need health check probes http://kubernetes.io/docs/user-guide/liveness/
Still needs a readiness probe. We can move into incubator without it.
I could just add the same test for readiness as well. Not sure how valuable that would be though, in Kafka's situation.
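As a sketch, a plain TCP check on the broker port avoids depending on another service, and the same check could serve both probes. The probe type and timing values below are assumptions for illustration, not what this PR finally used:

```yaml
# Sketch: TCP checks against the broker port declared above.
# Timings are placeholders, not tuned values.
livenessProbe:
  tcpSocket:
    port: 9092
  initialDelaySeconds: 30
  timeoutSeconds: 5
readinessProbe:
  tcpSocket:
    port: 9092
  initialDelaySeconds: 30
  timeoutSeconds: 5
```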

@viglesiasce any ideas on the build failure "W1023 20:38:31.781] Error: deletion error count 1: no objects visited"? Do we need to bump the Helm version we are using?
| *.bak
| *.tmp
| *~
| # Various IDEs
Is it best practice to add proprietary file types to a global .ignore file? Shouldn't the user be managing these?
Adds liveness probe. Adds resource limits. Improves README.md.

OK, I have addressed the comments here, but I am still getting the
Update: Please note that despite the error, all the resources seem to be deleted fine and I am able to reuse the Helm release name even in the same namespace with no errors (so purge worked just fine).

@k8s-bot e2e test this
chrislovecnm
left a comment
We still have the storage class issue, and the readiness probe is not defined, but we can merge this into incubator with these open items.
@viglesiasce any ideas about the build failures?
Thanks for the chart @faraazkhan!!!
| pod.alpha.kubernetes.io/initialized: "true"
| pod.alpha.kubernetes.io/init-containers: '[
| ]'
| spec:
Still needs a readiness probe. We can move into incubator without it.

Sorry for the delay @faraazkhan! Will look into the failures today.

With that deletion error, can you still see the resources in Kubernetes that were supposed to be deleted? That error message usually indicates that the deletion logic went to remove a resource from Kubernetes and the resource was missing.

One thing I noticed is that after deploying, the kf and kf-broker services end up with no endpoints. @faraazkhan is that expected? @technosophos could that be an issue?
NAME ENDPOINTS AGE
ep/demo-app-promchart 10.140.5.17:8080,10.140.5.18:8080,10.140.5.25:8080 + 2 more... 1d
ep/demo-grafana 10.140.5.21:3000 1d
ep/kubernetes 104.199.112.125:443 16d
ep/plinking-racoo-alerts 10.140.5.6:9093 5d
ep/plinking-racoo-server 10.140.5.7:9090 5d
ep/steely-panda-nginx 10.140.5.23:80 21h
ep/voting-goose-kf <none> 5m
ep/voting-goose-kf-broker <none> 5m
ep/voting-goose-zk 10.140.1.11:2888,10.140.1.13:2888,10.140.1.14:2888 + 3 more... 5m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/demo-app-promchart 10.143.244.167 <none> 80/TCP 1d
svc/demo-grafana 10.143.246.48 <none> 80/TCP 1d
svc/kubernetes 10.143.240.1 <none> 443/TCP 16d
svc/plinking-racoo-alerts 10.143.245.26 <none> 9093/TCP 5d
svc/plinking-racoo-server 10.143.244.45 <none> 80/TCP 5d
svc/steely-panda-nginx 10.143.242.164 <none> 80/TCP 21h
svc/voting-goose-kf 10.143.243.96 <none> 9092/TCP 5m
svc/voting-goose-kf-broker None <none> 9092/TCP 5m
svc/voting-goose-zk None <none> 2888/TCP,3888/TCP 5m

Houston, we have a problem with the services ;) @faraazkhan did you monkey with your selectors, or @viglesiasce do you think this is Helm?

@chrislovecnm @viglesiasce @technosophos sorry about the delay in responding. I just pushed a few changes that ensure that all the services created have endpoints. I see what @technosophos is saying, but I'm not sure what that resource could be; everything created in the Helm chart seems to be destroyed correctly:

Thanks @faraazkhan! Things are looking better wrt services. Can you add a NOTES file that explains next steps to the users?

@viglesiasce Can you please be specific? How can I improve this? Maybe I am not following a convention.

The NOTES.txt file. Look at the charts that are not in incubator; they should have examples.
| @@ -0,0 +1,36 @@
| You can connect to Kafka by running a simple pod in the K8s cluster with a configuration like this:
Helm will specifically look for a file called NOTES.txt, render it as a template, and print it out after installing/upgrading or running helm status on the release. This will allow you to print out working commands with the name of the release! See https://github.com/kubernetes/charts/blob/master/incubator/patroni/templates/NOTES.txt for an example :)
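A minimal templates/NOTES.txt along those lines might look like the sketch below. It reuses the -kf service suffix and the kubectl run client command that appear elsewhere in this thread; treat it as illustrative, not the file this PR ended up with:

```
To connect to Kafka, run a client pod in the cluster:

  kubectl run testclient -it --rm --image solsson/kafka:0.10.0.1 --command -- bash

Then start an interactive message producer against this release's broker:

  ./bin/kafka-console-producer.sh --broker-list {{ .Release.Name }}-kf:9092 --topic test1
```

Because the broker address is templated from {{ .Release.Name }} and uses the short service name, the printed commands work regardless of which namespace the chart was installed into.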
chrislovecnm
left a comment
This is also an issue:
root@ip-172-20-82-63:/var/log# kubectl -n kafka describe svc
Name: exacerbated-ar-kafka
Namespace: kafka
Labels: chart=kafka-0.1.0
component=exacerbated-ar-kafka
heritage=Tiller
release=exacerbated-ar
Selector: app=kafka,component=exacerbated-ar-kafka
Type: ClusterIP
IP: 100.69.243.160
Port: <unset> 9092/TCP
Endpoints: <none>
Session Affinity: None
No events.
Name: exacerbated-ar-kafka-bro
Namespace: kafka
Labels: chart=kafka-0.1.0
component=exacerbated-ar-kafka
heritage=Tiller
release=exacerbated-ar
Selector: app=kafka,component=exacerbated-ar-kafka
Type: ClusterIP
IP: None
Port: <unset> 9092/TCP
Endpoints: <none>
Session Affinity: None
No events.
Name: exacerbated-ar-zk
Namespace: kafka
Labels: app=zk
chart=zookeeper-0.1.1
component=exacerbated-ar-zk
heritage=Tiller
release=exacerbated-ar
Selector: app=zk,component=exacerbated-ar-zk
Type: ClusterIP
IP: None
Port: client 2181/TCP
Endpoints: 100.96.3.3:2181,100.96.4.5:2181,100.96.5.2:2181
Port: peer 2888/TCP
Endpoints: 100.96.3.3:2888,100.96.4.5:2888,100.96.5.2:2888
Port: leader-election 3888/TCP
Endpoints: 100.96.3.3:3888,100.96.4.5:3888,100.96.5.2:3888
Session Affinity: None
No events.
Name: zookeeper
Namespace: kafka
Labels: <none>
Selector: app=zk
Type: ClusterIP
IP: 100.65.108.22
Port: client 2181/TCP
Endpoints: 100.96.3.3:2181,100.96.4.5:2181,100.96.5.2:2181
Session Affinity: None

You have two services that do not have endpoints. I am uncertain what the purpose of these services is.

| ## Known Limitations

| * Namespace creation is not automated
This is an easy fix. It will need to happen for stable. This may be a reason why the unit tests are failing.
@faraazkhan any word on what these services are for?

@k8s-bot ok to test

Jenkins Charts e2e failed for commit ca656b5. Full PR test history. The magic incantation to run this job again is
viglesiasce
left a comment
- Move notes.txt to templates/NOTES.txt
- Update NOTES.txt to be templatized
- Remove dependencies in notes on a particular namespace
| kind: Pod
| metadata:
|   name: testclient
|   namespace: kafka
I wouldn't set the namespace explicitly here. Users may not have configured this namespace for installation.
You can make this a kubectl run command and avoid having the pod yaml here. Look at the postgresql chart as an example.
kubectl run client -it --rm --image solsson/kafka:0.10.0.1 --command -- bash
| To stop the listener session above press: Ctrl+C
|
| To start an interactive message producer session:
| kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list my-release-broker-kf.kafka.svc.cluster.local:9092 --topic test1
You can simplify the DNS name there and make it --broker-list {{ .Release.Name }}-kf:9092. Again, this makes the command work across namespaces.

Any update here?

@faraazkhan any updates here?
which might not matter because we no longer have a load-balancing service. These probes won't catch all failure modes, but if they fail we're pretty sure the container is malfunctioning. I found some sources recommending ./bin/kafka-topics.sh for probes, but to me it looks risky to introduce a dependency on some other service for such things. One such source is helm/charts#144. The zookeeper probe is from https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ An issue is that zookeeper's logs are quite verbose for every probe.
helm/charts#144 Beware: This image is 841 MB, compared to 301 for the current :latest. See README.md for upgrade instructions.
Adds a chart for Kafka based on the awesome work done here: https://github.com/Yolean/kubernetes-kafka and here: https://github.com/kubernetes/charts/tree/master/incubator/zookeeper