[WFLY-19898] remove PostgreSQL internal image configuration #972
ehsavoie left a comment
Since this was added as part of the k8s testing, I've added @kabir as reviewer, since he is the one who will know about this.
|
Maybe the Kubernetes failure means there is something else needed? |
kabir left a comment
I don't remember EXACTLY how this works, but I think this init container connects to todo-backend-postgresql:5432 in a loop to ensure that the postgresql server started by the Helm chart is up and running. I think this happens before trying to install WildFly.
With this enabled it works since the DB is up and running. Without it I see there are problems in the k8s tests, presumably because we are not waiting to install WildFly until we know postgres is running (i.e. what this block fixes).
Although this might be using a different version of postgres than the server image, we're only using the client in this initContainer. Once the initContainer check has passed, the initContainer is thrown away.
|
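For reference, the block under discussion looks roughly like this (reconstructed from the description above and the values.yaml diff further down the thread):

```yaml
wildfly:
  initContainers:
    - name: check-db-ready
      image: postgres:9.6.5
      # Loop until the postgres server behind the Helm chart's
      # todo-backend-postgresql service accepts connections.
      command: [ 'sh', '-c',
        'until pg_isready -h todo-backend-postgresql -p 5432;
         do echo waiting for database; sleep 2; done;' ]
```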
@kabir could we perhaps move this initContainer, or have an alternative that runs from somewhere in the kubernetes override dir for this quickstart? If not, I guess we should keep it as it is, and perhaps I remove it "later" from the dev branch |
|
I believe the initContainers block needs to be associated with the wildfly entry in the Helm chart. So (without much research) I believe it can't be moved |
|
What if we inject such content in the helm chart?
|
That might be possible by using |
|
Hi @kabir, I am trying to understand it.
|
Honestly, my concern with changing this at the moment is that we would probably also need to rework the docs, and it may be too late for that; we are one week away from feature freeze |
|
@kstekovi @emmartins Since the wildfly image is pushed to the k8s registry manually before running
Maybe something can be done in the WildFly DS config to reconnect when the DB is not up when WildFly tries to start, but that isn't my area of expertise. If not, we need to block starting WildFly until postgres is ready. Helm does not guarantee any ordering of things within the charts, so initContainers seem like the natural choice. In other words, we WANT WildFly to wait until postgres is up and running. Here is some background reading for why I went with this approach:
I have no idea how big/small this database image is. Note it contains no data; I am just using it for the pg_isready command. Perhaps the initContainers could instead use a busybox image and have commands to check the postgres server port via telnet or something. No idea if that is possible or not. Still, busybox probably has some size too (if space saving actually is a requirement for a quickstart for users to follow...).
Or why not unify that with the postgres version the Helm chart uses? Inspecting the postgres pod from Kubernetes, the image is called docker.io/bitnami/postgresql:16.0.0-debian-11-r13.
Or perhaps the Helm chart can be split into two, so the user first installs postgres and then installs wildfly (see the sketch after this comment). I don't know why this one is written as a single Helm chart; I think @ehsavoie or @jmesnil wrote the initial version. My only involvement with it has been getting it to work on OpenShift and Kubernetes CI. |
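For context, the split-chart alternative mentioned above would amount to a two-step install along these lines (hypothetical commands; the release names and chart references are illustrative assumptions, not taken from this repo):

```sh
# Hypothetical two-step install if the chart were split in two:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install todo-backend-postgresql bitnami/postgresql    # 1. postgres first
helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
helm install todo-backend wildfly/wildfly -f wildfly-values.yaml  # 2. then WildFly
```

With two releases the user controls the ordering explicitly, at the cost of the single-command install preferred below.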
Yes, I agree. I will just update the version of the image to the same one the PostgreSQL Helm chart uses. @kabir, thank you for the details.
I think this solution could be universal for any database.
The different types of DB run on different ports, though, so it would have to be explicitly configured where to check the connection. I don't like this.
That is also possible and could be universal for any database. But I like that everything can be installed with only one command, as it is now. |
Force-pushed from 9babecf to 548350f.
|
I think this PR could be simplified to the following; I think it is more straightforward than digging up the image version from a particular Bitnami chart version and trying to keep them in sync:

```diff
diff --git a/todo-backend/charts/values.yaml b/todo-backend/charts/values.yaml
index 882e1cd03..2b8766e48 100644
--- a/todo-backend/charts/values.yaml
+++ b/todo-backend/charts/values.yaml
@@ -3,6 +3,9 @@
 # Declare variables to be passed into your templates.
 postgresql:
+  image:
+    repository: bitnami/postgresql
+    tag: 17
   auth:
     username: todos-db
     password: todos-db
@@ -46,7 +49,7 @@ wildfly:
         value: "96"
   initContainers:
     - name: check-db-ready
-      image: postgres:9.6.5
+      image: bitnami/postgresql:17
       command: [ 'sh', '-c',
         'until pg_isready -h todo-backend-postgresql -p 5432;
         do echo waiting for database; sleep 2; done;' ]
```
|
Did you try it? It didn't work for me when I tried to update these values a long time ago. The PostgreSQL documentation for the pg_isready command says it is not supported for 9.6 (https://www.postgresql.org/docs/current/app-pg-isready.html), so we have to use the old version of this command, because when I tried to use a new PostgreSQL image it didn't work as a status checker. |
Force-pushed from a4e85fb to 75ebdc7.
|
@kabir Kubernetes is not liking these changes, any clue? Maybe @kstekovi also needs to make changes at https://github.com/wildfly/quickstart/tree/main/.github/workflows/scripts/kubernetes/qs-overrides/todo-backend ? |
|
@emmartins Not sure. It looks like the todo-backend image (which IIRC is WildFly) is not starting. To me that seems like it can't connect to postgresql. Maybe the initContainer isn't doing its work properly with the new image. Does it contain the pg_isready command? Does it work for you and @kstekovi locally? |
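A quick hypothetical spot-check for the first question, assuming a local container runtime (Podman here, matching the setup discussed later in the thread):

```sh
# Hypothetical check: does the new image ship the pg_isready client?
podman run --rm bitnami/postgresql:17 pg_isready --version
```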
|
We identified a problem with the updated Bitnami chart: it defaults to requesting an 8Gi PVC, which in our case is bigger than the maximum allowed (1Gi). Maybe the Kubernetes used in CI has a similar lower limit? https://github.com/bitnami/charts/blob/postgresql/16.2.2/bitnami/postgresql/values.yaml#L803 |
|
8Gi is quite a lot for any cluster, even more so when you just want to run this demo app. Maybe we should explicitly set it much lower (like 100Mi, or even less if it works; need to try). |
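A minimal sketch of how that could look in the quickstart's values.yaml, assuming the Bitnami chart's primary.persistence.size parameter (the 100Mi value is the untested guess from above):

```yaml
postgresql:
  primary:
    persistence:
      # Override the chart's 8Gi default; 100Mi is an untested guess
      size: 100Mi
```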
|
Hi @kabir, the OpenShift part works, but I have some issues with minikube and Podman. I started minikube with rootless privileges for Podman; in this mode it is not possible to enable the registry addon. To start minikube with root privileges for Podman, I had to add my username to sudoers. Now I am trying to push the "todo-backend" image to the registry, but it fails because the server is HTTP and the client expects HTTPS. |
|
New update: I pushed the image into the minikube registry, but unfortunately the application start ends in the same state as the CI here on GitHub. |
|
Hi, I've finally been able to try it, and the main issue is that the postgresql pod isn't starting. Likely the jump from 13.1.5 to 16.2.2 of the Bitnami Helm chart needs some different configuration. The nice thing about 16.2.2 is that the image is simply 'bitnami/postgresql:17', so that can be used in the initContainer. Nice if it had worked :-)
I tried rolling back the Bitnami Helm chart to 13.1.5. Now the pod starts. Looking at the image it installs, that seems to be docker.io/bitnami/postgresql:16.0.0-debian-11-r13. The diff for this attempt is:
Sadly this doesn't work. Then I tried NOT using the pg_isready command (from the postgres or bitnami images). There is something called busybox, which I see a lot in Kubernetes examples. AIUI it is lightweight and used to diagnose network issues in a Kubernetes cluster. I have managed to replace the initContainer logic to use busybox, and it works. The changes are shown here:
So I would say to use busybox, unless @kstekovi or whoever owns this quickstart is able to figure out how to use the latest Bitnami Helm chart. Although, since my attempt at using the docker.io/bitnami/postgresql:16.0.0-debian-11-r13 image failed with the 13.1.5 chart, I am not confident that bitnami/postgresql:17 will work with 16.2.2. For the record, both docker.io/bitnami/postgresql:16.0.0-debian-11-r13 and bitnami/postgresql:17 contain the pg_isready command. |
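A sketch of what such a busybox-based initContainer could look like, assuming busybox's nc applet (illustrative only, not necessarily the exact change in the commits linked below):

```yaml
initContainers:
  - name: check-db-ready
    image: busybox
    # -z just checks that a TCP connection to the port succeeds,
    # -w 2 gives up on each attempt after 2 seconds.
    command: [ 'sh', '-c',
      'until nc -z -w 2 todo-backend-postgresql 5432;
       do echo waiting for database; sleep 2; done;' ]
```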
|
It garbled my diffs; the two commits can be found here: https://github.com/wildfly/quickstart/compare/main...kabir:quickstart:kstekovi-WFLY-19898-fresh?expand=1 |
|
Hi @kabir, thank you for your investigation. Unfortunately I am on vacation for the next 3 weeks. Feel free to create or update this PR to finish it; there is more than a version update needed, I see. I like the idea with busybox. It looks like it could be a generic solution for other types of DB (with just a small modification for each different type of DB). |
|
@kstekovi @emmartins I have added my commit switching to busybox, and force-pushed the branch for this PR (I have admin rights, so I am able to do so) |
|
thank you @kabir |
|
In general, if it works, I am OK with it, since the biggest issue (using two different postgres images) is gone. I am interpreting this as "try to open a connection to the given host and port, with a 2 second timeout on the response, and repeat until ready". I cannot tell whether it is sufficient to rely just on the fact that we can see some port responding (without caring what it is responding with). It might be enough for postgres, but in general this isn't sufficient. Also, it might be enough for assessing "liveness", but we are trying to assess "readiness" here.
But if it also works as readiness for postgres, wouldn't it then be easier to use a Kubernetes TCP probe directly (https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe) and avoid the initContainer mechanism completely? I guess others would be using that too if it were that easy... so that is another question mark for the current solution. |
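For illustration, the probe mechanism being referenced looks like this when defined on the database container (a sketch of the Kubernetes API, not a proposed change; the Bitnami chart already ships its own probes):

```yaml
readinessProbe:
  tcpSocket:
    port: 5432          # succeed once the container accepts TCP on 5432
  initialDelaySeconds: 5
  periodSeconds: 2
```

One caveat: a readiness probe on the postgres pod only keeps it out of the Service's endpoints; unlike the initContainer, it does not by itself delay the WildFly container from starting.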
|
I like the idea with the startup probes, but that solution seems very different, and we now have a good solution which works well. I think we can create a JIRA to track the enhancement to use the startup probes and merge this PR as it is. I tested it and it works well. Thanks @kabir. The JIRA to track the enhancement: https://issues.redhat.com/browse/WFLY-20827 |
|
@ehsavoie re-requesting your review since the PR has changed direction |
https://issues.redhat.com/browse/WFLY-19898