
Conversation

@meonkeys
Contributor

fix #1756

fix nextcloud#1756

Signed-off-by: Adam Monsen <haircut@gmail.com>
Member

@skjnldsv skjnldsv left a comment


Doesn't seem to work for me.
If I run two simultaneous containers sharing the same html volume, both of them initialize at the same time instead of one waiting for the other to finish.

Also, I'm totally unfamiliar with flock. Is it available on Alpine too?

@martadinata666

On Alpine, flock is a separate package: https://pkgs.alpinelinux.org/packages?name=flock&branch=v3.14
On Debian-based images it comes from the util-linux package, which is usually already included by default.
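For reference, a quick sketch of how one might confirm the flock binary is present in either base image (package sources per the comment above; the exact check is illustrative, not from the PR):

```shell
# Illustrative availability check for the flock binary.
# Alpine: `apk add flock` (or rely on busybox); Debian: part of util-linux.
if command -v flock >/dev/null 2>&1; then
  echo "flock available"
else
  echo "flock missing" >&2
  exit 1
fi
```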

@meonkeys
Contributor Author

Doesn't seem to work for me. If I run two simultaneous containers sharing the same html volume, both of them initialize at the same time instead of one waiting for the other to finish.

Is that the use case? I wasn't aware you could/would run more than one Nextcloud container at a time.

@chilli230

I have the same issue migrating nextcloud:22-2.7 to 22.2.8-apache. I deleted the nextcloud-init-sync.lock file manually, and then it worked. My server runs three Nextcloud instances in parallel, but with different volumes.

@meonkeys
Contributor Author

meonkeys commented May 29, 2022

@skjnldsv so I think this is the current state:

  1. Can we just add the flock package to the Alpine image? Seems pretty low-risk.
  2. Is running multiple Nextcloud app containers against the same DB backend a use case we need to support? If so, I can look into it, but I'm guessing flock will not do the job. flock is also not usable over Samba and NFS... I think it needs OS-level support that only certain filesystems provide, and I'm guessing Docker filesystems (overlay / union / whatever) are unsupported. Again, this is only an issue if multiple concurrent Nextcloud app containers (with a single database backend) must be supported.

If number 2 is not a concern, flock would be a nice solution for auto-cleaning the lockfile.
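To illustrate the flock approach under discussion (the entrypoint body and lockfile path below are hypothetical, not the PR's actual code): the kernel releases a flock lock automatically when the file descriptor closes, even if the process crashes, which is what makes the lockfile self-cleaning compared to a manually created one.

```shell
#!/bin/sh
# Sketch of serializing container initialization with flock.
# The lock is tied to fd 9 and released automatically when the
# subshell exits -- even on crash -- so no manual cleanup is needed.
(
  flock 9    # block until this container holds the exclusive lock
  echo "initializing (only one container at a time runs this)"
  # ... installation / upgrade steps would go here ...
) 9> /var/www/html/nextcloud-init-sync.lock
```

A second container running the same script blocks at `flock 9` until the first one's subshell exits.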

@skjnldsv
Member

skjnldsv commented May 31, 2022

Let me revert the feature; that will be easier.

Or add a config flag to enable this feature.

@skjnldsv
Member

My server runs three Nextcloud instances in parallel, but with different volumes.

Then this doesn't quite fit the use case 😸

Is that the use case? I wasn't aware you could/would run more than one Nextcloud container at a time.

Yeah, sharing the same DB and data/config, you can fire up as many containers as you want and load-balance in front; it's very effective 😉

@rugk
Contributor

rugk commented Jun 7, 2022

So what, again, was the problem with this PR, and why did you decide on the opt-in solution in the other PR instead?

@meonkeys
Contributor Author

meonkeys commented Feb 8, 2023

So what, again, was the problem with this PR, and why did you decide on the opt-in solution in the other PR instead?

@rugk I recall (based on the thread above):

  1. We need the flock shell command, but the Alpine image didn't have util-linux.
  2. I was not sure whether flock would work on NFS and Samba, and
  3. the solution by @skjnldsv fixed the problem for me because I only run one Nextcloud container (which might be an irrelevant detail anyway).

Ideas:

Number 1: surely an easy fix, just add util-linux (if it hasn't already been added by now). UPDATE: busybox has flock, so there is no need for util-linux.

Number 2: @remram44 says in #1903 that flock will work fine on NFS and Samba.

Number 3 is irrelevant.

So that just leaves one problem I'm seeing now: I put nextcloud-init-sync.lock in /var/lock/. That file should be placed in /var/www/html/ instead. That might be why it broke for @skjnldsv (see their comment above).
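A sketch of that fix (the variable name is made up for illustration): the lockfile has to live on the shared volume so every container contends on the same file, since /var/lock is private to each container and locks there never conflict across containers.

```shell
#!/bin/sh
# Hypothetical fix sketch: keep the lockfile on the shared html volume
# instead of the container-local /var/lock.
NEXTCLOUD_DIR="${NEXTCLOUD_DIR:-/var/www/html}"  # made-up variable name
exec 9> "$NEXTCLOUD_DIR/nextcloud-init-sync.lock"
flock 9
# ... init steps run here while the lock is held ...
```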

Heck, let's fix this thing! Seems like there is some interest to improve it.

@meonkeys
Contributor Author

meonkeys commented Feb 8, 2023

Please see #1905 and #1917. Let's move forward with one of those!


Development

Successfully merging this pull request may close these issues.

use ephemeral instead of manual/persistent lock during upgrade

5 participants