fix: wait for Swarm task convergence after service update to prevent orphan containers#3809
Closed
jaimehgb wants to merge 1 commit into
Conversation
…orphan containers

Docker Swarm's `start-first` update order operates in two phases: start the new task, then shut down the old task. When `service.update()` is called again before the first update completes, SwarmKit cancels the in-progress update, and the old task's shutdown phase is skipped, leaving orphan containers running indefinitely.

This adds a convergence wait after `service.update()` that polls `docker.listTasks()` until only one task remains running (or a 120s timeout). Since the BullMQ deploy queue has concurrency 1, this naturally prevents rapid consecutive updates from creating orphans.

Relates to Dokploy#1669, Dokploy#2223, Dokploy#2911, Dokploy#2150
Contributor
Why is this closed without action?
Summary

After `service.update()`, Docker Swarm's `start-first` update order can leave orphan containers running indefinitely when multiple deployments happen in quick succession. This PR adds a convergence wait that polls `docker.listTasks()` until the service has settled to a single running task before returning from `mechanizeDockerContainer`.

- New `waitForServiceConvergence()` helper that polls task state after each service update

Root Cause
Docker Swarm's `start-first` update operates in two phases:

1. Start the new task (desired state `RUNNING`)
2. Shut down the old task (desired state `SHUTDOWN`)

When a second `service.update()` arrives while the first is mid-transition, the SwarmKit UpdateSupervisor cancels the in-progress update. If the cancellation hits between phase 1 and phase 2, the old task's desired state is never set to `SHUTDOWN`; since Swarm actively maintains tasks whose desired state is `RUNNING` (task_model.md), the orphan persists indefinitely.

This is a well-documented behavior in Docker Swarm:

- `docker service update` does not stop the old task
- `start-first` rollback replaces healthy tasks

Why This Happens in Dokploy
Dokploy's BullMQ deploy queue has concurrency 1, so jobs run serially. But `mechanizeDockerContainer` calls `service.update()` and returns immediately; it doesn't wait for Swarm to finish draining the old task. When the next queued deploy runs its `service.update()`, the previous update is still mid-transition, triggering the orphan race.

The `POST /services/{id}/update` API is explicitly asynchronous: it returns HTTP 200 after recording the update in the Raft store, not after task convergence.

The Fix
After `service.update()`, poll `docker.listTasks()` filtered by `service` and `desired-state: running`. Wait until only one task has `Status.State === "running"`, meaning the old task has been drained. If convergence doesn't happen within 120 seconds, log a warning and return; the deploy completes anyway, falling back to the current behavior.
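A minimal sketch of that wait loop. The helper name `waitForServiceConvergence` follows the PR description; the task shape mirrors the `Status.State` field the PR polls, but the injected `listTasks` parameter and the exact dockerode call are assumptions, not Dokploy's actual code:

```typescript
// Minimal shape of a Swarm task as returned by a dockerode-style listTasks()
interface SwarmTask {
  Status?: { State?: string };
}

// Convergence predicate from the PR: at most one task actually running
function hasConverged(tasks: SwarmTask[]): boolean {
  return tasks.filter((t) => t.Status?.State === "running").length <= 1;
}

// Poll until the service settles to a single running task, or time out.
// `listTasks` is injected so the loop can be exercised without a Docker daemon.
async function waitForServiceConvergence(
  serviceName: string,
  listTasks: () => Promise<SwarmTask[]>,
  timeoutMs = 120_000,
  pollMs = 1_000,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (hasConverged(await listTasks())) return true;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  // On timeout: warn and fall back to current behavior (deploy completes anyway)
  console.warn(`[Dokploy] Service ${serviceName} did not converge`);
  return false;
}
```

At a real call site, `listTasks` would be bound to the daemon, e.g. `() => docker.listTasks({ filters: { service: [serviceName], "desired-state": ["running"] } })` (dockerode filter shape assumed; some versions expect a JSON string).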
Since the queue has concurrency 1, this naturally serializes the Swarm update lifecycle: the next deploy won't start its `service.update()` until the current one has converged.

Edge cases handled

- New task stuck in a `starting`/`failed` state and never reaches `running` → the loop sees ≤1 running task → exits immediately
- `createService` path (catch block): no convergence wait needed
- Remote servers: the `docker` instance from `getRemoteDocker()` already handles SSH tunneling

Related Issues

Dokploy#1669, Dokploy#2223, Dokploy#2911, Dokploy#2150
Test Plan

- `docker ps -f 'name=<appName>'` should show at most 2 containers (1 old + 1 starting) at any point, never 3+
- Watch for `[Dokploy] Service X did not converge` messages on timeout
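Beyond the manual `docker ps` check, the "≤1 running task" rule from The Fix can be exercised against mock task lists. This is an illustrative sketch, not a test from the PR; the task objects only mirror the `Status.State` field that the convergence check reads:

```typescript
interface Task {
  Status?: { State?: string };
}

// The rule under test: converged once at most one task reports State "running"
const countRunning = (tasks: Task[]): number =>
  tasks.filter((t) => t.Status?.State === "running").length;

// Mid-update with an orphan: two tasks still running while a new one starts
const midUpdate: Task[] = [
  { Status: { State: "running" } },  // old task, not yet drained
  { Status: { State: "starting" } }, // new task coming up
  { Status: { State: "running" } },  // orphan left by a cancelled update
];

// Settled: old task drained, single replica running
const settled: Task[] = [
  { Status: { State: "shutdown" } },
  { Status: { State: "running" } },
];

console.log(countRunning(midUpdate) <= 1); // false: the wait loop keeps polling
console.log(countRunning(settled) <= 1);   // true: the deploy can return
```

The stuck-task edge case from the PR falls out of the same rule: a new task that never leaves `starting`/`failed` contributes nothing to the running count, so the loop exits immediately.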