feat: add debounce support to enqueue_job #520
Open
Floby wants to merge 6 commits into python-arq:main
Conversation
When _debounce=True and job key exists without a result, the job data and queue score are overwritten, allowing the defer time to be reset.
Read existing job data to extract the original enqueue timestamp and use it when serializing the updated job.
When the time since original enqueue exceeds _debounce_max, the debounce is refused and the existing job is left to run.
Check the in_progress key prefix before allowing debounce to ensure we don't overwrite a job that a worker is currently executing.
Refactor score computation to use current timestamp (now_ms) for _defer_by calculations and expiry, while preserving the original enqueue_time_ms only in the serialized job data.
Codecov Report ✅ All modified and coverable lines are covered by tests.

```diff
@@            Coverage Diff             @@
##             main     #520      +/-   ##
==========================================
- Coverage   96.27%   96.16%   -0.12%
==========================================
  Files          11       11
  Lines        1074     1095      +21
  Branches      209      147      -62
==========================================
+ Hits         1034     1053      +19
- Misses         19       21       +2
  Partials       21       21
```
Summary
Adds `_debounce` and `_debounce_max` parameters to `enqueue_job`. With `_debounce=True`, re-enqueueing a job with the same `_job_id` resets its defer time instead of being silently dropped.

Design choices
Debounce vs deduplication
The existing `_job_id` mechanism provides deduplication (at-most-once). Debounce is different: it pushes back execution each time a new call arrives, which is useful for coalescing bursts of events (e.g. user edits triggering a recomputation).

`_debounce_max` to prevent starvation

Without a cap, a job could be debounced indefinitely and never run.
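The semantics described here (deduplication drops the second call, debounce keeps the original enqueue time, refuses once a maximum window is exceeded, and computes the new score from "now") can be sketched with a toy in-memory model. This is an illustration of the behaviour, not arq's actual implementation; the `DebounceQueue` class and its method signatures are invented for this sketch:

```python
import time


class DebounceQueue:
    """Toy in-memory model of a scored job queue (illustrative only)."""

    def __init__(self):
        # job_id -> (original_enqueue_ms, score_ms)
        self.jobs = {}

    def enqueue(self, job_id, defer_by_ms=0, debounce=False,
                debounce_max_ms=None, now_ms=None):
        now_ms = int(time.time() * 1000) if now_ms is None else now_ms
        if job_id in self.jobs:
            if not debounce:
                # plain deduplication: the second call is silently dropped
                return None
            original_enqueue_ms, _ = self.jobs[job_id]
            if (debounce_max_ms is not None
                    and now_ms - original_enqueue_ms > debounce_max_ms):
                # cap exceeded: refuse, existing job keeps its schedule
                return None
            # debounce: keep the original enqueue time, but compute the
            # new score from *now*, matching "run N ms from now"
            self.jobs[job_id] = (original_enqueue_ms, now_ms + defer_by_ms)
            return job_id
        self.jobs[job_id] = (now_ms, now_ms + defer_by_ms)
        return job_id
```

For example, enqueueing `'j'` at t=0 and again at t=50 with `debounce=True` keeps the original enqueue time of 0 while moving the score to 50 + `defer_by_ms`; the same second call without `debounce=True` returns `None`.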
`_debounce_max` sets a maximum window from the original enqueue time. Once it is exceeded, debounce calls are refused (`None` is returned) and the existing job runs on its current schedule.

Preserving the original `enqueue_time`

When debouncing, the serialized job keeps the original `enqueue_time_ms` rather than resetting it. This is necessary for `_debounce_max` to work, and it also reflects the true time the intent to run was first expressed.

In-progress and completed jobs are never debounced
If a worker has already picked up the job (the `in_progress_key_prefix` key exists) or the job already has a result, debounce returns `None`. This avoids overwriting a running job's data.

Defer is relative to now
When debouncing with `_defer_by`, the new score is computed from the current time, not the original enqueue time. This matches the caller's intent of "run N seconds from now".

Transaction safety
All checks and writes happen inside the existing `WATCH`/`MULTI`/`EXEC` pipeline, so concurrent debounce calls are safe: exactly one succeeds and the others return `None` via `WatchError`.