
feat: add debounce support to enqueue_job#520

Open
Floby wants to merge 6 commits into python-arq:main from Floby:feat/debounce-enqueue

Conversation


Floby commented Feb 17, 2026

Summary

  • Add _debounce and _debounce_max parameters to enqueue_job
  • When _debounce=True, re-enqueueing a job with the same _job_id resets its defer time instead of being silently dropped

Design choices

Debounce vs deduplication

The existing _job_id mechanism provides deduplication (at-most-once). Debounce is different — it pushes back execution each time a new call arrives, useful for coalescing bursts of events (e.g. user edits triggering a recomputation).
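To make the contrast concrete, here is a minimal in-memory sketch of the two semantics, using a plain dict in place of Redis. The `FakeQueue` class and its method names are illustrative only, not arq's internals:

```python
import time

class FakeQueue:
    """Toy model contrasting deduplication with debounce semantics."""

    def __init__(self):
        self.jobs = {}  # job_id -> scheduled run time

    def enqueue(self, job_id, defer_by, debounce=False):
        now = time.time()
        if job_id in self.jobs and not debounce:
            # Deduplication: a second enqueue with the same id is silently dropped.
            return None
        # Debounce: the defer window restarts from "now" on every call.
        self.jobs[job_id] = now + defer_by
        return job_id

q = FakeQueue()
assert q.enqueue("recompute", 5.0) == "recompute"
assert q.enqueue("recompute", 5.0) is None                        # deduplicated
assert q.enqueue("recompute", 5.0, debounce=True) == "recompute"  # pushed back
```

With deduplication the first call wins; with debounce the last call wins the schedule, which is what coalescing a burst of edits needs.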

_debounce_max to prevent starvation

Without a cap, a job could be debounced indefinitely and never run. _debounce_max sets a maximum window from the original enqueue time. Once exceeded, debounce calls are refused (None is returned) and the existing job runs on its current schedule.
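The cap check reduces to a single comparison against the original enqueue timestamp. A sketch, assuming millisecond timestamps; the function name is hypothetical:

```python
def debounce_allowed(original_enqueue_ms: int, now_ms: int, debounce_max_ms: int) -> bool:
    """Refuse the debounce once the window from the original enqueue is exceeded."""
    return now_ms - original_enqueue_ms <= debounce_max_ms

assert debounce_allowed(1_000, 5_000, 10_000) is True    # still inside the window
assert debounce_allowed(1_000, 20_000, 10_000) is False  # cap hit: return None, existing job runs
```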

Preserving original enqueue_time

When debouncing, the serialized job keeps the original enqueue_time_ms rather than resetting it. This is necessary for _debounce_max to work, and it also reflects the true time the intent to run was first expressed.

In-progress and completed jobs are never debounced

If a worker has already picked up the job (in_progress_key_prefix exists) or the job has a result, debounce returns None. This avoids overwriting a running job's data.
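The guard above can be summarized as one hypothetical predicate (the parameter names here are descriptive, not arq's):

```python
def may_debounce(job_key_exists: bool, in_progress: bool, has_result: bool) -> bool:
    """Debounce only when the job exists but no worker has it and no result is stored."""
    return job_key_exists and not in_progress and not has_result

assert may_debounce(True, False, False) is True
assert may_debounce(True, True, False) is False   # worker is executing it
assert may_debounce(True, False, True) is False   # job already completed
```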

Defer is relative to now

When debouncing with _defer_by, the new score is computed from the current time, not the original enqueue time. This matches the caller's intent of "run N seconds from now".
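A sketch of this split, with illustrative names: the queue score is derived from the current time, while the serialized job keeps the original timestamp untouched:

```python
def debounce_score(now_ms: int, defer_by_ms: int = 0) -> int:
    # "run N seconds from now": the score is relative to this call,
    # not to the original enqueue time.
    return now_ms + defer_by_ms

original_enqueue_ms = 1_000    # preserved in the serialized job data
now_ms = 60_000                # time of the debounce call
score = debounce_score(now_ms, defer_by_ms=5_000)
assert score == 65_000               # rescheduled from now
assert original_enqueue_ms == 1_000  # unchanged, so _debounce_max still works
```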

Transaction safety

All checks and writes happen inside the existing WATCH/MULTI/EXEC pipeline, so concurrent debounce calls are safe — exactly one succeeds, others return None via WatchError.
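The WATCH/MULTI/EXEC pattern is optimistic locking: each client snapshots the watched key and the EXEC succeeds only if nothing changed in between. A simplified single-process model (a version counter standing in for Redis's change detection; not arq's code):

```python
class WatchedStore:
    """Toy model of Redis optimistic locking via a version counter."""

    def __init__(self):
        self.data = {}
        self.version = 0

    def watch(self):
        return self.version  # snapshot taken at WATCH time

    def exec_multi(self, watched_version, key, value):
        if watched_version != self.version:
            return None  # WatchError: another client modified the key first
        self.data[key] = value
        self.version += 1
        return True

store = WatchedStore()
v1 = store.watch()  # client A watches
v2 = store.watch()  # client B watches concurrently
assert store.exec_multi(v1, "arq:job:abc", "debounced-A") is True
assert store.exec_multi(v2, "arq:job:abc", "debounced-B") is None  # B loses, returns None
```

Exactly one of the concurrent writers commits; the other observes the version change and backs off, matching the "others return None via WatchError" behaviour described above.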

Commits

  • When _debounce=True and the job key exists without a result, the job data and queue score are overwritten, allowing the defer time to be reset.
  • Read existing job data to extract the original enqueue timestamp and use it when serializing the updated job.
  • When the time since original enqueue exceeds _debounce_max, the debounce is refused and the existing job is left to run.
  • Check the in_progress key prefix before allowing debounce to ensure we don't overwrite a job that a worker is currently executing.
  • Refactor score computation to use the current timestamp (now_ms) for _defer_by calculations and expiry, while preserving the original enqueue_time_ms only in the serialized job data.

codecov-commenter commented Feb 17, 2026

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

✅ All modified and coverable lines are covered by tests.
❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

@@            Coverage Diff             @@
##             main     #520      +/-   ##
==========================================
- Coverage   96.27%   96.16%   -0.12%     
==========================================
  Files          11       11              
  Lines        1074     1095      +21     
  Branches      209      147      -62     
==========================================
+ Hits         1034     1053      +19     
- Misses         19       21       +2     
  Partials       21       21              
Files with missing lines Coverage Δ
arq/connections.py 88.43% <100.00%> (-1.63%) ⬇️

Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update fda407c...d5fe748.


Floby marked this pull request as ready for review February 17, 2026 20:00