
feat: batch enqueue, delivery batching, and smart batch modes#3

Merged
vieiralucas merged 1 commit into main from feat/26.5-batch-smart-batching on Mar 24, 2026

Conversation

@vieiralucas (Member) commented Mar 24, 2026

Summary

  • batch_enqueue(messages): Explicit batch enqueue method using BatchEnqueue RPC. Each message gets an individual BatchEnqueueResult (success with message_id or error string).
  • Delivery batching: Consumer stream handler transparently unpacks ConsumeResponse.messages (repeated field) into individual messages, falling back to singular message field for backward compatibility.
  • Smart batch modes via the batch_mode: parameter:
    • :auto (DEFAULT): Background thread with Queue-based opportunistic batching. At low load, messages sent individually (zero latency penalty). At high load, messages cluster into batches naturally.
    • :linger: Timer-based forced batching with linger_ms: and batch_size: controls.
    • :disabled: Each enqueue is a direct RPC (pre-existing behavior).
  • Single-item optimization: When flushing 1 message, uses single-message Enqueue RPC (preserves exact error types like QueueNotFoundError). For 2+ messages, uses BatchEnqueue.
  • Default behavior change: enqueue routes through the auto-batcher by default. Fila::Client.new(addr, batch_mode: :disabled) overrides.
  • close drains pending messages before disconnecting.
  • Proto updated with BatchEnqueue RPC, BatchEnqueueRequest/Response/Result, and ConsumeResponse.messages repeated field.
  • Version bumped to 0.3.0.
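The per-message result shape and the single-item dispatch described above can be sketched as follows (the Struct fields and the `rpc_for` helper are illustrative, not the gem's actual classes):

```ruby
# Illustrative sketch only: the real Fila::BatchEnqueueResult may differ.
# Each message in a batch gets one result: success with a message_id,
# or an error string.
BatchEnqueueResult = Struct.new(:message_id, :error, keyword_init: true) do
  def success?
    error.nil?
  end
end

# Single-item optimization: flushing one message uses the singular Enqueue
# RPC (preserving exact error types such as QueueNotFoundError); two or
# more messages use BatchEnqueue. `rpc_for` is a hypothetical helper name.
def rpc_for(messages)
  messages.size == 1 ? :Enqueue : :BatchEnqueue
end

ok  = BatchEnqueueResult.new(message_id: "m-1")
bad = BatchEnqueueResult.new(error: "queue not found")
```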

Test plan

  • TestBatchEnqueue: explicit batch_enqueue with multiple messages, single message, empty array, mixed success/failure
  • TestBatchEnqueueResult: unit tests for success/error result struct
  • TestAutoBatching: single enqueue, concurrent enqueues (10 threads), nonexistent queue error propagation
  • TestLingerBatching: single enqueue, concurrent enqueues
  • TestDisabledBatching: direct enqueue, error propagation
  • TestBatchModeValidation: invalid mode raises ArgumentError, all valid modes accepted
  • TestCloseFlush: close drains pending messages, double close is safe
  • All 29 tests pass (17 new + 12 existing), 135 assertions, 0 failures

Summary by cubic

Adds batch enqueue, delivery batching, and smart client-side batching to boost throughput while keeping low latency. enqueue now uses the auto-batcher by default; close flushes pending messages. Version bumped to 0.3.0 and proto adds batch APIs.

  • New Features

    • batch_enqueue(messages) returns per-message success/error results.
    • Smart batching modes: :auto (default), :linger (linger_ms, batch_size), :disabled; single message uses Enqueue, 2+ use BatchEnqueue.
    • Consumer supports batched delivery via ConsumeResponse.messages with fallback to message.
    • close drains pending messages.
    • Proto: adds BatchEnqueue RPC and batched consume; version 0.3.0.
  • Migration

    • Default: enqueue routes through the auto-batcher. Disable via batch_mode: :disabled.
    • Use batch_mode: :linger with linger_ms and batch_size for timed batching.
    • No consumer changes needed; batched deliveries are handled transparently.
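The migration options above can be sketched in isolation (the validation helper and constant are hypothetical; only the `batch_mode:`, `linger_ms:`, and `batch_size:` names come from this PR):

```ruby
# Hypothetical sketch of the mode validation the test plan exercises
# (TestBatchModeValidation): invalid modes raise ArgumentError, all
# valid modes are accepted.
VALID_BATCH_MODES = %i[auto linger disabled].freeze

def validate_batch_mode!(mode)
  unless VALID_BATCH_MODES.include?(mode)
    raise ArgumentError, "invalid batch_mode: #{mode.inspect}"
  end
  mode
end

# Construction forms per the migration notes (addresses are placeholders):
#   Fila::Client.new(addr)                          # :auto (default)
#   Fila::Client.new(addr, batch_mode: :disabled)   # direct RPCs
#   Fila::Client.new(addr, batch_mode: :linger,
#                    linger_ms: 5, batch_size: 100) # timed batching
```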

Written for commit 78aa15d.

- Add batch_enqueue() for explicit multi-message BatchEnqueue RPC
- Add background batcher with three modes: :auto (default), :linger, :disabled
- Auto mode: opportunistic batching via Queue drain (zero latency at low load)
- Linger mode: timer-based batching with configurable linger_ms and batch_size
- Single-item optimization: 1 message uses Enqueue RPC (preserves error types)
- Delivery batching: consume unpacks repeated messages field transparently
- close() drains pending messages before disconnecting
- Update proto with BatchEnqueue RPC and ConsumeResponse.messages field
- Bump version to 0.3.0
@vieiralucas vieiralucas merged commit 1510b29 into main Mar 24, 2026
2 of 3 checks passed

@cubic-dev-ai cubic-dev-ai Bot left a comment


2 issues found across 10 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="lib/fila/batcher.rb">

<violation number="1" location="lib/fila/batcher.rb:104">
P1: Shutdown signal consumed without re-enqueue causes `close()` to hang in linger mode. When `:shutdown` is received in the inner while loop, it breaks without re-pushing to the queue, so the outer loop blocks forever on `@queue.pop`. Apply the same pattern used in `drain_nonblocking`.</violation>
</file>

<file name="lib/fila/client.rb">

<violation number="1" location="lib/fila/client.rb:91">
P2: Race condition: `@batcher` can become `nil` between the check and the call if another thread invokes `close()`. Capture in a local variable to avoid `NoMethodError`.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread lib/fila/batcher.rb

```ruby
begin
  item = pop_with_timeout(remaining_ms)
  break if item == :shutdown
```

@cubic-dev-ai cubic-dev-ai Bot Mar 24, 2026


P1: Shutdown signal consumed without re-enqueue causes close() to hang in linger mode. When :shutdown is received in the inner while loop, it breaks without re-pushing to the queue, so the outer loop blocks forever on @queue.pop. Apply the same pattern used in drain_nonblocking.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/fila/batcher.rb, line 104:

<comment>Shutdown signal consumed without re-enqueue causes `close()` to hang in linger mode. When `:shutdown` is received in the inner while loop, it breaks without re-pushing to the queue, so the outer loop blocks forever on `@queue.pop`. Apply the same pattern used in `drain_nonblocking`.</comment>

<file context>

```diff
@@ -0,0 +1,198 @@
+
+          begin
+            item = pop_with_timeout(remaining_ms)
+            break if item == :shutdown
+
+            batch << item
```

</file context>
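The re-enqueue pattern the reviewer asks for can be demonstrated with a stripped-down stand-in (class and method names here are illustrative; the real batcher's `pop_with_timeout` loop is more involved):

```ruby
# Sketch of the fix: when the drain loop consumes the :shutdown sentinel,
# it pushes the sentinel back so any outer loop blocked on @queue.pop also
# wakes up, instead of close() hanging forever.
class LingerDrain
  def initialize(queue)
    @queue = queue
  end

  # Collect up to max_items, stopping early on :shutdown.
  def drain(max_items)
    batch = []
    while batch.size < max_items
      item = @queue.pop
      if item == :shutdown
        @queue << :shutdown # re-enqueue so later pops still see the sentinel
        break
      end
      batch << item
    end
    batch
  end
end
```

With this, a subsequent blocking pop on the same queue observes `:shutdown` rather than waiting indefinitely.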

Comment thread lib/fila/client.rb
Comment on lines +91 to +92
```ruby
if @batcher
  @batcher.submit(req)
```

@cubic-dev-ai cubic-dev-ai Bot Mar 24, 2026


P2: Race condition: @batcher can become nil between the check and the call if another thread invokes close(). Capture in a local variable to avoid NoMethodError.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At lib/fila/client.rb, line 91:

<comment>Race condition: `@batcher` can become `nil` between the check and the call if another thread invokes `close()`. Capture in a local variable to avoid `NoMethodError`.</comment>

<file context>

```diff
@@ -8,51 +8,133 @@
-    rescue GRPC::NotFound => e
-      raise QueueNotFoundError, "enqueue: #{e.details}"
+
+      if @batcher
+        @batcher.submit(req)
+      else
```

</file context>
Suggested change:

```diff
-if @batcher
-  @batcher.submit(req)
+batcher = @batcher
+if batcher
+  batcher.submit(req)
```
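The race and the capture-a-local fix can be shown with a minimal stand-in (this is not the gem's actual Client; `direct_send` is a placeholder for the fallback RPC path):

```ruby
# Stand-in client showing the pattern: close() may nil out @batcher from
# another thread, so read the ivar exactly once per enqueue and use the
# captured local thereafter. This avoids a NoMethodError on nil between
# the check and the call.
class RaceSafeClient
  def initialize(batcher)
    @batcher = batcher
  end

  def enqueue(req)
    batcher = @batcher # single read; safe even if close() runs concurrently
    if batcher
      batcher.submit(req)
    else
      direct_send(req)
    end
  end

  def close
    @batcher = nil
  end

  private

  def direct_send(_req)
    :direct
  end
end
```

Note this only prevents the crash; a message submitted to a batcher that is being shut down still relies on close() draining pending items.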
