
Add optional writeToStreamMulti function to the World interface#867

Merged
TooTallNate merged 8 commits into main from 01-27-add_optional_writetostreammulti_function_to_the_world_interface
Feb 4, 2026

Conversation

@TooTallNate (Member) commented Jan 27, 2026

Added an optional writeToStreamMulti function to the World interface to optimize batch writing of multiple chunks to a stream.

This is an alternative solution to the issue reported in #764.

What changed?

  • Added an optional writeToStreamMulti method to the Streamer interface in the World API
  • Implemented buffering in WorkflowServerWritableStream to batch chunks before flushing
  • Added implementations of writeToStreamMulti in all world providers:
    • world-local: Writes multiple chunks in parallel while preserving order
    • world-postgres: Performs a batch insert for all chunks
    • world-vercel: Encodes multiple chunks into a length-prefixed binary format
  • Added a constant STREAM_FLUSH_INTERVAL_MS (10ms) to control buffer flush timing

Why make this change?

This optimization reduces network overhead when writing many small chunks to a stream by batching them together. This is particularly beneficial for:

  1. The Vercel world implementation where each write previously required a separate HTTP request
  2. The Postgres world implementation where batch inserts are more efficient
  3. Any scenario with high-frequency, small writes to streams

The implementation gracefully falls back to sequential writes for world implementations that don't support the new method.
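The optional method and its fallback can be sketched like this (a minimal TypeScript illustration; the `Streamer` interface and method names come from this PR, while the surrounding helper and signatures are hypothetical):

```typescript
// Sketch only: `writeToStreamMulti` is optional on the Streamer interface,
// so callers must check for it and fall back to sequential writes.
interface Streamer {
  writeToStream(name: string, chunk: Uint8Array): Promise<void>;
  // Optional: worlds that support it can batch many chunks in one call
  // (e.g. a single HTTP request or a single SQL insert).
  writeToStreamMulti?(name: string, chunks: Uint8Array[]): Promise<void>;
}

async function writeChunks(
  streamer: Streamer,
  name: string,
  chunks: Uint8Array[]
): Promise<void> {
  if (streamer.writeToStreamMulti) {
    // One batched call for all buffered chunks.
    await streamer.writeToStreamMulti(name, chunks);
  } else {
    // Graceful fallback: sequential writes preserve chunk order.
    for (const chunk of chunks) {
      await streamer.writeToStream(name, chunk);
    }
  }
}
```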

changeset-bot (Bot) commented Jan 27, 2026

🦋 Changeset detected

Latest commit: 4d74746

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 17 packages
| Name | Type |
| --- | --- |
| @workflow/world-postgres | Patch |
| @workflow/world-vercel | Patch |
| @workflow/world-local | Patch |
| @workflow/world | Patch |
| @workflow/core | Patch |
| @workflow/cli | Patch |
| @workflow/web-shared | Patch |
| @workflow/world-testing | Patch |
| @workflow/builders | Patch |
| @workflow/docs-typecheck | Patch |
| @workflow/next | Patch |
| @workflow/nitro | Patch |
| workflow | Patch |
| @workflow/astro | Patch |
| @workflow/nest | Patch |
| @workflow/sveltekit | Patch |
| @workflow/nuxt | Patch |


github-actions bot (Contributor) commented Jan 27, 2026

🧪 E2E Test Results

Some tests failed

Summary

| Category | Passed | Failed | Skipped | Total |
| --- | --- | --- | --- | --- |
| ✅ ▲ Vercel Production | 457 | 0 | 38 | 495 |
| ✅ 💻 Local Development | 418 | 0 | 32 | 450 |
| ✅ 📦 Local Production | 418 | 0 | 32 | 450 |
| ✅ 🐘 Local Postgres | 418 | 0 | 32 | 450 |
| ✅ 🪟 Windows | 45 | 0 | 0 | 45 |
| ❌ 🌍 Community Worlds | 31 | 161 | 0 | 192 |
| ✅ 📋 Other | 123 | 0 | 12 | 135 |
| Total | 1910 | 161 | 146 | 2217 |

❌ Failed Tests

🌍 Community Worlds (161 failed)

mongodb (40 failed):

  • addTenWorkflow
  • addTenWorkflow
  • should work with react rendering in step
  • promiseAllWorkflow
  • promiseRaceWorkflow
  • promiseAnyWorkflow
  • readableStreamWorkflow
  • hookWorkflow
  • webhookWorkflow
  • sleepingWorkflow
  • nullByteWorkflow
  • workflowAndStepMetadataWorkflow
  • outputStreamWorkflow
  • outputStreamInsideStepWorkflow - getWritable() called inside step functions
  • fetchWorkflow
  • promiseRaceStressTestWorkflow
  • error handling error propagation workflow errors nested function calls preserve message and stack trace
  • error handling error propagation workflow errors cross-file imports preserve message and stack trace
  • error handling error propagation step errors basic step error preserves message and stack trace
  • error handling error propagation step errors cross-file step error preserves message and function names in stack
  • error handling retry behavior regular Error retries until success
  • error handling retry behavior FatalError fails immediately without retries
  • error handling retry behavior RetryableError respects custom retryAfter delay
  • error handling retry behavior maxRetries=0 disables retries
  • error handling catchability FatalError can be caught and detected with FatalError.is()
  • hookCleanupTestWorkflow - hook token reuse after workflow completion
  • concurrent hook token conflict - two workflows cannot use the same hook token simultaneously
  • stepFunctionPassingWorkflow - step function references can be passed as arguments (without closure vars)
  • stepFunctionWithClosureWorkflow - step function with closure variables passed as argument
  • closureVariableWorkflow - nested step functions with closure variables
  • spawnWorkflowFromStepWorkflow - spawning a child workflow using start() inside a step
  • pathsAliasWorkflow - TypeScript path aliases resolve correctly
  • Calculator.calculate - static workflow method using static step methods from another class
  • AllInOneService.processNumber - static workflow method using sibling static step methods
  • ChainableService.processWithThis - static step methods using this to reference the class
  • thisSerializationWorkflow - step function invoked with .call() and .apply()
  • customSerializationWorkflow - custom class serialization with WORKFLOW_SERIALIZE/WORKFLOW_DESERIALIZE
  • pages router addTenWorkflow via pages router
  • pages router promiseAllWorkflow via pages router
  • pages router sleepingWorkflow via pages router

redis (40 failed):

  • addTenWorkflow
  • addTenWorkflow
  • should work with react rendering in step
  • promiseAllWorkflow
  • promiseRaceWorkflow
  • promiseAnyWorkflow
  • readableStreamWorkflow
  • hookWorkflow
  • webhookWorkflow
  • sleepingWorkflow
  • nullByteWorkflow
  • workflowAndStepMetadataWorkflow
  • outputStreamWorkflow
  • outputStreamInsideStepWorkflow - getWritable() called inside step functions
  • fetchWorkflow
  • promiseRaceStressTestWorkflow
  • error handling error propagation workflow errors nested function calls preserve message and stack trace
  • error handling error propagation workflow errors cross-file imports preserve message and stack trace
  • error handling error propagation step errors basic step error preserves message and stack trace
  • error handling error propagation step errors cross-file step error preserves message and function names in stack
  • error handling retry behavior regular Error retries until success
  • error handling retry behavior FatalError fails immediately without retries
  • error handling retry behavior RetryableError respects custom retryAfter delay
  • error handling retry behavior maxRetries=0 disables retries
  • error handling catchability FatalError can be caught and detected with FatalError.is()
  • hookCleanupTestWorkflow - hook token reuse after workflow completion
  • concurrent hook token conflict - two workflows cannot use the same hook token simultaneously
  • stepFunctionPassingWorkflow - step function references can be passed as arguments (without closure vars)
  • stepFunctionWithClosureWorkflow - step function with closure variables passed as argument
  • closureVariableWorkflow - nested step functions with closure variables
  • spawnWorkflowFromStepWorkflow - spawning a child workflow using start() inside a step
  • pathsAliasWorkflow - TypeScript path aliases resolve correctly
  • Calculator.calculate - static workflow method using static step methods from another class
  • AllInOneService.processNumber - static workflow method using sibling static step methods
  • ChainableService.processWithThis - static step methods using this to reference the class
  • thisSerializationWorkflow - step function invoked with .call() and .apply()
  • customSerializationWorkflow - custom class serialization with WORKFLOW_SERIALIZE/WORKFLOW_DESERIALIZE
  • pages router addTenWorkflow via pages router
  • pages router promiseAllWorkflow via pages router
  • pages router sleepingWorkflow via pages router

starter (41 failed):

  • addTenWorkflow
  • addTenWorkflow
  • should work with react rendering in step
  • promiseAllWorkflow
  • promiseRaceWorkflow
  • promiseAnyWorkflow
  • readableStreamWorkflow
  • hookWorkflow
  • webhookWorkflow
  • sleepingWorkflow
  • nullByteWorkflow
  • workflowAndStepMetadataWorkflow
  • outputStreamWorkflow
  • outputStreamInsideStepWorkflow - getWritable() called inside step functions
  • fetchWorkflow
  • promiseRaceStressTestWorkflow
  • error handling error propagation workflow errors nested function calls preserve message and stack trace
  • error handling error propagation workflow errors cross-file imports preserve message and stack trace
  • error handling error propagation step errors basic step error preserves message and stack trace
  • error handling error propagation step errors cross-file step error preserves message and function names in stack
  • error handling retry behavior regular Error retries until success
  • error handling retry behavior FatalError fails immediately without retries
  • error handling retry behavior RetryableError respects custom retryAfter delay
  • error handling retry behavior maxRetries=0 disables retries
  • error handling catchability FatalError can be caught and detected with FatalError.is()
  • hookCleanupTestWorkflow - hook token reuse after workflow completion
  • concurrent hook token conflict - two workflows cannot use the same hook token simultaneously
  • stepFunctionPassingWorkflow - step function references can be passed as arguments (without closure vars)
  • stepFunctionWithClosureWorkflow - step function with closure variables passed as argument
  • closureVariableWorkflow - nested step functions with closure variables
  • spawnWorkflowFromStepWorkflow - spawning a child workflow using start() inside a step
  • health check (CLI) - workflow health command reports healthy endpoints
  • pathsAliasWorkflow - TypeScript path aliases resolve correctly
  • Calculator.calculate - static workflow method using static step methods from another class
  • AllInOneService.processNumber - static workflow method using sibling static step methods
  • ChainableService.processWithThis - static step methods using this to reference the class
  • thisSerializationWorkflow - step function invoked with .call() and .apply()
  • customSerializationWorkflow - custom class serialization with WORKFLOW_SERIALIZE/WORKFLOW_DESERIALIZE
  • pages router addTenWorkflow via pages router
  • pages router promiseAllWorkflow via pages router
  • pages router sleepingWorkflow via pages router

turso (40 failed):

  • addTenWorkflow
  • addTenWorkflow
  • should work with react rendering in step
  • promiseAllWorkflow
  • promiseRaceWorkflow
  • promiseAnyWorkflow
  • readableStreamWorkflow
  • hookWorkflow
  • webhookWorkflow
  • sleepingWorkflow
  • nullByteWorkflow
  • workflowAndStepMetadataWorkflow
  • outputStreamWorkflow
  • outputStreamInsideStepWorkflow - getWritable() called inside step functions
  • fetchWorkflow
  • promiseRaceStressTestWorkflow
  • error handling error propagation workflow errors nested function calls preserve message and stack trace
  • error handling error propagation workflow errors cross-file imports preserve message and stack trace
  • error handling error propagation step errors basic step error preserves message and stack trace
  • error handling error propagation step errors cross-file step error preserves message and function names in stack
  • error handling retry behavior regular Error retries until success
  • error handling retry behavior FatalError fails immediately without retries
  • error handling retry behavior RetryableError respects custom retryAfter delay
  • error handling retry behavior maxRetries=0 disables retries
  • error handling catchability FatalError can be caught and detected with FatalError.is()
  • hookCleanupTestWorkflow - hook token reuse after workflow completion
  • concurrent hook token conflict - two workflows cannot use the same hook token simultaneously
  • stepFunctionPassingWorkflow - step function references can be passed as arguments (without closure vars)
  • stepFunctionWithClosureWorkflow - step function with closure variables passed as argument
  • closureVariableWorkflow - nested step functions with closure variables
  • spawnWorkflowFromStepWorkflow - spawning a child workflow using start() inside a step
  • pathsAliasWorkflow - TypeScript path aliases resolve correctly
  • Calculator.calculate - static workflow method using static step methods from another class
  • AllInOneService.processNumber - static workflow method using sibling static step methods
  • ChainableService.processWithThis - static step methods using this to reference the class
  • thisSerializationWorkflow - step function invoked with .call() and .apply()
  • customSerializationWorkflow - custom class serialization with WORKFLOW_SERIALIZE/WORKFLOW_DESERIALIZE
  • pages router addTenWorkflow via pages router
  • pages router promiseAllWorkflow via pages router
  • pages router sleepingWorkflow via pages router

Details by Category

✅ ▲ Vercel Production
App Passed Failed Skipped
✅ astro 41 0 4
✅ example 41 0 4
✅ express 41 0 4
✅ fastify 41 0 4
✅ hono 41 0 4
✅ nextjs-turbopack 44 0 1
✅ nextjs-webpack 44 0 1
✅ nitro 41 0 4
✅ nuxt 41 0 4
✅ sveltekit 41 0 4
✅ vite 41 0 4
✅ 💻 Local Development
App Passed Failed Skipped
✅ astro-stable 41 0 4
✅ express-stable 41 0 4
✅ fastify-stable 41 0 4
✅ hono-stable 41 0 4
✅ nextjs-turbopack-stable 45 0 0
✅ nextjs-webpack-stable 45 0 0
✅ nitro-stable 41 0 4
✅ nuxt-stable 41 0 4
✅ sveltekit-stable 41 0 4
✅ vite-stable 41 0 4
✅ 📦 Local Production
App Passed Failed Skipped
✅ astro-stable 41 0 4
✅ express-stable 41 0 4
✅ fastify-stable 41 0 4
✅ hono-stable 41 0 4
✅ nextjs-turbopack-stable 45 0 0
✅ nextjs-webpack-stable 45 0 0
✅ nitro-stable 41 0 4
✅ nuxt-stable 41 0 4
✅ sveltekit-stable 41 0 4
✅ vite-stable 41 0 4
✅ 🐘 Local Postgres
App Passed Failed Skipped
✅ astro-stable 41 0 4
✅ express-stable 41 0 4
✅ fastify-stable 41 0 4
✅ hono-stable 41 0 4
✅ nextjs-turbopack-stable 45 0 0
✅ nextjs-webpack-stable 45 0 0
✅ nitro-stable 41 0 4
✅ nuxt-stable 41 0 4
✅ sveltekit-stable 41 0 4
✅ vite-stable 41 0 4
✅ 🪟 Windows
App Passed Failed Skipped
✅ nextjs-turbopack 45 0 0
❌ 🌍 Community Worlds
App Passed Failed Skipped
✅ mongodb-dev 3 0 0
❌ mongodb 5 40 0
✅ redis-dev 3 0 0
❌ redis 5 40 0
✅ starter-dev 3 0 0
❌ starter 4 41 0
✅ turso-dev 3 0 0
❌ turso 5 40 0
✅ 📋 Other
App Passed Failed Skipped
✅ e2e-local-dev-nest-stable 41 0 4
✅ e2e-local-postgres-nest-stable 41 0 4
✅ e2e-local-prod-nest-stable 41 0 4

📋 View full workflow run

github-actions bot (Contributor) commented Jan 27, 2026

📊 Benchmark Results

📈 Comparing against baseline from main branch. Green 🟢 = faster, Red 🔺 = slower.

workflow with no steps

💻 Local Development

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
💻 Local 🥇 Express 0.043s (-0.9%) 1.007s (~) 0.965s 10 1.00x
🐘 Postgres Express 0.279s (+21.6% 🔺) 1.015s (~) 0.736s 10 6.55x

▲ Production (Vercel)

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
▲ Vercel 🥇 Express 0.654s (+11.9% 🔺) 1.522s (-1.7%) 0.868s 10 1.00x

🔍 Observability: Express

workflow with 1 step

💻 Local Development

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
💻 Local 🥇 Express 1.117s (~) 2.007s (~) 0.890s 10 1.00x
🐘 Postgres Express 2.122s (-6.1% 🟢) 3.015s (~) 0.893s 10 1.90x

▲ Production (Vercel)

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
▲ Vercel 🥇 Express 3.234s (+10.2% 🔺) 3.998s (+4.6%) 0.764s 10 1.00x

🔍 Observability: Express

workflow with 10 sequential steps

💻 Local Development

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
💻 Local 🥇 Express 10.833s (~) 11.017s (~) 0.185s 5 1.00x
🐘 Postgres Express 20.443s (~) 21.036s (~) 0.593s 5 1.89x

▲ Production (Vercel)

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
▲ Vercel 🥇 Express 22.958s (-1.4%) 24.342s (+2.1%) 1.383s 5 1.00x

🔍 Observability: Express

Promise.all with 10 concurrent steps

💻 Local Development

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
💻 Local 🥇 Express 5.169s (~) 6.020s (-2.3%) 0.850s 6 1.00x
🐘 Postgres Express 30.768s (+7.8% 🔺) 31.687s (+8.8% 🔺) 0.918s 2 5.95x

▲ Production (Vercel)

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
▲ Vercel 🥇 Express 3.277s (-11.1% 🟢) 3.843s (-14.8% 🟢) 0.567s 8 1.00x

🔍 Observability: Express

Promise.all with 25 concurrent steps

💻 Local Development

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
💻 Local 🥇 Express 5.267s (-0.6%) 6.263s (~) 0.996s 5 1.00x
🐘 Postgres Express 30.557s (-7.8% 🟢) 31.135s (-6.2% 🟢) 0.578s 1 5.80x

▲ Production (Vercel)

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
▲ Vercel 🥇 Express 3.415s (+9.8% 🔺) 4.046s (+8.7% 🔺) 0.631s 8 1.00x

🔍 Observability: Express

Promise.race with 10 concurrent steps

💻 Local Development

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
💻 Local 🥇 Express 5.388s (~) 6.340s (~) 0.952s 5 1.00x
🐘 Postgres Express 33.963s (-2.7%) 34.283s (-2.6%) 0.320s 1 6.30x

▲ Production (Vercel)

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
▲ Vercel 🥇 Express 3.226s (~) 3.931s (-3.9%) 0.705s 8 1.00x

🔍 Observability: Express

Promise.race with 25 concurrent steps

💻 Local Development

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
💻 Local 🥇 Express 5.538s (+1.2%) 6.458s (~) 0.920s 5 1.00x
🐘 Postgres Express 29.509s (-9.9% 🟢) 30.154s (-8.8% 🟢) 0.645s 1 5.33x

▲ Production (Vercel)

World Framework Workflow Time Wall Time Overhead Samples vs Fastest
▲ Vercel 🥇 Express 3.060s (-32.7% 🟢) 3.813s (-29.0% 🟢) 0.753s 8 1.00x

🔍 Observability: Express

Stream Benchmarks (includes TTFB metrics)
workflow with stream

💻 Local Development

World Framework Workflow Time TTFB Slurp Wall Time Overhead Samples vs Fastest
💻 Local 🥇 Express 0.181s (-3.2%) 0.994s (~) 0.015s (+3.4%) 1.024s (~) 0.843s 10 1.00x
🐘 Postgres Express 2.231s (+4.1%) 2.814s (-3.0%) 0.000s (~) 3.017s (~) 0.786s 10 12.31x

▲ Production (Vercel)

World Framework Workflow Time TTFB Slurp Wall Time Overhead Samples vs Fastest
▲ Vercel 🥇 Express 2.872s (-5.9% 🟢) 3.217s (~) 0.122s (-85.9% 🟢) 3.790s (-18.6% 🟢) 0.919s 10 1.00x

🔍 Observability: Express

Summary

Fastest Framework by World

Winner determined by most benchmark wins

World 🥇 Fastest Framework Wins
💻 Local Express 8/8
🐘 Postgres Express 8/8
▲ Vercel Express 8/8

Fastest World by Framework

Winner determined by most benchmark wins

Framework 🥇 Fastest World Wins
Express 💻 Local 4/8

Column Definitions
  • Workflow Time: Runtime reported by workflow (completedAt - createdAt) - primary metric
  • TTFB: Time to First Byte - time from workflow start until first stream byte received (stream benchmarks only)
  • Slurp: Time from first byte to complete stream consumption (stream benchmarks only)
  • Wall Time: Total testbench time (trigger workflow + poll for result)
  • Overhead: Testbench overhead (Wall Time - Workflow Time)
  • Samples: Number of benchmark iterations run
  • vs Fastest: How much slower compared to the fastest configuration for this benchmark

Worlds:

  • 💻 Local: In-memory filesystem world (local development)
  • 🐘 Postgres: PostgreSQL database world (local development)
  • ▲ Vercel: Vercel production/preview deployment
  • 🌐 Starter: Community world (local development)
  • 🌐 Turso: Community world (local development)
  • 🌐 MongoDB: Community world (local development)
  • 🌐 Redis: Community world (local development)
  • 🌐 Jazz: Community world (local development)

📋 View full workflow run

vercel bot (Contributor) commented Jan 27, 2026

@TooTallNate (Member, Author) commented:
This stack of pull requests is managed by Graphite. Learn more about stacking.

  • Comment thread: packages/world-vercel/src/utils.ts (Outdated)
  • Comment thread: packages/core/src/serialization.ts
Copilot AI (Contributor) left a comment
Pull request overview

This pull request adds an optional writeToStreamMulti method to the World interface to optimize batch writing of multiple chunks to streams. This addresses issue #764 where LLM streaming responses were experiencing significant delays (3+ minutes for ~2000 tokens) in the Vercel world implementation due to the overhead of individual HTTP requests for each small chunk.

Changes:

  • Added optional writeToStreamMulti method to the Streamer interface for batch chunk operations
  • Implemented buffering in WorkflowServerWritableStream with a 10ms flush interval to accumulate chunks before writing
  • Implemented writeToStreamMulti in all world providers (local, postgres, vercel) with provider-specific optimizations
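The buffering approach described above could be sketched roughly as follows (class and helper names are illustrative; only the 10 ms flush interval, the abort cleanup, and the batched-write idea come from this PR):

```typescript
// Rough sketch of time-based write buffering (illustrative names).
const STREAM_FLUSH_INTERVAL_MS = 10;

type WriteMulti = (chunks: Uint8Array[]) => Promise<void>;

class BufferedWriter {
  private buffer: Uint8Array[] = [];
  private flushTimer: ReturnType<typeof setTimeout> | null = null;

  constructor(private writeMulti: WriteMulti) {}

  write(chunk: Uint8Array): void {
    this.buffer.push(chunk);
    // Start a timer on the first buffered chunk; later writes piggyback on it,
    // so many small writes within 10 ms collapse into one batched flush.
    if (this.flushTimer === null) {
      this.flushTimer = setTimeout(() => {
        this.flushTimer = null;
        void this.flush();
      }, STREAM_FLUSH_INTERVAL_MS);
    }
  }

  async flush(): Promise<void> {
    const chunks = this.buffer;
    this.buffer = [];
    if (chunks.length > 0) {
      await this.writeMulti(chunks);
    }
  }

  // On abort: cancel the timer and discard the pending buffer (prevents leaks).
  abort(): void {
    if (this.flushTimer !== null) {
      clearTimeout(this.flushTimer);
      this.flushTimer = null;
    }
    this.buffer = [];
  }
}
```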

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| packages/world/src/interfaces.ts | Adds optional writeToStreamMulti method to Streamer interface with comprehensive documentation |
| packages/core/src/serialization.ts | Implements buffering logic with 10ms flush intervals; uses writeToStreamMulti when available and falls back to sequential writes |
| packages/core/src/writable-stream.test.ts | Comprehensive test coverage for buffering, abort handling, and fallback behavior |
| packages/world-vercel/src/streamer.ts | Implements multi-chunk encoding as length-prefixed binary format with X-Stream-Multi header |
| packages/world-vercel/src/streamer.test.ts | Thorough tests for encoding logic including edge cases and unicode handling |
| packages/world-postgres/src/streamer.ts | Implements batch insert for multiple chunks with sequential notifications |
| packages/world-local/src/streamer.ts | Implements parallel writes with ordered event emission |
| packages/world-local/src/streamer.test.ts | Tests for chunk ordering, empty arrays, and mixed chunk types |
| .changeset/hip-candles-kick.md | Documents changes to all affected packages |
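For illustration, a length-prefixed binary encoding of the kind described for world-vercel might look like this (the actual wire format, including prefix width and endianness, is not shown in this PR, so the 4-byte big-endian prefix here is an assumption):

```typescript
// Hypothetical wire format: [u32 big-endian length][chunk bytes], repeated.
// The real encodeMultiChunks may differ; this only illustrates the idea.
function encodeMultiChunks(chunks: Uint8Array[]): Uint8Array {
  const total = chunks.reduce((n, c) => n + 4 + c.byteLength, 0);
  const out = new Uint8Array(total);
  const view = new DataView(out.buffer);
  let offset = 0;
  for (const chunk of chunks) {
    view.setUint32(offset, chunk.byteLength); // big-endian length prefix
    offset += 4;
    out.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return out;
}

// Server-side counterpart: walk the buffer, reading one prefix at a time.
function decodeMultiChunks(data: Uint8Array): Uint8Array[] {
  const view = new DataView(data.buffer, data.byteOffset, data.byteLength);
  const chunks: Uint8Array[] = [];
  let offset = 0;
  while (offset < data.byteLength) {
    const len = view.getUint32(offset);
    offset += 4;
    chunks.push(data.slice(offset, offset + len));
    offset += len;
  }
  return chunks;
}
```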


Comment on lines +146 to +157
// Notify for each chunk (could be batched in future if needed)
for (const chunkId of chunkIds) {
  postgres.notify(
    STREAM_TOPIC,
    JSON.stringify(
      StreamPublishMessage.encode({
        chunkId,
        streamId: name,
      })
    )
  );
}
Copilot AI commented Jan 29, 2026
Performance consideration: The batch insert is efficient, but the notifications are still sent sequentially in a loop. Consider whether postgres.notify supports batch operations or if these notifications could be sent in parallel using Promise.all() for better performance when dealing with many chunks.

Suggested change:

// Notify for each chunk in parallel
await Promise.all(
  chunkIds.map((chunkId) =>
    postgres.notify(
      STREAM_TOPIC,
      JSON.stringify(
        StreamPublishMessage.encode({
          chunkId,
          streamId: name,
        })
      )
    )
  )
);

Comment on lines +218 to +221
flushTimer = setTimeout(() => {
  flushTimer = null;
  flushPromise = flush();
}, STREAM_FLUSH_INTERVAL_MS);
Copilot AI commented Jan 29, 2026
Missing error handling: If flush() throws an error (e.g., from writeToStreamMulti or writeToStream), the error will be thrown from the setTimeout callback in an unhandled way, which could crash the process or cause silent failures. The flushPromise should be awaited somewhere or the error should be caught and handled appropriately. Consider wrapping the flush() call in a try-catch or ensuring errors propagate to the write/close methods that trigger the flush.

Comment on lines +85 to +94
// Signal to server that this is a multi-chunk batch
httpConfig.headers.set('X-Stream-Multi', 'true');

const body = encodeMultiChunks(chunks);
await fetch(getStreamUrl(name, resolvedRunId, httpConfig), {
  method: 'PUT',
  body,
  headers: httpConfig.headers,
  duplex: 'half',
});
Copilot AI commented Jan 29, 2026
The writeToStreamMulti implementation sends an X-Stream-Multi header to signal a batch write, but there's no corresponding server-side implementation in this PR to handle this header and decode the length-prefixed format. If the server doesn't recognize this header, it will likely treat the entire encoded payload as a single chunk rather than multiple chunks, breaking the streaming semantics and causing data corruption. This PR should either include the server-side changes or document that they must be deployed first.

Suggested change:

// Fallback implementation: send each chunk as an individual write
// using the same semantics as writeToStream. This avoids relying on
// any server-side support for X-Stream-Multi or custom encodings.
for (const chunk of chunks) {
  await fetch(getStreamUrl(name, resolvedRunId, httpConfig), {
    method: 'PUT',
    body: chunk,
    headers: httpConfig.headers,
    duplex: 'half',
  });
}

Comment on lines +89 to +94
await fetch(getStreamUrl(name, resolvedRunId, httpConfig), {
  method: 'PUT',
  body,
  headers: httpConfig.headers,
  duplex: 'half',
});
Copilot AI commented Jan 29, 2026
Missing error handling: The fetch request doesn't check the response status. If the server returns an error (e.g., 4xx or 5xx), it will silently fail without throwing an error. This should check res.ok similar to how readFromStream and listStreamsByRunId do, and throw an appropriate error message.

const world = getWorld();

// Buffering state for batched writes
let buffer: Uint8Array[] = [];
Copilot AI commented Jan 29, 2026
Memory usage consideration: The buffer accumulates chunks for up to 10ms before flushing. For high-throughput streams with large chunks, this could lead to significant memory accumulation. While this is acceptable for the intended use case (LLM token streaming with small chunks), consider documenting this behavior or adding a maximum buffer size limit to prevent unbounded memory growth in edge cases.

@VaguelySerious (Member) left a comment

LGTM, can we add one e2e test for this? I didn't validate the postgres solution e2e

- Add abort() handler to WorkflowServerWritableStream to clean up
  timer and discard buffer on abort (prevents leaks)
- Add comprehensive unit tests for WorkflowServerWritableStream
  buffering logic: flush timing, concurrent writes, writeToStreamMulti
  fallback, promise runId handling, and abort behavior
- Add unit tests for encodeMultiChunks in world-vercel
- Export encodeMultiChunks for testing purposes
The vi.mock() call was being hoisted and affecting other tests in serialization.test.ts. Moving it to a separate file isolates the mock.

4 participants