fix: convert BigInt to strings in Arrow IPC row.toJSON() #315
Closed
jamesbroadhead wants to merge 3 commits into main from
Conversation
Co-authored-by: Isaac
Signed-off-by: James Broadhead <jamesbroadhead@gmail.com>
The Arrow IPC handling code in this PR imports apache-arrow in packages/appkit/src/connectors/sql-warehouse/client.ts. Until now the package resolved transitively via packages/appkit-ui, which knip flags as an unlisted dependency. Declare it directly to satisfy knip and make the dependency explicit.

Signed-off-by: James Broadhead <jamesbroadhead@gmail.com>
Force-pushed from 416c654 to 9c9e409
The previous commit added _transformArrowAttachment but never called it, so serverless-warehouse responses with inline ARROW_STREAM attachments still hit the EXTERNAL_LINKS branch and dropped the data.

- Dispatch _transformDataArray to _transformArrowAttachment whenever result.attachment is present, before the legacy ARROW_STREAM check.
- Replace the shallow top-level BigInt scan with deepStringifyBigInts, which also recurses into nested objects, plain arrays, and Apache Arrow Vectors (which row.toJSON() leaves in place for LIST columns). Vectors are unwrapped via Array.from so we don't end up with a BigInt64Array that would re-throw on bigint -> string assignment.
- Wrap tableFromIPC in try/catch and surface decode failures as a bounded ExecutionError with context, instead of an opaque throw from inside apache-arrow.
- Make the destructure of response.result tolerant of an undefined result rather than throwing a TypeError.

Adds a unit test suite covering the inline-Arrow path with BigInt columns, list-of-bigint columns, the malformed-attachment error case, the EXTERNAL_LINKS fall-through, and a JSON_ARRAY no-regression check.

Signed-off-by: James Broadhead <jamesbroadhead@gmail.com>
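The deep BigInt conversion described in the second bullet can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: Arrow Vectors are approximated by duck-typing on a toArray() method so the sketch stays dependency-free, and the function name is borrowed from the commit message.

```typescript
// Hypothetical sketch of a deepStringifyBigInts-style helper. It recurses
// into nested objects, plain arrays, and Vector-like values, converting
// every bigint to a string so JSON.stringify never throws.
function deepStringifyBigInts(value: unknown): unknown {
  if (typeof value === "bigint") return value.toString();
  // Vector-like values (as row.toJSON() leaves in place for LIST columns)
  // are unwrapped with Array.from, so we never assign strings back into a
  // typed BigInt64Array.
  if (value !== null && typeof (value as any)?.toArray === "function") {
    return Array.from(
      (value as any).toArray() as Iterable<unknown>,
      deepStringifyBigInts
    );
  }
  if (Array.isArray(value)) return value.map(deepStringifyBigInts);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k,
        deepStringifyBigInts(v),
      ])
    );
  }
  return value; // strings, numbers, booleans, null pass through untouched
}

// A row shaped like what row.toJSON() yields for Int64 and LIST<Int64> columns:
const row = {
  id: BigInt("9007199254740993"),
  counts: [BigInt(1), BigInt(2)],
  meta: { total: BigInt(3) },
};
console.log(JSON.stringify(deepStringifyBigInts(row)));
// -> {"id":"9007199254740993","counts":["1","2"],"meta":{"total":"3"}}
```

Stringifying rather than downcasting to number is deliberate: Int64 values above 2^53 - 1 (like the id above) would silently lose precision as a JavaScript number.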
Closing — superseded by #256. The architectural change in #256 moved Arrow IPC decoding from server-side to client-side: the SQL warehouse connector now passes the base64 attachment through unchanged (with a 64MiB size cap), and the React hook decodes via the existing …

Closing without merging.
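The superseding pass-through approach can be sketched roughly like this. All names here are illustrative assumptions, not the actual API of #256: the point is only that the connector stops decoding IPC server-side and enforces a size cap before forwarding.

```typescript
// Hypothetical sketch of the #256-style pass-through (names are illustrative):
// the connector forwards the base64 Arrow attachment unchanged and rejects
// payloads above a 64 MiB cap, leaving IPC decoding to the client-side hook.
const MAX_ATTACHMENT_BYTES = 64 * 1024 * 1024;

function passThroughAttachment(base64Attachment: string): string {
  // Decoded size of base64 is roughly 3/4 of the encoded length.
  const approxDecodedBytes = Math.floor((base64Attachment.length * 3) / 4);
  if (approxDecodedBytes > MAX_ATTACHMENT_BYTES) {
    throw new Error(
      `Arrow attachment of ~${approxDecodedBytes} bytes exceeds the 64MiB cap`
    );
  }
  return base64Attachment; // forwarded unchanged; no server-side tableFromIPC
}

console.log(passThroughAttachment("QUJDRA==")); // small payload passes through
```

This sidesteps the BigInt serialization problem entirely: a base64 string is always JSON-safe, so the server never touches Arrow rows at all.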
Summary
row.toJSON() from apache-arrow preserves BigInt for Int64 columns. When this data is later serialized with JSON.stringify() for SSE streaming, it throws TypeError: Do not know how to serialize a BigInt. The fix converts BigInt values to strings in _transformArrowAttachment during row transformation, before the data reaches JSON serialization, whenever the response carries result.attachment.

Test plan
- No TypeError: Do not know how to serialize a BigInt during SSE streaming

Replaces #270 (fork-based PR where CI couldn't run).
This pull request was AI-assisted by Isaac.