feat(rds): add aws_s3 postgres extension (table_import / query_export) #806
Merged
vieiralucas merged 2 commits into main on Apr 28, 2026
Conversation
Aurora-postgres parity for the third extension in the aws_lambda / aws_commons family. SQL inside RDS PostgreSQL can now import S3 objects into tables (`aws_s3.table_import_from_s3`) and export query results back to S3 (`aws_s3.query_export_to_s3`), with the same composite-typed overload pattern as `aws_lambda.invoke`. A SQL sketch of the round trip follows the list below.

- New `aws_s3` extension (control + 1.0 SQL) baked into the prebuilt fakecloud-postgres image. Functions talk to fakecloud over two new bridge endpoints, `/_fakecloud/rds/s3-import` and `/_fakecloud/rds/s3-export`.
- `aws_commons` bumped to 1.1 with the `_s3_uri_1` composite type and `create_s3_uri()` constructor, plus a 1.0->1.1 upgrade script for users of the older extension.
- Bridge handlers read/write the in-memory S3 state directly, so any bucket reachable via standard `GetObject`/`PutObject` is reachable from `aws_s3` SQL.
- E2E test (`rds_aws_s3.rs`) covers the round trip: PutObject -> table import (positional + composite overloads) -> query export -> GetObject.
- Docs and README sync the new extension surface.
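As a rough illustration, here is the shape of the positional round trip. Table, bucket, key, and region names are placeholders, and the signatures are assumed to mirror Aurora's `aws_s3` extension rather than being quoted from this PR.

```sql
-- Minimal sketch of the positional round trip; names are placeholders and
-- signatures are assumed to mirror Aurora's aws_s3, which this PR emulates.
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;  -- CASCADE also pulls in aws_commons
CREATE TABLE people (id int, name text);

-- Import s3://demo-bucket/people.csv into the table (positional overload).
SELECT aws_s3.table_import_from_s3(
  'people',        -- target table
  'id,name',       -- column list
  '(FORMAT csv)',  -- COPY options
  'demo-bucket', 'people.csv', 'us-east-1');

-- Export query results back to s3://demo-bucket/people-export.csv.
SELECT * FROM aws_s3.query_export_to_s3(
  'SELECT * FROM people ORDER BY id',
  'demo-bucket', 'people-export.csv', 'us-east-1',
  'FORMAT csv');
```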
1 issue found across 13 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="crates/fakecloud-e2e/tests/rds_aws_s3.rs">
<violation number="1" location="crates/fakecloud-e2e/tests/rds_aws_s3.rs:118">
P2: Importing the CSV into the same table a second time changes the data used by the export step. Use a scratch table for the composite overload (or export before the second import) so the exported rows stay deterministic.</violation>
</file>
Cubic flagged the second `table_import_from_s3` call appending into `people` after the first import, which polluted the table before the export step could read it deterministically. Reordered the test so the export runs against the 3-row state, and the composite overload now imports into a separate `people_scratch` table.
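Roughly, the reordered sequence as SQL (the Rust harness, CSV contents, and bucket/key names are elided or placeholders; `people` and `people_scratch` come from the thread above):

```sql
-- 1. Positional import: people holds exactly the 3 seeded rows.
SELECT aws_s3.table_import_from_s3(
  'people', 'id,name', '(FORMAT csv)',
  'demo-bucket', 'people.csv', 'us-east-1');

-- 2. Export while the table is still in that deterministic 3-row state.
SELECT * FROM aws_s3.query_export_to_s3(
  'SELECT * FROM people ORDER BY id',
  'demo-bucket', 'people-export.csv', 'us-east-1');

-- 3. Composite overload exercised against a separate scratch table, so the
--    export above never sees rows from a second import into people.
CREATE TABLE people_scratch (LIKE people);
SELECT aws_s3.table_import_from_s3(
  'people_scratch', 'id,name', '(FORMAT csv)',
  aws_commons.create_s3_uri('demo-bucket', 'people.csv', 'us-east-1'));
```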
Summary
- `aws_s3` extension: `aws_s3.table_import_from_s3` and `aws_s3.query_export_to_s3` (positional + `aws_commons._s3_uri_1` composite overloads).
- `aws_commons` bumped to 1.1 with the `_s3_uri_1` type and `create_s3_uri()` constructor, plus a 1.0->1.1 upgrade script.
- Bridge endpoints (`/_fakecloud/rds/s3-import`, `/_fakecloud/rds/s3-export`) wired straight into the in-memory S3 state.
- Extensions baked into the prebuilt `fakecloud-postgres` image.

Test plan
- `cargo clippy --workspace --all-targets -- -D warnings`
- `cargo fmt --all`
- `cargo test -p fakecloud-rds`
- `cargo test -p fakecloud-e2e --test rds_aws_s3` (Docker required; runs in CI)

Summary by cubic
Adds the RDS PostgreSQL `aws_s3` extension so SQL can import from and export to S3. Also adds server bridges and upgrades `aws_commons` to 1.1; the extensions are baked into the prebuilt `fakecloud-postgres` image.

New Features
- `aws_s3.table_import_from_s3` and `aws_s3.query_export_to_s3` (positional and `aws_commons._s3_uri_1` overloads via `aws_commons.create_s3_uri()`).
- Bridge endpoints `/_fakecloud/rds/s3-import` and `/_fakecloud/rds/s3-export` (read/write in-memory S3).
- Prebuilt `fakecloud-postgres` image includes `aws_commons` 1.1, `aws_lambda`, and `aws_s3`.

Migration
If you have `aws_commons` 1.0 installed, run `ALTER EXTENSION aws_commons UPDATE TO '1.1'` before using `aws_s3`.
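A minimal sketch of that upgrade path from psql, assuming the upgrade script and extension names ship as described above:

```sql
-- Check the installed aws_commons version (older setups report '1.0').
SELECT extversion FROM pg_extension WHERE extname = 'aws_commons';

-- Apply the 1.0 -> 1.1 upgrade script added by this change.
ALTER EXTENSION aws_commons UPDATE TO '1.1';

-- Then the new extension can be installed.
CREATE EXTENSION IF NOT EXISTS aws_s3;
```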