From 893fa2fdcce471c4ab30c1f361941f6b84090ff9 Mon Sep 17 00:00:00 2001 From: Lucas Vieira Date: Mon, 27 Apr 2026 22:54:00 -0300 Subject: [PATCH] feat(rds): MySQL/MariaDB Aurora Lambda bridge + prebuilt images Aurora MySQL exposes Lambda invocation as built-in stored procedures (`mysql.lambda_async`, `mysql.lambda_sync`). fakecloud now ships the same surface for both MySQL and MariaDB: - New `fakecloud-mysql` and `fakecloud-mariadb` images that bake a small libcurl-backed UDF (`fakecloud_post`, `fakecloud_post_async`) plus an Aurora-compatible bootstrap script that creates the stored procedures on first container start. Procedures POST to `/_fakecloud/rds/lambda-invoke` against `host.docker.internal`, reusing the same bridge endpoint as the postgres `aws_lambda` extension. - `RdsRuntime::ensure_mysql_image` / `ensure_mariadb_image` mirror the existing postgres pull-first/build-fallback path; a shared `ensure_bridge_image` helper centralizes the per-tag mutex + force- rebuild env knob across all three engines. - MySQL/MariaDB containers now receive the same `FAKECLOUD_*` env vars postgres already gets so the bootstrap can render the host endpoint and account ID into the procedure bodies. - `docker-rds-images.yml` extended with an engine-axis matrix (postgres/mysql/mariadb), publishing nine version+engine combinations on each release tag. - E2E test (`rds_mysql_lambda.rs`) drives both engines: create echo Lambda, spin up the container, exercise `mysql.lambda_sync` and `mysql.lambda_async` end-to-end. 
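For reviewers: the only templating in the whole bridge is the entrypoint substituting three `FAKECLOUD_*` placeholders into the bootstrap SQL before handing off to the stock image entrypoint. A minimal std-only Rust sketch of what `fakecloud-bootstrap.sh` does with `sed` (the function name `render_bootstrap` is illustrative, not part of this patch; the defaults mirror the script's fallbacks):

```rust
/// Sketch of the placeholder substitution fakecloud-bootstrap.sh
/// performs with sed before writing the rendered SQL into
/// /docker-entrypoint-initdb.d/.
fn render_bootstrap(tmpl: &str, endpoint: &str, account_id: &str, region: &str) -> String {
    tmpl.replace("@FAKECLOUD_ENDPOINT@", endpoint)
        .replace("@FAKECLOUD_ACCOUNT_ID@", account_id)
        .replace("@FAKECLOUD_REGION@", region)
}

fn main() {
    let tmpl = "DO fakecloud_post_async('@FAKECLOUD_ENDPOINT@/_fakecloud/rds/lambda-invoke', body); -- @FAKECLOUD_REGION@";
    let sql = render_bootstrap(
        tmpl,
        "http://host.docker.internal:4566", // FAKECLOUD_ENDPOINT default
        "000000000000",                     // FAKECLOUD_ACCOUNT_ID default
        "us-east-1",                        // FAKECLOUD_REGION default
    );
    // After rendering, the procedure body carries the full bridge URL
    // and no placeholder markers remain.
    assert!(sql.contains("http://host.docker.internal:4566/_fakecloud/rds/lambda-invoke"));
    assert!(!sql.contains('@'));
    println!("{sql}");
}
```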
--- .github/workflows/docker-rds-images.yml | 81 +++-- Cargo.lock | 1 + README.md | 4 +- crates/fakecloud-e2e/Cargo.toml | 1 + .../fakecloud-e2e/tests/rds_mysql_lambda.rs | 102 +++++++ .../mariadb/99-fakecloud-bootstrap.sql.tmpl | 52 ++++ .../fakecloud-rds/assets/mariadb/Dockerfile | 37 +++ .../assets/mariadb/fakecloud-bootstrap.sh | 21 ++ .../assets/mariadb/fakecloud_udf.c | 172 +++++++++++ .../mysql/99-fakecloud-bootstrap.sql.tmpl | 52 ++++ crates/fakecloud-rds/assets/mysql/Dockerfile | 47 +++ .../assets/mysql/fakecloud-bootstrap.sh | 21 ++ .../assets/mysql/fakecloud_udf.c | 172 +++++++++++ crates/fakecloud-rds/src/runtime.rs | 289 +++++++++++++----- website/content/docs/services/rds.md | 24 +- 15 files changed, 963 insertions(+), 113 deletions(-) create mode 100644 crates/fakecloud-e2e/tests/rds_mysql_lambda.rs create mode 100644 crates/fakecloud-rds/assets/mariadb/99-fakecloud-bootstrap.sql.tmpl create mode 100644 crates/fakecloud-rds/assets/mariadb/Dockerfile create mode 100644 crates/fakecloud-rds/assets/mariadb/fakecloud-bootstrap.sh create mode 100644 crates/fakecloud-rds/assets/mariadb/fakecloud_udf.c create mode 100644 crates/fakecloud-rds/assets/mysql/99-fakecloud-bootstrap.sql.tmpl create mode 100644 crates/fakecloud-rds/assets/mysql/Dockerfile create mode 100644 crates/fakecloud-rds/assets/mysql/fakecloud-bootstrap.sh create mode 100644 crates/fakecloud-rds/assets/mysql/fakecloud_udf.c diff --git a/.github/workflows/docker-rds-images.yml b/.github/workflows/docker-rds-images.yml index df6aff46..9a287454 100644 --- a/.github/workflows/docker-rds-images.yml +++ b/.github/workflows/docker-rds-images.yml @@ -1,25 +1,22 @@ name: RDS support images -# Builds and (on tag pushes) publishes the prebuilt postgres image used -# by RdsRuntime. Runtime side: `RdsRuntime::ensure_postgres_image` tries -# to pull `ghcr.io//fakecloud-postgres:-` +# Builds and (on tag pushes) publishes the prebuilt postgres / mysql / +# mariadb images used by RdsRuntime. 
Runtime side: the matching +# `ensure_*_image` helper tries to pull +# `ghcr.io/<owner>/fakecloud-<engine>:<version>-<fakecloud_version>` # before falling back to a local build. # # Triggers: -# - `push: tags: ["v*"]` — full release path: builds 4 majors × 2 arches, -# pushes per-arch by digest, merges into `<pg_major>-<version>` and a -# rolling `<pg_major>` tag. +# - `push: tags: ["v*"]` — full release path: builds every engine × +# version × arch combination, pushes per-arch by digest, merges into +# `<version>-<fakecloud_version>` and a rolling `<version>` tag. # - `pull_request` (paths-filtered) — dry-run that exercises the build # for both arches without pushing. Catches Dockerfile typos and # workflow syntax regressions before we ever cut a release. -# - `workflow_dispatch` — pushes images tagged `<pg_major>-dev-<sha>` so we -# can validate the full publish + manifest-merge path against ghcr.io -# end-to-end without polluting release tags. Rolling `<pg_major>` is NOT -# updated in this mode. -# -# Mirrors the structure of docker.yml: per-arch build with `push-by-digest`, -# then a per-major merge job that creates the manifest list with the -# human-readable tags. +# - `workflow_dispatch` — pushes images tagged `<version>-dev-<sha>` so +# we can validate the full publish + manifest-merge path against +# ghcr.io end-to-end without polluting release tags. Rolling +# `<version>` is NOT updated in this mode. on: push: @@ -28,18 +25,31 @@ on: paths: - .github/workflows/docker-rds-images.yml - crates/fakecloud-rds/assets/postgres/** + - crates/fakecloud-rds/assets/mysql/** + - crates/fakecloud-rds/assets/mariadb/** workflow_dispatch: env: REGISTRY: ghcr.io - IMAGE_BASE: ghcr.io/${{ github.repository_owner }}/fakecloud-postgres jobs: build: strategy: fail-fast: false matrix: - pg_version: ["13", "14", "15", "16"] + target: + # Postgres majors. + - { engine: postgres, version: "13", build_arg: "PG_VERSION" } + - { engine: postgres, version: "14", build_arg: "PG_VERSION" } + - { engine: postgres, version: "15", build_arg: "PG_VERSION" } + - { engine: postgres, version: "16", build_arg: "PG_VERSION" } + # MySQL majors.
+ - { engine: mysql, version: "5.7", build_arg: "MYSQL_VERSION" } + - { engine: mysql, version: "8.0", build_arg: "MYSQL_VERSION" } + # MariaDB majors. + - { engine: mariadb, version: "10.6", build_arg: "MARIADB_VERSION" } + - { engine: mariadb, version: "10.11", build_arg: "MARIADB_VERSION" } + - { engine: mariadb, version: "11.4", build_arg: "MARIADB_VERSION" } platform: - linux/amd64 - linux/arm64 @@ -52,6 +62,8 @@ jobs: permissions: contents: read packages: write + env: + IMAGE_BASE: ghcr.io/${{ github.repository_owner }}/fakecloud-${{ matrix.target.engine }} steps: - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4 @@ -71,12 +83,12 @@ jobs: id: build uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6 with: - context: crates/fakecloud-rds/assets/postgres + context: crates/fakecloud-rds/assets/${{ matrix.target.engine }} build-args: | - PG_VERSION=${{ matrix.pg_version }} + ${{ matrix.target.build_arg }}=${{ matrix.target.version }} platforms: ${{ matrix.platform }} - cache-from: type=gha,scope=postgres-${{ matrix.pg_version }}-${{ matrix.platform }} - cache-to: type=gha,scope=postgres-${{ matrix.pg_version }}-${{ matrix.platform }},mode=max + cache-from: type=gha,scope=${{ matrix.target.engine }}-${{ matrix.target.version }}-${{ matrix.platform }} + cache-to: type=gha,scope=${{ matrix.target.engine }}-${{ matrix.target.version }}-${{ matrix.platform }},mode=max outputs: | type=image,name=${{ env.IMAGE_BASE }},push-by-digest=true,name-canonical=true,push=${{ github.event_name != 'pull_request' }} @@ -91,7 +103,7 @@ jobs: if: github.event_name != 'pull_request' uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4 with: - name: digest-postgres-${{ matrix.pg_version }}-${{ matrix.runner }} + name: digest-${{ matrix.target.engine }}-${{ matrix.target.version }}-${{ matrix.runner }} path: /tmp/digests/* if-no-files-found: error retention-days: 1 @@ -106,14 +118,25 @@ jobs: strategy: fail-fast: false 
matrix: - pg_version: ["13", "14", "15", "16"] + target: + - { engine: postgres, version: "13" } + - { engine: postgres, version: "14" } + - { engine: postgres, version: "15" } + - { engine: postgres, version: "16" } + - { engine: mysql, version: "5.7" } + - { engine: mysql, version: "8.0" } + - { engine: mariadb, version: "10.6" } + - { engine: mariadb, version: "10.11" } + - { engine: mariadb, version: "11.4" } + env: + IMAGE_BASE: ghcr.io/${{ github.repository_owner }}/fakecloud-${{ matrix.target.engine }} steps: - name: Download digests uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4 with: path: /tmp/digests - pattern: digest-postgres-${{ matrix.pg_version }}-* + pattern: digest-${{ matrix.target.engine }}-${{ matrix.target.version }}-* merge-multiple: true - name: Set up Docker Buildx @@ -135,15 +158,15 @@ jobs: uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5 with: images: ${{ env.IMAGE_BASE }} - # On a real release tag (`v*`): pinned `<pg_major>-<version>` plus - # a rolling `<pg_major>` tag. - # On `workflow_dispatch`: a one-off `<pg_major>-dev-<sha>` + # On a real release tag (`v*`): pinned `<version>-<fakecloud_version>` plus + # a rolling `<version>` tag. + # On `workflow_dispatch`: a one-off `<version>-dev-<sha>` # tag so we can validate the full publish + manifest-merge # path end-to-end without overwriting any release tag.
tags: | - type=semver,pattern=${{ matrix.pg_version }}-{{version}} - type=raw,value=${{ matrix.pg_version }},enable=${{ startsWith(github.ref, 'refs/tags/v') }} - type=raw,value=${{ matrix.pg_version }}-dev-${{ steps.sha.outputs.short }},enable=${{ github.event_name == 'workflow_dispatch' }} + type=semver,pattern=${{ matrix.target.version }}-{{version}} + type=raw,value=${{ matrix.target.version }},enable=${{ startsWith(github.ref, 'refs/tags/v') }} + type=raw,value=${{ matrix.target.version }}-dev-${{ steps.sha.outputs.short }},enable=${{ github.event_name == 'workflow_dispatch' }} - name: Create manifest list and push working-directory: /tmp/digests diff --git a/Cargo.lock b/Cargo.lock index 69f77a90..25369e51 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2740,6 +2740,7 @@ dependencies = [ "fakecloud-testkit", "flate2", "mail-parser", + "mysql_async", "p256 0.13.2", "reqwest", "rsa", diff --git a/README.md b/README.md index 184de192..1e7cbd0b 100644 --- a/README.md +++ b/README.md @@ -69,7 +69,7 @@ Other install options (Cargo, Docker, Docker Compose, source) are documented at | SES (v2 + v1 inbound) | 110 | Sending, templates, DKIM, **real receipt rule execution** | | Cognito User Pools | 122 | Pools, clients, MFA, identity providers, full auth flows; verification email -> SES, SMS -> SNS, all 12 Lambda triggers | | Kinesis | 39 | Streams, records, shard iterators, retention | -| RDS | 163 | Real Postgres, MySQL, MariaDB, Oracle, SQL Server, Db2 via Docker; lifecycle ops emit `aws.rds` EventBridge events; PostgreSQL `aws_lambda` + `aws_s3` extensions invoke fakecloud Lambda and import/export S3 objects from SQL | +| RDS | 163 | Real Postgres, MySQL, MariaDB, Oracle, SQL Server, Db2 via Docker; lifecycle ops emit `aws.rds` EventBridge events; PostgreSQL `aws_lambda` + `aws_s3` extensions and Aurora-compatible MySQL/MariaDB `mysql.lambda_async`/`mysql.lambda_sync` invoke fakecloud Lambda + import/export S3 objects from SQL | | ElastiCache | 75 | Real Redis, Valkey, 
Memcached via Docker | | Step Functions | 37 | Full ASL interpreter, Lambda/SQS/SNS/EventBridge/DynamoDB tasks | | API Gateway v1 | 124 | REST APIs, resources, methods, integrations (`MOCK`/`HTTP`/`HTTP_PROXY`/`AWS_PROXY` Lambda), deployments, stages, API keys, usage plans, authorizers, models, request validators, VPC links, domain names, base path mappings, client certs, gateway responses, docs, tags | @@ -119,7 +119,7 @@ Full guides: [fakecloud.dev/docs/guides](https://fakecloud.dev/docs/guides). | Cognito User Pools | 122 operations | [Paid only](https://docs.localstack.cloud/references/licensing/) | | SES v2 | Full send + templates + DKIM + suppression | [Paid only](https://docs.localstack.cloud/references/licensing/) | | SES inbound email | Real receipt rule action execution | [Stored but never executed](https://docs.localstack.cloud/user-guide/aws/ses/) | -| RDS | 163 operations, PostgreSQL/MySQL/MariaDB/Oracle/SQL Server/Db2 via Docker, PostgreSQL `aws_lambda` + `aws_s3` extensions | [Paid only](https://docs.localstack.cloud/references/licensing/) | +| RDS | 163 operations, PostgreSQL/MySQL/MariaDB/Oracle/SQL Server/Db2 via Docker, PostgreSQL `aws_lambda` + `aws_s3` extensions, Aurora-compatible MySQL/MariaDB `mysql.lambda_async`/`mysql.lambda_sync` | [Paid only](https://docs.localstack.cloud/references/licensing/) | | ElastiCache | 75 operations, Redis, Valkey, and Memcached via Docker | [Paid only](https://docs.localstack.cloud/references/licensing/) | | API Gateway v1 | 124 operations — REST APIs incl. 
real Lambda proxy data plane | [Paid only](https://docs.localstack.cloud/references/licensing/) | | API Gateway v2 | 103 operations — HTTP APIs + developer portals | [Paid only](https://docs.localstack.cloud/references/licensing/) | diff --git a/crates/fakecloud-e2e/Cargo.toml b/crates/fakecloud-e2e/Cargo.toml index 177abc08..062094a3 100644 --- a/crates/fakecloud-e2e/Cargo.toml +++ b/crates/fakecloud-e2e/Cargo.toml @@ -48,6 +48,7 @@ aws-sdk-applicationautoscaling = "1" aws-sdk-wafv2 = "1" aws-sdk-athena = "1" tokio-postgres = "0.7" +mysql_async = "0.34" aws-smithy-types = "1" aws-credential-types = "1" aws-types = "1" diff --git a/crates/fakecloud-e2e/tests/rds_mysql_lambda.rs b/crates/fakecloud-e2e/tests/rds_mysql_lambda.rs new file mode 100644 index 00000000..466aae08 --- /dev/null +++ b/crates/fakecloud-e2e/tests/rds_mysql_lambda.rs @@ -0,0 +1,102 @@ +//! End-to-end tests for the Aurora-compatible MySQL/MariaDB +//! `mysql.lambda_async` / `mysql.lambda_sync` stored procedures +//! provided by the prebuilt `fakecloud-mysql` and `fakecloud-mariadb` +//! images. Each test creates a Lambda, spins up the engine container, +//! and exercises both the sync and async invocation paths through the +//! libcurl-backed UDF + bridge endpoint round trip. 
+ +mod helpers; + +use std::io::Write; + +use aws_sdk_lambda::primitives::Blob; +use helpers::TestServer; +use mysql_async::prelude::*; + +fn make_echo_zip() -> Vec<u8> { + let buf = Vec::new(); + let cursor = std::io::Cursor::new(buf); + let mut writer = zip::ZipWriter::new(cursor); + let options = zip::write::SimpleFileOptions::default(); + writer.start_file("index.py", options).unwrap(); + writer + .write_all(b"def handler(event, context):\n return event\n") + .unwrap(); + let cursor = writer.finish().unwrap(); + cursor.into_inner() +} + +async fn run_lambda_round_trip(engine: &str, engine_version: &str, db_id: &str) { + let server = TestServer::start_with_env(&[("FAKECLOUD_REBUILD_POSTGRES_IMAGE", "1")]).await; + let lambda = server.lambda_client().await; + let rds = server.rds_client().await; + + lambda + .create_function() + .function_name("echo") + .runtime(aws_sdk_lambda::types::Runtime::Python312) + .role("arn:aws:iam::000000000000:role/test-role") + .handler("index.handler") + .code( + aws_sdk_lambda::types::FunctionCode::builder() + .zip_file(Blob::new(make_echo_zip())) + .build(), + ) + .send() + .await + .expect("create echo lambda"); + + rds.create_db_instance() + .db_instance_identifier(db_id) + .allocated_storage(20) + .db_instance_class("db.t3.micro") + .engine(engine) + .engine_version(engine_version) + .master_username("admin") + .master_user_password("secret123") + .db_name("appdb") + .send() + .await + .expect("create db instance"); + + let instance = helpers::wait_for_db_available(&rds, db_id, 360).await; + let endpoint = instance.endpoint().expect("endpoint"); + let host = endpoint.address().expect("address").to_string(); + let port = endpoint.port().expect("port") as u16; + + let opts = mysql_async::OptsBuilder::default() + .ip_or_hostname(host) + .tcp_port(port) + .user(Some("admin")) + .pass(Some("secret123")) + .db_name(Some("appdb")); + let mut conn = mysql_async::Conn::new(opts) + .await + .expect("connect to mysql"); + + // Sync invoke:
payload should round-trip through the bridge. + let row: Option<String> = conn + .query_first("SELECT mysql.lambda_sync('echo', '{\"hello\":\"world\"}') AS payload") + .await + .expect("invoke lambda_sync"); + let payload_json = row.expect("payload"); + let parsed: serde_json::Value = serde_json::from_str(&payload_json).unwrap(); + assert_eq!(parsed, serde_json::json!({"hello": "world"})); + + // Async invoke: returns nothing; assert no error. + conn.query_drop("CALL mysql.lambda_async('echo', '{\"async\":true}')") + .await + .expect("invoke lambda_async"); + + let _ = conn.disconnect().await; +} + +#[tokio::test] +async fn aws_lambda_bridge_mysql_round_trip() { + run_lambda_round_trip("mysql", "8.0", "mysql-lambda-db").await; +} + +#[tokio::test] +async fn aws_lambda_bridge_mariadb_round_trip() { + run_lambda_round_trip("mariadb", "10.11", "mariadb-lambda-db").await; +} diff --git a/crates/fakecloud-rds/assets/mariadb/99-fakecloud-bootstrap.sql.tmpl b/crates/fakecloud-rds/assets/mariadb/99-fakecloud-bootstrap.sql.tmpl new file mode 100644 index 00000000..a49ed1a5 --- /dev/null +++ b/crates/fakecloud-rds/assets/mariadb/99-fakecloud-bootstrap.sql.tmpl @@ -0,0 +1,52 @@ +-- fakecloud Aurora-compatible Lambda bridge for MySQL/MariaDB. +-- Loaded by the prebuilt fakecloud-mysql / fakecloud-mariadb image +-- on first container start. Renders FAKECLOUD_* env vars from the +-- entrypoint into baked-in URL/account values; SQL never has to know +-- the host or port.
+ +CREATE FUNCTION IF NOT EXISTS fakecloud_post RETURNS STRING SONAME 'fakecloud_udf.so'; +CREATE FUNCTION IF NOT EXISTS fakecloud_post_async RETURNS INTEGER SONAME 'fakecloud_udf.so'; + +DELIMITER $$ + +DROP PROCEDURE IF EXISTS mysql.lambda_async $$ +CREATE PROCEDURE mysql.lambda_async(IN function_name TEXT, IN payload TEXT) +BEGIN + DECLARE body TEXT; + SET body = JSON_OBJECT( + 'function_name', function_name, + 'payload', CAST(IFNULL(payload, 'null') AS JSON), + 'invocation_type', 'Event', + 'region', '@FAKECLOUD_REGION@' + ); + DO fakecloud_post_async( + '@FAKECLOUD_ENDPOINT@/_fakecloud/rds/lambda-invoke', + body + ); +END $$ + +DROP FUNCTION IF EXISTS mysql.lambda_sync $$ +CREATE FUNCTION mysql.lambda_sync(function_name TEXT, payload TEXT) +RETURNS TEXT +DETERMINISTIC +BEGIN + DECLARE body TEXT; + DECLARE result TEXT; + SET body = JSON_OBJECT( + 'function_name', function_name, + 'payload', CAST(IFNULL(payload, 'null') AS JSON), + 'invocation_type', 'RequestResponse', + 'region', '@FAKECLOUD_REGION@' + ); + SET result = fakecloud_post( + '@FAKECLOUD_ENDPOINT@/_fakecloud/rds/lambda-invoke', + body, + 300000 + ); + -- Bridge response is `{ status_code, payload, ... }`. Strip down to + -- the payload JSON so callers see the same shape Aurora returns + -- from `mysql.lambda_sync` (a plain JSON value, not the wrapper). + RETURN JSON_EXTRACT(result, '$.payload'); +END $$ + +DELIMITER ; diff --git a/crates/fakecloud-rds/assets/mariadb/Dockerfile b/crates/fakecloud-rds/assets/mariadb/Dockerfile new file mode 100644 index 00000000..8fd196c4 --- /dev/null +++ b/crates/fakecloud-rds/assets/mariadb/Dockerfile @@ -0,0 +1,37 @@ +# Built and pushed on each fakecloud release tag by +# .github/workflows/docker-rds-images.yml as +# ghcr.io/faiscadev/fakecloud-mariadb:<mariadb_version>-<fakecloud_version> +# (plus a rolling :<mariadb_version> tag). RdsRuntime::ensure_mariadb_image +# tries to pull that tag first and falls back to building from this +# Dockerfile locally when the pull fails.
+ +ARG MARIADB_VERSION=10.11 +FROM mariadb:${MARIADB_VERSION} + +USER root +RUN apt-get update \ + && apt-get install -y --no-install-recommends \ + gcc make libcurl4-openssl-dev libmariadb-dev libc6-dev ca-certificates curl \ + && rm -rf /var/lib/apt/lists/* + +COPY fakecloud_udf.c /tmp/fakecloud_udf.c +COPY fakecloud-bootstrap.sh /usr/local/bin/fakecloud-bootstrap.sh +COPY 99-fakecloud-bootstrap.sql.tmpl /tmp/99-fakecloud-bootstrap.sql.tmpl +RUN chmod +x /usr/local/bin/fakecloud-bootstrap.sh + +# MariaDB ships mysql_config-style headers via libmariadb-dev. The +# plugin dir lives at /usr/lib/mysql/plugin (or under the mariadb +# tree on some images); honor whatever mysql_config reports first. +RUN PLUGIN_DIR="$(mariadb_config --plugindir 2>/dev/null \ + || mysql_config --plugindir 2>/dev/null \ + || echo /usr/lib/mysql/plugin)" \ + && mkdir -p "$PLUGIN_DIR" \ + && CFLAGS="$(mariadb_config --cflags 2>/dev/null \ + || mysql_config --cflags 2>/dev/null \ + || echo -I/usr/include/mariadb)" \ + && gcc -O2 -fPIC -shared $CFLAGS -o "$PLUGIN_DIR/fakecloud_udf.so" \ + /tmp/fakecloud_udf.c -lcurl -lpthread \ + && rm /tmp/fakecloud_udf.c + +ENTRYPOINT ["fakecloud-bootstrap.sh"] +CMD ["mariadbd"] diff --git a/crates/fakecloud-rds/assets/mariadb/fakecloud-bootstrap.sh b/crates/fakecloud-rds/assets/mariadb/fakecloud-bootstrap.sh new file mode 100644 index 00000000..e63fe0be --- /dev/null +++ b/crates/fakecloud-rds/assets/mariadb/fakecloud-bootstrap.sh @@ -0,0 +1,21 @@ +#!/usr/bin/env bash +# Render the FAKECLOUD_* env vars into the bootstrap SQL and drop the +# result into the standard mysql initdb directory before delegating to +# the upstream mysql entrypoint. The official image only sources files +# from /docker-entrypoint-initdb.d/ on first start (when the data dir +# is empty), which is exactly when we want the procedures created. 
+set -euo pipefail + +ENDPOINT="${FAKECLOUD_ENDPOINT:-http://host.docker.internal:4566}" +ACCOUNT_ID="${FAKECLOUD_ACCOUNT_ID:-000000000000}" +REGION="${FAKECLOUD_REGION:-us-east-1}" + +mkdir -p /docker-entrypoint-initdb.d +sed \ + -e "s|@FAKECLOUD_ENDPOINT@|${ENDPOINT}|g" \ + -e "s|@FAKECLOUD_ACCOUNT_ID@|${ACCOUNT_ID}|g" \ + -e "s|@FAKECLOUD_REGION@|${REGION}|g" \ + /tmp/99-fakecloud-bootstrap.sql.tmpl \ + > /docker-entrypoint-initdb.d/99-fakecloud-bootstrap.sql + +exec docker-entrypoint.sh "$@" diff --git a/crates/fakecloud-rds/assets/mariadb/fakecloud_udf.c b/crates/fakecloud-rds/assets/mariadb/fakecloud_udf.c new file mode 100644 index 00000000..23d6ee6d --- /dev/null +++ b/crates/fakecloud-rds/assets/mariadb/fakecloud_udf.c @@ -0,0 +1,172 @@ +/* + * fakecloud_udf - tiny MySQL UDF that POSTs JSON to a fakecloud bridge + * endpoint and returns the response body. Loaded by the prebuilt + * fakecloud-mariadb image so that Aurora-compatible stored procedures + * (mysql.lambda_sync, mysql.lambda_async) can call into fakecloud + * Lambda from inside the DB container. + * + * Two functions are exported: + * + * fakecloud_post(url TEXT, body TEXT, timeout_ms INT) RETURNS TEXT + * Synchronous POST. Returns the response body (or NULL on network + * failure). A NULL body is sent as an empty string; a missing + * timeout defaults to 5000 ms. + * + * fakecloud_post_async(url TEXT, body TEXT) RETURNS INT + * Spawns a detached worker that performs the POST in the background + * and returns 0 immediately. Used by `mysql.lambda_async`. The + * response is discarded. + * + * Build: gcc -O2 -fPIC -shared -o fakecloud_udf.so fakecloud_udf.c -lcurl -lpthread + * Install: copy the .so into the MySQL plugin dir (`SHOW VARIABLES LIKE + * 'plugin_dir'`) and `CREATE FUNCTION`.
*/ + +#include <mysql.h> +#include <stdlib.h> +#include <string.h> +#include <curl/curl.h> +#include <pthread.h> + +/* ── shared helpers ─────────────────────────────────────────────────── */ + +struct curl_buf { + char *data; + size_t len; +}; + +static size_t curl_write_cb(void *ptr, size_t size, size_t nmemb, void *userdata) { + size_t total = size * nmemb; + struct curl_buf *buf = (struct curl_buf *)userdata; + char *grown = (char *)realloc(buf->data, buf->len + total + 1); + if (!grown) return 0; + buf->data = grown; + memcpy(buf->data + buf->len, ptr, total); + buf->len += total; + buf->data[buf->len] = '\0'; + return total; +} + +static char *do_post(const char *url, const char *body, long timeout_ms, + size_t *out_len) { + CURL *curl = curl_easy_init(); + if (!curl) return NULL; + struct curl_buf buf = { NULL, 0 }; + struct curl_slist *headers = NULL; + headers = curl_slist_append(headers, "Content-Type: application/json"); + curl_easy_setopt(curl, CURLOPT_URL, url); + curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body ? body : ""); + curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)(body ?
strlen(body) : 0)); + curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); + curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, curl_write_cb); + curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf); + curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, timeout_ms); + curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L); + CURLcode res = curl_easy_perform(curl); + curl_slist_free_all(headers); + curl_easy_cleanup(curl); + if (res != CURLE_OK) { + free(buf.data); + if (out_len) *out_len = 0; + return NULL; + } + if (out_len) *out_len = buf.len; + return buf.data; +} + +/* ── fakecloud_post ─────────────────────────────────────────────────── */ + +bool fakecloud_post_init(UDF_INIT *initid, UDF_ARGS *args, char *message) { + if (args->arg_count < 2 || args->arg_count > 3) { + strcpy(message, "fakecloud_post(url, body[, timeout_ms])"); + return 1; + } + args->arg_type[0] = STRING_RESULT; + args->arg_type[1] = STRING_RESULT; + if (args->arg_count == 3) args->arg_type[2] = INT_RESULT; + initid->maybe_null = 1; + initid->ptr = NULL; + return 0; +} + +void fakecloud_post_deinit(UDF_INIT *initid) { + free(initid->ptr); +} + +char *fakecloud_post(UDF_INIT *initid, UDF_ARGS *args, char *result, + unsigned long *length, char *is_null, char *error) { + const char *url = args->args[0]; + const char *body = args->args[1]; + long timeout_ms = 5000; + if (args->arg_count == 3 && args->args[2]) + timeout_ms = *((long long *)args->args[2]); + if (!url) { + *is_null = 1; + return NULL; + } + size_t len = 0; + char *resp = do_post(url, body, timeout_ms, &len); + if (!resp) { + *is_null = 1; + return NULL; + } + /* Free the previous row's response before storing the new one; + otherwise every row after the first leaks its buffer. */ + free(initid->ptr); + initid->ptr = resp; + *length = (unsigned long)len; + return resp; +} + +/* ── fakecloud_post_async ───────────────────────────────────────────── */ + +struct async_args { + char *url; + char *body; +}; + +static void *async_worker(void *p) { + struct async_args *a = (struct async_args *)p; + char *resp = do_post(a->url, a->body, 30000, NULL); + free(resp); + free(a->url); + free(a->body); + free(a); +
return NULL; +} + +bool fakecloud_post_async_init(UDF_INIT *initid, UDF_ARGS *args, char *message) { + if (args->arg_count != 2) { + strcpy(message, "fakecloud_post_async(url, body)"); + return 1; + } + args->arg_type[0] = STRING_RESULT; + args->arg_type[1] = STRING_RESULT; + initid->maybe_null = 0; + return 0; +} + +void fakecloud_post_async_deinit(UDF_INIT *initid) { + (void)initid; +} + +long long fakecloud_post_async(UDF_INIT *initid, UDF_ARGS *args, char *is_null, + char *error) { + (void)initid; (void)is_null; (void)error; + if (!args->args[0]) return -1; + struct async_args *a = (struct async_args *)malloc(sizeof(*a)); + if (!a) return -1; + a->url = strdup(args->args[0]); + a->body = args->args[1] ? strdup(args->args[1]) : strdup(""); + if (!a->url || !a->body) { + free(a->url); free(a->body); free(a); + return -1; + } + pthread_t tid; + pthread_attr_t attr; + pthread_attr_init(&attr); + pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED); + int rc = pthread_create(&tid, &attr, async_worker, a); + pthread_attr_destroy(&attr); + if (rc != 0) { + free(a->url); free(a->body); free(a); + return -1; + } + return 0; +} diff --git a/crates/fakecloud-rds/assets/mysql/99-fakecloud-bootstrap.sql.tmpl b/crates/fakecloud-rds/assets/mysql/99-fakecloud-bootstrap.sql.tmpl new file mode 100644 index 00000000..a49ed1a5 --- /dev/null +++ b/crates/fakecloud-rds/assets/mysql/99-fakecloud-bootstrap.sql.tmpl @@ -0,0 +1,52 @@ +-- fakecloud Aurora-compatible Lambda bridge for MySQL/MariaDB. +-- Loaded by the prebuilt fakecloud-mysql / fakecloud-mariadb image +-- on first container start. Renders FAKECLOUD_* env vars from the +-- entrypoint into baked-in URL/account values; SQL never has to know +-- the host or port. 
+ +CREATE FUNCTION IF NOT EXISTS fakecloud_post RETURNS STRING SONAME 'fakecloud_udf.so'; +CREATE FUNCTION IF NOT EXISTS fakecloud_post_async RETURNS INTEGER SONAME 'fakecloud_udf.so'; + +DELIMITER $$ + +DROP PROCEDURE IF EXISTS mysql.lambda_async $$ +CREATE PROCEDURE mysql.lambda_async(IN function_name TEXT, IN payload TEXT) +BEGIN + DECLARE body TEXT; + SET body = JSON_OBJECT( + 'function_name', function_name, + 'payload', CAST(IFNULL(payload, 'null') AS JSON), + 'invocation_type', 'Event', + 'region', '@FAKECLOUD_REGION@' + ); + DO fakecloud_post_async( + '@FAKECLOUD_ENDPOINT@/_fakecloud/rds/lambda-invoke', + body + ); +END $$ + +DROP FUNCTION IF EXISTS mysql.lambda_sync $$ +CREATE FUNCTION mysql.lambda_sync(function_name TEXT, payload TEXT) +RETURNS TEXT +DETERMINISTIC +BEGIN + DECLARE body TEXT; + DECLARE result TEXT; + SET body = JSON_OBJECT( + 'function_name', function_name, + 'payload', CAST(IFNULL(payload, 'null') AS JSON), + 'invocation_type', 'RequestResponse', + 'region', '@FAKECLOUD_REGION@' + ); + SET result = fakecloud_post( + '@FAKECLOUD_ENDPOINT@/_fakecloud/rds/lambda-invoke', + body, + 300000 + ); + -- Bridge response is `{ status_code, payload, ... }`. Strip down to + -- the payload JSON so callers see the same shape Aurora returns + -- from `mysql.lambda_sync` (a plain JSON value, not the wrapper). + RETURN JSON_EXTRACT(result, '$.payload'); +END $$ + +DELIMITER ; diff --git a/crates/fakecloud-rds/assets/mysql/Dockerfile b/crates/fakecloud-rds/assets/mysql/Dockerfile new file mode 100644 index 00000000..6e92d728 --- /dev/null +++ b/crates/fakecloud-rds/assets/mysql/Dockerfile @@ -0,0 +1,47 @@ +# Built and pushed on each fakecloud release tag by +# .github/workflows/docker-rds-images.yml as +# ghcr.io/faiscadev/fakecloud-mysql:<mysql_version>-<fakecloud_version> +# (plus a rolling :<mysql_version> tag). RdsRuntime::ensure_mysql_image tries +# to pull that tag first and falls back to building from this Dockerfile +# locally when the pull fails.
+# +# Bakes a small libcurl-backed UDF (`fakecloud_post`, `fakecloud_post_async`) +# plus Aurora-compatible `mysql.lambda_sync`/`mysql.lambda_async` stored +# procedures so SQL inside an RDS-managed MySQL instance can invoke +# fakecloud Lambda functions. + +ARG MYSQL_VERSION=8.0 +FROM mysql:${MYSQL_VERSION} + +USER root +RUN microdnf install -y \ + gcc \ + make \ + libcurl-devel \ + glibc-devel \ + || ( apt-get update \ + && apt-get install -y --no-install-recommends \ + gcc make libcurl4-openssl-dev libc6-dev ca-certificates curl \ + && rm -rf /var/lib/apt/lists/* ) + +# UDF source + bootstrap SQL. +COPY fakecloud_udf.c /tmp/fakecloud_udf.c +COPY fakecloud-bootstrap.sh /usr/local/bin/fakecloud-bootstrap.sh +COPY 99-fakecloud-bootstrap.sql.tmpl /tmp/99-fakecloud-bootstrap.sql.tmpl +RUN chmod +x /usr/local/bin/fakecloud-bootstrap.sh + +# Compile the UDF against the running MySQL's plugin headers and drop +# it into the standard plugin dir. The runtime image ships +# mysql_config (or pkg-config + libmysqlclient-dev when on Debian). +RUN PLUGIN_DIR="$(mysql_config --plugindir 2>/dev/null || echo /usr/lib/mysql/plugin)" \ + && mkdir -p "$PLUGIN_DIR" \ + && CFLAGS="$(mysql_config --cflags 2>/dev/null || echo -I/usr/include/mysql)" \ + && gcc -O2 -fPIC -shared $CFLAGS -o "$PLUGIN_DIR/fakecloud_udf.so" \ + /tmp/fakecloud_udf.c -lcurl -lpthread \ + && rm /tmp/fakecloud_udf.c + +# The bootstrap shell script substitutes FAKECLOUD_* env vars into the +# template SQL and drops the result into /docker-entrypoint-initdb.d/ +# so the official mysql entrypoint picks it up on first start. 
+ENTRYPOINT ["fakecloud-bootstrap.sh"] +CMD ["mysqld"] diff --git a/crates/fakecloud-rds/assets/mysql/fakecloud-bootstrap.sh b/crates/fakecloud-rds/assets/mysql/fakecloud-bootstrap.sh new file mode 100644 index 00000000..e63fe0be --- /dev/null +++ b/crates/fakecloud-rds/assets/mysql/fakecloud-bootstrap.sh @@ -0,0 +1,21 @@ +#!/usr/bin/env bash +# Render the FAKECLOUD_* env vars into the bootstrap SQL and drop the +# result into the standard mysql initdb directory before delegating to +# the upstream mysql entrypoint. The official image only sources files +# from /docker-entrypoint-initdb.d/ on first start (when the data dir +# is empty), which is exactly when we want the procedures created. +set -euo pipefail + +ENDPOINT="${FAKECLOUD_ENDPOINT:-http://host.docker.internal:4566}" +ACCOUNT_ID="${FAKECLOUD_ACCOUNT_ID:-000000000000}" +REGION="${FAKECLOUD_REGION:-us-east-1}" + +mkdir -p /docker-entrypoint-initdb.d +sed \ + -e "s|@FAKECLOUD_ENDPOINT@|${ENDPOINT}|g" \ + -e "s|@FAKECLOUD_ACCOUNT_ID@|${ACCOUNT_ID}|g" \ + -e "s|@FAKECLOUD_REGION@|${REGION}|g" \ + /tmp/99-fakecloud-bootstrap.sql.tmpl \ + > /docker-entrypoint-initdb.d/99-fakecloud-bootstrap.sql + +exec docker-entrypoint.sh "$@" diff --git a/crates/fakecloud-rds/assets/mysql/fakecloud_udf.c b/crates/fakecloud-rds/assets/mysql/fakecloud_udf.c new file mode 100644 index 00000000..23d6ee6d --- /dev/null +++ b/crates/fakecloud-rds/assets/mysql/fakecloud_udf.c @@ -0,0 +1,172 @@ +/* + * fakecloud_udf - tiny MySQL UDF that POSTs JSON to a fakecloud bridge + * endpoint and returns the response body. Loaded by the prebuilt + * fakecloud-mysql image so that Aurora-compatible stored procedures + * (mysql.lambda_sync, mysql.lambda_async) can call into fakecloud + * Lambda from inside the DB container. + * + * Two functions are exported: + * + * fakecloud_post(url TEXT, body TEXT, timeout_ms INT) RETURNS TEXT + * Synchronous POST. Returns the response body (or NULL on network + * failure).
NULL inputs are treated as empty strings / 5000 ms. + * + * fakecloud_post_async(url TEXT, body TEXT) RETURNS INT + * Spawns a detached worker that performs the POST in the background + * and returns 0 immediately. Used by `mysql.lambda_async`. The + * response is discarded. + * + * Build: gcc -O2 -fPIC -shared -o fakecloud_udf.so fakecloud_udf.c -lcurl -lpthread + * Install: copy the .so into the MySQL plugin dir (`SHOW VARIABLES LIKE + * 'plugin_dir'`) and `CREATE FUNCTION`. + */ + +#include <mysql.h> +#include <stdlib.h> +#include <string.h> +#include <curl/curl.h> +#include <pthread.h> + +/* ── shared helpers ─────────────────────────────────────────────────── */ + +struct curl_buf { + char *data; + size_t len; +}; + +static size_t curl_write_cb(void *ptr, size_t size, size_t nmemb, void *userdata) { + size_t total = size * nmemb; + struct curl_buf *buf = (struct curl_buf *)userdata; + char *grown = (char *)realloc(buf->data, buf->len + total + 1); + if (!grown) return 0; + buf->data = grown; + memcpy(buf->data + buf->len, ptr, total); + buf->len += total; + buf->data[buf->len] = '\0'; + return total; +} + +static char *do_post(const char *url, const char *body, long timeout_ms, + size_t *out_len) { + CURL *curl = curl_easy_init(); + if (!curl) return NULL; + struct curl_buf buf = { NULL, 0 }; + struct curl_slist *headers = NULL; + headers = curl_slist_append(headers, "Content-Type: application/json"); + curl_easy_setopt(curl, CURLOPT_URL, url); + curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body ? body : ""); + curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)(body ?
strlen(body) : 0)); + curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); + curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, curl_write_cb); + curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf); + curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, timeout_ms); + curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L); + CURLcode res = curl_easy_perform(curl); + curl_slist_free_all(headers); + curl_easy_cleanup(curl); + if (res != CURLE_OK) { + free(buf.data); + if (out_len) *out_len = 0; + return NULL; + } + if (out_len) *out_len = buf.len; + return buf.data; +} + +/* ── fakecloud_post ─────────────────────────────────────────────────── */ + +bool fakecloud_post_init(UDF_INIT *initid, UDF_ARGS *args, char *message) { + if (args->arg_count < 2 || args->arg_count > 3) { + strcpy(message, "fakecloud_post(url, body[, timeout_ms])"); + return 1; + } + args->arg_type[0] = STRING_RESULT; + args->arg_type[1] = STRING_RESULT; + if (args->arg_count == 3) args->arg_type[2] = INT_RESULT; + initid->maybe_null = 1; + initid->ptr = NULL; + return 0; +} + +void fakecloud_post_deinit(UDF_INIT *initid) { + free(initid->ptr); +} + +char *fakecloud_post(UDF_INIT *initid, UDF_ARGS *args, char *result, + unsigned long *length, char *is_null, char *error) { + const char *url = args->args[0]; + const char *body = args->args[1]; + long timeout_ms = 5000; + if (args->arg_count == 3 && args->args[2]) + timeout_ms = *((long long *)args->args[2]); + if (!url) { + *is_null = 1; + return NULL; + } + size_t len = 0; + char *resp = do_post(url, body, timeout_ms, &len); + if (!resp) { + *is_null = 1; + return NULL; + } + free(initid->ptr); /* don't leak the previous row's response buffer */ + initid->ptr = resp; + *length = (unsigned long)len; + return resp; +} + +/* ── fakecloud_post_async ───────────────────────────────────────────── */ + +struct async_args { + char *url; + char *body; +}; + +static void *async_worker(void *p) { + struct async_args *a = (struct async_args *)p; + char *resp = do_post(a->url, a->body, 30000, NULL); + free(resp); + free(a->url); + free(a->body); + free(a); +
return NULL; +} + +bool fakecloud_post_async_init(UDF_INIT *initid, UDF_ARGS *args, char *message) { + if (args->arg_count != 2) { + strcpy(message, "fakecloud_post_async(url, body)"); + return 1; + } + args->arg_type[0] = STRING_RESULT; + args->arg_type[1] = STRING_RESULT; + initid->maybe_null = 0; + return 0; +} + +void fakecloud_post_async_deinit(UDF_INIT *initid) { + (void)initid; +} + +long long fakecloud_post_async(UDF_INIT *initid, UDF_ARGS *args, char *is_null, + char *error) { + (void)initid; (void)is_null; (void)error; + if (!args->args[0]) return -1; + struct async_args *a = (struct async_args *)malloc(sizeof(*a)); + if (!a) return -1; + a->url = strdup(args->args[0]); + a->body = args->args[1] ? strdup(args->args[1]) : strdup(""); + if (!a->url || !a->body) { + free(a->url); free(a->body); free(a); + return -1; + } + pthread_t tid; + pthread_attr_t attr; + pthread_attr_init(&attr); + pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED); + int rc = pthread_create(&tid, &attr, async_worker, a); + pthread_attr_destroy(&attr); + if (rc != 0) { + free(a->url); free(a->body); free(a); + return -1; + } + return 0; +} diff --git a/crates/fakecloud-rds/src/runtime.rs b/crates/fakecloud-rds/src/runtime.rs index cdaee9f6..0d0844d9 100644 --- a/crates/fakecloud-rds/src/runtime.rs +++ b/crates/fakecloud-rds/src/runtime.rs @@ -14,6 +14,17 @@ const AWS_LAMBDA_SQL: &str = include_str!("../assets/postgres/aws_lambda--1.0.sq const AWS_S3_CONTROL: &str = include_str!("../assets/postgres/aws_s3.control"); const AWS_S3_SQL: &str = include_str!("../assets/postgres/aws_s3--1.0.sql"); +const MYSQL_DOCKERFILE: &str = include_str!("../assets/mysql/Dockerfile"); +const MYSQL_UDF_C: &str = include_str!("../assets/mysql/fakecloud_udf.c"); +const MYSQL_BOOTSTRAP_SH: &str = include_str!("../assets/mysql/fakecloud-bootstrap.sh"); +const MYSQL_BOOTSTRAP_SQL: &str = include_str!("../assets/mysql/99-fakecloud-bootstrap.sql.tmpl"); + +const MARIADB_DOCKERFILE: &str = 
include_str!("../assets/mariadb/Dockerfile"); +const MARIADB_UDF_C: &str = include_str!("../assets/mariadb/fakecloud_udf.c"); +const MARIADB_BOOTSTRAP_SH: &str = include_str!("../assets/mariadb/fakecloud-bootstrap.sh"); +const MARIADB_BOOTSTRAP_SQL: &str = + include_str!("../assets/mariadb/99-fakecloud-bootstrap.sql.tmpl"); + /// Default registry that hosts the prebuilt postgres images. CI publishes /// to `ghcr.io/faiscadev/fakecloud-postgres:-` on each /// release tag (see `.github/workflows/docker-rds-images.yml`). @@ -98,11 +109,14 @@ impl RdsRuntime { ) -> Result { self.stop_container(db_instance_identifier).await; - // Determine Docker image and port based on engine. The postgres - // image is built locally (lazily) so we can ship the aws_lambda / - // aws_commons extensions plus plpython3u; other engines stay on - // upstream images. - let (image, port, env_vars, postgres_major) = match engine { + // Determine Docker image and port based on engine. Postgres, + // MySQL, and MariaDB all use prebuilt fakecloud-* images that + // bake in the bridge UDFs / extensions and call back into the + // host fakecloud server; the heavier engines (oracle/mssql/db2) + // stay on upstream images. `bridge_engine_version` is `Some(_)` + // for the bridge-aware engines and gates the `--add-host` + // setup below. 
+ let (image, port, env_vars, bridge_engine_version) = match engine { "postgres" => { let major_version = engine_version.split('.').next().unwrap_or("16"); let image = self.ensure_postgres_image(major_version).await?; @@ -125,29 +139,43 @@ impl RdsRuntime { } else { "8.0" }; - let image = format!("mysql:{}", major_version); + let image = self.ensure_mysql_image(major_version).await?; let env_vars = vec![ format!("MYSQL_ROOT_PASSWORD={password}"), format!("MYSQL_USER={username}"), format!("MYSQL_PASSWORD={password}"), format!("MYSQL_DATABASE={db_name}"), + format!( + "FAKECLOUD_ENDPOINT=http://host.docker.internal:{}", + self.server_port + ), + format!("FAKECLOUD_ACCOUNT_ID={account_id}"), + format!("FAKECLOUD_REGION={region}"), ]; - (image, "3306", env_vars, None) + (image, "3306", env_vars, Some(major_version.to_string())) } "mariadb" => { let major_version = if engine_version.starts_with("10.11") { "10.11" + } else if engine_version.starts_with("11.4") { + "11.4" } else { "10.6" }; - let image = format!("mariadb:{}", major_version); + let image = self.ensure_mariadb_image(major_version).await?; let env_vars = vec![ format!("MARIADB_ROOT_PASSWORD={password}"), format!("MARIADB_USER={username}"), format!("MARIADB_PASSWORD={password}"), format!("MARIADB_DATABASE={db_name}"), + format!( + "FAKECLOUD_ENDPOINT=http://host.docker.internal:{}", + self.server_port + ), + format!("FAKECLOUD_ACCOUNT_ID={account_id}"), + format!("FAKECLOUD_REGION={region}"), ]; - (image, "3306", env_vars, None) + (image, "3306", env_vars, Some(major_version.to_string())) } "oracle-ee" | "oracle-se2" | "oracle-ee-cdb" | "oracle-se2-cdb" => { // Oracle Database Free is the no-cost dev edition shipped by @@ -216,10 +244,11 @@ impl RdsRuntime { args.push("--privileged".to_string()); } - // Postgres runs the aws_lambda extension which calls back into - // fakecloud over HTTP. Wire the bridge alias so plpython3u code - // can resolve host.docker.internal on every platform. 
- if postgres_major.is_some() { + // Bridge-aware engines (postgres aws_lambda, mysql/mariadb + // fakecloud_post UDF) call back into fakecloud over HTTP. Wire + // the host gateway alias so the in-container code can resolve + // host.docker.internal on every platform. + if bridge_engine_version.is_some() { args.push("--add-host".to_string()); args.push(format!("host.docker.internal:{}", self.host_ip)); } @@ -602,48 +631,11 @@ impl RdsRuntime { &self, major_version: &str, ) -> Result<String, RuntimeError> { - let tag = postgres_image_tag(major_version); - - // Per-tag mutex so concurrent first-creates don't all shell out - // to docker. Inner bool tracks whether resolution has succeeded - // in this process (regardless of whether it landed via inspect, - // pull, or build). - let lock = { - let mut cache = self.image_cache.write(); - cache - .entry(tag.clone()) - .or_insert_with(|| Arc::new(tokio::sync::Mutex::new(false))) - .clone() - }; - let mut resolved = lock.lock().await; - if *resolved { - return Ok(tag); - } - - let force_rebuild = std::env::var("FAKECLOUD_REBUILD_POSTGRES_IMAGE") - .map(|v| !v.is_empty()) - .unwrap_or(false); - - if !force_rebuild { - // Already on the daemon (prior pull, prior build, prior - // session)? Use it as-is. - if self.docker_image_exists(&tag).await { - *resolved = true; - return Ok(tag); - } - - // Try the prebuilt image published by CI. Any failure - // (404 for unreleased version, network error, auth) falls - // through to the local build branch.
- if self.try_pull_image(&tag).await { - *resolved = true; - return Ok(tag); - } - } - - self.build_postgres_image_local(major_version, &tag).await?; - *resolved = true; - Ok(tag) + let tag = bridge_image_tag("fakecloud-postgres", major_version); + self.ensure_bridge_image(&tag, |tag| async move { + self.build_postgres_image_local(major_version, &tag).await + }) + .await } async fn docker_image_exists(&self, tag: &str) -> bool { @@ -686,8 +678,6 @@ impl RdsRuntime { major_version: &str, tag: &str, ) -> Result<(), RuntimeError> { - let build_dir = - tempfile::tempdir().map_err(|e| RuntimeError::ContainerStartFailed(e.to_string()))?; let assets: [(&str, &str); 8] = [ ("Dockerfile", POSTGRES_DOCKERFILE), ("aws_commons.control", AWS_COMMONS_CONTROL), ("aws_commons--1.0.sql", AWS_COMMONS_SQL), ("aws_lambda.control", AWS_LAMBDA_CONTROL), ("aws_lambda--1.0.sql", AWS_LAMBDA_SQL), ("aws_s3.control", AWS_S3_CONTROL), ("aws_s3--1.0.sql", AWS_S3_SQL), ]; + self.build_image_local( + tag, + &assets, + &format!("PG_VERSION={major_version}"), + "fakecloud-postgres", + ) + .await + } + + /// Pull-first / build-fallback for the prebuilt fakecloud-mysql + /// image. Mirrors `ensure_postgres_image`. The image bakes a small + /// libcurl-backed UDF + Aurora-compatible `mysql.lambda_async` / + /// `mysql.lambda_sync` stored procedures. + pub(crate) async fn ensure_mysql_image( + &self, + major_version: &str, + ) -> Result<String, RuntimeError> { + let tag = bridge_image_tag("fakecloud-mysql", major_version); + self.ensure_bridge_image(&tag, |tag| async move { + self.build_mysql_image_local(major_version, &tag).await + }) + .await + } + + pub(crate) async fn ensure_mariadb_image( + &self, + major_version: &str, + ) -> Result<String, RuntimeError> { + let tag = bridge_image_tag("fakecloud-mariadb", major_version); + self.ensure_bridge_image(&tag, |tag| async move { + self.build_mariadb_image_local(major_version, &tag).await + }) + .await + } + + /// Shared pull-first/build-fallback orchestration used by every + /// bridge-aware engine.
Holds the per-tag mutex, checks the local + /// daemon first, then tries the prebuilt image, and finally + /// invokes the supplied local-build closure. + async fn ensure_bridge_image<F, Fut>( + &self, + tag: &str, + build_local: F, + ) -> Result<String, RuntimeError> + where + F: FnOnce(String) -> Fut, + Fut: std::future::Future<Output = Result<(), RuntimeError>>, + { + let lock = { + let mut cache = self.image_cache.write(); + cache + .entry(tag.to_string()) + .or_insert_with(|| Arc::new(tokio::sync::Mutex::new(false))) + .clone() + }; + let mut resolved = lock.lock().await; + if *resolved { + return Ok(tag.to_string()); + } + + let force_rebuild = std::env::var("FAKECLOUD_REBUILD_POSTGRES_IMAGE") + .map(|v| !v.is_empty()) + .unwrap_or(false); + + if !force_rebuild { + if self.docker_image_exists(tag).await { + *resolved = true; + return Ok(tag.to_string()); + } + if self.try_pull_image(tag).await { + *resolved = true; + return Ok(tag.to_string()); + } + } + + build_local(tag.to_string()).await?; + *resolved = true; + Ok(tag.to_string()) + } + + async fn build_mysql_image_local( + &self, + major_version: &str, + tag: &str, + ) -> Result<(), RuntimeError> { + let assets: [(&str, &str); 4] = [ + ("Dockerfile", MYSQL_DOCKERFILE), + ("fakecloud_udf.c", MYSQL_UDF_C), + ("fakecloud-bootstrap.sh", MYSQL_BOOTSTRAP_SH), + ("99-fakecloud-bootstrap.sql.tmpl", MYSQL_BOOTSTRAP_SQL), + ]; + self.build_image_local( + tag, + &assets, + &format!("MYSQL_VERSION={major_version}"), + "fakecloud-mysql", + ) + .await + } + + async fn build_mariadb_image_local( + &self, + major_version: &str, + tag: &str, + ) -> Result<(), RuntimeError> { + let assets: [(&str, &str); 4] = [ + ("Dockerfile", MARIADB_DOCKERFILE), + ("fakecloud_udf.c", MARIADB_UDF_C), + ("fakecloud-bootstrap.sh", MARIADB_BOOTSTRAP_SH), + ("99-fakecloud-bootstrap.sql.tmpl", MARIADB_BOOTSTRAP_SQL), + ]; + self.build_image_local( + tag, + &assets, + &format!("MARIADB_VERSION={major_version}"), + "fakecloud-mariadb", + ) + .await + } + + async fn build_image_local( + &self, + tag: &str, +
assets: &[(&str, &str)], + build_arg: &str, + image_label: &str, + ) -> Result<(), RuntimeError> { + let build_dir = + tempfile::tempdir().map_err(|e| RuntimeError::ContainerStartFailed(e.to_string()))?; for (name, contents) in assets { tokio::fs::write(build_dir.path().join(name), contents) .await @@ -706,18 +825,12 @@ impl RdsRuntime { tracing::info!( tag = %tag, - "Building fakecloud-postgres image locally (first use can take ~60s)" + image = %image_label, + "Building {image_label} image locally (first use can take ~60s)" ); let output = tokio::process::Command::new(&self.cli) - .args([ - "build", - "--build-arg", - &format!("PG_VERSION={major_version}"), - "-t", - tag, - ".", - ]) + .args(["build", "--build-arg", build_arg, "-t", tag, "."]) .current_dir(build_dir.path()) .output() .await @@ -925,20 +1038,22 @@ fn detect_bridge_gateway(cli: &str) -> Option<String> { Some(gateway) } -/// Build the postgres image reference for a given major version. Uses -/// `<registry>/fakecloud-postgres:<major version>-<crate version>`, where -/// the registry comes from `FAKECLOUD_POSTGRES_REGISTRY` (defaults to -/// the public `ghcr.io/faiscadev`). The version pin guarantees the -/// runtime asks the daemon for the same image CI publishes for this -/// fakecloud release; mismatched assets force a local rebuild via -/// the fall-through in `ensure_postgres_image`. -fn postgres_image_tag(major_version: &str) -> String { +/// Build the prebuilt-image reference for a given engine + major +/// version. Uses `<registry>/<image>:<major version>-<crate version>`, +/// where the registry comes from `FAKECLOUD_POSTGRES_REGISTRY` (kept +/// historical name; defaults to the public `ghcr.io/faiscadev`). +/// The version pin guarantees the runtime asks the daemon for the +/// same image CI publishes for this fakecloud release; mismatched +/// assets force a local rebuild via the fall-through in +/// `ensure_bridge_image`.
+fn bridge_image_tag(image: &str, major_version: &str) -> String { let registry = std::env::var("FAKECLOUD_POSTGRES_REGISTRY") .unwrap_or_else(|_| DEFAULT_POSTGRES_REGISTRY.to_string()); let registry = registry.trim_end_matches('/'); format!( - "{}/fakecloud-postgres:{}-{}", + "{}/{}:{}-{}", registry, + image, major_version, env!("CARGO_PKG_VERSION") ) @@ -949,24 +1064,38 @@ mod tests { use super::*; /// Single test (rather than three) so the cases run sequentially — - /// `postgres_image_tag` reads a process-global env var and parallel + /// `bridge_image_tag` reads a process-global env var and parallel /// `cargo test` workers would race over it otherwise. #[test] - fn postgres_image_tag_resolves_registry_overrides() { + fn bridge_image_tag_resolves_registry_overrides() { let prev = std::env::var("FAKECLOUD_POSTGRES_REGISTRY").ok(); std::env::remove_var("FAKECLOUD_POSTGRES_REGISTRY"); assert_eq!( - postgres_image_tag("16"), + bridge_image_tag("fakecloud-postgres", "16"), format!( "ghcr.io/faiscadev/fakecloud-postgres:16-{}", env!("CARGO_PKG_VERSION") ) ); + assert_eq!( + bridge_image_tag("fakecloud-mysql", "8.0"), + format!( + "ghcr.io/faiscadev/fakecloud-mysql:8.0-{}", + env!("CARGO_PKG_VERSION") + ) + ); + assert_eq!( + bridge_image_tag("fakecloud-mariadb", "10.11"), + format!( + "ghcr.io/faiscadev/fakecloud-mariadb:10.11-{}", + env!("CARGO_PKG_VERSION") + ) + ); std::env::set_var("FAKECLOUD_POSTGRES_REGISTRY", "registry.example.com/team"); assert_eq!( - postgres_image_tag("15"), + bridge_image_tag("fakecloud-postgres", "15"), format!( "registry.example.com/team/fakecloud-postgres:15-{}", env!("CARGO_PKG_VERSION") @@ -975,7 +1104,7 @@ mod tests { std::env::set_var("FAKECLOUD_POSTGRES_REGISTRY", "registry.example.com/team/"); assert_eq!( - postgres_image_tag("13"), + bridge_image_tag("fakecloud-postgres", "13"), format!( "registry.example.com/team/fakecloud-postgres:13-{}", env!("CARGO_PKG_VERSION") diff --git a/website/content/docs/services/rds.md 
b/website/content/docs/services/rds.md index f38669f7..378d692e 100644 --- a/website/content/docs/services/rds.md +++ b/website/content/docs/services/rds.md @@ -24,6 +24,7 @@ fakecloud implements **163 of 163** RDS operations at 100% Smithy conformance. D - **EventBridge events** — lifecycle ops emit `aws.rds` events on the `default` bus, deliverable to SQS, SNS, Lambda, etc. via standard EB rules - **PostgreSQL `aws_lambda` extension** — call fakecloud Lambda functions from inside RDS PostgreSQL via `CREATE EXTENSION aws_lambda CASCADE` and `aws_lambda.invoke(...)` (subset of the AWS RDS extension surface; see below) - **PostgreSQL `aws_s3` extension** — import objects from fakecloud S3 into tables (`aws_s3.table_import_from_s3`) and export query results back to S3 (`aws_s3.query_export_to_s3`); see below +- **MySQL / MariaDB Aurora Lambda bridge** — Aurora-compatible `mysql.lambda_async` / `mysql.lambda_sync` stored procedures invoke fakecloud Lambda functions from inside the DB container; see below ## EventBridge integration @@ -120,6 +121,25 @@ The `options` argument is forwarded verbatim into the underlying postgres `COPY` The bridges (`/_fakecloud/rds/s3-import`, `/_fakecloud/rds/s3-export`) read and write the in-memory S3 state of the same fakecloud server, so any object that's visible to a `GetObject`/`PutObject` call against fakecloud is reachable from `aws_s3`. +## MySQL / MariaDB Aurora Lambda bridge + +Aurora MySQL exposes Lambda invocation as built-in stored procedures (`mysql.lambda_async`, `mysql.lambda_sync`). fakecloud's prebuilt `fakecloud-mysql` and `fakecloud-mariadb` images provide the same surface so SQL inside an RDS-managed instance can invoke fakecloud Lambda functions: + +```sql +-- Async, fire-and-forget. Returns immediately. +CALL mysql.lambda_async('my_function', '{"k":1}'); + +-- Synchronous, returns the function payload as a JSON string. 
+SELECT mysql.lambda_sync('my_function', '{"hello":"world"}'); +``` + +Implemented routines (a subset of the AWS Aurora MySQL Lambda surface; `lambda_async` is a stored procedure invoked with `CALL`, `lambda_sync` a stored function invoked with `SELECT`): + +- `mysql.lambda_async(function_name TEXT, payload TEXT)` — `Event`-style invocation; returns nothing. +- `mysql.lambda_sync(function_name TEXT, payload TEXT) RETURNS TEXT` — `RequestResponse`; returns the function payload as JSON. + +Under the hood the prebuilt image bakes a small libcurl-backed UDF (`fakecloud_post`, `fakecloud_post_async`) that POSTs to `/_fakecloud/rds/lambda-invoke` against `host.docker.internal`. A bootstrap script renders the host endpoint, account ID, and region from the container's `FAKECLOUD_*` env vars (set automatically by `RdsRuntime`) so SQL never has to know the host. Like the postgres image, the runtime tries to pull the published `fakecloud-mysql:<major>-<fakecloud version>` (or `fakecloud-mariadb:<major>-<fakecloud version>`) tag first and falls back to a local build when the pull fails. + ## Asynchronous instance creation `CreateDBInstance` returns ~immediately with `DBInstanceStatus = "creating"`. The container start (and the underlying image pull/build for postgres) runs in the background; `DescribeDBInstances` returns the live status.
Callers should poll until the status flips to `available` before connecting: @@ -159,8 +179,8 @@ When you call `CreateDBInstance` for PostgreSQL/MySQL/MariaDB/Oracle/SQL Server/ | Engine | Image | Port | Wait probe | |--------|-------|------|------------| | `postgres` | `ghcr.io/faiscadev/fakecloud-postgres:<major>-<fakecloud version>` (prebuilt with `plpython3u` + the `aws_commons`, `aws_lambda`, and `aws_s3` extensions on top of `postgres:<major>`; falls back to a local build if the pull fails) | 5432 | `tokio-postgres` ping | -| `mysql` | `mysql:<major>` | 3306 | `mysql_async` ping | -| `mariadb` | `mariadb:<major>` | 3306 | `mysql_async` ping | +| `mysql` | `ghcr.io/faiscadev/fakecloud-mysql:<major>-<fakecloud version>` (prebuilt with the libcurl-backed `fakecloud_post` UDF + Aurora-compatible `mysql.lambda_async` / `mysql.lambda_sync` stored procedures on top of `mysql:<major>`; falls back to a local build if the pull fails) | 3306 | `mysql_async` ping | +| `mariadb` | `ghcr.io/faiscadev/fakecloud-mariadb:<major>-<fakecloud version>` (same UDF + stored procedures, on top of `mariadb:<major>`) | 3306 | `mysql_async` ping | | `oracle-ee` / `oracle-se2` (+`-cdb`) | `gvenzl/oracle-free:23-slim` | 1521 | log marker `DATABASE IS READY TO USE!` + TCP probe | | `sqlserver-ee` / `-se` / `-ex` / `-web` | `mcr.microsoft.com/mssql/server:2022-latest` | 1433 | log marker `SQL Server is now ready for client connections` + TCP probe | | `db2-se` / `db2-ae` | `icr.io/db2_community/db2:latest` | 50000 | log marker `Setup has completed` + TCP probe |
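
The templating step the patch's `fakecloud-bootstrap.sh` performs can be sanity-checked outside the image. A minimal sketch: the template body below is illustrative (the shipped one lives in `99-fakecloud-bootstrap.sql.tmpl`), but the `@FAKECLOUD_*@` markers, the env-var defaults, and the `sed` invocation mirror the script in the patch.

```shell
#!/bin/sh
# Render @FAKECLOUD_*@ markers the way fakecloud-bootstrap.sh does,
# into a scratch dir instead of /docker-entrypoint-initdb.d.
set -eu

work="$(mktemp -d)"
cat > "$work/bootstrap.sql.tmpl" <<'EOF'
-- illustrative template body, not the shipped one
SET @fakecloud_invoke_url = '@FAKECLOUD_ENDPOINT@/_fakecloud/rds/lambda-invoke';
SET @fakecloud_account_id = '@FAKECLOUD_ACCOUNT_ID@';
SET @fakecloud_region     = '@FAKECLOUD_REGION@';
EOF

# Same fallbacks the bootstrap script uses when RdsRuntime didn't set
# the env vars.
ENDPOINT="${FAKECLOUD_ENDPOINT:-http://host.docker.internal:4566}"
ACCOUNT_ID="${FAKECLOUD_ACCOUNT_ID:-000000000000}"
REGION="${FAKECLOUD_REGION:-us-east-1}"

rendered="$(sed \
  -e "s|@FAKECLOUD_ENDPOINT@|${ENDPOINT}|g" \
  -e "s|@FAKECLOUD_ACCOUNT_ID@|${ACCOUNT_ID}|g" \
  -e "s|@FAKECLOUD_REGION@|${REGION}|g" \
  "$work/bootstrap.sql.tmpl")"
printf '%s\n' "$rendered"
rm -rf "$work"
```

Because the substitution happens before the file lands in `/docker-entrypoint-initdb.d/`, the rendered SQL should contain no `@FAKECLOUD_*@` markers when the upstream entrypoint sources it.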
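
The pull-first/build-fallback order that `ensure_bridge_image` centralizes can likewise be sketched with stubs. Assumptions are labeled: `image_exists`, `try_pull`, and `build_local` are hypothetical stand-ins for the daemon inspect, registry pull, and local build (the first two fail here, as they would for an unreleased tag), and the demo tag is made up.

```shell
#!/bin/sh
# Stubbed sketch of RdsRuntime::ensure_bridge_image's resolution order.
set -eu

image_exists() { false; }  # stand-in for a daemon `docker image inspect` (miss)
try_pull()     { false; }  # stand-in for `docker pull` (404 / offline)
build_local()  { echo "built $1 locally"; }

ensure_bridge_image() {
  tag="$1"
  # FAKECLOUD_REBUILD_POSTGRES_IMAGE (historical name, shared by all
  # three engines) skips straight to the local build.
  if [ -z "${FAKECLOUD_REBUILD_POSTGRES_IMAGE:-}" ]; then
    image_exists "$tag" && { echo "cached $tag"; return 0; }
    try_pull "$tag"     && { echo "pulled $tag"; return 0; }
  fi
  build_local "$tag"     # final fall-through: always works offline
}

# Hypothetical tag; real ones follow <registry>/fakecloud-mysql:<major>-<crate version>.
ensure_bridge_image "fakecloud-mysql:8.0-demo"
```

With both stubs failing this prints the local-build branch; flipping `image_exists` to succeed exercises the daemon-cache branch, matching the order the Rust helper checks.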