[codex] Add local Playwright workflow smoke coverage #24
📝 Walkthrough

This PR introduces comprehensive Playwright-based end-to-end testing infrastructure alongside targeted instrumentation updates. Changes include new test helper modules, shell orchestration scripts, a Playwright config, and a smoke test suite. Frontend components are updated with data-testid attributes for test targeting. Backend systems are extended with export workflow permissions, environment seeding updates, and minor API/storage refactoring. The Docker Compose configuration transitions to ephemeral demo databases.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🧹 Nitpick comments (5)
sql_api/tests.py (1)

3224-3309: Tighten backup-toggle assertions to validate config effect (not just payload echo).

Both tests currently assert that the persisted `is_backup` matches the request input, so they still pass even if `enable_backup_switch` is ignored. Please add one explicit assertion case that distinguishes toggle behavior (or rename the tests to reflect payload-passthrough semantics).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql_api/tests.py` around lines 3224 - 3309, the tests currently only echo the request payload, so they don't prove `enable_backup_switch` is applied; change test_submit_workflow_does_not_force_backup_when_toggle_disabled to send a payload with "is_backup": True while SysConfig.set("enable_backup_switch", False) and assert the persisted SqlWorkflow.is_backup is False (verify via SqlWorkflow.objects.get(id=workflow_id).is_backup), and keep test_submit_workflow_preserves_backup_when_toggle_enabled sending "is_backup": True with SysConfig.set("enable_backup_switch", True) and asserting the persisted SqlWorkflow.is_backup is True; reference the test methods test_submit_workflow_does_not_force_backup_when_toggle_disabled, test_submit_workflow_preserves_backup_when_toggle_enabled, SysConfig, SqlWorkflow, and response_data to locate and update the assertions and request payload.

frontend/src/views/WorkflowCreateView.vue (1)
745-752: Give the backup checkbox an explicit accessible name.

This control has an `id`, but no associated `<label for>` or `aria-label`, which weakens screen-reader usability.

Suggested minimal fix

```diff
 <input
   id="workflow-backup-toggle"
   v-model="form.isBackup"
   data-testid="workflow-backup-toggle"
+  aria-label="Backup SQL"
   class="h-4 w-4 rounded border-slate-300"
   type="checkbox"
   :disabled="submitting"
 />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/views/WorkflowCreateView.vue` around lines 745 - 752, the backup checkbox input with id "workflow-backup-toggle" lacks an accessible name; add an explicit label or aria-label tied to that input so screen readers can announce it. Update the template in WorkflowCreateView.vue to either add a `<label for="workflow-backup-toggle">Backup workflow</label>` adjacent to the input or add an aria-label (e.g., aria-label="Enable workflow backup") on the input element bound to form.isBackup; ensure the label text or aria-label clearly describes the control and that the :disabled="submitting" behavior is preserved.

sql/management/commands/smoke_local_demo.py (1)
97-100: Consider asserting expected demo instance names, not only a non-empty export scope.

A `>= 1` check can pass even if one or more seeded demo instances silently drop from export visibility.

Proposed tightening

```diff
-    if len(export_metadata.get("instances", [])) < 1:
+    export_instances = {
+        item.get("instance_name")
+        for item in export_metadata.get("instances", [])
+        if item.get("instance_name")
+    }
+    expected_export_instances = {
+        "demo-mysql-workflow",
+        "demo-pgsql-workflow",
+    }
+    if not expected_export_instances.issubset(export_instances):
         raise CommandError(
-            "Export submission metadata is missing readable demo instances."
+            "Export submission metadata is missing expected demo instances."
         )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/management/commands/smoke_local_demo.py` around lines 97 - 100, the current check in smoke_local_demo (where export_metadata.get("instances", []) is validated) only ensures a non-empty export scope; update it to verify that the exported instance names match the expected demo instances by name. In the command handler (smoke_local_demo) compute the set of expected demo instance names (e.g., expected_instances = {"demo1", "demo2", ...} or derive from DEMO_SEED or similar) and compare against {inst["name"] for inst in export_metadata.get("instances", [])}; if any expected names are missing, raise CommandError listing the missing instance names so missing seeded demos fail loudly instead of silently passing the >=1 check.

frontend/tests/e2e/support/workflow-helpers.ts (1)
148-165: Synchronous Docker exec blocks the Node event loop.

`execFileSync` blocks the Node process while the Docker command runs. For a short-lived configuration command this is acceptable, but if the container is slow to respond or the command times out, it could cause test hangs without clear error reporting. Consider adding a timeout option to `execFileSync` to bound the blocking duration.

♻️ Optional: Add timeout to prevent indefinite blocking

```diff
 export function setBackupSwitchEnabled(enabled: boolean) {
   execFileSync(
     'docker',
     [
       'exec',
       'datamingle-app',
       'python',
       'manage.py',
       'shell',
       '-c',
       `from common.config import SysConfig; SysConfig().set("enable_backup_switch", ${enabled ? 'True' : 'False'})`,
     ],
     {
       cwd: REPO_ROOT,
       stdio: 'pipe',
+      timeout: 30_000,
     },
   )
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/tests/e2e/support/workflow-helpers.ts` around lines 148 - 165, the setBackupSwitchEnabled function uses execFileSync, which can block the Node event loop indefinitely if Docker is slow; add a timeout option to the execFileSync call (e.g., via the options object passed to execFileSync in setBackupSwitchEnabled) to bound how long the sync call can block and surface a clear error if the command exceeds the timeout; optionally consider switching to the async execFile or spawn APIs for non-blocking behavior, but at minimum add a reasonable timeout value and handle the thrown error from execFileSync in setBackupSwitchEnabled to improve test reliability and error reporting.

scripts/e2e/run-local-playwright.sh (1)
33-36: Consider extracting the verification script for readability.

The inline Node script on line 35 is functional but difficult to read and maintain. Consider extracting it to a separate file, or at least using a heredoc for better formatting.

♻️ Optional: Use heredoc for readability

```diff
 verify_frontend() {
   cd "$FRONTEND_DIR"
-  node -e "const { chromium } = require('@playwright/test'); (async () => { const browser = await chromium.launch(); const page = await browser.newPage(); await page.goto('${FRONTEND_URL}/login', { waitUntil: 'networkidle' }); const count = await page.locator('[data-testid=login-username]').count(); await browser.close(); if (count === 0) { process.exit(1); } })().catch(async (error) => { console.error(error); process.exit(1); });"
+  node <<EOF
+const { chromium } = require('@playwright/test');
+(async () => {
+  const browser = await chromium.launch();
+  const page = await browser.newPage();
+  await page.goto('${FRONTEND_URL}/login', { waitUntil: 'networkidle' });
+  const count = await page.locator('[data-testid=login-username]').count();
+  await browser.close();
+  if (count === 0) process.exit(1);
+})().catch((error) => {
+  console.error(error);
+  process.exit(1);
+});
+EOF
 }
```
Verify each finding against the current code and only fix it if needed. In `@scripts/e2e/run-local-playwright.sh` around lines 33 - 36, The inline Node script inside verify_frontend is hard to read; extract it into a standalone script (e.g., run_headless_verify.js) or emit it via a heredoc and then invoke node from verify_frontend so the logic is readable and maintainable; move the Playwright logic that imports chromium, launches the browser, navigates to "${FRONTEND_URL}/login", checks locator '[data-testid=login-username]' count, exits nonzero on failure, and prints errors into that file, then change verify_frontend to cd "$FRONTEND_DIR" and run node against the extracted script (keeping use of FRONTEND_DIR, FRONTEND_URL, verify_frontend and the '[data-testid=login-username]' locator).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f4c97362-ab8d-4a19-bd51-d362d8b1635f
⛔ Files ignored due to path filters (1)
`frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (26)
- .gitignore
- frontend/.gitignore
- frontend/README.md
- frontend/package.json
- frontend/playwright.config.ts
- frontend/src/components/queries/SqlCodeEditor.vue
- frontend/src/views/ExportWorkflowCreateView.vue
- frontend/src/views/LoginView.vue
- frontend/src/views/WorkflowCreateView.vue
- frontend/src/views/WorkflowsView.vue
- frontend/tests/e2e/support/workflow-helpers.ts
- frontend/tests/e2e/workflow-smoke.spec.ts
- scripts/e2e/reset-local-env.sh
- scripts/e2e/run-local-playwright.sh
- scripts/e2e/run-local-smoke.sh
- sql/local_demo.py
- sql/management/commands/smoke_local_demo.py
- sql/offlinedownload.py
- sql/test_local_demo.py
- sql/test_offlinedownload.py
- sql_api/api_workflow.py
- sql_api/serializers.py
- sql_api/tests.py
- src/docker-compose/LOCAL_DEMO.md
- src/docker-compose/docker-compose.local-arm.yml
- src/docker-compose/mysql-demo/init/001-demo-schema.sql
💤 Files with no reviewable changes (2)
- sql_api/serializers.py
- src/docker-compose/docker-compose.local-arm.yml
What changed
This PR adds a local-first Playwright smoke suite for the seeded demo environment and wires in the supporting reset scripts, frontend test hooks, and backend/demo fixes needed to make the end-to-end flows deterministic.
It covers:
It also makes the local demo database containers ephemeral again so the seeded demo state is recreated from scratch for each destructive E2E reset.
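One common way to make demo database containers ephemeral is to drop the named data volume and back the datadir with tmpfs, so each destructive reset recreates the database from the seed scripts. The fragment below is an illustrative sketch, not the actual compose file from this PR; the service name and image tag are hypothetical:

```yaml
services:
  mysql-demo:
    image: mysql:8.0
    tmpfs:
      # Datadir lives in memory; it is discarded with the container, so every
      # fresh container re-runs the init scripts from scratch.
      - /var/lib/mysql
    volumes:
      - ./mysql-demo/init:/docker-entrypoint-initdb.d:ro
```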
Why it changed
We wanted browser-level confidence that the real user workflows work against the local demo stack, not just API-level checks.
While wiring the suite, the runs exposed two backend/demo issues:
- `is_backup=true` was forced when the backup toggle was disabled

Root cause and fixes

- `sql_api` no longer overrides the submitted backup value when the backup switch is hidden.
- The demo seed grants `REPLICATION CLIENT` on `*.*` to `demo_archery`, which allows the backup/binlog path to execute in the local demo environment.
- The smoke run toggles `enable_backup_switch` during the run and asserts the saved `is_backup` flag through the workflow detail API for both backup-off and backup-on DML paths.

Impact
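The toggle semantics exercised by those backup-off and backup-on paths can be modeled as a tiny pure function. This is a hypothetical sketch consistent with the test expectations, not the actual `sql_api` code; the name `resolve_is_backup` is invented for illustration:

```python
def resolve_is_backup(requested: bool, switch_enabled: bool) -> bool:
    """Model of the fixed behavior: when the backup switch is disabled
    (the toggle is hidden in the UI), the backup path stays off regardless
    of what the request payload claims; when enabled, the submitted value
    is preserved as-is."""
    return requested if switch_enabled else False

# Switch enabled: the submitted value is preserved.
assert resolve_is_backup(True, True) is True
assert resolve_is_backup(False, True) is False
# Switch disabled: a payload requesting a backup does not force is_backup=True.
assert resolve_is_backup(True, False) is False
```

This also shows why a payload-echo assertion is weak: with the switch enabled, the persisted value always equals the request, so only the switch-disabled case distinguishes correct behavior from simple passthrough.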
`npm run e2e` in `frontend/` now performs a destructive Docker reset, reseeds the local demo stack, starts the frontend locally on port `5173`, and runs the Playwright smoke suite.

Validation
- `bash scripts/e2e/reset-local-env.sh`
- `docker exec datamingle-app python manage.py smoke_local_demo`
- `docker exec datamingle-app python manage.py test sql_api.tests.TestWorkflow.test_submit_workflow_does_not_force_backup_when_toggle_disabled sql_api.tests.TestWorkflow.test_submit_workflow_preserves_backup_when_toggle_enabled`
- `docker exec datamingle-app python manage.py test sql_api.tests.TestWorkflow.test_submit_export_workflow sql_api.tests.TestWorkflow.test_sqlcheck_ignores_schema_name_for_engine_execute_check`
- `black --check sql_api/serializers.py sql_api/tests.py`
- `npm run build` in `frontend/`
- `E2E_START_FRONTEND=1 bash ../scripts/e2e/run-local-playwright.sh` in `frontend/`

Summary by CodeRabbit