fix(db): unblock Supabase Preview — add created_by guard in t25 migration #229
Conversation
Supabase Preview was failing with: ERROR column "created_by" does not exist (SQLSTATE 42703) at the ALTER POLICY ck_insert_self statement. Root cause: custom_kits.created_by was added in PROD via Lovable Dashboard before the project migrated to git. No git migration creates the column. Production has it; Supabase Preview branches (fresh DB) explode. The DO ... EXCEPTION block catches undefined_table/object/function but NOT undefined_column (42703), so the failure bubbles up. Fix: idempotent ALTER TABLE ... ADD COLUMN IF NOT EXISTS just before the ALTER POLICY. No-op in prod, fixes Preview, documents the column in git.
Walkthrough
Adds idempotent RLS migrations (custom_kits + material_groups), a login rate-limit function, and theme-validation adjustments: a dedicated Playwright project, updated tests, the report script, .gitignore, and the CI workflow for coverage/reporting.

Changes
- RLS migrations: custom_kits & material_groups
- Database utilities
- Theme validation, Playwright, and CI
Sequence Diagram(s)

sequenceDiagram
    participant Playwright as Playwright (theme-validation)
    participant Reporter as scripts/generate-theme-report.mjs
    participant CI as GitHub Actions
    participant Artifact as Actions Artifact
    Playwright->>Reporter: writes JSON to theme-validation-output/theme-validation-data.json
    Reporter->>CI: produces HTML/CSV in playwright-report/
    CI->>Artifact: uploads artifacts (includes theme-validation-output/theme-validation-data.json)
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Updates to Preview Branch (fix/supabase-preview-custom-kits-created-by) ↗︎
Tasks are run on every commit but only new migration files are pushed.
❌ Branch Error • Fri, 15 May 2026 17:21:20 UTC. View logs for this Workflow Run ↗︎.
Pull request overview
Adds an idempotent ADD COLUMN IF NOT EXISTS created_by uuid to public.custom_kits immediately before the ALTER POLICY ck_insert_self statement in the t25 migration, so Supabase Preview (which builds DB from scratch) doesn't fail with 42703 column "created_by" does not exist — the existing EXCEPTION WHEN undefined_table OR undefined_object OR undefined_function clause does not catch undefined_column.
Changes:
- Add ALTER TABLE public.custom_kits ADD COLUMN IF NOT EXISTS created_by uuid inside the same DO block that re-defines ck_insert_self.
- Add a PT comment documenting that the column was created out-of-band in prod via Lovable Dashboard.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c5f4e49c8b
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```sql
DO $$
BEGIN
  -- Coluna criada em prod fora do git (Lovable Dashboard). Adiciona se faltar para alinhar Preview/Prod.
  ALTER TABLE public.custom_kits ADD COLUMN IF NOT EXISTS created_by uuid;
```
Prevent forged ownership on kit inserts
Adding created_by as a client-writable nullable column makes the existing ck_insert_self check on the next line effective in fresh databases; an authenticated caller can directly insert a custom_kits row with user_id set to another user's UUID and created_by set to their own UUID, satisfying the OR (created_by = auth.uid()) branch. Because other policies/key paths use user_id to select/manage kits, this lets one user create records under another user's ownership unless the column is server-populated or the policy also requires user_id = auth.uid().
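To make the forgery concrete, here is a small Python model of the insert check (names hypothetical; the real predicate lives in the Postgres RLS policy): the OR branch accepts a row whose user_id points at another user as long as created_by is the caller's own UUID, while a check that always requires user_id = auth.uid() rejects it.

```python
# Hypothetical model of the ck_insert_self WITH CHECK predicate.
# Real enforcement happens in Postgres RLS; this only illustrates the logic.

def ck_insert_self(row: dict, auth_uid: str) -> bool:
    """Current (permissive) check: either ownership column may match the caller."""
    return row["user_id"] == auth_uid or row.get("created_by") == auth_uid

def ck_insert_self_strict(row: dict, auth_uid: str) -> bool:
    """Hardened check: user_id must always match the caller."""
    return row["user_id"] == auth_uid and row.get("created_by") in (None, auth_uid)

attacker = "uuid-attacker"
victim = "uuid-victim"

# Forged row: kit filed under the victim, created_by set to the attacker.
forged = {"user_id": victim, "created_by": attacker}

print(ck_insert_self(forged, attacker))         # True (forgery accepted)
print(ck_insert_self_strict(forged, attacker))  # False (forgery rejected)
```

Server-populating created_by (default/trigger from auth.uid()) achieves the same effect without changing the policy.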
```sql
DO $$
BEGIN
  -- Coluna criada em prod fora do git (Lovable Dashboard). Adiciona se faltar para alinhar Preview/Prod.
  ALTER TABLE public.custom_kits ADD COLUMN IF NOT EXISTS created_by uuid;
```
Move the schema repair outside the policy guard
When ck_insert_self is absent in a database built from these migrations (repo-wide search only finds ALTER POLICY entries for that policy), the following ALTER POLICY raises undefined_object; the handler catches it, but PostgreSQL rolls back statements already executed in that protected block, so this ADD COLUMN is undone and the fresh schema still does not contain the documented created_by column. Put the column creation in its own block before the best-effort policy alteration so the schema repair persists even when the policy does not exist.
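The rollback the comment describes follows from PL/pgSQL semantics: a BEGIN … EXCEPTION block runs as a subtransaction, so when the handler catches an error, every statement already executed inside that block is undone. A rough Python analogy (entirely hypothetical names, just modeling the savepoint/rollback behavior):

```python
# Rough analogy for a PL/pgSQL exception block: the guarded block is a
# subtransaction, so a caught error undoes earlier statements in the block.

def run_block(statements, state):
    """Apply statements atomically; on any error, roll back all of them."""
    snapshot = dict(state)          # savepoint at block entry
    try:
        for stmt in statements:
            stmt(state)
    except Exception:
        state.clear()
        state.update(snapshot)      # rollback to savepoint, swallow the error

def add_column(state):
    state["created_by_column"] = True   # the ADD COLUMN IF NOT EXISTS

def alter_policy(state):
    raise KeyError("ck_insert_self")    # policy absent -> undefined_object

db = {}
# One block holding both statements: the ADD COLUMN is rolled back too.
run_block([add_column, alter_policy], db)
print(db)   # {} (schema repair lost)

# Separate blocks: the schema repair persists even when the policy is missing.
run_block([add_column], db)
run_block([alter_policy], db)
print(db)   # {'created_by_column': True}
```

This is why the suggestion puts the ADD COLUMN in its own DO block ahead of the best-effort ALTER POLICY.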
```sql
DO $$
BEGIN
  -- Coluna criada em prod fora do git (Lovable Dashboard). Adiciona se faltar para alinhar Preview/Prod.
  ALTER TABLE public.custom_kits ADD COLUMN IF NOT EXISTS created_by uuid;
```
Populate created_by or exclude it from ownership audit
The daily ownership-audit RPC scans every UUID owner column named created_by and counts WHERE created_by IS NULL as an issue, while the existing custom_kits insert paths only send user_id and never populate this new nullable column. On fresh environments this makes every kit created through the app appear as a null-owner violation in ownership_audit_reports even though user_id is present, so either backfill/default this column from user_id or keep custom_kits.created_by out of that audit.
…otent) Same pattern as the custom_kits.created_by fix in the previous commit. material_groups was created in PROD outside of git (10 rows in prod). Supabase Preview builds a fresh DB → table missing → t35 explodes. t35 was the only migration in the T25-T36 wave NOT wrapped in DO/EXCEPTION; wrapping each ALTER POLICY aligns it with the rest of the wave. In prod the policies apply normally; in Preview they are skipped silently (coherent, since there is no table to attach them to anyway).

This commit also brings in the PR #227 fixes via merge (login locator, theme orphan, etc.) to unblock the Critical Flows E2E and Theme & Accessibility Gate, which were red on this branch because it was branched from main BEFORE PR #227 was merged.
…tc.) Worker MCP push_files creates single commits, so the earlier merge from main didn't propagate the PR #227 fixes to the remote branch. This commit brings them in:
- e2e/login.spec.ts: data-testid selector for forgot password link
- playwright.config.ts: dedicated theme-validation project
- e2e/theme-validation.spec.ts: a11y-only (screenshots removed)
- scripts/generate-theme-report.mjs: neutral path, tolerates missing JSON
- .github/workflows/ci.yml: critical-coverage step in correct job
- .gitignore: theme-validation-output/

Unblocks Critical Flows E2E + Theme & Accessibility Gate on this PR.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@scripts/generate-theme-report.mjs`:
- Around line 20-29: The current block that sets failures (JSON_FILE, failures,
fs.existsSync(JSON_FILE)) silently falls back to an empty array when the JSON is
missing; change it to fail fast in CI by checking process.env.CI (or another CI
flag) after fs.existsSync(JSON_FILE) returns false and calling process.exit(1)
or throwing an Error with a clear message so the pipeline fails, while
preserving the existing console.warn + failures = [] behavior for non-CI/local
runs; update the logic around the failures assignment and the missing-file
branch where fs.existsSync(JSON_FILE) is used.
In
`@supabase/migrations/20260512000013_t35_fix_material_groups_rls_broken_subquery.sql`:
- Line 18: The exception-swallowing clause is "EXCEPTION WHEN
undefined_table OR undefined_object OR undefined_function OR undefined_column
THEN NULL;" — adjust it to catch only undefined_table (for example, change it
to "EXCEPTION WHEN undefined_table THEN NULL") so it does not suppress errors
that indicate missing policies/tables/columns; apply the same correction in the
four blocks that use the same clause (the occurrences listing undefined_object,
undefined_function and undefined_column should be removed).
- Around line 16-17: The migration is missing an index on
public.material_groups(organization_id) while the RLS policies mg_select,
mg_insert, mg_update and mg_delete all filter by organization_id; add a CREATE
INDEX (suggested name idx_material_groups_org_id) on the organization_id column
in this migration or immediately in a follow-up migration so RLS queries use the
index and avoid full table scans.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e01c3132-75be-4754-b0c0-79de4ac41140
📒 Files selected for processing (7)
- .github/workflows/ci.yml
- .gitignore
- e2e/login.spec.ts
- e2e/theme-validation.spec.ts
- playwright.config.ts
- scripts/generate-theme-report.mjs
- supabase/migrations/20260512000013_t35_fix_material_groups_rls_broken_subquery.sql
✅ Files skipped from review due to trivial changes (2)
- playwright.config.ts
- .gitignore
```js
// Tolerância: se JSON ausente, assumir failures=[] (testes não geraram saída
// mas se chegamos aqui é pq playwright passou — gera relatório vazio em vez
// de falhar hard).
let failures = [];
if (fs.existsSync(JSON_FILE)) {
  failures = JSON.parse(fs.readFileSync(JSON_FILE, 'utf-8'));
} else {
  console.warn('[generate-theme-report] JSON ausente em', JSON_FILE);
  console.warn('[generate-theme-report] Gerando relatório vazio (0 failures).');
}
```
Silent fallback for a missing JSON can mask a gate failure.
When the file does not exist, the script proceeds with failures = [] and exits successfully. In CI this can publish a "green" report without the data having actually been run/collected. I recommend failing explicitly in CI and keeping the fallback only for local runs.
💡 Suggested change

```diff
 let failures = [];
 if (fs.existsSync(JSON_FILE)) {
   failures = JSON.parse(fs.readFileSync(JSON_FILE, 'utf-8'));
 } else {
-  console.warn('[generate-theme-report] JSON ausente em', JSON_FILE);
-  console.warn('[generate-theme-report] Gerando relatório vazio (0 failures).');
+  if (process.env.CI === 'true') {
+    console.error('[generate-theme-report] JSON ausente em', JSON_FILE);
+    process.exit(1);
+  }
+  console.warn('[generate-theme-report] JSON ausente em', JSON_FILE);
+  console.warn('[generate-theme-report] Gerando relatório vazio (0 failures).');
 }
```
```sql
ALTER POLICY "mg_select" ON public.material_groups
  USING (organization_id = (SELECT current_setting('app.current_org_id'::text, true))::uuid);
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Look for index creation on material_groups.organization_id in the migrations
rg -nP --type=sql -C2 \
  'create\s+(unique\s+)?index\b.*\bon\s+public\.material_groups\b.*\(\s*organization_id\s*\)|\bon\s+public\.material_groups\s*\(\s*organization_id\s*\)' \
  supabase/migrations || true
```

Repository: adm01-debug/Promo_Gifts
Length of output: 49

🏁 Script executed:

```bash
cat -n supabase/migrations/20260512000013_t35_fix_material_groups_rls_broken_subquery.sql
```

Repository: adm01-debug/Promo_Gifts
Length of output: 2346

🏁 Script executed:

```bash
# Look for any index on the material_groups table (not just organization_id)
rg -nP --type=sql 'CREATE.*INDEX.*material_groups|ON\s+public\.material_groups' supabase/migrations
```

Repository: adm01-debug/Promo_Gifts
Length of output: 1560

🏁 Script executed:

```bash
# Look for the definition/creation of the material_groups table
rg -nP --type=sql -A 20 'CREATE TABLE.*material_groups' supabase/migrations
```

Repository: adm01-debug/Promo_Gifts
Length of output: 49
Missing index on public.material_groups(organization_id) for RLS.
All 4 policies (mg_select, mg_insert, mg_update, mg_delete) filter by organization_id, but there is no index on that column. Without one, every query through RLS will do a full table scan, and performance will degrade as the table grows.
Create the index in the same migration or in a follow-up to avoid a regression:

```sql
CREATE INDEX idx_material_groups_org_id ON public.material_groups(organization_id);
```
```sql
BEGIN
  ALTER POLICY "mg_select" ON public.material_groups
    USING (organization_id = (SELECT current_setting('app.current_org_id'::text, true))::uuid);
EXCEPTION WHEN undefined_table OR undefined_object OR undefined_function OR undefined_column THEN NULL;
```
Avoid swallowing exceptions that can hide a security regression in the policies.
As written, undefined_object, undefined_function and undefined_column are silenced. This can hide a real failure in applying the fix (e.g., a missing or renamed policy) and leave the old behavior in place with no alert. For Preview idempotency, handling a missing table already covers the main case.

💡 Suggested change (replicate in all 4 blocks)

```diff
-EXCEPTION WHEN undefined_table OR undefined_object OR undefined_function OR undefined_column THEN NULL;
+EXCEPTION
+  WHEN undefined_table THEN
+    NULL; -- environment without the table (Preview/fresh DB)
```

Also applies to: 25-25, 33-33, 40-40
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 19ed9bb708
```ts
        mask: [page.locator('[data-testid="dynamic-content"]')], // Mascarar conteúdo dinâmico se houver
      });
    }
    // 3. (Visual regression removida — ver decisão em PR #227)
```
Fail the theme gate on recorded violations
With the screenshot assertion removed here, the theme-validation tests only append contrast/typography violations to the failures array; neither this spec nor scripts/generate-theme-report.mjs exits non-zero when that array is non-empty. In the theme-validation workflow, a preset that violates WCAG contrast or typography expectations now still passes and just uploads a report, so the advertised accessibility gate no longer blocks regressions.
```yaml
      # bloqueio com gate global — esse step é sinal informativo per-file.
      - name: Enforce Critical Modules Coverage (per-file gate)
        if: always()
        continue-on-error: true
```
Make critical coverage failures block CI
After the same job now runs Vitest coverage with all global thresholds forced to 0, this is the only remaining check in the coverage job for the critical modules listed in scripts/check-critical-modules-coverage.mjs. Because the step is marked continue-on-error: true, a real drop below the 40% floor for FiltersPage, KitBuilderPage, or MockupGenerator is reported but the job still succeeds, so the new per-file gate does not actually prevent the regression it is meant to catch.
1 issue found across 7 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="e2e/theme-validation.spec.ts">
<violation number="1" location="e2e/theme-validation.spec.ts:120">
P1: This change removes the only assertion in the route loop, so contrast/typography failures are only logged to JSON and the test can still pass. Add an assertion after the checks to fail the test when the current route accumulates validation failures.</violation>
</file>
```ts
        mask: [page.locator('[data-testid="dynamic-content"]')], // Mascarar conteúdo dinâmico se houver
      });
    }
    // 3. (Visual regression removida — ver decisão em PR #227)
```
P1: This change removes the only assertion in the route loop, so contrast/typography failures are only logged to JSON and the test can still pass. Add an assertion after the checks to fail the test when the current route accumulates validation failures.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At e2e/theme-validation.spec.ts, line 120:
<comment>This change removes the only assertion in the route loop, so contrast/typography failures are only logged to JSON and the test can still pass. Add an assertion after the checks to fail the test when the current route accumulates validation failures.</comment>
<file context>
```diff
@@ -114,13 +117,20 @@ test.describe('Theme Consistency & Visual Regression', () => {
-          mask: [page.locator('[data-testid="dynamic-content"]')], // Mascarar conteúdo dinâmico se houver
-        });
-      }
+      // 3. (Visual regression removida — ver decisão em PR #227)
+      //
+      // Antes: `toHaveScreenshot(${preset.id}-${mode}-home.png)` para cada
```
</file context>
```ts
// 3. (Visual regression removida — ver decisão em PR #227)
const routeFailures = failures.filter((f) => f.preset === preset.name && f.mode === mode && f.route === route);
expect(routeFailures, routeFailures.map((f) => `${f.type}: ${f.details}`).join('\n')).toHaveLength(0);
```
Supabase Preview was failing with: Remote migration versions not found in local migrations directory. Diagnostic: production's schema_migrations table had 2 versions registered that had no corresponding file in supabase/migrations/:
- 20260515124035 fix_audit_ownership_orphans_only_uuid_columns
- 20260515125758 onda_20a_check_login_rate_limit_rpc

Both were applied directly via apply_migration in another Claude session on 2026-05-15 (Onda 20.A hardening wave) but never committed to git. Reconstructed the SQL content from supabase_migrations.schema_migrations.statements and committed it as proper migration files. Preview now builds the same schema as Prod.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In
`@supabase/migrations/20260515124035_fix_audit_ownership_orphans_only_uuid_columns.sql`:
- Around line 31-33: The SECURITY DEFINER function currently allows anonymous
callers because the condition only checks roles; update the authorization check
in the function (the block that raises 'audit_ownership_orphans: acesso negado')
to explicitly block NULL auth.uids by changing the predicate to reject when
auth.uid() IS NULL or the caller lacks admin/dev roles (i.e., add an auth.uid()
IS NULL check combined with the existing has_role(...) checks), so anonymous
callers cannot execute the SECURITY DEFINER function that writes to
ownership_audit_reports and reads auth.users/information_schema.
In `@supabase/migrations/20260515125758_onda_20a_check_login_rate_limit_rpc.sql`:
- Around line 52-58: The current query uses max(created_at) which returns the
most recent failure and makes the lockout "sliding"; instead, for the email- and
IP-based checks (the blocks that set _failed_count and _last_failure), change
the logic to compute the timestamp of the failure that reached the threshold
(the Nth failure) rather than the most recent one: first get the ordered set of
failures from public.login_attempts filtered the same way, then SELECT count(*)
INTO _failed_count and SELECT created_at INTO _last_failure FROM (SELECT
created_at FROM public.login_attempts WHERE ... ORDER BY created_at ASC LIMIT 1
OFFSET (threshold-1)) t; use threshold=5 for the email check and threshold=20
for the IP check (replace the existing max(created_at) usage in the blocks that
set _failed_count/_last_failure at lines referenced), so _lockout_until is
computed from the Nth failure timestamp instead of the max timestamp.
- Around line 102-108: The current exception handler returns the raw Postgres
message via SQLERRM in the EXCEPTION WHEN OTHERS block (the line that builds
jsonb_build_object with 'error', 'remaining_seconds', etc.), which leaks
information to anon/authenticated; change it to keep the fail-closed behavior
but replace SQLERRM with a stable, non-informative value (for example a fixed
code such as 'internal_error', or NULL) and record SQLERRM only in internal
logs/telemetry within the SECURITY DEFINER context; edit the return in the
EXCEPTION block (where 'error' is set) to use that stable code instead of
SQLERRM.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: bbce9c13-d02e-4e2d-9331-ca50e6e94069
📒 Files selected for processing (2)
- supabase/migrations/20260515124035_fix_audit_ownership_orphans_only_uuid_columns.sql
- supabase/migrations/20260515125758_onda_20a_check_login_rate_limit_rpc.sql
```sql
IF auth.uid() IS NOT NULL AND NOT (has_role(auth.uid(), 'admin'::app_role) OR has_role(auth.uid(), 'dev'::app_role)) THEN
  RAISE EXCEPTION 'audit_ownership_orphans: acesso negado';
END IF;
```
🧩 Analysis chain

🏁 Script executed:

```bash
find . -name "20260515124035_fix_audit_ownership_orphans_only_uuid_columns.sql" -type f
```

Repository: adm01-debug/Promo_Gifts
Length of output: 154

🏁 Script executed:

```bash
cat -n ./supabase/migrations/20260515124035_fix_audit_ownership_orphans_only_uuid_columns.sql
```

Repository: adm01-debug/Promo_Gifts
Length of output: 4269
Block anonymous execution of the SECURITY DEFINER function.
The current authorization check (line 31) allows execution when auth.uid() IS NULL. Combined with SECURITY DEFINER, this leaves the function accessible to anonymous callers with elevated privileges, violating the security guideline for SECURITY DEFINER functions.
The function writes to ownership_audit_reports and reads auth.users and information_schema. Anonymous callers can trigger these costly operations indefinitely (DoS). Required fix:
```diff
-IF auth.uid() IS NOT NULL AND NOT (has_role(auth.uid(), 'admin'::app_role) OR has_role(auth.uid(), 'dev'::app_role)) THEN
+IF NOT (
+  auth.role() = 'service_role'
+  OR (
+    auth.uid() IS NOT NULL
+    AND (
+      public.has_role(auth.uid(), 'admin'::app_role)
+      OR public.has_role(auth.uid(), 'dev'::app_role)
+    )
+  )
+) THEN
   RAISE EXCEPTION 'audit_ownership_orphans: acesso negado';
 END IF;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sql
IF NOT (
  auth.role() = 'service_role'
  OR (
    auth.uid() IS NOT NULL
    AND (
      public.has_role(auth.uid(), 'admin'::app_role)
      OR public.has_role(auth.uid(), 'dev'::app_role)
    )
  )
) THEN
  RAISE EXCEPTION 'audit_ownership_orphans: acesso negado';
END IF;
```
```sql
SELECT count(*), max(created_at)
  INTO _failed_count, _last_failure
FROM public.login_attempts
WHERE email = _email
  AND success = false
  AND created_at > now() - (_window_minutes || ' minutes')::interval
  AND (_last_success IS NULL OR created_at > _last_success);
```
The lockout becomes sliding and can keep an email/IP blocked indefinitely.
Here the unlock time is computed from the most recent failure (max(created_at)), so the 6th, 7th and 8th attempts keep pushing _lockout_until forward. This contradicts the rule described in the header ("5 min after the 5th failure") and opens the door to DoS: continuous retries can keep a user locked out. The calculation needs to use the timestamp of the failure that reached the threshold (5th per email / 20th per IP), not the most recent failure.
Also applies to: 61-69, 76-84
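The difference between anchoring the lockout on the newest failure versus the threshold-reaching failure can be sketched in Python (timestamps and thresholds hypothetical, mirroring the 5-failure email rule):

```python
from datetime import datetime, timedelta

THRESHOLD = 5                     # 5 failures per email (per the migration header)
LOCKOUT = timedelta(minutes=5)

def lockout_until_sliding(failures):
    """Current behavior: anchored on max(created_at); every retry extends it."""
    return max(failures) + LOCKOUT

def lockout_until_fixed(failures):
    """Suggested behavior: anchored on the failure that reached the threshold."""
    return sorted(failures)[THRESHOLD - 1] + LOCKOUT

t0 = datetime(2026, 5, 15, 12, 0, 0)
# 5 quick failures, then retries every 4 minutes that keep hitting the limiter.
failures = [t0 + timedelta(seconds=i) for i in range(5)]
failures += [t0 + timedelta(minutes=4 * k) for k in (1, 2, 3)]

print(lockout_until_fixed(failures))    # 12:05:04, unlocks 5 min after the 5th failure
print(lockout_until_sliding(failures))  # 12:17:00, each retry pushes the unlock out
```

With the fixed anchor, the window expires on schedule no matter how often the attacker retries.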
```sql
EXCEPTION WHEN OTHERS THEN
  -- Fail-CLOSED em erro (padrão das Ondas 6/7/B-7)
  RETURN jsonb_build_object(
    'allowed', false,
    'reason', 'rate_limit_check_error',
    'error', SQLERRM,
    'remaining_seconds', 0
```
Don't return SQLERRM to anon/authenticated.
This function is public in the login flow and runs as SECURITY DEFINER. Returning the raw Postgres message leaks internal schema/permission details precisely to callers who have not even authenticated yet. Keep the fail-closed behavior, but respond only with a stable code.
💡 Suggested change

```diff
 EXCEPTION WHEN OTHERS THEN
   -- Fail-CLOSED em erro (padrão das Ondas 6/7/B-7)
   RETURN jsonb_build_object(
     'allowed', false,
     'reason', 'rate_limit_check_error',
-    'error', SQLERRM,
     'remaining_seconds', 0
   );
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7b313974c1
```sql
_email := lower(trim(_email));

-- Último sucesso por este email (delimita janela de falhas)
SELECT max(created_at) INTO _last_success
FROM public.login_attempts
WHERE email = _email AND success = true;
```
Normalize stored login emails before counting
When failed attempts are logged with mixed-case addresses, the new rate-limit check lowercases only the lookup value while log-login-attempt stores the raw email body into the text column. Because this query then compares with WHERE email = _email, rows recorded as User@Example.com are missed when the next check uses user@example.com, letting repeated failures bypass the 5-attempt email lockout by varying capitalization. Normalize on insert/backfill or compare against lower(email) here.
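One way to close the gap, sketched against the `login_attempts` schema this migration assumes; the one-time backfill statement is illustrative and not part of the PR:

```sql
-- One-time backfill so historical rows match the normalized form
UPDATE public.login_attempts
SET email = lower(trim(email))
WHERE email <> lower(trim(email));

-- ...and compare case-insensitively inside check_login_rate_limit
SELECT max(created_at) INTO _last_success
FROM public.login_attempts
WHERE lower(email) = _email AND success = true;
```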
```sql
CREATE OR REPLACE FUNCTION public.check_login_rate_limit(
  _email text,
  _ip text DEFAULT NULL
)
```
Call the server-side login limiter before sign-in
This migration adds and grants the server-side limiter, but it is not wired into the login flow: a repo-wide search for check_login_rate_limit only finds this migration, while AuthContext.signIn still checks only the sessionStorage-based checkLoginAllowed before calling signInWithPassword. In browsers/incognito sessions where sessionStorage is cleared, the new persistent login_attempts policy is therefore never enforced, so the brute-force bypass described in this migration's comments remains open until the RPC is invoked before sign-in.
```sql
-- Last success for this email (delimits the failure window)
SELECT max(created_at) INTO _last_success
FROM public.login_attempts
WHERE email = _email AND success = true;
```
Don't reset lockouts from public telemetry rows
The new limiter treats any success = true row as the delimiter that clears prior failures, but supabase/functions/log-login-attempt/index.ts is a public endpoint that inserts the caller-supplied success value using the service role. Once this RPC is used before sign-in, an unauthenticated caller can forge a successful login event for any email between failed attempts, making _last_success skip the real failures and preventing the 5-attempt lockout from ever triggering. Only trusted auth success events should reset the counter.
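A possible hardening, sketched here with a hypothetical `source` column (not in the PR) so only trusted auth events can reset the failure window:

```sql
-- Hypothetical: tag each row with its origin
ALTER TABLE public.login_attempts
  ADD COLUMN IF NOT EXISTS source text NOT NULL DEFAULT 'client';

-- Only successes recorded by the trusted auth path delimit the window
SELECT max(created_at) INTO _last_success
FROM public.login_attempts
WHERE email = _email
  AND success = true
  AND source = 'auth_hook';  -- ignore client-reported successes
```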
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="supabase/migrations/20260515125758_onda_20a_check_login_rate_limit_rpc.sql">
<violation number="1" location="supabase/migrations/20260515125758_onda_20a_check_login_rate_limit_rpc.sql:107">
P1: Do not return `SQLERRM` to `anon/authenticated` callers; expose a generic error string instead.</violation>
</file>
```sql
RETURN jsonb_build_object(
  'allowed', false,
  'reason', 'rate_limit_check_error',
  'error', SQLERRM,
```
P1: Do not return SQLERRM to anon/authenticated callers; expose a generic error string instead.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At supabase/migrations/20260515125758_onda_20a_check_login_rate_limit_rpc.sql, line 107:
<comment>Do not return `SQLERRM` to `anon/authenticated` callers; expose a generic error string instead.</comment>
<file context>

```diff
@@ -0,0 +1,118 @@
+ RETURN jsonb_build_object(
+   'allowed', false,
+   'reason', 'rate_limit_check_error',
+   'error', SQLERRM,
+   'remaining_seconds', 0
+ );
```

</file context>
Root cause

The `Supabase Preview` gate was failing with `ERROR: column "created_by" does not exist (SQLSTATE 42703)` at the `ALTER POLICY ck_insert_self` statement. The column `custom_kits.created_by` is a phantom:

- it was added in prod via the Lovable Dashboard before the project moved to git, so no git migration creates it (in `custom_kits`)
- the `EXCEPTION WHEN undefined_table OR undefined_object OR undefined_function` block does not catch `undefined_column` (42703), so the failure bubbles up

Confirmed via a query against the prod DB.
Fix

Add `ALTER TABLE public.custom_kits ADD COLUMN IF NOT EXISTS created_by uuid` just before the `ALTER POLICY ck_insert_self`, inside the same DO block. In prod, `IF NOT EXISTS` is a no-op because the column already exists; on a fresh Preview database it creates the column, and the column is now documented in git.

Expected validation

The `Supabase Preview` gate passes (this was the only task of this PR).
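Under those constraints, the guarded block might look like the sketch below; the policy's `WITH CHECK` expression is illustrative, not copied from the repo:

```sql
DO $$
BEGIN
  -- No-op in prod (column already exists); creates it on fresh Preview DBs
  ALTER TABLE public.custom_kits
    ADD COLUMN IF NOT EXISTS created_by uuid;

  -- Illustrative policy body; the real ck_insert_self expression is repo-defined
  ALTER POLICY ck_insert_self ON public.custom_kits
    WITH CHECK (created_by = auth.uid());
EXCEPTION
  WHEN undefined_table OR undefined_object OR undefined_function THEN
    NULL;  -- keep the migration idempotent when objects are missing
END $$;
```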
Senior decision

Option B (`ADD COLUMN IF NOT EXISTS` + `ALTER POLICY` in the same block) was chosen over Option A (adding `OR undefined_column` to the EXCEPTION), since it fixes Preview while also documenting the phantom column in git. Option C (a new, earlier-timestamped migration) was discarded — a timestamp earlier than migrations already applied in prod reproduces exactly the `Remote migration versions not found` error that blocked PR #222.

Summary by cubic
Makes t25/t35 DB migrations idempotent and adds two missing prod migrations to align Preview with Prod, unblocking Supabase Preview and keeping CI green.

Bug Fixes

- `ALTER TABLE ... ADD COLUMN IF NOT EXISTS created_by uuid` before `ALTER POLICY` in the same `DO` block.
- `material_groups` `ALTER POLICY` statements wrapped in `DO ... EXCEPTION`, including `undefined_column`, for idempotency.
- `20260515124035_fix_audit_ownership_orphans_only_uuid_columns`: `audit_ownership_orphans` scans only UUID owner columns to avoid TEXT cast errors.
- `20260515125758_onda_20a_check_login_rate_limit_rpc`: `check_login_rate_limit` RPC (SECURITY DEFINER, fail-closed) to gate sign-in attempts.

CI & Tests

- `coverage` job; global thresholds set to 0 in report-only runs.
- `theme-validation-output/` gitignored; the report script tolerates missing JSON and creates report dirs; artifact paths updated.
- `data-testid="login-forgot-link"` used in the login spec; screenshot-based visual regression removed from theme validation (a11y-only).

Written for commit 7b31397. Summary will update on new commits. Review in cubic
Summary by CodeRabbit
Bug Fixes
New Features
CI
Tests
Chores