Harden backend auth paths, token lifecycle, and spoofable request metadata #64

weroperking wants to merge 2 commits into main from
Conversation
Walkthrough

Adds DB-backed admin JWT revocation (jti, 8h expiry, issuer/audience), enforces JWT verification and server-side logout; migrates IP capture to X-Real-IP (nginx + app), tightens storage/env validation and upload input validation, adds per-IP rate limiting for device codes, secures WS upgrade, exports an Inngest cron to clean up expired revocations, and hardens CSV exports.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Server as API Server
    participant DB as Database
    participant Audit as Audit Log
    Client->>Server: POST /admin/auth/logout\n(Authorization: Bearer token)
    Server->>Server: Extract token\nverifyAdminToken(token)
    Server->>DB: SELECT FROM revoked_admin_tokens WHERE jti=?
    alt Token valid & not revoked (jti & exp present)
        Server->>DB: INSERT INTO revoked_admin_tokens (jti, admin_user_id, expires_at) ON CONFLICT DO NOTHING
        Server->>DB: SELECT admin user by id (payload.sub)
        Server->>Audit: INSERT audit entry (action:"admin.logout", ipAddress, userAgent)
        Server->>Client: { success: true }
    else Token missing/invalid/revoked
        Server->>Client: { success: true } (no revocation/audit)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (4 passed)
@CodeRabbit Full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docker/nginx/nginx.conf (1)
Lines 22-117: ⚠️ Potential issue | 🔴 Critical

Add an explicit /betterbase/ location block to route to betterbase_server.

No /betterbase/ location block exists in nginx.conf. Requests to /betterbase/ws and /betterbase/storage/generate-upload-url fall through to the catch-all `location /`, which proxies to betterbase_dashboard:80 instead of betterbase_server:3001. The auth checks added in these backend endpoints are unreachable from outside the Docker network. This block is needed:

Required nginx routing

```nginx
location /betterbase/ {
    proxy_pass http://betterbase_server;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;
}
```

(Must come before the catch-all `location /` to take precedence.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/nginx/nginx.conf` around lines 22 - 117, Requests under /betterbase/ are currently falling through to the catch-all location and hitting betterbase_dashboard instead of betterbase_server; add a new location block named "location /betterbase/" (before the existing "location /" catch-all) that proxies to http://betterbase_server, includes websocket support (proxy_http_version 1.1, Upgrade/Connection headers), forwards Host, X-Real-IP, X-Forwarded-For and X-Forwarded-Proto, and sets a long proxy_read_timeout so endpoints like /betterbase/ws and /betterbase/storage/generate-upload-url reach betterbase_server.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docker/nginx/nginx.conf`:
- Around line 69-73: The /health location block in nginx.conf is missing the
X-Real-IP header which causes the request-logging middleware
(packages/server/src/index.ts reading X-Real-IP around line 39) to record
ip=null for health checks; update the location /health proxy configuration (the
same block that sets proxy_pass http://betterbase_server and proxy_set_header
X-Forwarded-For / X-Forwarded-Proto) to also set proxy_set_header X-Real-IP
$remote_addr so health-check requests provide the real client IP in
request_logs.
In `@packages/server/migrations/017_revoked_admin_tokens.sql`:
- Around line 1-9: The revoked_admin_tokens table has an unused expires_at (and
index idx_revoked_admin_tokens_expires_at) because the logout insert only stores
(jti, admin_user_id); update code and schema: modify the logout flow in
admin/auth.ts (where the insert of revoked token occurs) to include the JWT exp
converted to a TIMESTAMPTZ (e.g., to_timestamp(payload.exp)) so expires_at is
populated, and then either (a) alter the migration for
betterbase_meta.revoked_admin_tokens to set expires_at NOT NULL (or keep
nullable) and ensure the index remains, plus add a periodic cleanup job that
runs DELETE FROM betterbase_meta.revoked_admin_tokens WHERE expires_at < NOW(),
or (b) if you prefer not to use expiry, drop the expires_at column and the
idx_revoked_admin_tokens_expires_at index from the migration; choose one of
these fixes and apply the corresponding change to the logout INSERT and
migration.
In `@packages/server/src/lib/auth.ts`:
- Line 50: The current check rejects tokens that lack a jti (if (!payload.sub ||
!payload.jti) return null), which invalidates all previously minted admin
tokens; change this to only require payload.sub (e.g., if (!payload.sub) return
null) and adapt downstream logic that expects payload.jti to tolerate undefined
jti (generate a fallback id or skip jti-specific checks), and optionally log a
warning when payload.jti is missing so operators can detect legacy tokens;
update references to payload.jti in the same module/function to handle the
undefined case safely.
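The tolerant-check idea above can be sketched as a small pure function. This is an illustrative shape, not the PR's actual code: the `AdminPayload` interface and `normalizePayload` name are assumptions, and the real module operates on a jose `JWTPayload`.

```typescript
// Hypothetical payload shape; only `sub` is required. Legacy tokens may lack `jti`.
interface AdminPayload {
  sub?: string;
  jti?: string;
}

// Require only `sub`; tolerate a missing `jti` (legacy token) instead of
// rejecting, and surface a warning so operators can detect old tokens.
function normalizePayload(payload: AdminPayload): { sub: string; jti: string | null } | null {
  if (!payload.sub) return null; // sub is always mandatory
  if (!payload.jti) {
    // A token without a jti can never be individually revoked — log it.
    console.warn(`admin token for ${payload.sub} has no jti (legacy token)`);
    return { sub: payload.sub, jti: null };
  }
  return { sub: payload.sub, jti: payload.jti };
}
```

Downstream revocation checks would then skip the DB lookup when `jti` is `null`.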
- Around line 43-62: The current try/catch in verifyAdminToken (the block using
validateEnv, jwtVerify, getSecret, payload checks and the revocation check via
getPool()/pool.query) swallows DB/query errors and makes DB outages look like
invalid tokens; split the logic so jwtVerify and payload validation remain in
their own try/catch (so cryptographic failures return null as before) and
perform the revocation lookup in a separate try/catch; on DB/query errors log a
clear error via your logger (include payload.jti/sub and the error) and then
explicitly choose and implement behavior (fail-closed: return null after
logging, or fail-open: return the admin info despite the DB error), rather than
silently returning null.
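One way to express the split described above is with the two lookups injected as functions, so the two failure modes can't be confused. `verifyJwt` and `isRevoked` are stand-ins for `jwtVerify` and the pool query; the fail-closed choice shown here is one of the two options the comment leaves open.

```typescript
// Sketch: cryptographic failure and DB failure handled in separate try/catch blocks.
async function verifyAdminTokenSplit(
  token: string,
  verifyJwt: (t: string) => Promise<{ sub: string; jti: string }>,
  isRevoked: (jti: string) => Promise<boolean>,
): Promise<{ sub: string; jti: string } | null> {
  let payload: { sub: string; jti: string };
  try {
    payload = await verifyJwt(token); // bad signature/expiry → invalid token
  } catch {
    return null;
  }
  try {
    if (await isRevoked(payload.jti)) return null;
  } catch (err) {
    // A DB outage is not an invalid token: log loudly, then fail closed.
    console.error(`revocation check failed for jti=${payload.jti}`, err);
    return null; // deliberate fail-closed decision
  }
  return payload;
}
```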
In `@packages/server/src/lib/env.ts`:
- Around line 58-65: The current env validation only checks
result.data.STORAGE_ENDPOINT and allows callers to use defaults
(env.STORAGE_ENDPOINT ?? "http://minio:9000" and env.STORAGE_ACCESS_KEY ?? "")
which lets storage code run with empty creds; update validation in the env
module so that when storage routes or functions may be used you fail fast:
either (A) remove the fallbacks at call sites (references: buildStorageCtx and
the direct S3Client in betterbase route) and require callers to pass a non-null
env.STORAGE_ENDPOINT, STORAGE_ACCESS_KEY and STORAGE_SECRET_KEY, or (B) extend
the env validation to treat a defaulted STORAGE_ENDPOINT (e.g., empty or the
implicit minio URL) the same as an explicit one and exit with an error if
STORAGE_ACCESS_KEY or STORAGE_SECRET_KEY are missing; pick one approach and
apply it consistently to the symbols result.data.STORAGE_ENDPOINT,
STORAGE_ACCESS_KEY, STORAGE_SECRET_KEY, buildStorageCtx and the S3 client
construction in betterbase/index.ts.
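Option (B) above — treating a defaulted endpoint the same as an explicit one and failing fast on missing credentials — might look like this. The `Env` shape and function name are illustrative; the real module uses a zod-validated `result.data`.

```typescript
// Hypothetical env shape covering only the storage fields discussed above.
interface Env {
  STORAGE_ENDPOINT?: string;
  STORAGE_ACCESS_KEY?: string;
  STORAGE_SECRET_KEY?: string;
}

function validateStorageEnv(env: Env): { endpoint: string; accessKey: string; secretKey: string } {
  // A defaulted endpoint counts as "storage in use", same as an explicit one.
  const endpoint = env.STORAGE_ENDPOINT || "http://minio:9000";
  if (!env.STORAGE_ACCESS_KEY || !env.STORAGE_SECRET_KEY) {
    // Fail fast at startup instead of letting S3 calls fail with opaque errors later.
    throw new Error("STORAGE_ACCESS_KEY and STORAGE_SECRET_KEY are required when storage is enabled");
  }
  return { endpoint, accessKey: env.STORAGE_ACCESS_KEY, secretKey: env.STORAGE_SECRET_KEY };
}
```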
In `@packages/server/src/routes/admin/project-scoped/users.ts`:
- Around line 233-237: There are two implementations of CSV-escaping (the local
escapeCsv in the users route and escapeCSVValue in lib/inngest.ts); extract a
single shared helper (e.g., export function escapeCSVValue in a new lib/csv.ts)
that uses the more complete dangerous-char regex and conditional quoting logic
from lib/inngest.ts, update the users route to import and use that exported
escapeCSVValue (remove the local escapeCsv), and update lib/inngest.ts to import
the same helper so both call sites use the identical implementation.
- Line 8: CSV_DANGEROUS_PREFIX currently misses newline and can be bypassed by
values starting with "\n"; update the regex used in
packages/server/src/routes/admin/project-scoped/users.ts (symbol
CSV_DANGEROUS_PREFIX) to include \n in the character class so it matches the
same set as the sibling helper escapeCSVValue (i.e., add \n alongside = + - @ \t
\r) to prevent formula-injection bypasses.
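A shared helper along the lines suggested above (a single `escapeCSVValue` in a new lib/csv.ts) could look like this. The exact regex in lib/inngest.ts is not shown in this review, so the character class below is an assumption covering the characters the comments enumerate (`=`, `+`, `-`, `@`, `\t`, `\r`, `\n`).

```typescript
// Leading characters that can trigger spreadsheet formula injection.
const CSV_DANGEROUS_PREFIX = /^[=+\-@\t\r\n]/;

function escapeCSVValue(value: unknown): string {
  let s = value == null ? "" : String(value);
  // Neutralize formula injection by prefixing dangerous leading chars with a quote.
  if (CSV_DANGEROUS_PREFIX.test(s)) s = "'" + s;
  // Conditionally quote: only when the value contains CSV metacharacters.
  if (/[",\r\n]/.test(s)) s = `"${s.replace(/"/g, '""')}"`;
  return s;
}
```

Both the users route and lib/inngest.ts would then import this one function, eliminating the divergence.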
In `@packages/server/src/routes/betterbase/index.ts`:
- Around line 113-114: The current fallback to empty strings for
env.STORAGE_ACCESS_KEY and env.STORAGE_SECRET_KEY masks missing credentials and
yields opaque S3 errors; remove the "" fallback and instead enforce these
variables as required in validateEnv when storage is enabled. Update validateEnv
(and any storage-enable branch that checks env.STORAGE_PROVIDER) to throw or
return a clear configuration error if STORAGE_ACCESS_KEY or STORAGE_SECRET_KEY
are missing/empty, and ensure the code paths that construct the S3 client (where
accessKey/secretKey are used) no longer substitute empty strings but rely on
validated values.
- Around line 60-61: adminPayload is non-null after the early return, so remove
the dead optional chaining and null coalescing when building authCtx; set
authCtx to use adminPayload.sub directly (e.g., authCtx = { userId:
adminPayload.sub, token }) to avoid masking future shape changes and improve
clarity in the auth context construction.
- Line 23: The SAFE_PROJECT_SLUG regex currently allows underscores and forbids
hyphens which contradicts betterbase_meta.projects.slug; update the
SAFE_PROJECT_SLUG constant to enforce the canonical charset by replacing
/^[a-z][a-z0-9_]{0,62}$/ with a pattern that permits hyphens and disallows
underscores (e.g. /^[a-z][a-z0-9-]{0,62}$/) so project slugs with '-' validate
and '_' are rejected; keep the same length bounds and ensure any code paths
using SAFE_PROJECT_SLUG (the constant) rely on the new pattern.
- Around line 192-210: The route handler registered at
betterbaseRouter.post("/storage/generate-upload-url") currently trusts filename
and derives ext with filename?.split(".").pop(), which can produce path
traversal or throw on non-strings; fix by validating filename is a primitive
string (e.g., typeof filename === "string") and limit/normalize ext: compute ext
only after trimming filename, reject if it contains "/" or "?" or exceeds a
short length (e.g., >16 chars), and fallback to "" if invalid; ensure s3Key
construction uses the sanitized ext and does not include any path separators,
and wrap the filename parsing in a try/catch to return a 400 error instead of
allowing the handler to throw (update variables filename, ext, and s3Key
accordingly).
- Around line 197-202: Replace the direct await c.req.json() and ad-hoc typeof
check with a `@hono/zod-validator` schema that declares contentType and filename,
attach that validator to the route so the handler receives a validated body
(instead of calling c.req.json()), and remove the manual typeof/contentType
parsing; then check the validated contentType against
ALLOWED_UPLOAD_CONTENT_TYPES (using the existing safeContentType logic) and
return c.json({ error: "Unsupported content type" }, 400) on mismatch. Ensure
the handler does not call c.req.json() or throw on malformed input—validation
failures should be handled by the zod-validator middleware so the handler only
deals with the validated values (contentType, filename) and returns clean JSON
errors rather than letting exceptions bubble to onError.
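The filename/extension sanitization described in the comments above can be sketched as a standalone function. The 16-char cap comes from the review text; the rejected character set and the `safeExtension` name are illustrative assumptions.

```typescript
// Derive a safe file extension from untrusted input, or "" if anything looks off.
function safeExtension(filename: unknown): string {
  if (typeof filename !== "string") return ""; // reject non-string bodies outright
  const trimmed = filename.trim();
  const dot = trimmed.lastIndexOf(".");
  if (dot < 0 || dot === trimmed.length - 1) return ""; // no usable extension
  const ext = trimmed.slice(dot + 1);
  // Reject path separators, query chars, whitespace, and absurd lengths.
  if (ext.length > 16 || /[/\\?%*:|"<>\s]/.test(ext)) return "";
  return ext.toLowerCase();
}
```

The `s3Key` would then be built from this sanitized value only, never from the raw filename.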
In `@packages/server/src/routes/betterbase/ws.ts`:
- Around line 144-149: The code currently accepts a JWT from the query string
(token) and passes it into server.upgrade (verifyAdminToken, projectSlug), which
leaks credentials into logs and allows any valid admin token to bind to any
project; instead mint and validate a short-lived single‑use WS ticket via an
authenticated HTTP endpoint and accept ?ticket=… (not the raw JWT) here, or at
minimum remove/scrub token from the URL/searchParams before calling
server.upgrade (so it cannot be logged) and ensure you validate that the
verified payload (verifyAdminToken result) is authorized for the requested
projectSlug (check payload.sub and roles/scopes against projectSlug) before
proceeding with server.upgrade.
- Around line 141-152: The fetch handler is using await when calling
verifyAdminToken(token) but is not declared async; change the function signature
for fetch(req: Request, server: any) to an async function (e.g., async
fetch(...)) so await is valid, and ensure any callers expecting a synchronous
return (like server.upgrade) still work with the promise return (adjust
surrounding call sites if needed); verifyAdminToken, payload checks,
server.upgrade and the Response returns should remain unchanged except for the
async signature update.
In `@packages/server/src/routes/device/index.ts`:
- Around line 14-27: The current in-memory rate limiter (deviceCodeRateLimits)
leaks and groups all header-less requests under "unknown"; update the POST /code
handler to 1) derive a safer client key instead of a literal "unknown" (e.g.,
prefer a validated X-Real-IP when present else fallback to
c.req.socket.remoteAddress or another non-spoofable identifier and
normalize/validate it), 2) after filtering the timestamps (using
DEVICE_CODE_RATE_LIMIT_WINDOW_MS) remove the Map entry when the resulting array
is empty so entries are pruned, and 3) optionally cap the Map (or attach a
TTL/eviction like scheduling a delete after the window) to prevent unbounded
growth; keep DEVICE_CODE_RATE_LIMIT_MAX logic but ensure keys are not a shared
literal and entries are garbage-collected.
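A pruned limiter along the lines described above might look like the following sketch. The window and limit constants are placeholders, not the PR's actual values, and the caller is assumed to pass an already-derived client key (validated X-Real-IP or socket address).

```typescript
const WINDOW_MS = 60_000;
const MAX_PER_WINDOW = 5;
const hits = new Map<string, number[]>();

// Returns true if the request is allowed; prunes empty entries as it goes.
function allowDeviceCode(clientKey: string, now = Date.now()): boolean {
  const recent = (hits.get(clientKey) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length === 0) hits.delete(clientKey); // garbage-collect stale keys
  if (recent.length >= MAX_PER_WINDOW) {
    hits.set(clientKey, recent); // keep the window so the block persists
    return false;
  }
  recent.push(now);
  hits.set(clientKey, recent);
  return true;
}
```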
---
Outside diff comments:
In `@docker/nginx/nginx.conf`:
- Around line 22-117: Requests under /betterbase/ are currently falling through
to the catch-all location and hitting betterbase_dashboard instead of
betterbase_server; add a new location block named "location /betterbase/"
(before the existing "location /" catch-all) that proxies to
http://betterbase_server, includes websocket support (proxy_http_version 1.1,
Upgrade/Connection headers), forwards Host, X-Real-IP, X-Forwarded-For and
X-Forwarded-Proto, and sets a long proxy_read_timeout so endpoints like
/betterbase/ws and /betterbase/storage/generate-upload-url reach
betterbase_server.
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: e4527a20-cd88-4d85-ae41-b36d0591e78f
📒 Files selected for processing (11)
- docker/nginx/nginx.conf
- packages/server/migrations/017_revoked_admin_tokens.sql
- packages/server/src/index.ts
- packages/server/src/lib/audit.ts
- packages/server/src/lib/auth.ts
- packages/server/src/lib/env.ts
- packages/server/src/routes/admin/auth.ts
- packages/server/src/routes/admin/project-scoped/users.ts
- packages/server/src/routes/betterbase/index.ts
- packages/server/src/routes/betterbase/ws.ts
- packages/server/src/routes/device/index.ts
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes Applied Successfully: fixed 9 file(s) based on 15 unresolved review comments.
Fixed 9 file(s) based on 15 unresolved review comments. Co-authored-by: CodeRabbit <noreply@coderabbit.ai>
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/server/src/routes/admin/auth.ts`:
- Around line 67-99: The logout handler (authRoutes.post("/logout")) currently
performs DB queries (revocation INSERT and admin SELECT) without error handling
and awaits writeAuditLog; wrap the DB calls that use getPool().query (the INSERT
into betterbase_meta.revoked_admin_tokens and the SELECT from
betterbase_meta.admin_users) in a try/catch so the route does not throw: decide
fail-closed for revocation (if the INSERT fails return c.json({ error:
"revocation_failed" }) and do not proceed) or fail-open if you prefer (log the
error and continue to return success); ensure the SELECT is similarly guarded
and returns a handled error instead of throwing; change the writeAuditLog call
to fire-and-forget (call writeAuditLog(...) without await and attach a
.catch(...) to log failures) so audit writes are never awaited.
In `@packages/server/src/routes/betterbase/index.ts`:
- Around line 264-276: The S3Client is being recreated per request (see the
local variable s3 and new S3Client(...) inside the route); move S3Client to
module scope so all requests reuse one client: declare a module-level holder
(e.g. let s3: S3Client | null) and lazily initialize it once after verifying
env.STORAGE_ENDPOINT / env.STORAGE_ACCESS_KEY / env.STORAGE_SECRET_KEY (use the
same constructor options currently used). Replace the per-request new S3Client
call with the module-level instance and ensure error handling still returns the
"Storage not configured" response when env vars are missing.
- Around line 199-207: The zod schema passed to zValidator in the
betterbaseRouter.post for "/storage/generate-upload-url" currently requires
filename and has no length bounds; change the schema so filename is optional
(use .optional()) and treat absent filename as empty when downstream logic
(e.g., filename?.split...) expects it, and add size limits: apply .max(255) to
filename and .max(127) to contentType to prevent huge allocations; update the
schema object inside the zValidator call accordingly so callers that omit
filename continue to work and both fields are capped.
- Line 24: validateSchemaName currently uses a regex that disallows hyphens and
allows underscores, which diverges from SAFE_PROJECT_SLUG and causes
exportProjectUsers to throw for hyphenated slugs; update validateSchemaName to
use a regex that permits hyphens and forbids underscores (match
SAFE_PROJECT_SLUG semantics) so project slugs like my-project are accepted
everywhere. Locate the validateSchemaName function and replace its pattern with
one that starts with a lowercase letter and allows lowercase letters, digits and
hyphens (no underscores), ensuring its validation logic mirrors the
SAFE_PROJECT_SLUG rule used elsewhere.
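Aligning the two validators could be as simple as the following sketch; the regex mirrors the `SAFE_PROJECT_SLUG` pattern shown in this PR's diff (hyphens allowed, underscores rejected), while the error message is illustrative.

```typescript
// Canonical project-slug charset: lowercase letter first, then lowercase
// letters, digits, or hyphens, max 63 chars total.
const CANONICAL_SLUG = /^[a-z][a-z0-9-]{0,62}$/;

function validateSchemaName(slug: string): string {
  if (!CANONICAL_SLUG.test(slug)) throw new Error(`Invalid project slug: ${slug}`);
  return slug;
}
```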
In `@packages/server/src/routes/device/index.ts`:
- Around line 42-50: The current eviction in the deviceCodeRateLimits cleanup
block performs Array.from + sort on every request once size >
RATE_LIMIT_MAP_MAX_SIZE, causing O(n log n) work per request; replace it with a
cheaper eviction strategy: either run the expensive cleanup periodically via
setInterval (e.g., a background sweep that performs the sort/slice against
deviceCodeRateLimits) or perform an insertion-order eviction using the Map's
order to drop the oldest N entries in O(N) by iterating
deviceCodeRateLimits.keys() and deleting the first
Math.floor(RATE_LIMIT_MAP_MAX_SIZE * 0.1) keys; update the code around
deviceCodeRateLimits and the cleanup block (where entries/entries.sort/toDelete
are used) to use one of these approaches and ensure the eviction batch size and
scheduling are clearly defined.
- Around line 32-39: The cleanup branch is dead because recent.push(now) runs
before the empty check; change the logic in the deviceCodeRateLimits handling so
you prune stale timestamps (the existing filter on recent) and check
recent.length before calling recent.push(now): if the filtered recent is empty
then call deviceCodeRateLimits.delete(ip) and do not push/set, otherwise
push(now) and deviceCodeRateLimits.set(ip, recent). This ensures the delete path
can run and prevents relying only on the cap-based eviction.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: f1f30851-9c7b-4dc0-8d26-34dc47cb03c3
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock, !**/bun.lock, !**/*.lock
📒 Files selected for processing (8)
- docker/nginx/nginx.conf
- packages/server/src/lib/auth.ts
- packages/server/src/lib/inngest.ts
- packages/server/src/routes/admin/auth.ts
- packages/server/src/routes/admin/project-scoped/users.ts
- packages/server/src/routes/betterbase/index.ts
- packages/server/src/routes/betterbase/ws.ts
- packages/server/src/routes/device/index.ts
| authRoutes.post("/logout", async (c) => { | ||
| const token = extractBearerToken(c.req.header("Authorization")); | ||
| if (!token) return c.json({ success: true }); | ||
|
|
||
| const payload = await verifyAdminToken(token); | ||
| if (!payload) return c.json({ success: true }); | ||
|
|
||
| const pool = getPool(); | ||
| // Only revoke if jti is present | ||
| if (payload.jti && payload.exp) { | ||
| await pool.query( | ||
| `INSERT INTO betterbase_meta.revoked_admin_tokens (jti, admin_user_id, expires_at) | ||
| VALUES ($1, $2, to_timestamp($3)) | ||
| ON CONFLICT (jti) DO NOTHING`, | ||
| [payload.jti, payload.sub, payload.exp], | ||
| ); | ||
| } | ||
|
|
||
| const { rows } = await pool.query("SELECT id, email FROM betterbase_meta.admin_users WHERE id = $1", [ | ||
| payload.sub, | ||
| ]); | ||
| if (rows.length > 0) { | ||
| await writeAuditLog({ | ||
| actorId: rows[0].id, | ||
| actorEmail: rows[0].email, | ||
| action: "admin.logout", | ||
| ipAddress: getClientIp(c.req.raw.headers), | ||
| userAgent: c.req.header("User-Agent") ?? undefined, | ||
| }); | ||
| } | ||
|
|
||
| return c.json({ success: true }); | ||
| }); |
Logout handler can throw on DB errors and awaits the audit log.
Two issues against the coding guidelines:
- Audit log is awaited (line 89). As per coding guidelines: "Audit log writes: fire-and-forget. Never await."
- Unhandled exceptions at lines 77 and 85: a Postgres failure on either the revocation INSERT or the admin SELECT propagates out of the handler, hits the global onError, and returns 500. The client perceives logout failure even though the token may already have been revoked (or worse — believes they're logged out when revocation actually failed). As per coding guidelines: "Route handlers must not throw — errors should be caught and return c.json({error}). The global onError handler catches the rest but shouldn't be the primary mechanism."
Wrap the DB calls in try/catch and decide deliberately: fail-closed (return 500 with explicit error if revocation fails — admin must know their token is still valid) or fail-open (return success but log the revocation failure for ops).
Proposed shape

```diff
-  const pool = getPool();
-  // Only revoke if jti is present
-  if (payload.jti && payload.exp) {
-    await pool.query(
-      `INSERT INTO betterbase_meta.revoked_admin_tokens (jti, admin_user_id, expires_at)
-       VALUES ($1, $2, to_timestamp($3))
-       ON CONFLICT (jti) DO NOTHING`,
-      [payload.jti, payload.sub, payload.exp],
-    );
-  }
-
-  const { rows } = await pool.query("SELECT id, email FROM betterbase_meta.admin_users WHERE id = $1", [
-    payload.sub,
-  ]);
-  if (rows.length > 0) {
-    await writeAuditLog({
+  const pool = getPool();
+  if (payload.jti && payload.exp) {
+    try {
+      await pool.query(
+        `INSERT INTO betterbase_meta.revoked_admin_tokens (jti, admin_user_id, expires_at)
+         VALUES ($1, $2, to_timestamp($3))
+         ON CONFLICT (jti) DO NOTHING`,
+        [payload.jti, payload.sub, payload.exp],
+      );
+    } catch (err) {
+      console.error("[auth] failed to revoke token", err);
+      return c.json({ error: "Logout failed; please retry." }, 500);
+    }
+  }
+
+  let rows: any[] = [];
+  try {
+    ({ rows } = await pool.query("SELECT id, email FROM betterbase_meta.admin_users WHERE id = $1", [payload.sub]));
+  } catch (err) {
+    console.error("[auth] failed to load admin for audit", err);
+  }
+  if (rows.length > 0) {
+    writeAuditLog({
       actorId: rows[0].id,
       actorEmail: rows[0].email,
       action: "admin.logout",
       ipAddress: getClientIp(c.req.raw.headers),
       userAgent: c.req.header("User-Agent") ?? undefined,
-    });
+    }).catch(() => {});
   }
```

As per coding guidelines: "Audit log writes: fire-and-forget. Never await." and "Route handlers must not throw — errors should be caught and return c.json({error})."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/routes/admin/auth.ts` around lines 67 - 99, The logout
handler (authRoutes.post("/logout")) currently performs DB queries (revocation
INSERT and admin SELECT) without error handling and awaits writeAuditLog; wrap
the DB calls that use getPool().query (the INSERT into
betterbase_meta.revoked_admin_tokens and the SELECT from
betterbase_meta.admin_users) in a try/catch so the route does not throw: decide
fail-closed for revocation (if the INSERT fails return c.json({ error:
"revocation_failed" }) and do not proceed) or fail-open if you prefer (log the
error and continue to return success); ensure the SELECT is similarly guarded
and returns a handled error instead of throwing; change the writeAuditLog call
to fire-and-forget (call writeAuditLog(...) without await and attach a
.catch(...) to log failures) so audit writes are never awaited.
```ts
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

export const betterbaseRouter = new Hono();
const SAFE_PROJECT_SLUG = /^[a-z][a-z0-9-]{0,62}$/;
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Find all slug-validating regexes in the server package to confirm divergence
rg -nP --type=ts -C2 '\[a-z(\]|0-9[_-])'
# Confirm the inngest validator
rg -nP --type=ts -C3 'validateSchemaName|Invalid project slug'
```

Repository: weroperking/Betterbase
Length of output: 4749

🏁 Script executed:

```shell
rg -n 'validateSchemaName' packages/server/src
```

Repository: weroperking/Betterbase
Length of output: 251
Fix regex divergence in validateSchemaName — hyphenated project slugs break CSV export.
SAFE_PROJECT_SLUG at line 24 permits hyphens (/^[a-z][a-z0-9-]{0,62}$/, correct per the [a-z0-9-] canonical constraint). But validateSchemaName in packages/server/src/lib/inngest.ts:6–12 rejects hyphens and only accepts underscores (/^[a-z][a-z0-9_]*$/).
Any project with a hyphen passes validation here and in admin routes, but exportProjectUsers (line 377) calls validateSchemaName and throws Invalid project slug, breaking CSV export for those tenants. All other slug validators in the codebase (admin/projects, admin/functions, shared/utils) permit hyphens; the inngest regex is the outlier.
Update validateSchemaName to accept hyphens and reject underscores to match the architectural constraint.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/routes/betterbase/index.ts` at line 24,
validateSchemaName currently uses a regex that disallows hyphens and allows
underscores, which diverges from SAFE_PROJECT_SLUG and causes exportProjectUsers
to throw for hyphenated slugs; update validateSchemaName to use a regex that
permits hyphens and forbids underscores (match SAFE_PROJECT_SLUG semantics) so
project slugs like my-project are accepted everywhere. Locate the
validateSchemaName function and replace its pattern with one that starts with a
lowercase letter and allows lowercase letters, digits and hyphens (no
underscores), ensuring its validation logic mirrors the SAFE_PROJECT_SLUG rule
used elsewhere.
| betterbaseRouter.post( | ||
| "/storage/generate-upload-url", | ||
| zValidator( | ||
| "json", | ||
| z.object({ | ||
| contentType: z.string(), | ||
| filename: z.string(), | ||
| }), | ||
| ), |
filename is now required where it was optional, and unbounded in length.
Two issues with the zod schema:

- `filename: z.string()` — previously `filename` was optional (the old code used `filename?.split(".").pop()`). Existing CLI/dashboard callers that omit it now get a 400. If that's intentional, call it out in release notes; otherwise add `.optional()` and treat absent as an empty extension.
- No `.max(...)` on either `contentType` or `filename`. A 10MB filename string will be parsed, trimmed, and split before being thrown away at the 16-char extension cap — pointless allocation. Add `.max(255)` on `filename` and `.max(127)` on `contentType` (RFC-ish bounds).
```diff
-      z.object({
-        contentType: z.string(),
-        filename: z.string(),
-      }),
+      z.object({
+        contentType: z.string().max(127),
+        filename: z.string().max(255).optional(),
+      }),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/routes/betterbase/index.ts` around lines 199 - 207, The
zod schema passed to zValidator in the betterbaseRouter.post for
"/storage/generate-upload-url" currently requires filename and has no length
bounds; change the schema so filename is optional (use .optional()) and treat
absent filename as empty when downstream logic (e.g., filename?.split...)
expects it, and add size limits: apply .max(255) to filename and .max(127) to
contentType to prevent huge allocations; update the schema object inside the
zValidator call accordingly so callers that omit filename continue to work and
both fields are capped.
```ts
if (!env.STORAGE_ENDPOINT || !env.STORAGE_ACCESS_KEY || !env.STORAGE_SECRET_KEY) {
  return c.json({ error: "Storage not configured" }, 500);
}

const s3 = new S3Client({
  endpoint: env.STORAGE_ENDPOINT,
  region: "us-east-1",
  credentials: {
    accessKeyId: env.STORAGE_ACCESS_KEY,
    secretAccessKey: env.STORAGE_SECRET_KEY,
  },
  forcePathStyle: true,
});
```
🧹 Nitpick | 🔵 Trivial
S3Client reinstantiated on every upload-URL request.
Each call to this endpoint allocates a new S3Client (HTTP agent, signer state, credentials provider). The credentials are static env vars — hoist the client to module scope (lazy-initialized after env validation) so all requests reuse a single connection pool.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/routes/betterbase/index.ts` around lines 264 - 276, The
S3Client is being recreated per request (see the local variable s3 and new
S3Client(...) inside the route); move S3Client to module scope so all requests
reuse one client: declare a module-level holder (e.g. let s3: S3Client | null)
and lazily initialize it once after verifying env.STORAGE_ENDPOINT /
env.STORAGE_ACCESS_KEY / env.STORAGE_SECRET_KEY (use the same constructor
options currently used). Replace the per-request new S3Client call with the
module-level instance and ensure error handling still returns the "Storage not
configured" response when env vars are missing.
```ts
recent.push(now);

// Prune entry if empty (cleanup), otherwise update
if (recent.length === 0) {
  deviceCodeRateLimits.delete(ip);
} else {
  deviceCodeRateLimits.set(ip, recent);
}
```
Dead branch — recent.length is always ≥ 1 at line 35.
recent.push(now) runs unconditionally at line 32, so the recent.length === 0 check at line 35 can never be true and the delete cleanup is unreachable. The intended per-request pruning never fires; you're relying entirely on the size-cap eviction at lines 42–50 to bound memory. Either move the cleanup before the push (only delete if filter dropped everything and the request would still record), or drop the dead branch and rely on a periodic sweep.
Proposed fix

```diff
-  recent.push(now);
-
-  // Prune entry if empty (cleanup), otherwise update
-  if (recent.length === 0) {
-    deviceCodeRateLimits.delete(ip);
-  } else {
-    deviceCodeRateLimits.set(ip, recent);
-  }
+  recent.push(now);
+  deviceCodeRateLimits.set(ip, recent);
```
+ deviceCodeRateLimits.set(ip, recent);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| recent.push(now); | |
| // Prune entry if empty (cleanup), otherwise update | |
| if (recent.length === 0) { | |
| deviceCodeRateLimits.delete(ip); | |
| } else { | |
| deviceCodeRateLimits.set(ip, recent); | |
| } | |
| recent.push(now); | |
| deviceCodeRateLimits.set(ip, recent); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/routes/device/index.ts` around lines 32 - 39, The cleanup
branch is dead because recent.push(now) runs before the empty check; change the
logic in the deviceCodeRateLimits handling so you prune stale timestamps (the
existing filter on recent) and check recent.length before calling
recent.push(now): if the filtered recent is empty then call
deviceCodeRateLimits.delete(ip) and do not push/set, otherwise push(now) and
deviceCodeRateLimits.set(ip, recent). This ensures the delete path can run and
prevents relying only on the cap-based eviction.
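The prune-before-push ordering described above can be sketched as a self-contained sliding-window check. The window size, request cap, and function name are assumptions for illustration, not the route's actual values:

```typescript
const WINDOW_MS = 60_000;  // assumed window
const MAX_REQUESTS = 5;    // assumed per-IP cap
const deviceCodeRateLimits = new Map<string, number[]>();

// Prune stale timestamps first, then decide, then record. Because the
// decision happens before push, a denied request never grows the entry.
function allowDeviceCode(ip: string, now = Date.now()): boolean {
  const recent = (deviceCodeRateLimits.get(ip) ?? []).filter(
    (t) => now - t < WINDOW_MS,
  );
  if (recent.length >= MAX_REQUESTS) {
    deviceCodeRateLimits.set(ip, recent); // keep the pruned list
    return false;
  }
  recent.push(now);
  deviceCodeRateLimits.set(ip, recent);
  return true;
}
```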
```ts
  if (deviceCodeRateLimits.size > RATE_LIMIT_MAP_MAX_SIZE) {
    // Remove oldest entries
    const entries = Array.from(deviceCodeRateLimits.entries());
    entries.sort((a, b) => Math.min(...a[1]) - Math.min(...b[1]));
    const toDelete = entries.slice(0, Math.floor(RATE_LIMIT_MAP_MAX_SIZE * 0.1));
    for (const [key] of toDelete) {
      deviceCodeRateLimits.delete(key);
    }
  }
```
O(n log n) sort on every request once the cap is breached.
When size > 10_000, every subsequent POST /device/code allocates Array.from(entries), sorts all 10k entries by Math.min(...timestamps), and slices 10%. Under sustained rate-limit-violating traffic (the exact case this is meant to handle), this runs on every request, amplifying the DoS vector instead of mitigating it. Switch to either a periodic setInterval sweep or an insertion-order eviction (LRU-ish: Map already preserves insertion order — for (const k of map.keys()) { map.delete(k); if (--n === 0) break; } is O(k) where k is the eviction batch).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/server/src/routes/device/index.ts` around lines 42 - 50, The current
eviction in the deviceCodeRateLimits cleanup block performs Array.from + sort on
every request once size > RATE_LIMIT_MAP_MAX_SIZE, causing O(n log n) work per
request; replace it with a cheaper eviction strategy: either run the expensive
cleanup periodically via setInterval (e.g., a background sweep that performs the
sort/slice against deviceCodeRateLimits) or perform an insertion-order eviction
using the Map's order to drop the oldest N entries in O(N) by iterating
deviceCodeRateLimits.keys() and deleting the first
Math.floor(RATE_LIMIT_MAP_MAX_SIZE * 0.1) keys; update the code around
deviceCodeRateLimits and the cleanup block (where entries/entries.sort/toDelete
are used) to use one of these approaches and ensure the eviction batch size and
scheduling are clearly defined.
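The suggested insertion-order eviction can be sketched as a small helper. The 10% batch size mirrors the existing code; the generic helper name is hypothetical:

```typescript
const RATE_LIMIT_MAP_MAX_SIZE = 10_000; // assumed cap, matching the route's constant

// A Map iterates keys in insertion order, so the first keys are the
// earliest-inserted entries (oldest first-seen IPs). Deleting from the
// front is O(k) for a batch of k keys, with no per-request sort.
function evictOldest<K, V>(map: Map<K, V>, maxSize: number): void {
  if (map.size <= maxSize) return;
  let n = Math.max(1, Math.floor(maxSize * 0.1)); // evict a 10% batch
  for (const key of map.keys()) {
    map.delete(key);
    if (--n === 0) break;
  }
}
```

Note this evicts by first insertion, not last activity, so a long-lived busy IP can be evicted and re-admitted; that trade-off is what keeps the hot path cheap.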
Summary
This PR implements a focused security hardening pass based on the CodeRabbit findings, with emphasis on unauthenticated execution paths and token/session risk.
✅ Fixed
- Auth is now required on the `/betterbase/:kind/*` function execution endpoints.
- Auth is now required on `/betterbase/storage/generate-upload-url`.
- The `/betterbase/ws` upgrade handler is secured (requires a valid token).
- Admin JWT lifetime reduced from `30d` to `8h`; tokens now carry `iss`, `aud`, and `jti` claims.
- New `017_revoked_admin_tokens.sql` migration backs server-side revocation; logout writes an `admin.logout` audit log entry.
- nginx now sends `X-Forwarded-For: $remote_addr` (no client-supplied chain passthrough) and sets `X-Real-IP`.
- App IP capture reads `X-Real-IP` and falls back safely.
- Per-IP rate limiting on `POST /device/code`.

Files changed

- `packages/server/src/routes/betterbase/index.ts`
- `packages/server/src/routes/betterbase/ws.ts`
- `packages/server/src/lib/auth.ts`
- `packages/server/src/routes/admin/auth.ts`
- `packages/server/migrations/017_revoked_admin_tokens.sql`
- `packages/server/src/lib/env.ts`
- `packages/server/src/index.ts`
- `packages/server/src/lib/audit.ts`
- `docker/nginx/nginx.conf`
- `packages/server/src/routes/device/index.ts`
- `packages/server/src/routes/admin/project-scoped/users.ts`

Notes
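The token-lifecycle changes (8h lifetime plus `iss`, `aud`, and `jti` claims) can be sketched with a hand-rolled HS256 signer. The issuer and audience strings are placeholder assumptions, and production code should use a vetted JWT library rather than this illustration:

```typescript
import { createHmac, randomUUID } from "node:crypto";

const b64url = (s: string) => Buffer.from(s).toString("base64url");

function signAdminToken(adminUserId: string, secret: string): string {
  const now = Math.floor(Date.now() / 1000);
  const header = { alg: "HS256", typ: "JWT" };
  const payload = {
    sub: adminUserId,
    iss: "example-api",     // placeholder issuer
    aud: "example-admin",   // placeholder audience
    jti: randomUUID(),      // unique id recorded in revoked_admin_tokens on logout
    iat: now,
    exp: now + 8 * 60 * 60, // 8h lifetime instead of 30d
  };
  const signingInput =
    `${b64url(JSON.stringify(header))}.${b64url(JSON.stringify(payload))}`;
  const sig = createHmac("sha256", secret)
    .update(signingInput)
    .digest("base64url");
  return `${signingInput}.${sig}`;
}
```

The `jti` is what makes server-side revocation possible: logout inserts it into the revocation table, and verification rejects any token whose `jti` is present there.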
Codex Task
Summary by CodeRabbit
New Features
Security
Reliability