Auth revocation, WS ticketing, CSV escaping, rate limiting, and env/security hardening#65
Walkthrough

The pull request introduces token revocation for admin sessions, implements ticket-based WebSocket authentication, hardens upload and rate-limiting policies, refactors IP address extraction to use explicit headers, and centralizes CSV escaping logic. Changes span database migrations, server configuration, authentication libraries, and multiple API endpoints.
Sequence Diagrams

```mermaid
sequenceDiagram
participant Admin as Admin Client
participant Server
participant Database
Admin->>Server: POST /admin/logout (bearer token)
Server->>Server: verify token
Server->>Database: upsert revoked_admin_tokens (jti, expires_at)
Database-->>Server: revocation recorded
Server->>Database: query admin user by sub
Database-->>Server: admin user (if exists)
Server->>Database: write audit log (admin.logout, clientIP, User-Agent)
Database-->>Server: audit logged
Server-->>Admin: { success: true }
Note over Server,Database: Future request with same token
Admin->>Server: request with revoked token
Server->>Server: extract jti from token
Server->>Database: check if jti in revoked_admin_tokens
Database-->>Server: found (revoked)
Server-->>Admin: null (token rejected)
```

```mermaid
sequenceDiagram
participant Admin as Admin Client
participant Server
participant Database
participant WSClient as WS Client
Admin->>Server: POST /betterbase/ws-ticket (admin-verified)
Server->>Server: validate projectSlug
Server->>Database: verify project exists
Database-->>Server: project found
Server->>Database: create/store ticket (projectSlug, userId, expires_at)
Database-->>Server: ticket stored
Server-->>Admin: { ticket: "xxx" }
WSClient->>Server: GET /betterbase/ws?ticket=xxx&project=slug
Server->>Database: fetch ticket by value
Database-->>Server: ticket { projectSlug, userId, expires_at }
Server->>Server: validate ticket not expired
Server->>Server: validate project matches ticket.projectSlug
Server->>Database: verify project exists in betterbase_meta.projects
Database-->>Server: project found
Server->>Database: delete ticket (after validation)
Database-->>Server: ticket deleted
Server->>WSClient: upgrade connection (projectSlug, userId in data)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
@CodeRabbit full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 23
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/server/src/routes/admin/project-scoped/users.ts (2)
233-240: 🛠️ Refactor suggestion | 🟠 Major: Inconsistent escaping: `email_verified` and `banned` bypass `escapeCSVValue`.

`inngest.ts` (build-csv step, L426–434) routes all six fields through `escapeCSVValue`, while this row builder interpolates `r.email_verified` and `r.banned` raw. They are booleans today so output is safe, but:

- The two CSV producers now diverge on quoting rules (a boolean `false` here is unquoted; if upstream ever returns a string value, it goes in unescaped).
- A future schema change (e.g., `banned` becoming a nullable text/enum, or the pg driver returning these as something other than JS booleans) silently regresses to a CSV-injection / row-shape bug with no test coverage.

Run every column through the helper for parity with the Inngest exporter.

Proposed diff:

```diff
-      (r) =>
-        `${escapeCSVValue(r.id)},${escapeCSVValue(r.name)},${escapeCSVValue(r.email)},${r.email_verified},${escapeCSVValue(r.created_at)},${r.banned}`,
+      (r) =>
+        [
+          escapeCSVValue(r.id),
+          escapeCSVValue(r.name),
+          escapeCSVValue(r.email),
+          escapeCSVValue(r.email_verified),
+          escapeCSVValue(r.created_at),
+          escapeCSVValue(r.banned),
+        ].join(","),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/routes/admin/project-scoped/users.ts` around lines 233 - 240, The CSV row builder is inconsistently escaping two fields: r.email_verified and r.banned bypass escapeCSVValue; update the template in the csv variable construction so every column uses escapeCSVValue (i.e., call escapeCSVValue(r.email_verified) and escapeCSVValue(r.banned)) to match the Inngest exporter and ensure all six fields (id, name, email, email_verified, created_at, banned) are escaped uniformly.
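A minimal sketch of the escaping discipline the comment asks for. The `escapeCSVValue` name comes from the review; this implementation (quote-wrapping plus spreadsheet-formula neutralization) is an assumption, not the repo's actual `src/lib/csv.ts`:

```typescript
// Hypothetical implementation of the escapeCSVValue helper named in the review.
function escapeCSVValue(value: unknown): string {
  let s = value === null || value === undefined ? "" : String(value);
  // Neutralize spreadsheet formula injection (=, +, -, @ prefixes).
  if (/^[=+\-@]/.test(s)) s = `'${s}`;
  // Quote when the value contains a delimiter, quote, or newline.
  if (/[",\r\n]/.test(s)) s = `"${s.replace(/"/g, '""')}"`;
  return s;
}

// Every column goes through the helper, so booleans and future string
// values are treated uniformly across both CSV producers.
const toCSVRow = (r: Record<string, unknown>, cols: string[]): string =>
  cols.map((c) => escapeCSVValue(r[c])).join(",");
```

Routing even the "safe" boolean columns through one helper means a later type change cannot silently reopen the injection path.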
223-249: ⚠️ Potential issue | 🟠 Major: No audit log on bulk user export.

`PATCH /:userId/ban` and `DELETE /:userId` both emit `writeAuditLog` entries, but `POST /export` — which exfiltrates every user record (id, name, email, verification, created_at, banned) for a project — writes nothing to `betterbase_meta.audit_log`. This is exactly the action you want a forensic trail for, and the PR description calls out "improve logging/audit capture."

Proposed diff:

```diff
 projectUserRoutes.post("/export", async (c) => {
   const pool = getPool();
   const project = c.get("project") as { id: string; slug: string };
+  const admin = c.get("adminUser") as { id: string; email: string };
   const s = schemaName(project);
   const { rows } = await pool.query(
     `SELECT id, name, email, email_verified, created_at, banned FROM ${s}."user" ORDER BY created_at DESC`,
   );
+
+  writeAuditLog({
+    actorId: admin.id,
+    actorEmail: admin.email,
+    action: "project.users.export",
+    resourceType: "project",
+    resourceId: project.id,
+    resourceName: project.slug,
+    afterData: { row_count: rows.length },
+    ipAddress: getClientIp(c.req.raw.headers),
+  }).catch(() => {});
```

(Per the repo's fire-and-forget convention for audit writes — not awaited.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/routes/admin/project-scoped/users.ts` around lines 223 - 249, Add an audit log entry when exporting users: after you build the CSV in projectUserRoutes.post("/export") call the existing writeAuditLog function (fire-and-forget, do not await) with an action like "project.user.export", include the project id (from schemaName/project or c.get("project")), the actor performing the export (from the request context, e.g. c.get("actor") or the same key used in other routes), and metadata such as exported_count and which fields were exported (id,name,email,email_verified,created_at,banned) so the export is recorded in betterbase_meta.audit_log; ensure you reference the same writeAuditLog symbol used elsewhere so the audit follows the repo convention.

packages/server/src/routes/device/index.ts (1)
105-128: ⚠️ Potential issue | 🟠 Major: `POST /device/verify` is an unrate-limited password oracle.

The new limiter only covers `/code`. `/verify` runs `verifyPassword` (bcrypt rounds=12) against any submitted email and is reachable without authentication, so it is the actual brute-force/credential-stuffing target. Apply the same (or stricter) IP+key throttle here, and consider returning a constant-time generic error regardless of whether the email exists to avoid user enumeration via timing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/routes/device/index.ts` around lines 105 - 128, The POST /device/verify handler (deviceRouter.post("/verify")) is currently an unrate-limited password oracle; add the same IP+key throttling used by the /code endpoint (reuse the rate limiter instance or middleware applied to the deviceRouter.post("/code") route) so calls to this handler are limited per remote IP and per key; also ensure you perform user-existence and password verification in constant time and always return the same generic error response (e.g., "Invalid credentials.") without revealing whether the email exists or how long verification took, and keep the verifyPassword usage intact but run it against a constant-time fallback hash when the user is not found to avoid timing leaks.

packages/server/src/routes/betterbase/index.ts (1)
46-69: 🧹 Nitpick | 🔵 Trivial: Validate `X-Project-Slug` before parsing the body.

Slug validation runs after `c.req.json()` consumes the request body. Reorder so a malformed slug is rejected without spending CPU/memory on JSON parsing — minor, but trivial to fix and matters for large mutation payloads.
Verify each finding against the current code and only fix it if needed. In `@packages/server/src/routes/betterbase/index.ts` around lines 46 - 69, Move the X-Project-Slug validation to run before parsing the request body: read and validate the header value from c.req.header("X-Project-Slug") (using SAFE_PROJECT_SLUG) and return the 400 error if invalid, then proceed to call c.req.json() and parse args; update the code around getPool(), projectSlug, SAFE_PROJECT_SLUG, and the body parsing block (where c.req.json(), args, and (fn.handler as any)._args.safeParse are used) so that header validation occurs first and short-circuits invalid requests before JSON parsing.
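The reordering the comment describes boils down to a guard that runs on the header value before the body is ever read. A sketch, where this regex is an assumed stand-in for the repo's actual `SAFE_PROJECT_SLUG`:

```typescript
// Assumed pattern; the repo's SAFE_PROJECT_SLUG may be stricter or looser.
const SAFE_PROJECT_SLUG = /^[a-z0-9][a-z0-9-]{0,62}$/;

// Returns the validated slug or null. In a Hono handler this would run on
// c.req.header("X-Project-Slug") *before* awaiting c.req.json(), so invalid
// requests short-circuit without paying for JSON parsing.
function validateProjectSlug(header: string | undefined): string | null {
  if (!header || !SAFE_PROJECT_SLUG.test(header)) return null;
  return header;
}
```

A `null` return maps to an immediate 400 response; only non-null results proceed to body parsing.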
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docker/nginx/nginx.conf`:
- Line 56: The current proxy_set_header X-Forwarded-For $remote_addr; drops
upstream proxy chains and loses original client IP when nginx is behind a
CDN/ALB; change the nginx config to use proxy_set_header X-Forwarded-For
$proxy_add_x_forwarded_for; and add a trusted-proxies block with
set_real_ip_from <trusted-cidr>; real_ip_header X-Forwarded-For; (or, if you
intentionally only support single-proxy deployments, add a clear
comment/documentation that nginx must be the outermost proxy and that
$remote_addr is used intentionally). Locate the proxy_set_header line and update
it and/or add the set_real_ip_from/real_ip_header directives to accept trusted
upstreams.
- Around line 111-116: The `@dashboard_fallback` location block is missing the
X-Real-IP header so requests routed through the SPA fallback are logged with
ip=NULL; add a proxy_set_header X-Real-IP $remote_addr; line to the location
`@dashboard_fallback` block (so it matches the other locations) so src/index.ts's
request logger (which reads X-Real-IP) receives the real client IP.
- Around line 76-86: The /betterbase/ location block is missing a
client_max_body_size setting so large multipart uploads will be rejected; edit
the nginx location /betterbase/ block (the location /betterbase/ { ... } stanza)
and add client_max_body_size 100m; (or a larger value matching the /storage/
configuration) inside that block to allow file uploads ≥100MB to pass through
the proxy.
- Around line 83-84: Add an nginx-level map to compute a Connection value from
$http_upgrade (e.g. create a variable named $connection_upgrade that is
"upgrade" by default and "close" when $http_upgrade is empty) and then replace
the hard-coded Connection "upgrade" header inside the location /betterbase/ and
location /realtime/ blocks by using proxy_set_header Connection
$connection_upgrade;; specifically locate the existing proxy_set_header
Connection "upgrade"; lines and swap them to use the $connection_upgrade
variable so Connection is only set for actual upgrade requests.
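Taken together, the four nginx comments above amount to a handful of directives. A sketch only, with `10.0.0.0/8` as a placeholder trusted-proxy CIDR and `app` as a placeholder upstream name; adapt to the real `docker/nginx/nginx.conf`:

```nginx
# http-level map: only send "Connection: upgrade" on real upgrade requests.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    # Trust X-Forwarded-For only from known upstream proxies (realip module).
    set_real_ip_from 10.0.0.0/8;     # placeholder CIDR
    real_ip_header   X-Forwarded-For;

    location /betterbase/ {
        client_max_body_size 100m;   # allow large multipart uploads
        proxy_pass http://app;       # placeholder upstream
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade         $http_upgrade;
        proxy_set_header Connection      $connection_upgrade;
    }
}
```

`$proxy_add_x_forwarded_for` appends `$remote_addr` to the incoming chain instead of discarding it, and the `map` keeps plain HTTP requests from carrying a spurious `Connection: upgrade`.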
In `@packages/server/migrations/018_revoked_admin_tokens_expiry_hardening.sql`:
- Around line 8-9: The migration 018 contains a duplicate index creation for
idx_revoked_admin_tokens_expires_at that already exists in migration 017; either
remove the CREATE INDEX line from migration 018, or instead move the index
creation out of 017 into migration 018 so it runs after you set the expires_at
column to NOT NULL—if you choose the move, delete the index statement in 017 and
keep a single CREATE INDEX IF NOT EXISTS idx_revoked_admin_tokens_expires_at ON
betterbase_meta.revoked_admin_tokens (expires_at); in 018 placed after the NOT
NULL alteration to ensure the index is created once with the final column
constraint.
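Assuming the "move the index into 018" option is taken (and the index statement deleted from 017), the consolidated migration might read:

```sql
-- Sketch of a consolidated 018 migration: the index is created once,
-- after the column is made NOT NULL. Statements are illustrative only.
ALTER TABLE betterbase_meta.revoked_admin_tokens
    ALTER COLUMN expires_at SET NOT NULL;

CREATE INDEX IF NOT EXISTS idx_revoked_admin_tokens_expires_at
    ON betterbase_meta.revoked_admin_tokens (expires_at);
```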
In `@packages/server/src/index.ts`:
- Line 48: The request-logging middleware currently uses only X-Real-IP (const
ip = c.req.header("X-Real-IP") ?? null;) causing inconsistent IP extraction;
update it to reuse the existing helper by importing lib/audit.ts:getClientIp (or
extend getClientIp to accept a Hono context or Headers if it currently only
accepts plain headers) and replace the direct header lookup with a call to
getClientIp(c) (or getClientIp(c.req.headers)). Ensure the helper returns the
same fallbacks (X-Real-IP → X-Forwarded-For → UA fingerprint) so
request_logs.ip, audit_log.ip_address, and getRateLimitKey all use the single
canonical function.
- Around line 19-27: The problem is the top-level await on the initial cleanup
query in the startup code that can cause the server to crash if the Postgres
query fails. To fix it, remove the top-level await call and instead rely only on
the setInterval cleanup logic that includes proper error handling with a catch
block. This means modifying the block with the immediate deletion query and
replacing it so it doesn't await or throw, similar to the cleanup done inside
the setInterval function.
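The startup-crash fix above can be sketched as a single scheduler where the initial sweep reuses the interval's error handling instead of a top-level `await`. Names and the pool shape are assumptions:

```typescript
// Fire-and-forget cleanup: a failed Postgres query logs instead of crashing
// the process. The pool type is reduced to the one method the sketch needs.
function scheduleRevokedTokenCleanup(pool: { query(sql: string): Promise<unknown> }) {
  const sweep = () =>
    pool
      .query("DELETE FROM betterbase_meta.revoked_admin_tokens WHERE expires_at < now()")
      .catch((err) => console.error("revoked-token cleanup failed:", err));
  sweep(); // immediate pass at startup, without a top-level await
  return setInterval(sweep, 60 * 60 * 1000); // hourly thereafter
}
```

Because `sweep` always attaches a `.catch`, neither the startup pass nor the hourly passes can produce an unhandled rejection.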
In `@packages/server/src/lib/audit.ts`:
- Line 74: The current extraction returns the rightmost X-Forwarded-For hop
which is wrong for chained proxies; update the logic in
packages/server/src/lib/audit.ts where you call headers.get("x-real-ip") ??
headers.get("x-forwarded-for") to instead select the leftmost client IP (e.g.
use .split(",")[0]?.trim()) or, if you need to support a trusted-proxy chain,
read an env var like TRUSTED_PROXY_COUNT and pick the entry at index (length -
trustedCount - 1) (document the chosen approach in a comment), ensuring fallback
to "unknown" remains.
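The canonical helper both comments converge on (single function, `X-Real-IP` first, then the leftmost `X-Forwarded-For` hop, `"unknown"` fallback) can be sketched as follows; this assumes the simple leftmost-entry strategy rather than the `TRUSTED_PROXY_COUNT` variant:

```typescript
// Single canonical client-IP helper, per the review: X-Real-IP wins, then the
// *leftmost* X-Forwarded-For entry (the original client, assuming the proxy
// chain ahead of us is trusted), then "unknown".
function getClientIp(headers: Headers): string {
  const realIp = headers.get("x-real-ip");
  if (realIp) return realIp.trim();
  const forwarded = headers.get("x-forwarded-for");
  if (forwarded) return forwarded.split(",")[0]?.trim() || "unknown";
  return "unknown";
}
```

Having one function means `request_logs.ip`, `audit_log.ip_address`, and the rate-limit key can never disagree about which header wins.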
In `@packages/server/src/lib/auth.ts`:
- Around line 65-79: The revocation check in the auth path currently does a DB
roundtrip for every request (see getPool(), the SELECT against
betterbase_meta.revoked_admin_tokens using tokenJti); add an in-process LRU/TTL
cache keyed by jti to short-circuit the DB lookup: on lookup, check the cache
first and return cached "revoked" or "not revoked" results; when missing, query
the DB as now, store a positive cache entry for revoked tokens until their
expires_at and store a negative cache entry for "not revoked" with a short TTL
(bounded by tokenExp) to avoid stale negatives; ensure cache eviction/LRU
behavior and that you still consult the DB when cache entry is expired or absent
so token revocations are respected.
- Around line 56-63: The legacy branch that returns a session when tokenJti is
missing must be removed because tokens without iss/aud are already rejected by
jwtVerify and returning a non-revocable session bypasses revocation; instead,
when tokenJti is undefined after jwtVerify, treat this as a verification failure
(throw or return an error/undefined) so revocation checks always apply. Update
the code around tokenJti (the tokenJti check after jwtVerify) to stop logging
and returning a session for missing jti, and ensure signAdminToken still sets
jti so valid tokens pass; if you truly need a grandfathering window implement it
by changing jwtVerify options (issuer/audience) in a controlled branch, not by
allowing missing jti to produce sessions.
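The LRU/TTL cache suggested for the revocation check can be sketched like this; sizes, TTLs, and the class name are assumptions, and revoked entries are cached until token expiry while "not revoked" answers get only a short negative TTL:

```typescript
type CacheEntry = { revoked: boolean; expiresAt: number };

// Hypothetical in-process cache in front of the revoked_admin_tokens lookup.
class RevocationCache {
  private entries = new Map<string, CacheEntry>();
  constructor(private maxSize = 10_000, private negativeTtlMs = 30_000) {}

  get(jti: string, now = Date.now()): boolean | undefined {
    const e = this.entries.get(jti);
    if (!e) return undefined; // miss: caller falls through to the DB
    if (e.expiresAt <= now) {
      this.entries.delete(jti);
      return undefined;
    }
    return e.revoked;
  }

  set(jti: string, revoked: boolean, tokenExpMs: number, now = Date.now()): void {
    if (this.entries.size >= this.maxSize) {
      // Cheap eviction: drop the oldest insertion.
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    // A revoked jti can never be un-revoked, so cache it until the token
    // expires; cache "not revoked" only briefly so fresh revocations apply.
    const ttl = revoked ? tokenExpMs - now : Math.min(this.negativeTtlMs, tokenExpMs - now);
    this.entries.set(jti, { revoked, expiresAt: now + ttl });
  }
}
```

The asymmetry matters: positive (revoked) entries are permanent facts bounded by token lifetime, while negative entries are only a short-lived optimization.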
In `@packages/server/src/lib/env.ts`:
- Around line 9-11: The route helper getS3Client() in
packages/server/src/routes/admin/storage.ts currently reads
process.env.STORAGE_* with "minioadmin" fallbacks; change it to import and call
validateEnv() from packages/server/src/lib/env.ts and use the returned
STORAGE_ENDPOINT, STORAGE_ACCESS_KEY, and STORAGE_SECRET_KEY values (drop the ??
"minioadmin" defaults and any direct process.env access). Ensure getS3Client()
uses the validated env object fields (and adjust its imports/parameters if
necessary) so all environment access goes through validateEnv() as required by
the project's env.ts rules.
- Around line 9-11: The env schema now forces STORAGE_ENDPOINT,
STORAGE_ACCESS_KEY, and STORAGE_SECRET_KEY to be required which breaks existing
deployments; change those three back to optional (e.g.,
z.string().min(1).optional() or z.string().optional()) in the env schema in
env.ts and keep validateEnv() behavior unchanged, then add a runtime check in
the storage initialization code (the function that creates the storage client or
handles uploads) to assert/throw a clear error if those env vars are missing
when storage is actually used; reference the STORAGE_ENDPOINT,
STORAGE_ACCESS_KEY, STORAGE_SECRET_KEY symbols and the validateEnv() call so the
schema revert and the new runtime validation are applied in the correct places.
In `@packages/server/src/routes/admin/auth.ts`:
- Around line 67-98: The /admin/auth/logout route (authRoutes.post("/logout"))
is currently unauthenticated and unrate-limited and can be used to amplify DoS
by triggering verifyAdminToken, DB INSERT into
betterbase_meta.revoked_admin_tokens, admin lookup, and writeAuditLog; to fix,
add the same IP-based rate limiting middleware used for /device/code to the
logout route, ensure extractBearerToken + verifyAdminToken are called and
checked first (return immediately on missing/invalid token or non-admin token
type) so no DB calls (getPool / INSERT into revoked_admin_tokens) or
writeAuditLog are executed for malformed tokens, and only perform the INSERT and
admin lookup/writeAuditLog when payload is valid and payload.jti exists.
- Around line 88-95: The call to writeAuditLog in the admin.logout block is
currently awaited which contradicts the project's fire-and-forget audit-log
convention; remove the leading await so writeAuditLog({...}) is invoked without
awaiting its completion (keep the same arguments: actorId rows[0].id, actorEmail
rows[0].email, action "admin.logout", ipAddress getClientIp(c.req.raw.headers),
userAgent c.req.header("User-Agent") ?? undefined) to match lib/audit.ts
behavior and coding guidelines.
In `@packages/server/src/routes/betterbase/index.ts`:
- Around line 215-230: Remove the unnecessary try/catch around the filename
parsing (the string ops and regex can't throw) and simplify the block: when
typeof filename === "string" trim it into trimmed, reject if
trimmed.includes("/") || trimmed.includes("?"), extract parsedExt with
trimmed.includes(".") ? trimmed.split(".").pop() ?? "" : "" then validate with
/^[a-zA-Z0-9]{1,16}$/.test(parsedExt) and set ext = parsedExt.toLowerCase();
keep the current behavior that uses only the trailing segment of multi-dot names
(e.g., archive.tar.gz => gz) and note in code comments near variables filename,
trimmed, parsedExt, and ext that the original filename is not used for s3 keys
so other chars (\, NUL, .., control chars) are currently irrelevant to key
generation.
- Around line 204-208: The typeof check is redundant because
uploadUrlSchema.contentType is validated as a non-empty string by
c.req.valid("json"); replace the ternary that computes safeContentType to just
check ALLOWED_UPLOAD_CONTENT_TYPES.has(contentType) and set safeContentType
accordingly (use contentType when allowed, otherwise null) and keep the existing
unsupported content type response; update references to safeContentType,
contentType, and ALLOWED_UPLOAD_CONTENT_TYPES in this block so the lookup is a
simple Set.has(contentType) instead of typeof + Set.has.
- Around line 293-295: Replace the hardcoded TTL literal with the canonical
exported constant from ws.ts: import WS_TICKET_TTL_MS and use it instead of
60_000 in the response (the return in the handler that calls createWSTicket).
Specifically, add an import for WS_TICKET_TTL_MS (from "./ws"), then change the
response to return c.json({ ticket, expiresInMs: WS_TICKET_TTL_MS }) so the
client TTL stays in sync with createWSTicket/ws token lifecycle.
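The simplified filename handling described for lines 215-230 (no try/catch, reject separators, trailing-segment extension, strict charset) fits in one small function. A sketch; the helper name is invented here:

```typescript
// Hypothetical helper mirroring the review's suggested parsing: string ops and
// regexes cannot throw, so no try/catch is needed. Only the trailing segment
// of multi-dot names is used (archive.tar.gz => gz); the original filename is
// never used for S3 keys, so other characters are irrelevant to key generation.
function parseExtension(filename: string): string | null {
  const trimmed = filename.trim();
  if (trimmed.includes("/") || trimmed.includes("?")) return null;
  const parsedExt = trimmed.includes(".") ? trimmed.split(".").pop() ?? "" : "";
  if (!/^[a-zA-Z0-9]{1,16}$/.test(parsedExt)) return null;
  return parsedExt.toLowerCase();
}
```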
In `@packages/server/src/routes/betterbase/ws.ts`:
- Around line 154-175: The fetch handler for the WebSocket endpoint
(/betterbase/ws) performs an unnecessary DB lookup ("SELECT id FROM
betterbase_meta.projects") on every handshake even though /ws-ticket already
validated and bound the project to the ticket; remove this redundant query (the
pool.query call) to avoid DB roundtrips on the hot path, keeping the existing
ticket validation logic that checks wsTickets and projectSlug, or if you prefer
to keep safety, gate the query behind a short-lived cache keyed by projectSlug;
update the fetch function accordingly (remove or replace the
getPool()/pool.query usage and related rows check) and ensure
wsTickets.delete(ticket) and the projectSlug equality check remain intact.
- Line 28: The TTL constant for websocket tickets is hardcoded elsewhere; export
the existing WS_TICKET_TTL_MS constant from
packages/server/src/routes/betterbase/ws.ts and import and use that exported
symbol in packages/server/src/routes/betterbase/index.ts (replace the hardcoded
60_000 used for expiresInMs with WS_TICKET_TTL_MS) so both modules share a
single source of truth and avoid drift when the TTL is adjusted.
- Around line 27-34: The wsTickets Map (and WS_TICKET_TTL_MS) can grow unbounded
because expired tickets are only removed on consumption; add a periodic or lazy
sweeper to prune expired entries: implement a short interval (e.g.,
WS_TICKET_TTL_MS or a fraction like WS_TICKET_TTL_MS/2) that iterates wsTickets
and calls wsTickets.delete(key) for any entry whose expiresAt <= Date.now(), and
also add a cheap lazy cleanup inside createWSTicket to remove expired entries
before inserting the new ticket; reference wsTickets, WS_TICKET_TTL_MS, and
createWSTicket when locating where to add the interval and lazy prune.
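Combining the ws.ts comments (exported TTL constant, lazy pruning, single-use consumption), the ticket store can be sketched as below; names mirror the review, but the shape of the ticket record is an assumption:

```typescript
import { randomUUID } from "node:crypto";

const WS_TICKET_TTL_MS = 60_000; // exported in the real module, per the review
type WSTicket = { projectSlug: string; userId: string; expiresAt: number };
const wsTickets = new Map<string, WSTicket>();

function createWSTicket(projectSlug: string, userId: string, now = Date.now()): string {
  // Lazy prune: drop anything already expired before inserting, so the map
  // cannot grow unbounded even if tickets are issued but never consumed.
  for (const [key, t] of wsTickets) if (t.expiresAt <= now) wsTickets.delete(key);
  const ticket = randomUUID();
  wsTickets.set(ticket, { projectSlug, userId, expiresAt: now + WS_TICKET_TTL_MS });
  return ticket;
}

function consumeWSTicket(ticket: string, projectSlug: string, now = Date.now()): WSTicket | null {
  const t = wsTickets.get(ticket);
  wsTickets.delete(ticket); // single use: deleted even when validation fails
  if (!t || t.expiresAt <= now || t.projectSlug !== projectSlug) return null;
  return t;
}
```

A periodic `setInterval` sweep at `WS_TICKET_TTL_MS / 2` could supplement the lazy prune, but with a 60-second TTL the insert-time prune alone already bounds the map.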
In `@packages/server/src/routes/device/index.ts`:
- Line 18: Replace the untyped parameter in getRateLimitKey with Hono's Context:
add import type { Context } from "hono" at the top and change the function
signature to getRateLimitKey(c: Context) so that calls like c.req.header(...)
are type-checked; update any related references or overloads expecting the
previous any if necessary.
- Around line 13-48: The current in-memory rate limiter using
deviceCodeRateLimits in deviceRouter.post("/code") (with constants
DEVICE_CODE_RATE_LIMIT_WINDOW_MS, DEVICE_CODE_RATE_LIMIT_MAX,
DEVICE_CODE_RATE_LIMIT_MAX_KEYS and helper getRateLimitKey) cannot survive
restarts or multiple replicas; replace it with a persistent/shared store
(Postgres or Redis) for counting/TTL. Implement a
transactional/increment-with-expiry operation keyed by the getRateLimitKey value
(or a hashed variant) that atomically increments the counter, sets an expiry
equal to DEVICE_CODE_RATE_LIMIT_WINDOW_MS, and returns the current count so you
can enforce DEVICE_CODE_RATE_LIMIT_MAX; also remove the in-process map pruning
logic (size cap and oldestKey deletion) and use the DB/Redis cleanup/TTL
instead. Ensure code in deviceRouter.post("/code") queries/updates the
persistent counter and returns 429 when the persisted count >=
DEVICE_CODE_RATE_LIMIT_MAX.
- Around line 18-28: The getRateLimitKey function currently trusts X-Real-IP and
X-Forwarded-For unconditionally; change it to honor a TRUSTED_PROXY flag (e.g.,
process.env.TRUSTED_PROXY) and only read c.req.header("X-Real-IP") /
"X-Forwarded-For" when that flag is set, otherwise derive the client IP from the
socket (c.req.socket.remoteAddress or c.req.connection.remoteAddress); also
strengthen the fallback by combining the resolved IP with a short UA fingerprint
(use createHash("sha256").update(ua).digest("hex").slice(0,16)) into a single
key like `ipfp:${ip}:${fp}` so UA churn can't bypass IP limits, and consider
switching your rate limiter storage to a shared backend (Redis/Postgres) instead
of in-memory so limits persist across restarts/replicas.
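Two of the device-hardening suggestions (proxy-aware rate-limit keying with a UA fingerprint, and the constant-time fallback for `/verify` described earlier) can be sketched together. The `trustedProxy` flag, the key format, and the scrypt stand-in for the repo's bcrypt `verifyPassword` are all assumptions:

```typescript
import { createHash, scryptSync, timingSafeEqual } from "node:crypto";

// Rate-limit key: trust proxy headers only behind a trusted proxy, and bind a
// short UA fingerprint to the IP so UA churn alone cannot dodge the limit.
function getRateLimitKey(headers: Headers, socketIp: string, trustedProxy: boolean): string {
  const ip = trustedProxy
    ? headers.get("x-real-ip")?.trim() ??
      headers.get("x-forwarded-for")?.split(",")[0]?.trim() ??
      socketIp
    : socketIp; // untrusted deployments use the socket address only
  const fp = createHash("sha256")
    .update(headers.get("user-agent") ?? "")
    .digest("hex")
    .slice(0, 16);
  return `ipfp:${ip}:${fp}`;
}

// Constant-time credential check: scrypt with a fixed salt stands in for the
// repo's bcrypt verifyPassword so the sketch stays self-contained.
const hash = (pw: string) => scryptSync(pw, "demo-salt", 32);
const DUMMY_HASH = hash("dummy-password-for-unknown-users");

function verifyCredentials(storedHash: Buffer | null, submitted: string): boolean {
  // Always run the full comparison, even for unknown emails, so response
  // timing does not reveal whether the account exists.
  const match = timingSafeEqual(hash(submitted), storedHash ?? DUMMY_HASH);
  return storedHash !== null && match; // unknown user always fails
}
```

In production the counter behind `getRateLimitKey` would live in Redis or Postgres, as the comment recommends, so limits survive restarts and replicas.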
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8fa6a1b6-b7c8-4019-be0b-ee6cb8ff6349
📒 Files selected for processing (14)
- docker/nginx/nginx.conf
- packages/server/migrations/017_revoked_admin_tokens.sql
- packages/server/migrations/018_revoked_admin_tokens_expiry_hardening.sql
- packages/server/src/index.ts
- packages/server/src/lib/audit.ts
- packages/server/src/lib/auth.ts
- packages/server/src/lib/csv.ts
- packages/server/src/lib/env.ts
- packages/server/src/lib/inngest.ts
- packages/server/src/routes/admin/auth.ts
- packages/server/src/routes/admin/project-scoped/users.ts
- packages/server/src/routes/betterbase/index.ts
- packages/server/src/routes/betterbase/ws.ts
- packages/server/src/routes/device/index.ts
Motivation
Description
- `betterbase_meta.revoked_admin_tokens` and ensure `expires_at` is non-null (`017_` and `018_` migrations).
- `jti`, `issuer`, and `audience`, and `verifyAdminToken` checks the `revoked_admin_tokens` table; `logout` inserts revoked JTIs and writes an audit entry; expired revoked entries are deleted on startup and hourly in `src/index.ts`.
- `30d` to `8h` and wired `BETTERBASE_JWT_ISSUER`/`BETTERBASE_JWT_AUDIENCE` through `env` validation in `src/lib/env.ts`.
- `src/lib/csv.ts` and replaced ad-hoc CSV handling with `escapeCSVValue` in exports and Inngest code to mitigate CSV injection and quoting issues.
- (`wsTickets`) with `createWSTicket`, a `/betterbase/ws-ticket` API, and server-side ticket validation before upgrade in `routes/betterbase/ws.ts`.
- `projectSlug`, restrict allowed upload `Content-Type`s, validate `filename`/extension, require storage env vars, and use a signed S3 URL with safe content type in `routes/betterbase/index.ts`.
- `routes/device/index.ts` using IP or UA fingerprint to mitigate abuse.
- `X-Real-IP` and set `X-Forwarded-For` to the remote address; server code now reads `X-Real-IP` in logging and audit helpers.
- `Content-Security-Policy` header and UTF-8 charset for CSV downloads, stricter `SAFE_PROJECT_SLUG` validation, improved WebSocket upgrade sanitization, and general proxy header/timeouts for the `/betterbase/` location in `docker/nginx/nginx.conf`.
Testing
- `npm run build` / `tsc` to validate types and compilation successfully.
- `GET /health`, which returned `{ status: "ok" }`.

Codex Task
Summary by CodeRabbit
Security Enhancements
New Features
Improvements