Summary by CodeRabbit
Walkthrough
The change set turns the repository into a monorepo, adding an Actix-Web Rust backend (REST, WebSocket, OpenAPI), a Vite React PWA with a generated API client and shared design tokens, deployment manifests (Docker, Nginx, K8s), and a CI rewrite to a Node/Bun multi-stage pipeline.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor Developer
participant CI as GitHub Actions
participant Node as actions/setup-node
participant Bun as oven-sh/setup-bun
participant Cargo as cargo
participant Tokens as packages/tokens
participant FE as frontend-pwa
Developer->>CI: push / pull_request
CI->>Node: setup Node 22
CI->>Bun: setup Bun 1.1.x
CI->>Cargo: restore cache
CI->>Cargo: fmt -> clippy -> test -> build --release
CI->>Cargo: run openapi_dump -> write spec/openapi.json (fallback)
CI->>Tokens: cd packages/tokens && bun install && bun run build
CI->>FE: cd frontend-pwa && bun install && bun run build
CI-->>Developer: report status
sequenceDiagram
autonumber
participant Browser
participant Nginx
participant Backend
participant Swagger
Browser->>Nginx: GET /
Nginx-->>Browser: index.html + assets
Browser->>Nginx: GET /api/users
Nginx->>Backend: proxy /api/users
Backend-->>Nginx: 200 [User[]]
Nginx-->>Browser: 200 [User[]]
Browser->>Backend: GET /api-docs/openapi.json
Backend-->>Browser: OpenAPI JSON
Browser->>Backend: GET /ws (upgrade)
Backend-->>Browser: WebSocket upgrade (heartbeat)
Browser->>Swagger: GET /docs
Swagger-->>Browser: Swagger UI
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 65
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled
You can enable these settings in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)
backend/Cargo.lock is excluded by !**/*.lock
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (38)
- .github/workflows/ci.yml (1 hunks)
- .gitignore (1 hunks)
- AGENTS.md (6 hunks)
- Makefile (1 hunks)
- backend/Cargo.toml (1 hunks)
- backend/src/api/mod.rs (1 hunks)
- backend/src/api/users.rs (1 hunks)
- backend/src/bin/openapi_dump.rs (1 hunks)
- backend/src/doc.rs (1 hunks)
- backend/src/lib.rs (1 hunks)
- backend/src/main.rs (1 hunks)
- backend/src/models/mod.rs (1 hunks)
- backend/src/models/user.rs (1 hunks)
- backend/src/ws/mod.rs (1 hunks)
- backend/src/ws/socket.rs (1 hunks)
- deploy/docker-compose.yml (1 hunks)
- deploy/docker/backend.Dockerfile (1 hunks)
- deploy/docker/frontend.Dockerfile (1 hunks)
- deploy/k8s/backend/deployment.yaml (1 hunks)
- deploy/k8s/ingress/api.yaml (1 hunks)
- deploy/nginx/default.conf (1 hunks)
- frontend-pwa/orval.config.yaml (1 hunks)
- frontend-pwa/package.json (1 hunks)
- frontend-pwa/src/api/client.ts (1 hunks)
- frontend-pwa/src/api/fetcher.ts (1 hunks)
- frontend-pwa/src/app/App.tsx (1 hunks)
- frontend-pwa/src/index.css (1 hunks)
- frontend-pwa/src/main.tsx (1 hunks)
- frontend-pwa/tailwind.config.js (1 hunks)
- frontend-pwa/vite.config.ts (1 hunks)
- package.json (1 hunks)
- packages/tokens/build/style-dictionary.js (1 hunks)
- packages/tokens/package.json (1 hunks)
- packages/tokens/src/themes/dark.json (1 hunks)
- packages/tokens/src/themes/light.json (1 hunks)
- packages/tokens/src/tokens.json (1 hunks)
- spec/asyncapi.yaml (1 hunks)
- spec/openapi.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
**/*.css
📄 CodeRabbit Inference Engine (AGENTS.md)
Define design tokens as CSS variables; expose to Tailwind via theme.extend
Files:
frontend-pwa/src/index.css
**/*.rs
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.rs: Run make fmt, make lint, and make test before committing (wrapping cargo fmt, clippy, test)
Disallow Clippy warnings; fix warnings in code rather than silencing
Extract helper functions when a function is too long; enforce separation of concerns and CQRS
Group many function parameters into structs with meaningful names
Consider using Arc for large error returns to reduce data returned
Every module must begin with a //! module-level comment explaining purpose and utility
Document public APIs with Rustdoc comments (///) for cargo doc
Prefer immutable data; avoid unnecessary mut
Handle errors with Result instead of panicking where feasible
Avoid unsafe code unless necessary and document any usage clearly
Place function attributes after doc comments
Do not use return in single-line functions
Use predicate functions for conditionals with more than two branches
Do not silence lints except as a last resort; suppressions must be tightly scoped and include a clear reason
Prefer #[expect] over #[allow] for lint suppressions
Use conditional compilation (#[cfg]/#[cfg_attr]) for unused functions under specific features
Prefer .expect() over .unwrap()
Use concat!() for long string literals rather than escaping newlines
Prefer single-line function forms when appropriate
Prefer semantic error enums deriving std::error::Error via thiserror for inspectable conditions
Use eyre::Report only at the app boundary; do not expose in public APIs
Never export eyre::Report from libraries; convert at API boundaries and only use eyre in main/top-level task
Files:
backend/src/api/mod.rs, backend/src/models/mod.rs, backend/src/main.rs, backend/src/lib.rs, backend/src/models/user.rs, backend/src/bin/openapi_dump.rs, backend/src/api/users.rs, backend/src/doc.rs, backend/src/ws/mod.rs, backend/src/ws/socket.rs
⚙️ CodeRabbit Configuration File
**/*.rs:
- Seek to keep the cyclomatic complexity of functions no more than 12.
- Adhere to single responsibility and CQRS.
- Place function attributes after doc comments.
- Do not use `return` in single-line functions.
- Move conditionals with >2 branches into a predicate function.
- Avoid `unsafe` unless absolutely necessary.
- Every module must begin with a `//!` doc comment that explains the module's purpose and utility.
- Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar.
- Lints must not be silenced except as a last resort.
  - `#[allow]` is forbidden.
  - Only narrowly scoped `#[expect(lint, reason = "...")]` is allowed.
  - No lint groups, no blanket or file-wide suppression.
  - Include `FIXME:` with link if a fix is expected.
- Where code is only used by specific features, it must be conditionally compiled or a conditional expectation for unused_code applied.
- Use `rstest` fixtures for shared setup and to avoid repetition between tests.
- Replace duplicated tests with `#[rstest(...)]` parameterised cases.
- Prefer `mockall` for mocks/stubs.
- Prefer `.expect()` over `.unwrap()`.
- Ensure that any API or behavioural changes are reflected in the documentation in docs/.
- Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/.
- Files must not exceed 400 lines in length.
  - Large modules must be decomposed.
  - Long match statements or dispatch tables should be decomposed by domain and collocated with targets.
  - Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.
Files:
backend/src/api/mod.rs, backend/src/models/mod.rs, backend/src/main.rs, backend/src/lib.rs, backend/src/models/user.rs, backend/src/bin/openapi_dump.rs, backend/src/api/users.rs, backend/src/doc.rs, backend/src/ws/mod.rs, backend/src/ws/socket.rs
**/*.json
📄 CodeRabbit Inference Engine (AGENTS.md)
Keep JSON stable (deterministic key order) for snapshots and caches
Files:
packages/tokens/src/themes/light.json, packages/tokens/package.json, frontend-pwa/package.json, package.json, packages/tokens/src/tokens.json, packages/tokens/src/themes/dark.json, spec/openapi.json
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.{ts,tsx}: ESM-only: source/build outputs are ES Modules; configure Vite accordingly; no CommonJS
Place JSDoc comments above declarations/decorators; keep docs close to code
Prefer immutability: const, readonly, Readonly; avoid mutating inputs
Extract helpers for long functions; keep trivial functions on one line when readable
Group related parameters into typed objects/builders; avoid long positional lists
Extract predicate functions or use lookup tables for multi-branch conditionals; enforce exhaustive switch with never guard
Every module begins with /** @file … */ describing purpose and usage
Validate I/O with runtime schemas (zod/valibot) at boundaries; keep types and schemas in sync
Use branded types for identifiers/tokens
Accept AbortSignal for cancellable async operations; wire through TanStack Query fetchers
Centralise time and RNG adapters; avoid direct Date.now()/Math.random() in business logic
Use discriminated unions for recoverable errors; reserve Error subclasses for exceptional paths with cause
Use Vitest with jsdom/happy-dom; keep tests deterministic and parallel-safe
Use factories/builders for fixtures; prefer parameterised tests with test.each/it.each
Use vi.mock for module boundaries; inject adapters for env/time/storage/fetch; avoid monkey-patching globals
Use vi.useFakeTimers() for time-based logic; restore after each test
Keep snapshots deterministic (sorted keys, fixed seeds) and scoped to stable UI fragments
Use Biome for format+lint; disallow any, non-null !; forbid @ts-ignore in favour of @ts-expect-error with reason
Enforce import hygiene: sorted/grouped imports; no unused or extraneous deps
Use dynamic import() for code-splitting; enable Vite prefetch and asset hashing
Configure TanStack Query with appropriate stale-time, cache-time; avoid refetchOnWindowFocus unless necessary
Avoid await in loops; batch with Promise.allSettled; use async iterables/streams for large data
Ship CSP, avoid eval/new Function; use Trusted Types and sa...
Files:
frontend-pwa/src/main.tsx, frontend-pwa/src/api/client.ts, frontend-pwa/src/app/App.tsx, frontend-pwa/src/api/fetcher.ts, frontend-pwa/vite.config.ts
**/*.{tsx,ts}
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.{tsx,ts}: Map domain errors to user messages at UI boundary; do not leak raw stacks to DOM/analytics
Enable React StrictMode in dev; memoise expensive components; derive data via selectors
Prefer semantic HTML; use daisyUI components only when they don’t harm semantics; audit focus and contrast
Files:
frontend-pwa/src/main.tsx, frontend-pwa/src/api/client.ts, frontend-pwa/src/app/App.tsx, frontend-pwa/src/api/fetcher.ts, frontend-pwa/vite.config.ts
**/package.json
📄 CodeRabbit Inference Engine (AGENTS.md)
Use caret versions for all direct dependencies; avoid *, >=, tag aliases; use tilde only with justification
Files:
packages/tokens/package.json, frontend-pwa/package.json, package.json
package.json
📄 CodeRabbit Inference Engine (AGENTS.md)
package.json: ESM-only: ensure package publishes ESM/static assets via package.json configuration
Define Bun project scripts (fmt, lint, typecheck, dev, build, preview, test, audit) as specified
For libraries, export explicit entry points via package.json exports/types; avoid wildcard re-exports
Files:
package.json
**/*.md
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.md: Documentation must use en-GB-oxendict spelling and grammar (except LICENSE name)
Validate Markdown with make markdownlint
Run make fmt after doc changes to format Markdown and fix tables
Validate Mermaid diagrams by running make nixie
Wrap Markdown paragraphs and bullets at 80 columns; do not wrap tables/headings; code blocks at 120 columns; use dashes for bullets; use GFM footnotes
Files:
AGENTS.md
⚙️ CodeRabbit Configuration File
**/*.md:
- Avoid 2nd person or 1st person pronouns ("I", "you", "we")
- Use en-GB-oxendict (-ize / -our) spelling and grammar
- Headings must not be wrapped.
- Documents must start with a level 1 heading
- Headings must correctly increase or decrease by no more than one level at a time
- Use GitHub-flavoured Markdown style for footnotes and endnotes.
- Numbered footnotes must be numbered by order of appearance in the document.
Files:
AGENTS.md
Makefile
📄 CodeRabbit Inference Engine (AGENTS.md)
Provide and use make targets: make fmt, make lint, make test, make check-fmt
Files:
Makefile
🧠 Learnings (6)
📚 Learning: 2025-07-18T18:35:14.639Z
Learnt from: CR
PR: leynos/ddlint#0
File: AGENTS.md:0-0
Timestamp: 2025-07-18T18:35:14.639Z
Learning: Separate Atomic Refactors: If refactoring is deemed necessary: Perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines (behavioral tests pass before and after, unit tests added for new units). Ensure the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-08-03T14:59:07.928Z
Learnt from: CR
PR: leynos/comenq#0
File: AGENTS.md:0-0
Timestamp: 2025-08-03T14:59:07.928Z
Learning: If refactoring is deemed necessary: Perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines (behavioral tests pass before and after, unit tests added for new units). Ensure the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-07-20T21:31:16.982Z
Learnt from: CR
PR: leynos/ortho-config#0
File: AGENTS.md:0-0
Timestamp: 2025-07-20T21:31:16.982Z
Learning: Separate Atomic Refactors: If refactoring is deemed necessary, perform the refactoring as a separate, atomic commit after the functional change commit.
Applied to files:
AGENTS.md
📚 Learning: 2025-08-06T22:18:56.467Z
Learnt from: CR
PR: leynos/lille#0
File: AGENTS.md:0-0
Timestamp: 2025-08-06T22:18:56.467Z
Learning: If refactoring is deemed necessary, perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines and passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-06-26T01:03:03.478Z
Learnt from: CR
PR: leynos/bournemouth#0
File: .cursor/global.mdc:0-0
Timestamp: 2025-06-26T01:03:03.478Z
Learning: Perform refactoring as a separate, atomic commit after the functional change, ensuring all tests pass before and after, and that the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-06-25T23:37:47.576Z
Learnt from: CR
PR: leynos/mxd#0
File: AGENTS.md:0-0
Timestamp: 2025-06-25T23:37:47.576Z
Learning: Perform refactoring as a separate, atomic commit after functional changes, ensuring all tests pass before and after, and that the refactoring itself passes all quality gates.
Applied to files:
AGENTS.md
🧬 Code Graph Analysis (10)
frontend-pwa/src/main.tsx (1)
frontend-pwa/src/app/App.tsx (1)
App(7-46)
backend/src/main.rs (4)
frontend-pwa/src/app/App.tsx (1)
App(7-46)
backend/src/api/users.rs (1)
list_users(19-25)
backend/src/bin/openapi_dump.rs (1)
main(6-13)
backend/src/ws/mod.rs (1)
ws_entry(12-18)
frontend-pwa/tailwind.config.js (1)
packages/tokens/build/style-dictionary.js (1)
preset(37-47)
backend/src/models/user.rs (1)
frontend-pwa/src/api/client.ts (1)
User(6-9)
backend/src/bin/openapi_dump.rs (1)
backend/src/main.rs (1)
main(14-29)
backend/src/api/users.rs (1)
frontend-pwa/src/api/client.ts (1)
User(6-9)
backend/src/doc.rs (2)
frontend-pwa/src/api/client.ts (1)
User(6-9)
backend/src/api/users.rs (1)
list_users(19-25)
frontend-pwa/src/api/client.ts (1)
frontend-pwa/src/api/fetcher.ts (1)
customFetch(7-40)
frontend-pwa/src/app/App.tsx (1)
frontend-pwa/src/api/client.ts (1)
listUsers(11-11)
backend/src/ws/mod.rs (1)
backend/src/ws/socket.rs (1)
default(19-23)
🪛 Checkov (3.2.334)
deploy/docker/frontend.Dockerfile
[LOW] 1-11: Ensure that HEALTHCHECK instructions have been added to container images
(CKV_DOCKER_2)
[LOW] 1-11: Ensure that a user for the container has been created
(CKV_DOCKER_3)
deploy/docker/backend.Dockerfile
[LOW] 1-14: Ensure that HEALTHCHECK instructions have been added to container images
(CKV_DOCKER_2)
deploy/k8s/ingress/api.yaml
[LOW] 1-17: The default namespace should not be used
(CKV_K8S_21)
spec/openapi.json
[MEDIUM] 21-26: Ensure that arrays have a maximum number of items
(CKV_OPENAPI_21)
🪛 YAMLlint (1.37.1)
deploy/k8s/backend/deployment.yaml
[error] 3-3: too many spaces inside braces
(braces)
[error] 3-3: too many spaces inside braces
(braces)
[error] 6-6: too many spaces inside braces
(braces)
[error] 6-6: too many spaces inside braces
(braces)
[error] 6-6: too many spaces inside braces
(braces)
[error] 6-6: too many spaces inside braces
(braces)
[error] 8-8: too many spaces inside braces
(braces)
[error] 8-8: too many spaces inside braces
(braces)
[error] 8-8: too many spaces inside braces
(braces)
[error] 8-8: too many spaces inside braces
(braces)
[error] 11-11: wrong indentation: expected 8 but found 6
(indentation)
[error] 13-13: too many spaces inside braces
(braces)
[error] 13-13: too many spaces inside braces
(braces)
[error] 15-15: too many spaces inside braces
(braces)
[error] 15-15: too many spaces inside braces
(braces)
[error] 16-16: too many spaces inside braces
(braces)
[error] 16-16: too many spaces inside braces
(braces)
[error] 18-18: too many spaces inside braces
(braces)
[error] 18-18: too many spaces inside braces
(braces)
[error] 20-20: too many spaces inside braces
(braces)
[error] 20-20: too many spaces after comma
(commas)
[error] 20-20: too many spaces inside braces
(braces)
[error] 24-24: too many spaces inside braces
(braces)
[error] 24-24: too many spaces inside braces
(braces)
[error] 26-26: too many spaces inside braces
(braces)
[error] 26-26: too many spaces inside braces
(braces)
[error] 27-27: too many spaces inside braces
(braces)
[error] 27-27: too many spaces inside braces
(braces)
deploy/k8s/ingress/api.yaml
[error] 9-9: wrong indentation: expected 4 but found 2
(indentation)
[error] 12-12: wrong indentation: expected 8 but found 6
(indentation)
[error] 17-17: too many spaces inside braces
(braces)
[error] 17-17: too many spaces inside braces
(braces)
🪛 Hadolint (2.12.0)
deploy/docker/backend.Dockerfile
[warning] 2-2: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>
(DL3018)
🪛 Biome (2.1.2)
frontend-pwa/src/app/App.tsx
[error] 16-16: The elements with this role can be changed to the following elements:
For examples and more information, see WAI-ARIA Roles
(lint/a11y/useSemanticElements)
🔇 Additional comments (23)
packages/tokens/package.json (1)
10-11: LGTM on dependency policy. Use of a caret range for style-dictionary and the engines constraint aligns with the repo policy.
AGENTS.md (4)
31-34: Affirm the 400‑line cap for code files, not docs. Keep the "<= 400 lines" rule scoped to code files as written. This aligns with repo standards and avoids constraining prose documents.
60-64: Tighten test gate phrasing to avoid ambiguity. Explicitly require both unit and behavioural tests to pass, and reference `make test` as the authority. The added line does this; keep it.
104-110: Retain atomic refactor guidance and tie to quality gates. Mandate a separate commit for refactors after the functional change and require tests to pass before/after. This matches long‑standing practice captured in prior learnings and should remain.
Using previously retrieved learnings from your repos to validate this guidance.
233-235: Add top-level audit scripts to root package.json. Add proxy scripts in package.json so users can run `npm run audit` and `npm run audit:snyk` from the repo root:
• File: package.json
  "scripts": {
+   "audit": "npm run audit --workspace frontend-pwa",
+   "audit:snyk": "npm run audit:snyk --workspace frontend-pwa",
    …existing scripts…
  }
This surfaces a single entry point and delegates to the frontend-pwa workspace.
Likely an incorrect or invalid review comment.
backend/Cargo.toml (1)
25-28: LGTM on release profile.
`codegen-units = 1`, `lto = "thin"`, and `opt-level = 3` are sensible defaults for a small service binary.
package.json (1)
30-33: Pin package manager for reproducibility. Declare the package manager to align local and CI environments. Apply:
  "engines": { "node": ">=18.17" },
+ "packageManager": "bun@1.1.0"
If CI standardises on npm, set "packageManager": "npm@<version>" instead.
backend/src/ws/socket.rs (1)
41-63: LGTM on message handling and heartbeat updates. Handle Ping/Pong/Text/Binary correctly, respond to Ping, and stop on Close/errors. This aligns with actix‑web‑actors patterns.
backend/src/api/mod.rs (1)
1-3: LGTM. Document the module up-front and expose `users` cleanly. This aligns with the Rust guidelines for module docs.
frontend-pwa/orval.config.yaml (1)
1-8: Confirm fetch mutator signature; no changes required. Orval invokes the `override.mutator` with `(url: string, options?: RequestInit)` when using `client: fetch`, so the `customFetch` wrapper already matches the expected signature. No adjustments needed.
deploy/k8s/ingress/api.yaml (1)
1-17: Pin namespace and terminate TLS at the Ingress
No Namespace manifest found under deploy/k8s. Create and reference a non-default namespace and terminate TLS at the Ingress.
• File: deploy/k8s/ingress/api.yaml
– Under metadata, add:
  namespace: myapp
– Under spec, add:
  tls:
    - hosts:
        - api.example.com
      secretName: myapp-api-tls
• Create a Namespace manifest at deploy/k8s/myapp-namespace.yaml:
  apiVersion: v1
  kind: Namespace
  metadata:
    name: myapp
Verify that the namespace manifest exists and is applied before deploying the Ingress.
backend/src/bin/openapi_dump.rs (1)
1-13: LGTM: Use the crate re-export and trait import correctly. Document the module, import the OpenApi trait, and emit JSON with an en‑GB message. This binary meets the guidelines and purpose.
backend/src/main.rs (1)
backend/src/main.rs (1)
3-6: Retain the trait import; it is required for method resolution.Keep use utoipa::OpenApi; to bring the trait into scope for ApiDoc::openapi().
backend/src/lib.rs (1)
1-9: LGTM: Structure and re-export are idiomatic. Document the crate modules and re-export ApiDoc at the root for binaries and tools. This aligns with the PR goals.
frontend-pwa/vite.config.ts (1)
11-12: Fix missing alias target for @app/tokens. The alias '@app/tokens' in frontend-pwa/vite.config.ts (line 11) points at packages/tokens/dist, which doesn’t exist.
Add one of the following:
- Add a workspace predev/prebuild hook to build the tokens package before running Vite (e.g. in the root package.json, "predev": "pnpm --filter tokens build").
- Change the alias to point at the source entry with proper ESM exports, for example:
- '@app/tokens': resolve(__dirname, '../packages/tokens/dist')
+ '@app/tokens': resolve(__dirname, '../packages/tokens/src/index.ts')
frontend-pwa/package.json (1)
5-12: Add fmt, lint, typecheck, and test scripts to meet the repo workflow requirements. The coding guidelines require running “bun run fmt, lint, test” and maintaining type-safety. Add the scripts so CI and local workflows can rely on them.
"scripts": { "dev": "vite", "build": "vite build", "preview": "vite preview", "gen:api": "bunx orval --config orval.config.yaml", "audit": "bun x npm@latest audit", - "audit:snyk": "bun x snyk test" + "audit:snyk": "bun x snyk test", + "fmt": "biome format --write .", + "lint": "biome lint .", + "typecheck": "tsc --noEmit", + "test": "vitest run" },Follow up:
- If Biome/Vitest aren’t yet configured, wire them in this package or root workspace to avoid script failures. I can generate minimal configs if helpful.
Likely an incorrect or invalid review comment.
frontend-pwa/tailwind.config.js (1)
10-10: Fix DaisyUI config shape and confirm theme export. DaisyUI requires an options object with a `themes` key. Update frontend-pwa/tailwind.config.js:
- daisyui: daisy
+ daisyui: { themes: daisy }
If `@app/tokens/dist/daisy/theme` already exports the full DaisyUI options object, rename the import and assign it directly:
-import daisy from '@app/tokens/dist/daisy/theme'
+import daisyuiOptions from '@app/tokens/dist/daisy/theme'
…
- daisyui: { themes: daisy }
+ daisyui: daisyuiOptions
Verify the correct module path and inspect its export shape (themes array vs full options). The theme file wasn’t found in the repo; manually check the source or build output before merging.
deploy/docker/backend.Dockerfile (1)
8-14: Add orchestrator liveness/readiness probes or implement in-image health checks. The Rust binary exposes no health endpoint and the Dockerfile contains no HEALTHCHECK. Decide one:
- Implement an HTTP /health endpoint in the Rust service.
- Switch the base image to one with a probe tool (e.g. gcr.io/distroless/cc-debian12:nonroot or a minimal distro) to run HEALTHCHECK.
- Otherwise, configure liveness and readiness probes in the Kubernetes or Docker-Compose manifests against port 8080.
backend/src/doc.rs (2)
1-1: LGTM: Module-level docs present and follow en‑GB spelling. The module header satisfies the repository’s Rust doc requirements.
6-13: LGTM: OpenAPI derive and tags are correct and minimal. The documented path and components are wired correctly, and the tag description clarifies intent. No Clippy or docstyle issues detected.
packages/tokens/src/themes/light.json (1)
1-20: LGTM: Theme structure mirrors dark theme and uses en‑GB spelling. Token references look valid, ordering is stable, and descriptions use en‑GB (“colour”).
packages/tokens/build/style-dictionary.js (1)
37-47: Export a Tailwind ESM preset with predictable structure; ensure colors are plain values. The mapping looks good once unwrap() is fixed. Keep the preset minimal and deterministic for snapshotting.
Makefile (1)
5-7: Harden clean to tolerate absent directories. Prefix rm with a dash or add the -f flag per path to avoid non-zero exit codes when paths are missing in some shells.
 clean:
 	cargo clean --manifest-path backend/Cargo.toml
-	rm -rf frontend-pwa/node_modules packages/tokens/node_modules
+	-rm -rf frontend-pwa/node_modules packages/tokens/node_modules
Likely an incorrect or invalid review comment.
| - `bun run audit` passes or has justified, time‑boxed exceptions. | ||
| - No `any`, no `@ts-ignore`; use `@ts-expect-error` only with a reason. |
🧹 Nitpick (assertive)
Gate commits on audit status or justified exceptions.
Keep the audit pass/exception requirement in the pre‑commit checklist. Add a CI job that enforces this to prevent drift.
| # OpenAPI | ||
| utoipa = { version = "5", features = ["macros", "uuid", "yaml", "actix_extras"] } | ||
| utoipa-swagger-ui = { version = "9", features = ["actix-web"] } | ||
|
|
🧹 Nitpick (assertive)
Trim unused features from utoipa to reduce compile time.
Drop uuid unless the API exposes Uuid types. Keeping it enabled when User.id is a String adds unnecessary compile surface.
Apply:
-utoipa = { version = "5", features = ["macros", "uuid", "yaml", "actix_extras"] }
+utoipa = { version = "5", features = ["macros", "yaml", "actix_extras"] }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # OpenAPI | |
| utoipa = { version = "5", features = ["macros", "uuid", "yaml", "actix_extras"] } | |
| utoipa-swagger-ui = { version = "9", features = ["actix-web"] } | |
| # OpenAPI | |
| utoipa = { version = "5", features = ["macros", "yaml", "actix_extras"] } | |
| utoipa-swagger-ui = { version = "9", features = ["actix-web"] } |
🤖 Prompt for AI Agents
In backend/Cargo.toml around lines 15 to 18, the utoipa dependency unnecessarily
enables the "uuid" feature which increases compile time even though your API
uses String IDs; remove "uuid" from the features array for utoipa (leaving other
features like "macros", "yaml", "actix_extras" intact), save the file, and run
cargo build / cargo test to verify nothing else requires the uuid feature.
| # JSON Schema (share with AsyncAPI if desired) | ||
| schemars = { version = "0.8", features = ["derive"] } | ||
|
|
🧹 Nitpick (assertive)
Remove schemars until actively used.
schemars isn’t used by utoipa and adds build time. Add it back when generating JSON Schema or sharing schemas with AsyncAPI.
Apply:
-# JSON Schema (share with AsyncAPI if desired)
-schemars = { version = "0.8", features = ["derive"] }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # JSON Schema (share with AsyncAPI if desired) | |
| schemars = { version = "0.8", features = ["derive"] } |
🤖 Prompt for AI Agents
In backend/Cargo.toml around lines 19 to 21, remove the unused schemars
dependency entry (schemars = { version = "0.8", features = ["derive"] }) so it
doesn’t add unnecessary build time; delete that line from Cargo.toml and then
run cargo build (or cargo update) to refresh the lockfile and verify the project
still builds; re-add schemars later only when you actually generate JSON Schema
or share schemas with AsyncAPI.
| "scripts": { | ||
| "build": "npm run build --workspaces" | ||
| }, |
🛠️ Refactor suggestion
Expose workspace orchestration scripts.
Add aggregator scripts so bun run/npm run at the repo root can invoke common tasks across workspaces (build, lint, test, audit).
Apply:
"scripts": {
- "build": "npm run build --workspaces"
+ "build": "npm run build --workspaces",
+ "fmt": "npm run fmt --workspaces",
+ "lint": "npm run lint --workspaces",
+ "typecheck": "npm run typecheck --workspaces",
+ "test": "npm run test --workspaces",
+ "audit": "npm run audit --workspaces",
+ "audit:snyk": "npm run audit:snyk --workspaces"
},📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "scripts": { | |
| "build": "npm run build --workspaces" | |
| }, | |
| "scripts": { | |
| "build": "npm run build --workspaces", | |
| "fmt": "npm run fmt --workspaces", | |
| "lint": "npm run lint --workspaces", | |
| "typecheck": "npm run typecheck --workspaces", | |
| "test": "npm run test --workspaces", | |
| "audit": "npm run audit --workspaces", | |
| "audit:snyk": "npm run audit:snyk --workspaces" | |
| }, |
🤖 Prompt for AI Agents
In package.json around lines 27 to 29, add top-level aggregator scripts so
root-level runners can invoke workspace tasks; replace the single "build" entry
with scripts like "build": "npm --workspaces run build", "lint": "npm
--workspaces run lint", "test": "npm --workspaces run test", and "audit": "npm
--workspaces audit" (or "npm --workspaces run audit" if your workspaces expose
an audit script) so running npm run or bun run at the repo root executes the
corresponding script across all workspaces.
| "scripts": { | ||
| "build": "node build/style-dictionary.js", | ||
| "postinstall": "node build/style-dictionary.js" | ||
| }, |
🧹 Nitpick (assertive)
Avoid duplicating the build command
Invoke the build script from postinstall to keep a single source of truth.
"scripts": {
"build": "node build/style-dictionary.js",
- "postinstall": "node build/style-dictionary.js"
+ "postinstall": "npm run build"
},
🤖 Prompt for AI Agents
In packages/tokens/package.json around lines 6 to 9, the build command is
duplicated in both "build" and "postinstall"; change the postinstall to invoke
the build script instead (e.g., set "postinstall" to "npm run build") so there
is a single source of truth for the build command and postinstall simply calls
that script.
| "brand": { | ||
| "default": { "value": "{color.primary.500}", "description": "Brand colour" }, | ||
| "hover": { "value": "{color.primary.600}", "description": "Brand hover" }, | ||
| "contrast": { "value": "{color.neutral.900}", "description": "Text on brand" }, | ||
| "muted": { "value": "{color.primary.400}", "description": "Subtle brand elements" } |
Fix brand contrast colour for legibility.
"contrast": "{color.neutral.900}" yields near‑black text on a mid/dark brand background, harming contrast on dark theme. Use white for text-on-brand to meet WCAG.
Apply this diff:
- "contrast": { "value": "{color.neutral.900}", "description": "Text on brand" },
+ "contrast": { "value": "{color.neutral.0}", "description": "Text on brand" },📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "brand": { | |
| "default": { "value": "{color.primary.500}", "description": "Brand colour" }, | |
| "hover": { "value": "{color.primary.600}", "description": "Brand hover" }, | |
| "contrast": { "value": "{color.neutral.900}", "description": "Text on brand" }, | |
| "muted": { "value": "{color.primary.400}", "description": "Subtle brand elements" } | |
| "brand": { | |
| "default": { "value": "{color.primary.500}", "description": "Brand colour" }, | |
| "hover": { "value": "{color.primary.600}", "description": "Brand hover" }, | |
| "contrast":{ "value": "{color.neutral.0}", "description": "Text on brand" }, | |
| "muted": { "value": "{color.primary.400}", "description": "Subtle brand elements" } |
🤖 Prompt for AI Agents
In packages/tokens/src/themes/dark.json around lines 13 to 17 the brand contrast
token uses "{color.neutral.900}" which is too dark for text-on-brand in the dark
theme; update the "contrast" token value to white (use "{color.white}" or
"#FFFFFF" per project token conventions) so text-on-brand meets WCAG legibility,
keeping the description unchanged.
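The WCAG claim can be checked numerically. A minimal sketch, assuming primary.500 = #6366f1 and neutral.900 = #111827 (hypothetical hex values standing in for the real entries in tokens.json):

```python
"""Sketch: WCAG 2.x contrast check for the text-on-brand token swap.

Hex values are assumptions: primary.500 = #6366f1, neutral.900 = #111827,
neutral.0 = #ffffff; substitute the real token values before relying on this.
"""

def relative_luminance(hex_colour: str) -> float:
    """WCAG relative luminance of an sRGB hex colour like '#rrggbb'."""
    channels = [int(hex_colour.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    # Linearise each sRGB channel per the WCAG definition
    linear = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4 for c in channels]
    return 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2]

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio between two colours; always >= 1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

brand = "#6366f1"  # assumed primary.500
print(round(contrast_ratio("#ffffff", brand), 2))  # white text on brand
print(round(contrast_ratio("#111827", brand), 2))  # assumed neutral.900 on brand
```

With these assumed values, white on the indigo brand lands near the 4.5:1 AA threshold while the near-black neutral falls noticeably short, which is the basis of the suggested swap.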
| "primary": { | ||
| "50": { "value": "#eef2ff", "description": "Primary 50" }, | ||
| "500": { "value": "#6366f1", "description": "Primary 500" }, | ||
| "600": { "value": "#5457e0", "description": "Primary 600" }, | ||
| "700": { "value": "#4338ca", "description": "Primary 700" } | ||
| }, |
💡 Verification agent
🧩 Analysis chain
Define color.primary.400 referenced by semantic tokens and themes.
{color.primary.400} is used in semantic brand muted values but is missing here, which will break the token build.
Apply this diff to add the missing key (use Indigo 400 to align with the existing scale):
"primary": {
"50": { "value": "#eef2ff", "description": "Primary 50" },
+ "400": { "value": "#818cf8", "description": "Primary 400" },
"500": { "value": "#6366f1", "description": "Primary 500" },
"600": { "value": "#5457e0", "description": "Primary 600" },
"700": { "value": "#4338ca", "description": "Primary 700" }
},

Run this to verify no unresolved references remain:
If the command prints nothing, all referenced colour tokens are defined.
🏁 Script executed:
#!/bin/bash
# Detect unresolved token references like {color.xxx.yyy} without a definition
rg -n '\{color\.([a-zA-Z0-9_]+)\.([0-9]+)\}' packages/tokens | sed 's/.*{color\.\([^.]\+\)\.\([0-9]\+\)}.*/\1.\2/' | sort -u > /tmp/refs.txt
jq -r '
.color | to_entries[] | "\(.key)." as $cat
| (.value | to_entries[] | "\($cat)\(.key)")' packages/tokens/src/tokens.json | sort -u > /tmp/defs.txt
comm -23 /tmp/refs.txt /tmp/defs.txt

Length of output: 322
Add missing primary.400 token definition
The unresolved reference check reports primary.400 is undefined. Define it in packages/tokens/src/tokens.json:
• File: packages/tokens/src/tokens.json
• Insert after line with "50" under "primary":
"primary": {
"50": { "value": "#eef2ff", "description": "Primary 50" },
+ "400": { "value": "#818cf8", "description": "Primary 400" },
"500": { "value": "#6366f1", "description": "Primary 500" },
"600": { "value": "#5457e0", "description": "Primary 600" },
"700": { "value": "#4338ca", "description": "Primary 700" }
},

Run the reference check again to confirm no missing {color.xxx.yyy} entries remain.
🤖 Prompt for AI Agents
In packages/tokens/src/tokens.json around lines 16 to 21, the tokens object for
"primary" is missing the "400" entry referenced elsewhere; add a "400" token
definition under "primary" (insert it after the "50" entry) with an appropriate
hex value and description (e.g., "Primary 400") so the primary.400 reference
resolves, then re-run the unresolved reference check to confirm no missing color
tokens remain.
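The same check can be done without rg/jq. A minimal cross-platform sketch; the inline TOKENS and SEMANTIC dicts are assumptions mirroring the snippets above, not the real file contents, so point the helpers at the actual packages/tokens JSON in practice:

```python
"""Sketch: find unresolved {color.x.y} references in a token tree."""
import json
import re

TOKENS = {
    "color": {
        "primary": {
            "50": {"value": "#eef2ff"},
            "500": {"value": "#6366f1"},
            "600": {"value": "#5457e0"},
            "700": {"value": "#4338ca"},
        }
    }
}
SEMANTIC = {"brand": {"muted": {"value": "{color.primary.400}"}}}

def defined_paths(tree, prefix=()):
    """Yield dotted paths for every leaf token (a dict carrying a "value" key)."""
    for key, node in tree.items():
        if isinstance(node, dict) and "value" in node:
            yield ".".join(prefix + (key,))
        elif isinstance(node, dict):
            yield from defined_paths(node, prefix + (key,))

def unresolved(tree, semantic):
    """Return references used in `semantic` that have no definition in `tree`."""
    refs = set(re.findall(r"\{([a-z0-9_.]+)\}", json.dumps(semantic), re.I))
    return sorted(refs - set(defined_paths(tree)))

print(unresolved(TOKENS, SEMANTIC))  # → ['color.primary.400']
```

Adding the suggested primary.400 entry makes the unresolved list empty, matching the intent of the shell check.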
asyncapi: 3.0.0
info:
  title: myapp events
  version: 0.1.0
channels:
  /ws:
    messages:
      UserCreated:
        payload:
          type: object
          properties:
            id:
              type: string
            display_name:
              type: string
Fix invalid AsyncAPI structure and add required fields
The current document is not valid AsyncAPI. Define an operation under the channel and attach the message payload. Also mark fields as required to align with the backend model.
-asyncapi: 3.0.0
+asyncapi: 2.6.0
info:
title: myapp events
version: 0.1.0
channels:
/ws:
- messages:
- UserCreated:
- payload:
- type: object
- properties:
- id:
- type: string
- display_name:
- type: string
+ subscribe:
+ message:
+ name: UserCreated
+ payload:
+ type: object
+ properties:
+ id:
+ type: string
+ display_name:
+ type: string
+ required:
+ - id
+ - display_name

If you intentionally target AsyncAPI 3.0, restructure to the 3.0 operation/message shape; otherwise keep 2.6.0 for CLI/tooling compatibility today.
🤖 Prompt for AI Agents
In spec/asyncapi.yaml lines 1-15, the document lacks a channel operation and
required fields for the message payload; update the channel /ws to include an
operation (publish or subscribe) and attach the message under that operation
(either inline as operation.message.payload or via components/messages with a
$ref), and add a required array listing "id" and "display_name" under the
payload schema; ensure the AsyncAPI version stays 3.0.0 (or change to 2.6.0 if
you intend to target older tooling) so the structure matches the chosen spec.
-  - uses: actions/checkout@v4
-  - name: Setup Rust
-    uses: leynos/shared-actions/.github/actions/setup-rust@c6559452842af6a83b83429129dccaf910e34562
-  - name: Show Ninja version
-    run: ninja --version
-  - name: Format
-    run: make check-fmt
-  - name: Lint
-    run: make lint
-  - name: Test
-    run: make test
-  - name: Test and Measure Coverage
-    uses: leynos/shared-actions/.github/actions/generate-coverage@c6559452842af6a83b83429129dccaf910e34562
-    with:
-      output-path: lcov.info
-      format: lcov
-  - name: Upload coverage data to CodeScene
-    env:
-      CS_ACCESS_TOKEN: ${{ secrets.CS_ACCESS_TOKEN }}
-    if: ${{ env.CS_ACCESS_TOKEN }}
-    uses: leynos/shared-actions/.github/actions/upload-codescene-coverage@c6559452842af6a83b83429129dccaf910e34562
-    with:
-      format: lcov
-      access-token: ${{ env.CS_ACCESS_TOKEN }}
-      installer-checksum: ${{ vars.CODESCENE_CLI_SHA256 }}
+  - uses: actions/checkout@v4
+  - uses: actions/setup-node@v4
+    with:
+      node-version: '22'
+  - uses: oven-sh/setup-bun@v1
+    with:
+      bun-version: '1.1.x'
Install a Rust toolchain explicitly.
The runner does not guarantee Rust is present. Install a stable toolchain to avoid sporadic failures.
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '22'
- uses: oven-sh/setup-bun@v1
with:
bun-version: '1.1.x'
+
+ - name: Setup Rust toolchain
+ uses: dtolnay/rust-toolchain@stableAlso applies to: 17-19
🤖 Prompt for AI Agents
.github/workflows/ci.yml lines 9-16 (and also apply the same change to lines
17-19): the workflow assumes Rust is available on the runner — add an explicit
step to install a stable Rust toolchain before any Bun/Node steps (e.g., use
actions-rs/toolchain@v1 or rust-lang/setup-rust with toolchain: stable and
optional components like clippy/cargo-watch if needed), placing it before
build/test steps and duplicate the same installation in the other job block
referenced in lines 17-19.
-    with:
-      output-path: lcov.info
-      format: lcov
-  - name: Upload coverage data to CodeScene
-    env:
-      CS_ACCESS_TOKEN: ${{ secrets.CS_ACCESS_TOKEN }}
-    if: ${{ env.CS_ACCESS_TOKEN }}
-    uses: leynos/shared-actions/.github/actions/upload-codescene-coverage@c6559452842af6a83b83429129dccaf910e34562
-    with:
-      format: lcov
-      access-token: ${{ env.CS_ACCESS_TOKEN }}
-      installer-checksum: ${{ vars.CODESCENE_CLI_SHA256 }}
+    with:
+      node-version: '22'
+  - uses: oven-sh/setup-bun@v1
+    with:
+      bun-version: '1.1.x'
🧹 Nitpick (assertive)
Add caches for Bun/Node to speed up installs.
Cache Bun and node_modules keyed by lockfiles to trim CI time.
- uses: actions/setup-node@v4
with:
node-version: '22'
- uses: oven-sh/setup-bun@v1
with:
bun-version: '1.1.x'
+ - name: Cache Bun
+ uses: actions/cache@v4
+ with:
+ path: ~/.cache/bun
+ key: ${{ runner.os }}-bun-${{ hashFiles('**/bun.lockb') }}
+ restore-keys: ${{ runner.os }}-bun-
  - name: Build tokens
working-directory: packages/tokens
run: |
- bun install
+ bun install --frozen-lockfile
bun run build
  - name: Build PWA
working-directory: frontend-pwa
run: |
- bun install
+ bun install --frozen-lockfile
bun run build

Also applies to: 48-58
🤖 Prompt for AI Agents
In .github/workflows/ci.yml around lines 10-15 (and similarly at 48-58), add
caching steps to speed installs: add actions/cache entries for node_modules
keyed by package-lock.json (or yarn.lock/pnpm-lock.yaml as appropriate) and for
Bun’s installation directory keyed by bun.lockb (or the Bun lockfile) and Node
version; ensure the restore-keys include the lockfile and fallback to
node-version, and place cache actions before running install steps so CI
restores caches and saves them after successful installs.
  uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/registry
      ~/.cargo/git
      target
    key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
    restore-keys: ${{ runner.os }}-cargo-
- name: Rust fmt check
  run: cargo fmt --manifest-path backend/Cargo.toml --all -- --check
🛠️ Refactor suggestion
Move Cargo cache before the build to actually cache dependencies.
The cache step runs after the build, providing no benefit. Reorder so cache restores before building.
- - name: Rust build
- run: cargo build --manifest-path backend/Cargo.toml --release
- - name: Cache cargo
+ - name: Cache cargo
uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
restore-keys: ${{ runner.os }}-cargo-
+ - name: Rust build
+ run: cargo build --manifest-path backend/Cargo.toml --release

Also applies to: 18-19
🤖 Prompt for AI Agents
In .github/workflows/ci.yml around lines 18 to 29, the "Cache cargo" step is
placed after the build so it never speeds the build; move the entire "Cache
cargo" step so it runs before any cargo build/test/format steps (i.e., place it
above the job step that runs cargo build/fmt/check), keeping the same 'uses',
'with', key and restore-keys settings so the cache is restored at job start and
saved after the run.
| echo "OpenAPI generated from backend" | ||
| else | ||
| echo "openapi-dump bin not available; using existing spec or placeholder" | ||
| test -s spec/openapi.json || echo '{"openapi":"3.0.3","info":{"title":"placeholder","version":"0.0.0"},"paths":{}}' > spec/openapi.json | ||
| fi | ||
|
|
🛠️ Refactor suggestion
Run the OpenAPI dumper in release and drop or align the placeholder spec.
Avoid recompiling in debug and emitting a mismatched placeholder (3.0.3 vs the repo’s 3.1.0). Fail fast if generation is required, or keep a 3.1.0 placeholder.
- if cargo run --manifest-path backend/Cargo.toml --bin openapi-dump > spec/openapi.json; then
+ if cargo run --manifest-path backend/Cargo.toml --bin openapi-dump --release > spec/openapi.json; then
echo "OpenAPI generated from backend"
else
- echo "openapi-dump bin not available; using existing spec or placeholder"
- test -s spec/openapi.json || echo '{"openapi":"3.0.3","info":{"title":"placeholder","version":"0.0.0"},"paths":{}}' > spec/openapi.json
+ echo "openapi-dump bin not available; using existing spec or 3.1.0 placeholder"
+ test -s spec/openapi.json || echo '{"openapi":"3.1.0","info":{"title":"placeholder","version":"0.0.0"},"paths":{}}' > spec/openapi.json
fi

Alternatively, drop the fallback entirely and let the job fail to prevent stale client generation. Say the word and I will patch it accordingly.
🤖 Prompt for AI Agents
.github/workflows/ci.yml lines 41-46: the workflow runs the openapi-dump in
debug mode and falls back to a mismatched 3.0.3 placeholder, which can produce
stale clients; either run the dumper in release and update the placeholder to
3.1.0, or remove the fallback and fail the job on generation failure. Fix by
changing the cargo invocation to use --release, and if you keep a fallback
replace the placeholder openapi version and minimal fields to match 3.1.0, or
delete the else block / replace it with a non-zero exit so the job fails fast
when generation is required.
🛠️ Refactor suggestion
Generate the API client before building the PWA.
Ensure the frontend has the latest client from the freshly dumped spec.
- name: Build tokens
working-directory: packages/tokens
run: |
bun install
bun run build
+ - name: Generate API client
+ working-directory: frontend-pwa
+ run: |
+ bun install
+ bunx orval --config orval.config.yaml
- name: Build PWA
working-directory: frontend-pwa
run: |
- bun install
bun run build

If codegen already runs inside your build script, document that in package.json and ignore this change.
Also applies to: 48-53
🤖 Prompt for AI Agents
In .github/workflows/ci.yml around lines 54-58 (and also apply same change to
lines 48-53), the PWA build step runs without generating the API client first;
update the workflow so the frontend-pwa working-directory runs the codegen step
before installing/building (e.g., call the repository's client generation script
such as npm/yarn/bun run generate:client or equivalent) so the latest client
from the dumped spec is present, or if the build script already performs
generation, add a note in frontend-pwa/package.json scripts documenting that and
then no workflow change is needed.
{
  "openapi": "3.1.0",
  "info": {
    "title": "backend",
    "description": "",
    "version": "0.1.0"
  },
🧹 Nitpick (assertive)
Add servers and info.description for better client generation and documentation.
Expose at least one server URL and provide a short API description.
{
"openapi": "3.1.0",
"info": {
"title": "backend",
- "description": "",
+ "description": "Wildside backend public API.",
"version": "0.1.0"
},
+ "servers": [
+ { "url": "http://localhost:8080", "description": "Local development server" }
+ ],

Also applies to: 49-56
🤖 Prompt for AI Agents
In spec/openapi.json around lines 1 to 7 (and also update the document's info
block at lines 49 to 56), the OpenAPI spec is missing a servers array and a
descriptive info.description; add a "servers" array with at least one server
object containing a reachable "url" (and optional "description") and update the
"info.description" field with a short, meaningful API summary so client
generators and documentation tools have server endpoint and description
metadata.
| "tags": ["users"], | ||
| "operationId": "listUsers", | ||
| "responses": { |
🛠️ Refactor suggestion
Add operation summary/description and tag description to match PR objective.
Document the users tag and the GET operation. The PR objective mentions adding a tag description; the spec currently lacks it.
"get": {
- "tags": ["users"],
+ "tags": ["users"],
+ "summary": "List users",
+ "description": "List users visible to the current authenticated session.",
"operationId": "listUsers", "tags": [
{
- "name": "users"
+ "name": "users",
+ "description": "Operations related to user resources."
}
]

Also applies to: 86-90
🤖 Prompt for AI Agents
In spec/openapi.json around lines 14-16 (and similarly lines 86-90), the GET
users operation and the OpenAPI tags lack human-readable documentation; add a
"summary" and/or "description" fields to the GET /users operation (operationId
"listUsers") to describe what the endpoint does and any important
params/behaviour, and add a top-level "tags" entry with a description for the
"users" tag (e.g., include "name": "users" and "description": "Manage and
retrieve user accounts" under the OpenAPI top-level tags array) so the spec
includes both the operation documentation and the tag description.
| "application/json": { | ||
| "schema": { | ||
| "type": "array", | ||
| "items": { | ||
| "$ref": "#/components/schemas/User" | ||
| } | ||
| } | ||
| } |
💡 Verification agent
❓ Verification inconclusive
Constrain array size or introduce pagination to satisfy policy checks.
Static analysis flagged missing maximum items for arrays. Either introduce pagination (preferred) or cap the array.
Option A (quick compliance):
"schema": {
"type": "array",
+ "maxItems": 100,
"items": {
"$ref": "#/components/schemas/User"
}
}

Option B (preferred): Replace the 200 response body with an envelope including items, next_cursor, and total, then document query parameters page/limit or cursor. Confirm the API contract before proceeding. Do you want me to draft the paginated schema and parameters?
Limit User array responses or implement pagination
Static analysis flagged missing maxItems on the User array schema in spec/openapi.json. Enforce a cap or introduce pagination to satisfy policy.
• Option A – Cap the array
"schema": {
"type": "array",
+ "maxItems": 100,
"items": {
"$ref": "#/components/schemas/User"
}
}

• Option B – Implement pagination
• Define a response envelope:
json { "items": [ { "$ref": "#/components/schemas/User" } ], "next_cursor": "string", "total": 123 }
• Add limit and cursor query parameters.
Confirm the preferred approach and draft the paginated schema if needed.
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 21-26: Ensure that arrays have a maximum number of items
(CKV_OPENAPI_21)
🤖 Prompt for AI Agents
In spec/openapi.json around lines 20–27 the response schema returns an unbounded
array of User objects without maxItems or pagination; either add a maxItems
integer constraint to that array (e.g., "maxItems": <number>) or replace the
array response with a paginated envelope schema containing "items" (array of
User), "next_cursor" (string|null) and "total" (integer) and update the
operation to accept "limit" and "cursor" query parameters; also update
components/schemas to include the new envelope schema and adjust references in
the response to point to it.
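If Option B is preferred, the envelope could be declared under components/schemas along these lines (the UserPage name, the 100-item cap, and the cursor field shape are assumptions to confirm against the API contract):

```json
"UserPage": {
  "type": "object",
  "required": ["items", "next_cursor", "total"],
  "properties": {
    "items": {
      "type": "array",
      "maxItems": 100,
      "items": { "$ref": "#/components/schemas/User" }
    },
    "next_cursor": {
      "type": ["string", "null"],
      "description": "Opaque cursor for the next page; null when exhausted"
    },
    "total": { "type": "integer", "minimum": 0 }
  },
  "additionalProperties": false
}
```

The 200 response would then reference #/components/schemas/UserPage, and the operation would gain limit and cursor query parameters.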
}
}
🧹 Nitpick (assertive)
Add 403 response for authenticated-but-forbidden cases.
Differentiate unauthorised (401) from forbidden (403).
},
+ "403": {
+ "description": "Forbidden",
+ "content": {
+ "application/json": {
+ "schema": { "$ref": "#/components/schemas/Error" }
+ }
+ }
+ },
"500": {
"description": "Internal server error",
"content": {
"application/json": {
"schema": { "$ref": "#/components/schemas/Error" }
}
}
}

Also applies to: 38-45
🤖 Prompt for AI Agents
In spec/openapi.json around lines 46-47 (and also lines 38-45), the responses
currently only include 401 for auth failures; add a distinct 403 response object
for authenticated-but-forbidden cases. Update each relevant operation's
responses to include a 403 entry with a suitable description (e.g., "Forbidden —
authenticated but lacks permission") and the same response schema/headers used
for errors, leaving 401 for unauthenticated cases. Ensure the new 403 is present
where authorization checks can deny access.
| "properties": { | ||
| "display_name": { | ||
| "type": "string" | ||
| }, | ||
| "id": { | ||
| "type": "string", | ||
| "description": "Stable user identifier" | ||
| } | ||
| } | ||
| } |
🛠️ Refactor suggestion
Tighten schemas for contract clarity (additionalProperties=false and field metadata).
Disallow undeclared properties and add field-level metadata to improve codegen (e.g., Zod) and validation.
"Error": {
"type": "object",
"properties": {
- "code": { "type": "string" },
- "message": { "type": "string" },
+ "code": { "type": "string", "description": "Stable error code for programmatic handling." },
+ "message": { "type": "string", "description": "Human-readable error message." },
"details": { "type": "object", "additionalProperties": true }
},
- "required": ["code", "message"]
+ "required": ["code", "message"],
+ "additionalProperties": false
},
"User": {
"type": "object",
"required": [
"id",
"display_name"
],
"properties": {
"display_name": {
- "type": "string"
+ "type": "string",
+ "minLength": 1,
+ "description": "Display name shown in the UI."
},
"id": {
"type": "string",
- "description": "Stable user identifier"
+ "description": "Stable user identifier",
+ "minLength": 1
}
- }
+ },
+ "additionalProperties": false
+ }

Also applies to: 58-67
🤖 Prompt for AI Agents
In spec/openapi.json around lines 74-83 (and similarly for lines 58-67), the
object schemas allow undeclared properties and lack per-field metadata; update
each object schema to include "additionalProperties": false and add field-level
metadata (e.g., "description", "title" and optional "example") for each
property, and mark required fields using the "required" array where applicable
so codegen/validation tools (like Zod) produce strict, well-documented types.
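What the tightened schema buys can be illustrated with a hand-rolled check — a sketch, not a real JSON Schema validator; field names follow the User schema above:

```python
"""Sketch: what additionalProperties=false plus required enforces for User."""

REQUIRED = {"id", "display_name"}
ALLOWED = REQUIRED  # additionalProperties: false => nothing beyond declared fields

def user_errors(obj: dict) -> list[str]:
    """Return validation errors for a candidate User payload."""
    errors = [f"missing required field: {name}" for name in sorted(REQUIRED - obj.keys())]
    errors += [f"undeclared property: {name}" for name in sorted(obj.keys() - ALLOWED)]
    # minLength: 1 — reject empty strings in declared fields
    errors += [f"empty string: {name}" for name in sorted(ALLOWED & obj.keys())
               if isinstance(obj[name], str) and not obj[name]]
    return errors

print(user_errors({"id": "u1", "display_name": "Ada"}))  # → []
print(user_errors({"id": "u1", "role": "admin"}))
# → ['missing required field: display_name', 'undeclared property: role']
```

Generated validators (e.g. Zod with .strict(), or any JSON Schema validator honouring additionalProperties) enforce the same rules automatically once the spec carries them.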
Actionable comments posted: 4
♻️ Duplicate comments (8)
deploy/nginx/default.conf (2)
6-16: Remove unconditional WebSocket headers from /api and add X-Forwarded-Host

Stop sending Connection: upgrade on every API request; this breaks HTTP semantics and can interfere with keep-alive. Keep Upgrade/Connection only on a dedicated WebSocket location. Preserve the original host with X-Forwarded-Host for correct upstream origin handling.
 location /api/ {
     proxy_pass http://backend:8080;
     proxy_http_version 1.1;
-    proxy_set_header Upgrade $http_upgrade;
-    proxy_set_header Connection "upgrade";
     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Forwarded-Proto $scheme;
+    proxy_set_header X-Forwarded-Host $host;
     proxy_read_timeout 60s;
 }
16-17: Proxy WebSocket endpoint with proper Upgrade handling

Add a dedicated /ws location with the correct Upgrade/Connection headers and a longer read timeout to avoid dropped connections.
 }
+location /ws {
+    proxy_pass http://backend:8080;
+    proxy_http_version 1.1;
+    proxy_set_header Upgrade $http_upgrade;
+    proxy_set_header Connection "upgrade";
+    proxy_set_header Host $host;
+    proxy_set_header X-Real-IP $remote_addr;
+    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    proxy_set_header X-Forwarded-Proto $scheme;
+    proxy_set_header X-Forwarded-Host $host;
+    proxy_read_timeout 3600s;
+}
 location / {

Run this script to verify the actual WebSocket route path in the backend and adjust the Nginx location (e.g., /api/ws) if needed:
#!/bin/bash
# Find potential WebSocket handlers/routes in the backend
rg -n -i -A3 -B2 $'/ws\\b|websocket|ws::|upgrade|actix_web::web::Payload|actix_web_actors|on_connect|ws::start'

frontend-pwa/orval.config.yaml (1)
5-7: Name the mutator export to match your fetcher.

Without name, Orval imports the mutator as a default export. Export customFetch from src/api/fetcher.ts and name it here so codegen imports it correctly.

Apply this diff:
 override:
   mutator:
     path: src/api/fetcher.ts
+    name: customFetch

deploy/docker/backend.Dockerfile (1)
2-8: Produce a truly static binary for distroless:static and lock the build.

Linking against OpenSSL on Alpine typically yields a dynamically linked binary, which will not run on gcr.io/distroless/static:nonroot. Force static OpenSSL linkage (or switch dependencies to rustls) and build with --locked.

Apply this diff:
-FROM rust:1.79-alpine AS build
-RUN apk add --no-cache build-base musl-dev pkgconfig openssl-dev
-RUN rustup target add x86_64-unknown-linux-musl
+FROM rust:1.79-alpine AS build
+# Install static OpenSSL to enable fully static linkage
+RUN apk add --no-cache build-base musl-dev pkgconfig openssl-dev openssl-libs-static
+ENV OPENSSL_STATIC=1
+RUN rustup target add x86_64-unknown-linux-musl
 WORKDIR /app/backend
 COPY backend/Cargo.toml backend/Cargo.lock ./
-RUN cargo fetch --locked
+RUN cargo fetch --locked
 COPY backend/ ./
-RUN cargo build --release --target x86_64-unknown-linux-musl
+RUN cargo build --release --locked --target x86_64-unknown-linux-musl

If OpenSSL still pulls dynamic libs, enable rustls features in your HTTP/TLS stack and drop OpenSSL to simplify static builds.
frontend-pwa/vite.config.ts (1)
4-12: Fix ESM-incompatible __dirname and import Node built-ins via node: specifiers.

__dirname is undefined in ESM. Compute it via fileURLToPath and import from node:path / node:url to comply with the ESM-only guideline.

Apply this diff:
 import { defineConfig } from 'vite';
-import { resolve } from 'path';
+import { resolve, dirname } from 'node:path';
+import { fileURLToPath } from 'node:url';
 import react from '@vitejs/plugin-react';
+const __dirname = dirname(fileURLToPath(import.meta.url));

Makefile (3)
3-3: Fail fast on formatting drift by using check-fmt in the default target.

Align local runs with CI by invoking check-fmt in all.

Apply this diff:
-all: fmt lint test
+all: check-fmt lint test
16-19: Generate OpenAPI via the backend bin; avoid coupling to a running server.

Make the spec generation self-contained and CI-friendly by invoking the dump binary directly.
Apply this diff:
 openapi:
-	# Replace with a bin that prints OpenAPI
-	mkdir -p spec
-	curl -s http://localhost:8080/api-docs/openapi.json > spec/openapi.json
+	@mkdir -p spec
+	cargo run --manifest-path backend/Cargo.toml --bin openapi-dump --release > spec/openapi.json
21-22: Ensure codegen depends on an up-to-date OpenAPI document.

Express the dependency so gen always regenerates against the latest spec.

Apply this diff:
-gen:
+gen: openapi
 	cd frontend-pwa && bunx orval --config orval.config.yaml
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (5)
- Makefile (1 hunks)
- deploy/docker/backend.Dockerfile (1 hunks)
- deploy/nginx/default.conf (1 hunks)
- frontend-pwa/orval.config.yaml (1 hunks)
- frontend-pwa/vite.config.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.{ts,tsx}: ESM-only: source/build outputs are ES Modules; configure Vite accordingly; no CommonJS
Place JSDoc comments above declarations/decorators; keep docs close to code
Prefer immutability: const, readonly, Readonly; avoid mutating inputs
Extract helpers for long functions; keep trivial functions on one line when readable
Group related parameters into typed objects/builders; avoid long positional lists
Extract predicate functions or use lookup tables for multi-branch conditionals; enforce exhaustive switch with never guard
Every module begins with /** @file … */ describing purpose and usage
Validate I/O with runtime schemas (zod/valibot) at boundaries; keep types and schemas in sync
Use branded types for identifiers/tokens
Accept AbortSignal for cancellable async operations; wire through TanStack Query fetchers
Centralise time and RNG adapters; avoid direct Date.now()/Math.random() in business logic
Use discriminated unions for recoverable errors; reserve Error subclasses for exceptional paths with cause
Use Vitest with jsdom/happy-dom; keep tests deterministic and parallel-safe
Use factories/builders for fixtures; prefer parameterised tests with test.each/it.each
Use vi.mock for module boundaries; inject adapters for env/time/storage/fetch; avoid monkey-patching globals
Use vi.useFakeTimers() for time-based logic; restore after each test
Keep snapshots deterministic (sorted keys, fixed seeds) and scoped to stable UI fragments
Use Biome for format+lint; disallow any, non-null !; forbid @ts-ignore in favour of @ts-expect-error with reason
Enforce import hygiene: sorted/grouped imports; no unused or extraneous deps
Use dynamic import() for code-splitting; enable Vite prefetch and asset hashing
Configure TanStack Query with appropriate stale-time, cache-time; avoid refetchOnWindowFocus unless necessary
Avoid await in loops; batch with Promise.allSettled; use async iterables/streams for large data
Ship CSP, avoid eval/new Function; use Trusted Types and sa...
Files:
frontend-pwa/vite.config.ts
**/*.{tsx,ts}
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.{tsx,ts}: Map domain errors to user messages at UI boundary; do not leak raw stacks to DOM/analytics
Enable React StrictMode in dev; memoise expensive components; derive data via selectors
Prefer semantic HTML; use daisyUI components only when they don’t harm semantics; audit focus and contrast
Files:
frontend-pwa/vite.config.ts
Makefile
📄 CodeRabbit Inference Engine (AGENTS.md)
Provide and use make targets: make fmt, make lint, make test, make check-fmt
Files:
Makefile
🪛 Checkov (3.2.334)
deploy/docker/backend.Dockerfile
[LOW] 1-16: Ensure that HEALTHCHECK instructions have been added to container images
(CKV_DOCKER_2)
🪛 Hadolint (2.12.0)
deploy/docker/backend.Dockerfile
[warning] 2-2: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>
(DL3018)
[info] 3-3: Multiple consecutive RUN instructions. Consider consolidation.
(DL3059)
🔇 Additional comments (3)
frontend-pwa/orval.config.yaml (1)
4-4: LGTM: Use the fetch client and drop the redundant httpClient. Keep only `client: fetch`; avoiding a duplicate `httpClient` entry removes ambiguity and matches the Orval docs.
deploy/docker/backend.Dockerfile (1)
12-12: Binary name matches Cargo crate name. Verified that `backend/Cargo.toml` defines `name = "backend"` with no explicit `[[bin]]` entries. The `COPY` path in `deploy/docker/backend.Dockerfile` aligns with the built binary. No updates required.
frontend-pwa/vite.config.ts (1)
27-28: Gate sourcemaps behind an explicit flag (already done). Using `process.env.SOURCEMAP === 'true'` avoids leaking full source in production by default. Keep this pattern.
```dockerfile
RUN apk add --no-cache build-base musl-dev pkgconfig openssl-dev
RUN rustup target add x86_64-unknown-linux-musl
```
🧹 Nitpick (assertive)
Reduce layer churn and pin Alpine packages where feasible.
Combine adjacent RUNs and pin apk package versions to improve reproducibility and satisfy hadolint DL3018.
Apply this diff:
```diff
-RUN apk add --no-cache build-base musl-dev pkgconfig openssl-dev openssl-libs-static
-RUN rustup target add x86_64-unknown-linux-musl
+RUN apk add --no-cache \
+    build-base \
+    musl-dev \
+    pkgconfig \
+    openssl-dev \
+    openssl-libs-static && \
+    rustup target add x86_64-unknown-linux-musl
```

Pin versions if your release process requires strict reproducibility.
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Hadolint (2.12.0)
[warning] 2-2: Pin versions in apk add. Instead of apk add <package> use apk add <package>=<version>
(DL3018)
[info] 3-3: Multiple consecutive RUN instructions. Consider consolidation.
(DL3059)
🤖 Prompt for AI Agents
In deploy/docker/backend.Dockerfile around lines 2-3, combine the two adjacent
RUN lines into a single RUN to reduce layer churn and pin the apk packages to
specific versions (e.g. build-base=<version> musl-dev=<version>
pkgconfig=<version> openssl-dev=<version>) to satisfy hadolint DL3018 and
improve reproducibility; move the rustup invocation into that same RUN (or
ensure rustup is available in the same layer) so the final command is a single
RUN that installs pinned packages and then runs rustup target add
x86_64-unknown-linux-musl, and document/lock the chosen package versions
according to your release process.
```dockerfile
FROM gcr.io/distroless/static:nonroot
WORKDIR /srv
COPY --from=build /app/backend/target/x86_64-unknown-linux-musl/release/backend /srv/app
USER nonroot:nonroot
EXPOSE 8080
ENV RUST_LOG=info
ENTRYPOINT ["/srv/app"]
```
🧹 Nitpick (assertive)
Add a health check at the deployment layer.
Distroless images lack shells/curl, so Dockerfile-level health checks are awkward. Define a healthcheck in docker-compose/Kubernetes instead, targeting a /health endpoint in the app.
Add this to your docker-compose service (deploy/docker-compose.yml):

```yaml
services:
  backend:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 10s
```

If wget is unavailable in the runtime image, expose a lightweight /health and use a TCP check instead:

```yaml
healthcheck:
  test: ["CMD", "nc", "-z", "localhost", "8080"]
```

Note that Compose CMD health checks execute inside the container, and distroless images ship neither wget nor nc; in that case prefer orchestrator-level probes (for example Kubernetes httpGet or tcpSocket probes, which the kubelet performs from outside the container).
🧰 Tools
🪛 Checkov (3.2.334)
[LOW] 1-16: Ensure that HEALTHCHECK instructions have been added to container images
(CKV_DOCKER_2)
🤖 Prompt for AI Agents
In deploy/docker/backend.Dockerfile around lines 10 to 16, there is no container
healthcheck defined and distroless base images make in-image checks impractical;
add a deployment-level healthcheck in docker-compose or Kubernetes that targets
your app's /health endpoint (or a TCP probe) instead of modifying the distroless
image: in deploy/docker-compose.yml, add a healthcheck for the backend service
using a CMD that fetches http://localhost:8080/health (or fallback to a TCP
check like nc -z localhost 8080 if wget/curl are unavailable), and set
reasonable interval, timeout, retries and start_period values so the
orchestrator can mark and restart unhealthy containers.
```nginx
location / {
    try_files $uri $uri/ /index.html;
}
location ~* \.(?:js|css|svg|png|jpg|jpeg|gif|ico|woff2?)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```
🧹 Nitpick (assertive)
Prevent caching of SPA index.html to avoid stale deploys
Serve immutable assets aggressively, but stop caching index.html so users get the latest app shell after deploys.
```diff
 location / {
     try_files $uri $uri/ /index.html;
 }
+# Never cache the SPA shell
+location = /index.html {
+    add_header Cache-Control "no-store, no-cache, must-revalidate" always;
+}
 location ~* \.(?:js|css|svg|png|jpg|jpeg|gif|ico|woff2?)$ {
     expires 1y;
     add_header Cache-Control "public, immutable";
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```nginx
location / {
    try_files $uri $uri/ /index.html;
}
# Never cache the SPA shell
location = /index.html {
    add_header Cache-Control "no-store, no-cache, must-revalidate" always;
}
location ~* \.(?:js|css|svg|png|jpg|jpeg|gif|ico|woff2?)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```
🤖 Prompt for AI Agents
In deploy/nginx/default.conf around lines 17 to 23 the current config
aggressively caches static assets but still serves index.html via try_files,
which can cause clients to keep a stale SPA shell; add an explicit location for
the index file (location = /index.html) placed before the static-assets block
that disables caching (e.g. set no caching headers like Cache-Control: no-cache,
must-revalidate and disable expires) so that index.html is not served with
long-lived immutable caching while leaving the existing long-term caching rules
for other static assets intact.
```make
markdownlint:
	find . -type f -name '*.md' -not -path './target/*' -print0 | xargs -0 -- markdownlint
```
🧹 Nitpick (assertive)
Exclude node_modules from Markdown lint to speed up and de-noise scans.
Avoid traversing large dependency trees.
Apply this diff:
```diff
 markdownlint:
-	find . -type f -name '*.md' -not -path './target/*' -print0 | xargs -0 -- markdownlint
+	find . -type f -name '*.md' \
+	  -not -path './target/*' \
+	  -not -path './frontend-pwa/node_modules/*' \
+	  -not -path './packages/tokens/node_modules/*' \
+	  -print0 | xargs -0 -- markdownlint
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```make
markdownlint:
	find . -type f -name '*.md' \
	  -not -path './target/*' \
	  -not -path './frontend-pwa/node_modules/*' \
	  -not -path './packages/tokens/node_modules/*' \
	  -print0 | xargs -0 -- markdownlint
```
🤖 Prompt for AI Agents
In Makefile around lines 47 to 48, the markdownlint target currently searches
all directories and should exclude node_modules to avoid scanning large
dependency trees; update the find command to add -not -path './node_modules/*'
(or multiple -not -path patterns if needed) so the target excludes node_modules
(and keep the existing exclusion for ./target/*), ensuring the modified find
still uses -type f -name '*.md' -print0 piped to xargs -0 -- markdownlint.
Actionable comments posted: 3
🔭 Outside diff range comments (2)
AGENTS.md (2)
82-100: Use Oxford -ize spelling and include Rust `match` in the heuristic. Align with the en‑GB‑oxendict rule stated earlier and cover both language constructs.

```diff
-- **Recognising Refactoring Needs:** Regularly assess the codebase for potential
+- **Recognizing Refactoring Needs:** Regularly assess the codebase for potential
@@
-  - **Complex Conditionals:** Deeply nested or overly complex `if`/`else` or
-    `switch` statements (high cyclomatic complexity).
+  - **Complex Conditionals:** Deeply nested or overly complex `if`/`else` or
+    `switch`/`match` statements (high cyclomatic complexity).
```
60-64: Include frontend tests in the Makefile's `test` target. Add a Bun test invocation for the PWA so that `make test` runs both backend and frontend checks.
- In `Makefile` at the `test:` recipe (around lines 41–43), append:

```make
test:
	RUSTFLAGS="-D warnings" cargo test --manifest-path backend/Cargo.toml --all-targets --all-features
	cd frontend-pwa && bun run test
```

- Update AGENTS.md (lines 60–64) to state that `make test` now covers both Rust and PWA tests.
♻️ Duplicate comments (2)
AGENTS.md (2)
295-296: Document and time‑box audit exceptions; back with a tracked allowlist and CI guard.State where exceptions live and that CI fails on expiry to avoid silent drift.
-- **Audit**: Run `bun run audit` locally and in automation. Track exceptions with explicit expiry dates. +- **Audit**: Run `bun run audit` locally and in automation. Track exceptions with + explicit expiry dates in a repo‑tracked allowlist (e.g., `security/audit-exceptions.json`); + CI must fail on missing/expired exceptions.
355-356: Gate merges on audit status or justified, unexpired exceptions. Make the checklist reflect CI gating to enforce compliance.

```diff
-- `bun run audit` passes or has justified, time‑boxed exceptions.
+- `bun run audit` passes, or exceptions are justified, time‑boxed, and unexpired;
+  CI gates merges on this status.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (1)
AGENTS.md (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.md
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.md: Documentation must use en-GB-oxendict spelling and grammar (except LICENSE name)
Validate Markdown with make markdownlint
Run make fmt after doc changes to format Markdown and fix tables
Validate Mermaid diagrams by running make nixie
Wrap Markdown paragraphs and bullets at 80 columns; do not wrap tables/headings; code blocks at 120 columns; use dashes for bullets; use GFM footnotes
Files:
AGENTS.md
⚙️ CodeRabbit Configuration File
**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")
- Use en-GB-oxendict (-ize / -our) spelling and grammar
- Headings must not be wrapped.
- Documents must start with a level 1 heading
- Headings must correctly increase or decrease by no more than one level at a time
- Use GitHub-flavoured Markdown style for footnotes and endnotes.
- Numbered footnotes must be numbered by order of appearance in the document.
Files:
AGENTS.md
🧠 Learnings (7)
📚 Learning: 2025-07-18T18:35:14.639Z
Learnt from: CR
PR: leynos/ddlint#0
File: AGENTS.md:0-0
Timestamp: 2025-07-18T18:35:14.639Z
Learning: Recognizing Refactoring Needs: Regularly assess the codebase for potential refactoring opportunities. Consider refactoring when you observe: Long Methods/Functions, Duplicated Code, Complex Conditionals, Large Code Blocks for Single Values, Primitive Obsession / Data Clumps, Excessive Parameters, Feature Envy, Shotgun Surgery.
Applied to files:
AGENTS.md
📚 Learning: 2025-07-18T18:35:14.639Z
Learnt from: CR
PR: leynos/ddlint#0
File: AGENTS.md:0-0
Timestamp: 2025-07-18T18:35:14.639Z
Learning: Separate Atomic Refactors: If refactoring is deemed necessary: Perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines (behavioral tests pass before and after, unit tests added for new units). Ensure the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-08-03T14:59:07.928Z
Learnt from: CR
PR: leynos/comenq#0
File: AGENTS.md:0-0
Timestamp: 2025-08-03T14:59:07.928Z
Learning: If refactoring is deemed necessary: Perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines (behavioral tests pass before and after, unit tests added for new units). Ensure the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-07-28T19:14:46.188Z
Learnt from: CR
PR: leynos/femtologging#0
File: AGENTS.md:0-0
Timestamp: 2025-07-28T19:14:46.188Z
Learning: If refactoring is deemed necessary, perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines and passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-08-10T12:40:16.561Z
Learnt from: CR
PR: leynos/netsuke#0
File: AGENTS.md:0-0
Timestamp: 2025-08-10T12:40:16.561Z
Learning: Separate Atomic Refactors: If refactoring is deemed necessary, perform the refactoring as a separate, atomic commit after the functional change commit.
Applied to files:
AGENTS.md
📚 Learning: 2025-06-26T01:03:03.478Z
Learnt from: CR
PR: leynos/bournemouth#0
File: .cursor/global.mdc:0-0
Timestamp: 2025-06-26T01:03:03.478Z
Learning: Perform refactoring as a separate, atomic commit after the functional change, ensuring all tests pass before and after, and that the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-06-25T23:37:47.576Z
Learnt from: CR
PR: leynos/mxd#0
File: AGENTS.md:0-0
Timestamp: 2025-06-25T23:37:47.576Z
Learning: Perform refactoring as a separate, atomic commit after functional changes, ensuring all tests pass before and after, and that the refactoring itself passes all quality gates.
Applied to files:
AGENTS.md
🪛 LanguageTool
AGENTS.md
[style] ~82-~82: Would you like to use the Oxford spelling “Recognizing”? The spelling ‘Recognising’ is also correct.
Context: ... Refactoring Heuristics & Workflow - Recognising Refactoring Needs: Regularly assess t...
(OXFORD_SPELLING_Z_NOT_S)
Actionable comments posted: 1
♻️ Duplicate comments (6)
.github/workflows/ci.yml (6)
17-19: Install a Rust toolchain explicitly before running cargo. Add a stable toolchain with clippy and rustfmt to avoid sporadic failures on runners without Rust.

```diff
   - uses: oven-sh/setup-bun@v2
     with:
       bun-version: '1.1.32'
+  - name: Setup Rust toolchain
+    uses: dtolnay/rust-toolchain@stable
+    with:
+      components: clippy, rustfmt
```
18-29: Restore the Cargo cache before building to actually speed up builds. Move the cache step above the build so dependencies are restored early and saved after.

```diff
-  - name: Rust build
-    run: cargo build --manifest-path backend/Cargo.toml --release
   - name: Cache cargo
     uses: actions/cache@v4
     with:
       path: |
         ~/.cargo/registry
         ~/.cargo/git
         target
       key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
       restore-keys: ${{ runner.os }}-cargo-
+  - name: Rust build
+    run: cargo build --manifest-path backend/Cargo.toml --release
```
10-15: Cache Bun's package store to cut install times. Add a cache keyed by bun.lockb so installs reuse the store.

```diff
   - uses: actions/setup-node@v4
     with:
       node-version: '22'
   - uses: oven-sh/setup-bun@v2
     with:
       bun-version: '1.1.32'
+  - name: Cache Bun
+    uses: actions/cache@v4
+    with:
+      path: ~/.cache/bun
+      key: ${{ runner.os }}-bun-${{ hashFiles('**/bun.lockb') }}
+      restore-keys: ${{ runner.os }}-bun-
```
41-46: Run the OpenAPI dumper in release and align the placeholder to 3.1.0 (or fail fast). Emit a release build for faster runs and a correct spec version if you keep the fallback.

```diff
-          if cargo run --manifest-path backend/Cargo.toml --bin openapi-dump > spec/openapi.json; then
+          if cargo run --manifest-path backend/Cargo.toml --bin openapi-dump --release > spec/openapi.json; then
             echo "OpenAPI generated from backend"
           else
-            echo "openapi-dump bin not available; using existing spec or placeholder"
-            test -s spec/openapi.json || echo '{"openapi":"3.0.3","info":{"title":"placeholder","version":"0.0.0"},"paths":{}}' > spec/openapi.json
+            echo "openapi-dump bin not available; using existing spec or 3.1.0 placeholder"
+            test -s spec/openapi.json || echo '{"openapi":"3.1.0","info":{"title":"placeholder","version":"0.0.0"},"paths":{}}' > spec/openapi.json
           fi
```

If generation must succeed, drop the fallback and `exit 1` to prevent stale clients.
48-58: Generate the API client before building the PWA. Run Orval against the freshly dumped spec so the frontend uses the latest client.

```diff
   - name: Build tokens
     working-directory: packages/tokens
     run: |
-      bun install
+      bun install --frozen-lockfile
       bun run build
+  - name: Generate API client
+    working-directory: frontend-pwa
+    run: |
+      bun install --frozen-lockfile
+      bunx orval --config orval.config.yaml
   - name: Build PWA
     working-directory: frontend-pwa
     run: |
-      bun install
       bun run build
```
52-57: Enforce reproducible installs with Bun's lockfile. Pass --frozen-lockfile to fail on lockfile drift and improve determinism.

```diff
-      bun install
+      bun install --frozen-lockfile
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (1)
.github/workflows/ci.yml (1 hunks)
⏰ Context from checks skipped due to timeout of 120000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
Actionable comments posted: 2
♻️ Duplicate comments (5)
AGENTS.md (5)
31-34: Mention Rust's match alongside switch. Apply the previous nit to avoid ambiguity across languages.

```diff
-  lines. Long switch statements or dispatch tables should be broken up by
+  lines. Long switch/match statements or dispatch tables should be broken up by
```
107-108: Cross-link the testing guidance for refactors. Keep the instruction tight and actionable by referencing the Testing section.

```diff
-  - Ensure the refactoring adheres to the testing guidelines (behavioural tests
-    pass before and after, unit tests added for new units).
+  - Ensure the refactoring adheres to the testing guidelines (behavioural tests
+    pass before and after, unit tests added for new units). See “### Testing”.
```
233-235: Document Snyk CLI authentication and CI usage. Prevent audit drift by stating how to authenticate and run in CI.

```diff
   - `test`: `vitest run --coverage`
-  - `audit`: `bun x npm@latest audit`
-  - `audit:snyk`: `bun x snyk test`
+  - `audit`: `bun x npm@latest audit`
+  - `audit:snyk`: `bun x snyk test` (requires the `snyk` CLI and authentication;
+    run `snyk auth` locally, and set `SNYK_TOKEN` in CI secrets)
```
294-296: Back audit exceptions with an allowlist and enforce expiry in CI. Tie the guidance to a concrete process and enforcement.

```diff
-- **Lockfile**: Commit `bun.lock`. Recreate on major tool upgrades; keep `bun.lockb` ignored.
-- **Audit**: Run `bun run audit` locally and in automation. Track exceptions with explicit expiry dates.
+- **Lockfile**: Commit `bun.lock`. Recreate on major tool upgrades; keep `bun.lockb` ignored.
+- **Audit**: Run `bun run audit` locally and in automation. Track exceptions with explicit expiry dates and a repo‑tracked allowlist (e.g., `security/audit-exceptions.json`) enforced by CI (fail on expiry).
```
355-356: Gate commits on audit status or justified exceptions. Make the checklist enforceable by CI.

```diff
-- `bun run audit` passes or has justified, time‑boxed exceptions.
+- `bun run audit` passes or has justified, time‑boxed exceptions; CI gates on this status.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (1)
AGENTS.md(7 hunks)
⏰ Context from checks skipped due to timeout of 120000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
Actionable comments posted: 6
🔭 Outside diff range comments (2)
.github/workflows/ci.yml (2)
50-61: Cache Bun and use frozen lockfiles; generate the API client before building. Speed builds and ensure the client matches the latest spec.

```diff
+  - name: Cache Bun
+    uses: actions/cache@v4
+    with:
+      path: ~/.cache/bun
+      key: ${{ runner.os }}-bun-${{ hashFiles('**/bun.lock') }}-${{ steps.bun.outputs.version || '1.1.32' }}
+      restore-keys: ${{ runner.os }}-bun-
   - name: Build tokens
     working-directory: packages/tokens
     run: |
-      bun install
+      bun install --frozen-lockfile
       bun run build
+  - name: Generate API client
+    working-directory: frontend-pwa
+    run: |
+      bun install --frozen-lockfile
+      bunx orval --config orval.config.yaml
   - name: Build PWA
     working-directory: frontend-pwa
     run: |
-      bun install
+      bun install --frozen-lockfile
       bun run build
```

If codegen already runs inside your build, document it in package.json and drop the explicit generation step.
50-61: Gate builds on audits or documented exceptions. Run the audits and wire Snyk auth via secrets to prevent drift.

```diff
   - name: Build PWA
     working-directory: frontend-pwa
     run: |
       bun install --frozen-lockfile
       bun run build
+  - name: Run security audits
+    working-directory: frontend-pwa
+    env:
+      SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
+    run: |
+      bun run audit
+      bun run audit:snyk
```

If audits fail for known reasons, require an entry in security/audit-exceptions.json and fail on expired entries.
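A minimal sketch of such an expiry guard, assuming allowlist entries carry an `expires` ISO date (the entry shape and the script itself are assumptions, not part of the PR):

```typescript
/** @file Hypothetical CI guard: fail when audit-exception allowlist
 * entries have passed their expiry date. The entry shape below
 * (id, reason, expires) is an assumed schema, not the repo's actual one. */

type AuditException = {
  readonly id: string; // advisory identifier
  readonly reason: string; // why the finding is tolerated
  readonly expires: string; // ISO 8601 date; CI fails on or after this day
};

/** Return every exception whose expiry date is on or before `now`. */
function findExpired(
  exceptions: readonly AuditException[],
  now: Date,
): readonly AuditException[] {
  return exceptions.filter(
    (e) => new Date(e.expires).getTime() <= now.getTime(),
  );
}

// Example allowlist; in CI this would be parsed from
// security/audit-exceptions.json rather than inlined.
const exceptions: readonly AuditException[] = [
  { id: "GHSA-aaaa", reason: "no fix released", expires: "2099-01-01" },
];

const expired = findExpired(exceptions, new Date());
if (expired.length > 0) {
  throw new Error(
    `Expired audit exceptions: ${expired.map((e) => e.id).join(", ")}`,
  );
}
```

Running the guard as a CI step after the audits keeps exceptions honest without blocking legitimately tracked findings.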
♻️ Duplicate comments (7)
AGENTS.md (2)
31-34: Mention Rust’s match alongside switch.Avoid ambiguity by listing both constructs.
- lines. Long switch statements or dispatch tables should be broken up by + lines. Long switch/match statements or dispatch tables should be broken up by
107-108: Cross‑link the Testing section for refactors.Point readers directly to the guidance.
- - Ensure the refactoring adheres to the testing guidelines (behavioural tests - pass before and after, unit tests added for new units). + - Ensure the refactoring adheres to the testing guidelines (behavioural tests + pass before and after, unit tests added for new units). See “### Testing”.frontend-pwa/src/app/App.tsx (2)
8-12: Forward TanStack Query's AbortSignal explicitly. Make the query function signature explicit to satisfy strict function types and ensure cancellation.

```diff
-  const { data, isLoading, isError, error } = useQuery({
-    queryKey: ['users'],
-    queryFn: listUsers,
-    staleTime: 60_000,
-  });
+  const { data, isLoading, isError, error } = useQuery({
+    queryKey: ['users'],
+    queryFn: ({ signal }) => listUsers({ signal }),
+    staleTime: 60_000,
+  });
```

16-22: Use a semantic element for status. Replace the role'd paragraph with an output element to satisfy the a11y lint.

```diff
-      <p
-        className="p-6 min-h-screen bg-base-200 text-base-content"
-        role="status"
-        aria-live="polite"
-      >
-        Loading users…
-      </p>
+      <div className="p-6 min-h-screen bg-base-200 text-base-content">
+        <output aria-live="polite" aria-busy="true">Loading users…</output>
+      </div>
```

.github/workflows/ci.yml (3)
12-17: Install a Rust toolchain explicitly. Avoid relying on preinstalled toolchains on runners.

```diff
   - uses: actions/setup-node@v4
     with:
       node-version: '22'
   - uses: oven-sh/setup-bun@v2
     with:
       bun-version: '1.1.32'
+  - name: Setup Rust toolchain
+    uses: dtolnay/rust-toolchain@stable
+    with:
+      components: rustfmt, clippy
```

38-48: Run the OpenAPI dumper in release and align the placeholder to 3.1.0. Avoid stale clients and mismatched specs.

```diff
-          if cargo run --manifest-path backend/Cargo.toml --bin openapi-dump > spec/openapi.json; then
+          if cargo run --manifest-path backend/Cargo.toml --bin openapi-dump --release > spec/openapi.json; then
             echo "OpenAPI generated from backend"
           else
             echo "openapi-dump bin not available; using existing spec or placeholder"
-            test -s spec/openapi.json || echo '{"openapi":"3.0.3","info":{"title":"placeholder","version":"0.0.0"},"paths":{}}' > spec/openapi.json
+            test -s spec/openapi.json || echo '{"openapi":"3.1.0","info":{"title":"placeholder","version":"0.0.0"},"paths":{}}' > spec/openapi.json
           fi
```

Alternatively, fail the job instead of writing a placeholder.
19-36: Move Cargo cache before build and tests. Caching after the build gives no benefit; restore before any cargo commands.

```diff
   # Backend
-  - name: Rust build
-    run: cargo build --manifest-path backend/Cargo.toml --release
   - name: Cache cargo
     uses: actions/cache@v4
     with:
       path: |
         ~/.cargo/registry
         ~/.cargo/git
         target
       key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
       restore-keys: ${{ runner.os }}-cargo-
+  - name: Rust build
+    run: cargo build --manifest-path backend/Cargo.toml --release
   - name: Rust fmt check
     run: cargo fmt --manifest-path backend/Cargo.toml --all -- --check
   - name: Rust clippy
     run: cargo clippy --manifest-path backend/Cargo.toml --all-targets --all-features -- -D warnings
   - name: Rust tests
     run: RUSTFLAGS="-D warnings" cargo test --manifest-path backend/Cargo.toml --all-targets --all-features
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- Jira integration is disabled
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (6)
.github/workflows/ci.yml (1 hunks)
.gitignore (1 hunks)
AGENTS.md (6 hunks)
frontend-pwa/src/api/client.ts (1 hunks)
frontend-pwa/src/app/App.tsx (1 hunks)
security/audit-exceptions.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.json
📄 CodeRabbit Inference Engine (AGENTS.md)
Keep JSON stable (deterministic key order) for snapshots and caches
Files:
security/audit-exceptions.json
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.{ts,tsx}: ESM-only: source/build outputs are ES Modules; configure Vite accordingly; no CommonJS
Place JSDoc comments above declarations/decorators; keep docs close to code
Prefer immutability: const, readonly, Readonly; avoid mutating inputs
Extract helpers for long functions; keep trivial functions on one line when readable
Group related parameters into typed objects/builders; avoid long positional lists
Extract predicate functions or use lookup tables for multi-branch conditionals; enforce exhaustive switch with never guard
Every module begins with /** @file … */ describing purpose and usage
Validate I/O with runtime schemas (zod/valibot) at boundaries; keep types and schemas in sync
Use branded types for identifiers/tokens
Accept AbortSignal for cancellable async operations; wire through TanStack Query fetchers
Centralise time and RNG adapters; avoid direct Date.now()/Math.random() in business logic
Use discriminated unions for recoverable errors; reserve Error subclasses for exceptional paths with cause
Use Vitest with jsdom/happy-dom; keep tests deterministic and parallel-safe
Use factories/builders for fixtures; prefer parameterised tests with test.each/it.each
Use vi.mock for module boundaries; inject adapters for env/time/storage/fetch; avoid monkey-patching globals
Use vi.useFakeTimers() for time-based logic; restore after each test
Keep snapshots deterministic (sorted keys, fixed seeds) and scoped to stable UI fragments
Use Biome for format+lint; disallow any, non-null !; forbid @ts-ignore in favour of @ts-expect-error with reason
Enforce import hygiene: sorted/grouped imports; no unused or extraneous deps
Use dynamic import() for code-splitting; enable Vite prefetch and asset hashing
Configure TanStack Query with appropriate stale-time, cache-time; avoid refetchOnWindowFocus unless necessary
Avoid await in loops; batch with Promise.allSettled; use async iterables/streams for large data
Ship CSP, avoid eval/new Function; use Trusted Types and sa...
Files:
`frontend-pwa/src/app/App.tsx`, `frontend-pwa/src/api/client.ts`
**/*.{tsx,ts}
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.{tsx,ts}: Map domain errors to user messages at UI boundary; do not leak raw stacks to DOM/analytics
Enable React StrictMode in dev; memoise expensive components; derive data via selectors
Prefer semantic HTML; use daisyUI components only when they don’t harm semantics; audit focus and contrast
Files:
`frontend-pwa/src/app/App.tsx`, `frontend-pwa/src/api/client.ts`
**/*.md
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.md: Documentation must use en-GB-oxendict spelling and grammar (except LICENSE name)
Validate Markdown with make markdownlint
Run make fmt after doc changes to format Markdown and fix tables
Validate Mermaid diagrams by running make nixie
Wrap Markdown paragraphs and bullets at 80 columns; do not wrap tables/headings; code blocks at 120 columns; use dashes for bullets; use GFM footnotes
Files:
AGENTS.md
⚙️ CodeRabbit Configuration File
**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")
- Use en-GB-oxendict (-ize / -our) spelling and grammar
- Headings must not be wrapped.
- Documents must start with a level 1 heading
- Headings must correctly increase or decrease by no more than one level at a time
- Use GitHub-flavoured Markdown style for footnotes and endnotes.
- Numbered footnotes must be numbered by order of appearance in the document.
Files:
AGENTS.md
🧠 Learnings (6)
📚 Learning: 2025-07-18T18:35:14.639Z
Learnt from: CR
PR: leynos/ddlint#0
File: AGENTS.md:0-0
Timestamp: 2025-07-18T18:35:14.639Z
Learning: Separate Atomic Refactors: If refactoring is deemed necessary: Perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines (behavioral tests pass before and after, unit tests added for new units). Ensure the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-08-03T14:59:07.928Z
Learnt from: CR
PR: leynos/comenq#0
File: AGENTS.md:0-0
Timestamp: 2025-08-03T14:59:07.928Z
Learning: If refactoring is deemed necessary: Perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines (behavioral tests pass before and after, unit tests added for new units). Ensure the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-07-28T19:14:46.188Z
Learnt from: CR
PR: leynos/femtologging#0
File: AGENTS.md:0-0
Timestamp: 2025-07-28T19:14:46.188Z
Learning: If refactoring is deemed necessary, perform the refactoring as a separate, atomic commit after the functional change commit. Ensure the refactoring adheres to the testing guidelines and passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-08-10T12:40:16.561Z
Learnt from: CR
PR: leynos/netsuke#0
File: AGENTS.md:0-0
Timestamp: 2025-08-10T12:40:16.561Z
Learning: Separate Atomic Refactors: If refactoring is deemed necessary, perform the refactoring as a separate, atomic commit after the functional change commit.
Applied to files:
AGENTS.md
📚 Learning: 2025-06-26T01:03:03.478Z
Learnt from: CR
PR: leynos/bournemouth#0
File: .cursor/global.mdc:0-0
Timestamp: 2025-06-26T01:03:03.478Z
Learning: Perform refactoring as a separate, atomic commit after the functional change, ensuring all tests pass before and after, and that the refactoring commit itself passes all quality gates.
Applied to files:
AGENTS.md
📚 Learning: 2025-06-25T23:37:47.576Z
Learnt from: CR
PR: leynos/mxd#0
File: AGENTS.md:0-0
Timestamp: 2025-06-25T23:37:47.576Z
Learning: Perform refactoring as a separate, atomic commit after functional changes, ensuring all tests pass before and after, and that the refactoring itself passes all quality gates.
Applied to files:
AGENTS.md
🧬 Code Graph Analysis (2)
frontend-pwa/src/app/App.tsx (1)
frontend-pwa/src/api/client.ts (1)
listUsers(14-15)
frontend-pwa/src/api/client.ts (1)
frontend-pwa/src/api/fetcher.ts (1)
customFetchParsed(45-52)
🪛 Biome (2.1.2)
frontend-pwa/src/app/App.tsx
[error] 18-18: The elements with this role can be changed to the following elements:
For examples and more information, see WAI-ARIA Roles
(lint/a11y/useSemanticElements)
🪛 markdownlint-cli2 (0.17.2)
AGENTS.md
333-333: Trailing spaces
Expected: 0 or 2; Actual: 1
(MD009, no-trailing-spaces)
🪛 LanguageTool
AGENTS.md
[uncategorized] ~437-~437: Loose punctuation mark.
Context: ...ecklist (Before Commit) - bun run fmt, bun run lint, bun test all clean; n...
(UNLIKELY_OPENING_PUNCTUATION)
⏰ Context from checks skipped due to timeout of 120000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build
- GitHub Check: build
🔇 Additional comments (6)
.gitignore (2)
6-7: Approve Node artefact ignores
`node_modules/` (recursive) and `**/dist/` are correct for a monorepo and avoid per-package churn.
11-11: Confirm lockfile policy for Bun

Decide whether to commit lockfiles for reproducible builds. If you intend to commit Bun's lockfile, remove `bun.lockb` from `.gitignore`; otherwise keep it ignored consistently across packages. Optional change if you choose to commit it:

```diff
-bun.lockb
```

(remove this line)
AGENTS.md (1)
60-61: Keep the cross‑link; this change is on‑point.

frontend-pwa/src/api/client.ts (1)
14-15: LGTM: runtime validation + AbortSignal are correctly wired.

frontend-pwa/src/app/App.tsx (2)
26-35: Do not leak internal error details; current copy is safe.
45-57: Provide an explicit empty state; current implementation is clear.
```gitignore
**/target/
/target
```
🧹 Nitpick (assertive)
Remove redundant Rust target ignore
Deduplicate target rules. `**/target/` already covers the root `target/`.

```diff
 # Rust
 **/target/
-/target
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```gitignore
# Rust
**/target/
```
🤖 Prompt for AI Agents
In .gitignore around lines 2 to 3, the two target ignore entries are redundant;
remove the duplicated "/target" line and keep only the more general "**/target/"
rule so root and nested target directories are both ignored.
```gitignore
# Misc
.DS_Store
bun.lockb
```
🧹 Nitpick (assertive)
Add common env, log, coverage, and editor ignores
Harden the ignore set for typical backend/PWA workflows.
```diff
 # Misc
 .DS_Store
 bun.lockb
+
+# Env
+.env
+.env.local
+.env.*.local
+
+# Logs
+*.log
+npm-debug.log*
+yarn-error.log*
+pnpm-debug.log*
+bun-*.log
+
+# Coverage
+coverage/
+
+# Editor/OS
+.idea/
+.vscode/
+Thumbs.db
+*.swp
+*~
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```gitignore
# Node
node_modules/
**/dist/

# Misc
.DS_Store
bun.lockb

# Env
.env
.env.local
.env.*.local

# Logs
*.log
npm-debug.log*
yarn-error.log*
pnpm-debug.log*
bun-*.log

# Coverage
coverage/

# Editor/OS
.idea/
.vscode/
Thumbs.db
*.swp
*~
```
🤖 Prompt for AI Agents
In .gitignore around lines 9 to 11, the ignore list is minimal; expand it to
include common environment, log, coverage, and editor artifacts by adding
entries such as .env, .env.local, *.log, logs/, npm-debug.log*, yarn-debug.log*,
yarn-error.log*, coverage/, .nyc_output/, .vscode/, .idea/, *.sublime-workspace,
*.sublime-project, and build/dist output directories; update the file by
appending these patterns (and any project-specific output dirs) so typical
sensitive files, logs, coverage outputs and editor metadata are excluded from
version control.
```markdown
- **Lockfile**: Commit `bun.lock`. Recreate on major tool upgrades; keep
  `bun.lockb` ignored.
- **Audit**: Run `bun run audit` locally and in automation. Track exceptions
  with explicit expiry dates.
```
🧹 Nitpick (assertive)
Fix trailing space and lockfile wording.
Remove trailing whitespace (MD009) and align with the stated lockfile policy.
```diff
-- **Lockfile**: Commit `bun.lock`. Recreate on major tool upgrades; keep 
-  `bun.lockb` ignored.
+- **Lockfile**: Commit `bun.lock`. Recreate on major tool upgrades; keep
+  `bun.lockb` ignored.
```

📝 Committable suggestion
+ `bun.lockb` ignored.📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
- **Lockfile**: Commit `bun.lock`. Recreate on major tool upgrades; keep
  `bun.lockb` ignored.
- **Audit**: Run `bun run audit` locally and in automation. Track exceptions
  with explicit expiry dates.
```
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
333-333: Trailing spaces
Expected: 0 or 2; Actual: 1
(MD009, no-trailing-spaces)
🤖 Prompt for AI Agents
In AGENTS.md around lines 333 to 336, remove the trailing whitespace at the end
of the first line and update the Lockfile wording so it exactly matches the
repository policy: state that `bun.lock` should be committed and `bun.lockb`
should be kept ignored, and rephrase the sentence to read clearly (e.g., "Commit
`bun.lock`; recreate on major tool upgrades. Keep `bun.lockb` ignored.") so
there is no trailing space and the lockfile policy is unambiguous.
```diff
 - `bun run fmt`, `bun run lint`, `bun test` all clean; no Biome warnings;
   no TypeScript errors; coverage thresholds hold.
-- `bun audit` passes or has justified, time‑boxed exceptions.
+- `bun run audit` passes or has justified, time‑boxed exceptions.
 - No `any`, no `@ts-ignore`; use `@ts-expect-error` only with a reason.
```
🧹 Nitpick (assertive)
Use “bun run test” for consistency and fix punctuation hint.
Align the checklist with scripts and resolve the “loose punctuation” hint.
```diff
-- `bun run fmt`, `bun run lint`, `bun test` all clean; no Biome warnings;
+- `bun run fmt`, `bun run lint`, `bun run test` all clean; no Biome warnings;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
- `bun run fmt`, `bun run lint`, `bun run test` all clean; no Biome warnings;
  no TypeScript errors; coverage thresholds hold.
- `bun run audit` passes or has justified, time-boxed exceptions.
- No `any`, no `@ts-ignore`; use `@ts-expect-error` only with a reason.
```
🧰 Tools
🪛 LanguageTool
[uncategorized] ~437-~437: Loose punctuation mark.
Context: ...ecklist (Before Commit) - bun run fmt, bun run lint, bun test all clean; n...
(UNLIKELY_OPENING_PUNCTUATION)
🤖 Prompt for AI Agents
In AGENTS.md around lines 437 to 440, update the checklist to consistently
reference the actual npm scripts (use `bun run test` if your scripts require the
`run` keyword) and fix the punctuation so each bullet uses consistent sentence
punctuation (either end all bullets with a period or make them phrase-style with
no terminal punctuation); also normalize separators (replace mixed
semicolons/commas with commas or periods) and ensure the last bullet ends with a
period.
```typescript
const userSchema = z.object({
  id: z.string(),
  display_name: z.string(),
});
export type User = z.infer<typeof userSchema>;
const usersSchema = z.array(userSchema);
```
🧹 Nitpick (assertive)
Brand identifier types to prevent mix‑ups.
Use a nominal brand for IDs per the guidelines.
```diff
-const userSchema = z.object({
-  id: z.string(),
-  display_name: z.string(),
-});
+const userSchema = z.object({
+  id: z.string().brand<'UserId'>(),
+  display_name: z.string(),
+});
 export type User = z.infer<typeof userSchema>;
```

Optionally export `type UserId = z.infer<typeof userSchema.shape.id>;` if needed at the UI boundary.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const userSchema = z.object({
  id: z.string().brand<'UserId'>(),
  display_name: z.string(),
});
export type User = z.infer<typeof userSchema>;
const usersSchema = z.array(userSchema);
```
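As a plain-TypeScript illustration of why branding helps, the sketch below shows a nominal `UserId` brand without zod. The `Brand` helper, `toUserId` coercion, and `fetchUserLabel` function are hypothetical names for demonstration only; in the real client, zod's `.brand<'UserId'>()` performs the coercion inside the schema.

```typescript
// Hypothetical sketch: nominal branding for identifiers.
// None of these names exist in frontend-pwa/src/api/client.ts.
type Brand<T, B extends string> = T & { readonly __brand: B };
type UserId = Brand<string, "UserId">;

// Single coercion point; elsewhere only UserId is accepted.
function toUserId(raw: string): UserId {
  return raw as UserId;
}

function fetchUserLabel(id: UserId): string {
  return `user:${id}`;
}

const id = toUserId("abc123");
console.log(fetchUserLabel(id)); // user:abc123
// fetchUserLabel("abc123"); // type error: a plain string is not a UserId
```

The brand exists only at the type level; at runtime a `UserId` is still a string, so there is no serialisation cost.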
```diff
@@ -0,0 +1 @@
+[]
```
🧹 Nitpick (assertive)
🛠️ Refactor suggestion
Bootstrap the allow‑list with fields and enforce expiry in CI.
Seed a documented shape now to avoid ad‑hoc entries later and gate CI on expiry.
Apply this diff to define a stable, explicit structure:
```diff
-[]
+[
+  // Example entry:
+  // {
+  //   "id": "WS-2025-0001",
+  //   "package": "example@1.2.3",
+  //   "advisory": "GHSA-xxxx-xxxx-xxxx",
+  //   "reason": "Upstream fix pending; no exposure in our usage",
+  //   "introducedBy": "frontend-pwa",
+  //   "addedAt": "2025-08-14T00:00:00Z",
+  //   "expiresAt": "2025-10-01T00:00:00Z",
+  //   "reviewer": "maintainers@example.org"
+  // }
+]
```

Outside this file, add a JSON Schema (`security/audit-exceptions.schema.json`) and a CI step to validate and fail on expired entries:
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "array",
  "items": {
    "type": "object",
    "required": ["id", "package", "advisory", "reason", "introducedBy", "addedAt", "expiresAt"],
    "properties": {
      "id": { "type": "string" },
      "package": { "type": "string" },
      "advisory": { "type": "string" },
      "reason": { "type": "string" },
      "introducedBy": { "type": "string" },
      "addedAt": { "type": "string", "format": "date-time" },
      "expiresAt": { "type": "string", "format": "date-time" },
      "reviewer": { "type": "string" }
    },
    "additionalProperties": false
  }
}
```

I can add a workflow step that runs "jq + ajv" to validate shape and a small Node/Bun script to fail if any `expiresAt` < now. Do you want me to open a follow-up PR with the CI wiring?
🤖 Prompt for AI Agents
In security/audit-exceptions.json around lines 1 to 1, the file is currently
empty; seed it with a documented array entry using the explicit fields id,
package, advisory, reason, introducedBy, addedAt, expiresAt (and optional
reviewer) so future exceptions follow a stable shape; additionally add a JSON
Schema at security/audit-exceptions.schema.json matching the proposed schema and
add a CI step that validates the JSON against the schema and fails the build if
any expiresAt is in the past (e.g., jq/ajv validation plus a small Node/Bun
script to compare expiresAt to now).
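A minimal sketch of the proposed expiry gate, assuming the field names from the schema above (`id`, `expiresAt`); the `findExpired` helper and the hard-coded entries are illustrative, and the real script would read `security/audit-exceptions.json` and exit non-zero in CI:

```typescript
// Hypothetical sketch: flag audit exceptions whose expiresAt has passed.
// Field names follow the schema proposed above; findExpired is an assumed name.
interface AuditException {
  readonly id: string;
  readonly expiresAt: string; // ISO 8601 date-time
}

function findExpired(
  entries: readonly AuditException[],
  now: Date,
): AuditException[] {
  return entries.filter(
    (entry) => new Date(entry.expiresAt).getTime() < now.getTime(),
  );
}

// Illustrative data; the real script would parse the JSON allow-list.
const entries: AuditException[] = [
  { id: "WS-2025-0001", expiresAt: "2025-10-01T00:00:00Z" },
];

const expired = findExpired(entries, new Date("2025-11-01T00:00:00Z"));
if (expired.length > 0) {
  console.error(
    `Expired audit exceptions: ${expired.map((e) => e.id).join(", ")}`,
  );
  // In CI this is where the script would exit non-zero, e.g. process.exit(1).
}
```

Keeping the comparison in a pure function makes the gate easy to unit-test with a fixed `now`, in line with the repository guideline to centralise time adapters.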
@coderabbitai review

✅ Actions performed

Review triggered.
Summary
Testing
- `make fmt`
- `SCCACHE_NO_DAEMON=1 make lint`
- `make test`
- `cd frontend-pwa && bun run audit` (fails: 12 moderate severity vulnerabilities)
- `cd frontend-pwa && bun run audit:snyk` (fails: authentication error)

https://chatgpt.com/codex/tasks/task_e_689c4491c2248322afe54448c92187a7