
feat: version history browser (CE3) #4

Merged
flyingrobots merged 21 commits into main from ce3-version-history
Feb 14, 2026

Conversation

@flyingrobots
Owner

Summary

  • Add getArticleHistory(), readVersion(), and restoreVersion() to CmsService — walk parent chain for version summaries, read full content by SHA, and restore historical content as a new draft with ancestry validation and provenance trailers (restoredFromSha, restoredAt)
  • Wire up GET /api/cms/history, GET /api/cms/show-version, and POST /api/cms/restore server endpoints with limit clamping (1–200) and 1MB body cap
  • Add Admin UI collapsible history panel with lazy-fetch on toggle, version preview (DOMPurify + marked), and restore button (disabled for current version)
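The restore flow described in the first bullet could be sketched roughly as follows. This is illustrative only: the graph and codec method names (readRef, showNode, commitNode, updateRef) are assumptions for the sketch, not the actual CmsService API.

```javascript
// Illustrative sketch: restore a historical version as a new draft commit
// carrying provenance trailers. All graph/codec method names are assumed.
async function restoreVersion({ graph, codec, draftRef, sha }) {
  const tipSha = await graph.readRef(draftRef);
  // (ancestry validation of `sha` against tipSha is elided in this sketch)
  const decoded = codec.decode(await graph.showNode(sha));
  const now = new Date().toISOString(); // one timestamp for both trailers
  const message = codec.encode({
    title: decoded.title,
    body: decoded.body,
    trailers: {
      ...(decoded.trailers || {}), // guard against missing trailers
      status: 'draft',
      updatedAt: now,
      restoredFromSha: sha, // provenance: which version was restored
      restoredAt: now,
    },
  });
  const newSha = await graph.commitNode(message, [tipSha]);
  await graph.updateRef(draftRef, newSha);
  return { ref: draftRef, sha: newSha, prev: tipSha };
}
```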

Test plan

  • 13 new unit tests in test/git.test.js (describe('Version History')) — ordering, limit, status, single-entry, readVersion, restore content/chain/provenance, state blocks (published), unpublished/reverted paths, lineage validation
  • 5 new integration tests in test/server.test.js — all 3 endpoints + validation (missing slug, missing sha)
  • All 54 tests passing locally via npx vitest run
  • Manual: create article, save 3 versions, expand history, preview v1, restore v1

Add getArticleHistory, readVersion, and restoreVersion to CmsService
with ancestry validation and provenance trailers. Wire up three new
server endpoints and an Admin UI history panel with lazy-fetch, preview,
and restore. Includes 18 new tests (13 unit, 5 integration).
@coderabbitai

coderabbitai bot commented Feb 14, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Summary by CodeRabbit

Release Notes

  • New Features

    • Version History Browser: Browse previous article versions with full content preview and restore any version as a new draft for editing.
  • Bug Fixes

    • Enhanced upload security with path traversal protection and automatic cleanup of temporary files.
    • Improved error handling to prevent internal system details from being exposed.

Walkthrough

Adds a Version History Browser: CMS service methods for ancestry validation, history/read/restore; server endpoints for history, show-version, restore, and hardened upload/body handling; frontend history panel with lazy-load, preview, and restore; and unit + integration tests for history and restore flows.

Changes

  • Service Layer (src/lib/CmsService.js)
    Added HISTORY_WALK_LIMIT, _validateAncestry(), getArticleHistory(), readVersion(), restoreVersion() — bounded ancestry walks, read-by-SHA with draft/published fallback, and restore-as-new-draft with provenance trailers.
  • Server & HTTP helpers (src/server/index.js)
    Added readBody() with limits, imported HISTORY_WALK_LIMIT, new endpoints GET /api/cms/history, GET /api/cms/show-version (1MB truncation flag), POST /api/cms/restore, POST /api/cms/upload (10MB). SHA validation, filename basename sanitization, finally-block tmp cleanup, and standardized generic 500 responses.
  • Frontend UI & Client API (public/index.html)
    Added collapsible Version History panel, preview area, and Restore flow; client API methods history(), showVersion(), restore(); UI state historyVersions and selectedVersion; handlers fetchHistory(), selectVersion(), restoreVersion() with lazy-loading and preview/truncation handling.
  • Tests — Unit & Integration (test/git.test.js, test/server.test.js)
    New unit tests for CmsService history/read/restore and integration tests for /api/cms/history, /api/cms/show-version, /api/cms/restore covering ordering, limits, ancestry validation, provenance trailers, and error cases.
  • Changelog & Packaging (CHANGELOG.md, package.json)
    Changelog updated for Version History Browser; package version bumped to 1.0.2.
  • Misc / Robustness (*)
    Defaulted/normalized trailers where missing, moved tmp cleanup into finally, defensive guards against TypeError/injection, readBody optimizations, and shared walkLimit constant extraction across ancestry/history logic.
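The bounded ancestry walk mentioned for the service layer might look something like this sketch. getNodeInfo, the parents field, and the distinct error code are assumptions based on the summaries above.

```javascript
// Sketch of a bounded first-parent ancestry walk (all names assumed).
const HISTORY_WALK_LIMIT = 200;

async function validateAncestry(graph, tipSha, targetSha) {
  let current = tipSha;
  for (let i = 0; i < HISTORY_WALK_LIMIT; i++) {
    if (current === targetSha) return true; // target is in the lineage
    const info = await graph.getNodeInfo(current);
    current = info.parents?.[0];
    if (!current) return false; // reached the root: a foreign SHA
  }
  // Ran out of budget before reaching root or target: report it distinctly
  // so callers can tell "too deep to verify" from "not in this lineage".
  throw Object.assign(new Error('history walk limit exceeded'), {
    code: 'history_walk_limit_exceeded',
  });
}
```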

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant UI as Frontend (public/index.html)
    participant Server as API Server (src/server/index.js)
    participant CMS as CmsService (src/lib/CmsService.js)
    participant Git as Git Storage

    User->>UI: Expand History panel
    UI->>Server: GET /api/cms/history?slug=...&limit=...
    Server->>CMS: getArticleHistory({slug,limit})
    CMS->>Git: Walk ancestry from tip ref (bounded by HISTORY_WALK_LIMIT)
    Git-->>CMS: Commit metadata list
    CMS-->>Server: history array
    Server-->>UI: 200 + versions

    User->>UI: Select version
    UI->>Server: GET /api/cms/show-version?slug=...&sha=...
    Server->>CMS: readVersion({slug,sha})
    CMS->>Git: Read commit by SHA (validate ancestry)
    Git-->>CMS: {sha,title,body,trailers}
    CMS-->>Server: version content (truncated flag if >1MB)
    Server-->>UI: 200 + content

    User->>UI: Confirm Restore
    UI->>Server: POST /api/cms/restore {slug,sha}
    Server->>CMS: restoreVersion({slug,sha})
    CMS->>CMS: _validateAncestry(tipSha,targetSha)
    CMS->>Git: Read target commit & create new draft commit (add restoredFromSha/restoredAt)
    Git-->>CMS: new commit SHA
    CMS->>Git: Update draft ref -> new SHA
    CMS-->>Server: {ref, sha: newSha, prev}
    Server-->>UI: 200 OK
    UI->>Server: GET /api/cms/show (reload article)
    Server-->>UI: Current article content

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🧭 Past commits hum in ordered rows,
A panel opens — history flows.
Click a SHA, preview returns,
Restore writes what memory learns.
Draft reborn as timeline turns.

🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)
  • Title check ✅ Passed: The PR title 'feat: version history browser (CE3)' directly and concisely describes the main feature addition—a version history browser capability—matching the substantial functionality added across CmsService, server endpoints, and Admin UI.
  • Description check ✅ Passed: The PR description comprehensively relates to the changeset, detailing the three new CmsService methods, server endpoints, Admin UI history panel, and including a thorough test plan with specific test counts and passing status.
  • Merge Conflict Detection ✅ Passed: No merge conflicts detected when merging into main.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 122a5f0375

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +376 to +377
const message = await this.graph.showNode(sha);
const decoded = this.codec.decode(message);


P1 Badge Restrict readVersion to the article's commit lineage

readVersion accepts both slug and sha, but after checking that the slug exists it directly calls showNode(sha) without verifying ancestry, so GET /api/cms/show-version can return content from a different article (or any decodable commit) whenever a valid SHA is supplied. This breaks slug scoping and can leak unrelated version content in real usage (for example when stale/incorrect SHAs are sent from the UI or API clients).


const result = await cms.readVersion({ slug, sha });
// Cap response body at 1MB
if (result.body && Buffer.byteLength(result.body, 'utf8') > 1_048_576) {
result.body = result.body.slice(0, 1_048_576);


P2 Badge Truncate show-version response using UTF-8 byte boundaries

The 1MB cap checks UTF-8 byte length but truncates with slice(0, 1_048_576), which slices characters/code units instead of bytes. For multibyte content (e.g. emoji/CJK-heavy markdown), the response can still exceed the intended 1MB limit, so the endpoint does not actually enforce its documented size bound.



@coderabbitai bot left a comment


Actionable comments posted: 14

🤖 Fix all issues with AI agents
In `@public/index.html`:
- Around line 986-1028: When switching articles while the history panel is open,
fetchHistory can show stale data; update both loadArticle and newArticle to
reset the history panel state by programmatically closing the details element
(historySection.open = false), clearing the history list DOM
(historyList.innerHTML = ''), hiding the preview (historyPreview.style.display =
'none'), and resetting state.historyVersions = [] and state.selectedVersion =
null so the UI and state are consistent before loading the new article.
- Around line 610-619: In the restore method, avoid calling res.json()
unconditionally; first check res.ok on the fetch response (the restore function
using API_BASE and res) and only parse JSON when the response is OK, otherwise
read a safe fallback (e.g., res.text()) and throw an Error that includes the
status and any parsed JSON error.code if available; update restore to mirror the
pattern used by api.list/api.snapshot: if (!res.ok) read text (or try JSON
safely) and throw Object.assign(new Error(message), { code }) so non-JSON HTML
error pages (502/504) don't cause a SyntaxError that hides the real failure.

In `@src/lib/CmsService.js`:
- Around line 400-416: The ancestry walker in CmsService (variables walk,
walkLimit, draftSha, and use of this.graph.getNodeInfo) uses a hardcoded
walkLimit = 200 which can cause valid SHAs to be misclassified; change the logic
so if the loop exits due to reaching the walkLimit you throw a distinct
CmsValidationError (e.g., code 'history_walk_limit_exceeded') instead of
'invalid_version_for_article', and either make walkLimit configurable (read from
service config or an argument) or set it dynamically (e.g., to the article's
known history length) so callers can distinguish "too deep to verify" from a
truly foreign SHA; ensure the error message includes canonicalSlug and the limit
reached so recoverers can decide to retry with a larger limit.
- Around line 320-357: getArticleHistory currently only reads the draft ref
(draftRef) and throws if missing; change it to mirror _resolveArticleState by
also checking the published ref: compute publishedRef via
this._refFor(canonicalSlug, 'articles', 'published') or the equivalent used
elsewhere, call this.graph.readRef on draftRef first and if that returns falsy
call this.graph.readRef on the publishedRef to get sha, and then proceed using
that sha (adjust status handling if needed to reflect published vs draft).
Ensure you reference draftRef, publishedRef, sha, this._refFor, and
this.graph.readRef so callers can locate and update the correct code paths.
- Around line 337-354: getArticleHistory is performing two sequential awaits per
node (this.graph.getNodeInfo then this.graph.showNode) causing O(n) serial
subprocess calls; change the per-node logic to run both async calls in parallel
(use Promise.all on this.graph.getNodeInfo(current) and
this.graph.showNode(current)), then decode the message (this.codec.decode) and
push the version object into versions as before, preserving current =
info.parents?.[0] logic; alternatively consider replacing the whole walk with a
single batched git query if available to avoid per-node round trips when limit
is large.
- Around line 364-378: readVersion currently reads any commit SHA without
verifying it belongs to the article identified by slug; replicate the ancestry
validation used in restoreVersion to scope reads to the article. After computing
draftRef via this._refFor(canonicalSlug, 'articles') and draftSha
(this.graph.readRef), walk the article's lineage (the same graph ancestry walk
used in restoreVersion) to ensure the provided sha is one of the ancestor SHAs
for that draftRef, and throw a CmsValidationError (similar to the
article_not_found or a dedicated code like sha_not_in_lineage) if it is not;
keep the existing graph.showNode/codec.decode flow only for validated SHAs in
readVersion.
- Around line 427-432: The two timestamp fields updatedAt and restoredAt are
generated with separate new Date().toISOString() calls causing possible
millisecond differences; capture a single timestamp before building the object
(e.g., const now = new Date().toISOString()) and reuse that value for both
updatedAt and restoredAt in the object that includes ...restTrailers, status:
STATES.DRAFT, and restoredFromSha: sha so both fields are identical and atomic.
- Around line 418-421: The destructuring of decoded.trailers in both the
unpublish and restore methods can throw when trailers is undefined; update the
destructuring at both locations (the line in unpublish and the line in restore
where you write "const { ... } = decoded.trailers") to safely default to an
empty object (e.g., use decoded.trailers ?? {} or decoded.trailers || {}) so the
destructure won't throw when trailers is missing; keep the rest of the logic
unchanged.

In `@src/server/index.js`:
- Around line 204-211: The GET handlers for '/api/cms/history' and
'/api/cms/show-version' currently bubble non-validation errors up to the outer
catch (which calls sendError) and can leak internal error details; wrap the
logic that calls canonicalizeSlug and cms.getArticleHistory /
cms.getArticleVersion inside their own try/catch blocks (mirroring the POST
endpoints) and on catch call sendError(res, err) but ensure
non-CmsValidationError paths return a sanitized message (e.g., generic 'internal
error') or map to a new sanitized Error before calling sendError so raw
stack/file details from git or other internals are not exposed.
- Around line 214-216: Validate the incoming sha query param at the API boundary
before calling CmsService.readVersion or any git-related methods: in the GET
/api/cms/show-version handler (where rawSlug and sha are extracted) and the
corresponding restore handler, ensure sha matches the expected git SHA regex
(e.g. /^[0-9a-f]{40}$/) and return a 400 error if it does not; update callers to
only pass the validated sha to CmsService.readVersion and avoid letting
arbitrary strings reach this.graph.showNode(sha).
- Around line 219-223: The truncation is done by character-based
result.body.slice after checking byte length, which can exceed the 1MB UTF‑8
limit and split multi-byte sequences; fix by converting result.body to a Buffer,
slice the Buffer to the byte limit (1_048_576) and then set result.body to the
truncated Buffer.toString('utf8') so truncation is done by bytes and incomplete
UTF‑8 sequences are handled safely; update the trailers as currently done
(result.trailers = { ...result.trailers, truncated: 'true' }) and adjust the
logic around Buffer.byteLength, result.body and slice usage accordingly.
- Around line 227-244: The POST handlers (e.g., the '/api/cms/restore' block
that builds `body` via `req.on('data')` and then parses it for
`canonicalizeSlug`/`cms.restoreVersion`) currently accumulate the entire request
with no size cap; add a body size guard (e.g., 1MB) inside the `req.on('data')`
handler for all POST endpoints (`/api/cms/restore`, `/snapshot`, `/publish`,
`/unpublish`, `/revert`, `/upload`) that checks the growing `body` length, stops
processing (destroy/abort the request or stop reading) and immediately respond
with a 413 Payload Too Large (use `send` or `sendError`) when the limit is
exceeded, ensuring you also clean up listeners to prevent memory leaks before
returning. Ensure the same pattern is applied consistently where `body += c` is
used and that normal flow still reaches `req.on('end')` to parse JSON and call
`cms.*` methods or `send`/`sendError`.
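The SHA boundary check suggested above might be as simple as this sketch. SHA_RE and isValidSha are illustrative names; full 40-character lowercase hex SHAs are assumed.

```javascript
// Sketch: validate a git SHA at the API boundary before it reaches
// any git-backed method. Assumes full 40-hex lowercase SHAs.
const SHA_RE = /^[0-9a-f]{40}$/;

function isValidSha(sha) {
  return typeof sha === 'string' && SHA_RE.test(sha);
}
```

A handler would return 400 when isValidSha(sha) is false, before calling into the service layer.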

In `@test/git.test.js`:
- Around line 357-367: The test is confusing because two consecutive calls to
cms.saveSnapshot use the same title and the first call's result is discarded so
v1 refers to the second commit; update the setup so v1 captures the intended
first commit (either assign v1 = await cms.saveSnapshot(...) on the first save
or change the titles/payloads so the saved snapshots are unambiguous) and keep
the later save for v2, then call cms.restoreVersion({ slug: 'restore-chain',
sha: v1.sha }) and assert history via getArticleHistory as before.

In `@test/server.test.js`:
- Around line 248-267: The test "GET /show-version returns content for specific
SHA" creates a second snapshot but never asserts its response status; capture
the second fetch into a variable (e.g., snap2) and add an assertion like
expect(snap2.status).toBe(200) after the POST so setup failures are detected;
locate the second POST call in the test and update it to store the response and
assert its status before proceeding to fetch show-version.

Comment on lines 357 to 367
it('restoreVersion preserves history chain (commit count)', async () => {
await cms.saveSnapshot({ slug: 'restore-chain', title: 'v1', body: 'b1' });
const v1 = await cms.saveSnapshot({ slug: 'restore-chain', title: 'v1', body: 'b1' });
await cms.saveSnapshot({ slug: 'restore-chain', title: 'v2', body: 'b2' });

await cms.restoreVersion({ slug: 'restore-chain', sha: v1.sha });

// After 3 saves + 1 restore = 4 versions in history
const history = await cms.getArticleHistory({ slug: 'restore-chain' });
expect(history).toHaveLength(4);
});


🧹 Nitpick | 🔵 Trivial

Misleading test setup: v1 is actually the second version, not the first.

Lines 358–359 both save with title: 'v1'. The first save's result is discarded, and v1 captures the second commit's SHA. This makes the test confusing — it reads like "restore v1" but actually restores the second snapshot. The test passes by coincidence of the math (3 saves + 1 restore = 4), but the intent is obscured.

♻️ Proposed clarification
-    await cms.saveSnapshot({ slug: 'restore-chain', title: 'v1', body: 'b1' });
-    const v1 = await cms.saveSnapshot({ slug: 'restore-chain', title: 'v1', body: 'b1' });
-    await cms.saveSnapshot({ slug: 'restore-chain', title: 'v2', body: 'b2' });
+    await cms.saveSnapshot({ slug: 'restore-chain', title: 'v1', body: 'b1' });
+    const v2 = await cms.saveSnapshot({ slug: 'restore-chain', title: 'v2', body: 'b2' });
+    await cms.saveSnapshot({ slug: 'restore-chain', title: 'v3', body: 'b3' });
 
-    await cms.restoreVersion({ slug: 'restore-chain', sha: v1.sha });
+    await cms.restoreVersion({ slug: 'restore-chain', sha: v2.sha });

CmsService:
- Add ancestry validation to readVersion (was leaking cross-article content)
- Extract _validateAncestry helper shared by readVersion/restoreVersion
- Fall back to published ref in getArticleHistory for published-only articles
- Parallelize getNodeInfo + showNode calls (Promise.all)
- Distinct error code history_walk_limit_exceeded vs invalid_version_for_article
- Safe destructuring of decoded.trailers (|| {} guard)
- Single Date() call for updatedAt/restoredAt consistency

Server:
- Add readBody helper with 1 MB limit on all POST endpoints (DoS protection)
- Buffer-based truncation for show-version 1 MB cap (byte-accurate)
- SHA format validation (/^[0-9a-f]{40}$/) on show-version and restore
- Wrap GET /history and GET /show-version in try/catch (was leaking stack traces)
- Convert all POST handlers from req.on('end') to await readBody()

UI:
- api.restore: handle non-JSON error responses gracefully
- Refresh history panel on article switch when already open (stale data fix)

Tests:
- Fix misleading v1/v2 naming in restoreVersion chain test
- Assert second snapshot status in show-version server test

@coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@public/index.html`:
- Around line 1060-1078: restoreVersion can run while an autosave is in
progress; add the same guard used in save() and publish() to avoid racing with
the autosave. At the start of restoreVersion(), check state.saving and if true
abort (e.g., show a brief toast or return early) so the restore won't run
concurrently with the autosave; reference restoreVersion, state.saving, save(),
publish(), and the underlying _updateRef/CAS race to locate where to add the
guard.

In `@src/lib/CmsService.js`:
- Around line 393-411: readVersion currently only checks the draftRef and throws
article_not_found if no draft exists; mirror getArticleHistory's behavior by
falling back to the published ref when draftRef has no tip. Update readVersion
to compute both the draftRef and the publishedRef (using the same _refFor logic
used in getArticleHistory), call this.graph.readRef on draftRef and, if falsy,
on the publishedRef to obtain tipSha, then proceed to call
_validateAncestry(tipSha, sha, canonicalSlug) and load the node via
this.graph.showNode(sha); ensure the error is only thrown when neither ref
yields a tipSha so published-only articles resolve correctly.

In `@src/server/index.js`:
- Around line 260-284: The upload handler allows path traversal because filename
from the parsed body is used directly in path.join(tmpDir, filename); sanitize
the user-provided filename before composing filePath by taking only its basename
(or otherwise stripping any path separators), validate it is non-empty and does
not resolve outside tmpDir, then use that sanitized value when creating filePath
and calling fs.writeFileSync; update the block around readBody,
canonicalizeSlug, tmpDir, filePath, filename, and the call to cms.uploadAsset to
use the sanitized filename and ensure tmpDir cleanup remains intact.

In `@test/git.test.js`:
- Around line 334-342: Add two negative-path tests for readVersion: (1) call
cms.readVersion with a nonexistent slug and assert it rejects/throws with error
code 'article_not_found'; (2) create two articles via cms.saveSnapshot, then
call cms.readVersion for the first article using the SHA from the second article
and assert it rejects/throws with error code 'invalid_version_for_article'. Use
the existing test structure around readVersion and reference symbols
cms.saveSnapshot, cms.readVersion and the saved snapshot SHA (e.g., v1.sha) to
locate where to add these tests.

In `@test/server.test.js`:
- Around line 300-309: Add integration tests in test/server.test.js that assert
the server returns 400 for malformed SHA inputs for the two guarded endpoints:
call GET /api/cms/show-version with an invalid sha query (e.g.,
"not-a-valid-sha") and call POST /api/cms/restore with a JSON body containing
slug and an invalid sha (e.g., "ZZZZ"); each test should await the response and
expect res.status toBe(400). Reference the same endpoints used in the code (GET
/show-version and POST /restore) and the server's SHA_RE validation to ensure
these boundary checks are exercised.

Critical:
- Path traversal via filename in upload handler: sanitize to path.basename()

Major:
- readVersion now checks published ref fallback (consistent with getArticleHistory)
- Add negative-path tests for readVersion (article_not_found, cross-lineage)
- Add SHA format validation integration tests for show-version and restore

Minor:
- UI restoreVersion guards against state.saving (race with autosave)

@coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@src/lib/CmsService.js`:
- Around line 409-411: readVersion currently returns decoded.trailers directly
which can be undefined; add the same defensive guard used in restoreVersion to
ensure trailers is always an object. In the readVersion function, compute const
trailers = decoded.trailers || {}; and return { sha, title: decoded.title, body:
decoded.body, trailers } so callers always receive an object for trailers
(matching the JSDoc contract).

In `@src/server/index.js`:
- Around line 260-288: The temp directory created by fs.mkdtempSync (tmpDir) can
leak if cms.uploadAsset or the chunk-digest validation throws because fs.rmSync
cleanup is only in the happy path; refactor the POST /api/cms/upload handler to
ensure tmpDir is always removed by creating tmpDir before the try, then wrapping
the upload/validation logic in a try/finally (or nest a try around
cms.uploadAsset and the digest checks) and call fs.rmSync(tmpDir, { recursive:
true, force: true }) in the finally block; ensure tmpDir is declared in a scope
visible to the finally, preserve existing error handling (logError and
sendError), and keep using canonicalizeSlug, readBody, send, and sendError as
before.

CI Bot added 5 commits February 14, 2026 00:13

  • Prevents stale version history from previous article showing when creating a new article with the history panel already open.
  • Add || {} fallback to prevent TypeError when trailers is undefined, matching the defensive pattern already used in restoreVersion.
  • Ensures callers always receive an object for trailers, matching the defensive pattern in restoreVersion and the JSDoc contract.
  • Move tmpDir cleanup to a finally block so it's always removed, even when cms.uploadAsset or chunk-digest validation throws.
  • Document PR #4 round-3 review fixes (4 issues).
@flyingrobots
Owner Author

@coderabbitai review please 🥕

@coderabbitai

coderabbitai bot commented Feb 14, 2026

@flyingrobots: Understood. I will conduct an absolutely merciless, pedantic review of your version history browser implementation. Every edge case, every potential failure mode, every opportunity for improvement will be scrutinized with surgical precision. No stone shall remain unturned. 🔍

Initiating GOD-TIER review now...

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/server/index.js (1)

79-88: ⚠️ Potential issue | 🟠 Major

sendError still exposes raw err.message to clients on 500 responses.

Line 87 returns { error: err.message } for non-CmsValidationError exceptions. Internal git subprocess failures, file system errors, or codec exceptions will have their raw messages sent to the client. This can leak internal paths, library versions, or stack details.

Return a generic message for 500s and log the real error server-side (which logError already handles upstream).

🛡️ Proposed fix
 function sendError(res, err) {
   if (err instanceof CmsValidationError) {
     return send(res, 400, {
       error: err.message,
       code: err.code,
       field: err.field,
     });
   }
-  return send(res, 500, { error: err.message });
+  return send(res, 500, { error: 'Internal server error' });
 }
🤖 Fix all issues with AI agents
In `@src/server/index.js`:
- Around line 90-109: The readBody implementation uses repeated string
concatenation (body += chunk) which is O(n²) for large uploads; change readBody
to accumulate incoming chunks into an array (e.g., let chunks = []; push each
chunk in the 'data' handler, track bytes/limit as before) and on 'end' return a
single Buffer or string by using Buffer.concat(chunks, bytes) (or
Buffer.concat(...).toString() if text is required). Keep the existing
limit/req.destroy()/reject logic and reuse the same symbols (readBody, req,
limit, bytes) to locate and replace the current body variable handling.
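The chunk-array accumulation the prompt describes could be sketched as follows. readBody and the limit default mirror the names in the prompt; how the 413 response is sent is left to the caller.

```javascript
// Sketch: accumulate Buffer chunks and concat once on 'end', keeping
// the existing byte-limit / destroy / reject behavior.
function readBody(req, limit = 1_048_576) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    let bytes = 0;
    req.on('data', (chunk) => {
      bytes += chunk.length; // http request streams emit Buffer chunks
      if (bytes > limit) {
        req.destroy(); // stop reading further data
        const err = new Error('payload too large');
        err.statusCode = 413; // caller maps this to the HTTP response
        return reject(err);
      }
      chunks.push(chunk);
    });
    req.on('end', () =>
      resolve(Buffer.concat(chunks, bytes).toString('utf8')));
    req.on('error', reject);
  });
}
```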


@coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/server/index.js (1)

79-88: ⚠️ Potential issue | 🟠 Major

sendError still leaks raw err.message for non-validation errors.

Line 87 sends { error: err.message } for any non-CmsValidationError. Git subprocess failures, filesystem errors, and internal exceptions will have their messages forwarded verbatim to the client. In production, this can expose file paths, internal state, or dependency details.

🛡️ Proposed fix: sanitize non-validation errors
 function sendError(res, err) {
   if (err instanceof CmsValidationError) {
     return send(res, 400, {
       error: err.message,
       code: err.code,
       field: err.field,
     });
   }
-  return send(res, 500, { error: err.message });
+  return send(res, 500, { error: 'Internal server error' });
 }
🤖 Fix all issues with AI agents
In `@public/index.html`:
- Around line 722-725: The loadArticle function currently only calls
fetchHistory when the history panel is open, leaving stale state.historyVersions
and state.selectedVersion from the previous article when the panel is closed;
update loadArticle to unconditionally reset history state to a clean default
(clear state.historyVersions and null/initialise state.selectedVersion) just
like newArticle does, and then still call fetchHistory() conditionally if
document.getElementById('historySection').open so the panel refreshes when open.
- Around line 1043-1065: selectVersion has a race where multiple async
api.showVersion calls can return out-of-order and overwrite the preview; after
awaiting the fetch in selectVersion, verify the selection is still the same
(e.g. compare state.selectedVersion.sha or idx to the sha/idx captured at the
start of the function) before updating contentEl, previewEl or restoreBtn so
only the most recent selection updates the DOM.

In `@src/lib/CmsService.js`:
- Around line 436-441: Destructuring uses lowercase keys because codec.decode
lowercases trailer keys; add a concise inline comment above the
decode/destructure (near this.codec.decode, const trailers, and the
destructuring that removes updatedat/restoredfromsha/restoredat) explaining that
trailer-codec normalizes/decodes keys to lowercase so the subsequent camelCase
writes (updatedAt, restoredFromSha, restoredAt) are intentional and not a casing
bug; keep the comment short and reference codec.decode and the trailers variable
for clarity.
- Around line 320-344: Extract the hardcoded walkLimit (200) into a single
configurable value and use it everywhere so the ancestry check stays in sync
with the server clamp; specifically introduce a shared constant or a CmsService
constructor option (e.g., historyWalkLimit) and replace the local walkLimit in
_validateAncestry with that value, and update the server-side history clamp and
any callers like restoreVersion and readVersion to reference the same symbol
(HISTORY_WALK_LIMIT or this.historyWalkLimit) so all places use a single source
of truth.
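As a sketch of the single-source-of-truth shape this asks for — only `HISTORY_WALK_LIMIT` and the 200/50 values come from the PR; the module layout and the `clampHistoryLimit` helper name are illustrative:

```javascript
// One shared constant, exported from a single module; both the server's
// request clamp and the service's ancestry walk derive their bound from it.
const HISTORY_WALK_LIMIT = 200;

// Illustrative helper: normalize a caller-supplied limit into
// [1, HISTORY_WALK_LIMIT], falling back to the default for bad input.
function clampHistoryLimit(requested, fallback = 50) {
  const n = Number.isFinite(requested) ? requested : fallback;
  return Math.max(1, Math.min(n, HISTORY_WALK_LIMIT));
}
```

The server endpoint and `_validateAncestry` would then both reference the same symbol, so the two bounds cannot silently diverge.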

CI Bot added 9 commits February 14, 2026 00:33
sendError was forwarding raw err.message for non-CmsValidationError
exceptions, potentially exposing file paths, git subprocess details,
or internal state to clients.
Repeated string concatenation (body += chunk) is O(n²) for large
uploads. Accumulate chunks in an array and concat once on end.
When the history panel is closed and the user switches articles,
historyVersions and selectedVersion from the previous article
persisted until the panel was next opened. Now reset unconditionally.
Rapid version clicks could cause out-of-order responses to overwrite
the preview. Now checks if the selection is still current after the
await before updating the DOM.
The hardcoded walkLimit=200 in _validateAncestry and the server's
history limit clamp were independent values that could silently
diverge. Now both reference a single exported constant.
…tion

The destructured lowercase keys followed by camelCase writes look like
a casing bug without context about codec.decode's normalization behavior.
Triggers a 500 via GET /api/cms/show with a non-existent slug and
asserts the response body says 'Internal server error' with no
filesystem paths, .js extensions, or node_modules references.
The previous commit reset state.historyVersions and selectedVersion
but left stale HTML in historyList and a visible historyPreview.
Now matches the full reset pattern used in newArticle.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@public/index.html`:
- Around line 1075-1093: In restoreVersion, before any awaits, clear the
autosave timer and reset the editor dirty/saving flags to prevent the autosave
save() from racing the restore and from re-prompting; specifically, call
clearAutosave() (the existing autosave cleanup) and set state.dirty = false and
set state.saving = true immediately after the confirm and before calling
api.restore({ slug: state.currentSlug, sha }), then in the try/finally restore
state.saving = false (and restart autosave as appropriate) so loadArticle()
won't trigger a discard prompt or collide with an in-flight autosave.

In `@src/lib/CmsService.js`:
- Around line 349-389: Clamp the requested limit inside getArticleHistory to
avoid unbounded history walks: in the async function getArticleHistory({ slug,
limit = 50 }) immediately normalize limit to a safe range (e.g. min 1 and max
HISTORY_WALK_LIMIT or a module-constant cap) before using it in the while loop;
reference the local variable limit and the function name getArticleHistory to
locate where to apply the clamp so subsequent iterations use the clamped value
rather than trusting the caller.

In `@src/server/index.js`:
- Around line 209-221: The parsing logic treats a parsed 0 as falsy because of
`parseInt(rawLimit, 10) || 50`, causing `"0"` to become 50; update the GET
/api/cms/history handler so you first parse the limit into a variable (e.g.,
parsed = parseInt(rawLimit, 10)) and then set limit using a NaN check instead of
|| (e.g., limit = Math.max(1, Math.min(HISTORY_WALK_LIMIT, Number.isNaN(parsed)
? 50 : parsed))). Modify the block around canonicalizeSlug, HISTORY_WALK_LIMIT
and the call to cms.getArticleHistory so send/res logic and error handling
(send, sendError, logError) remain unchanged.

In `@test/server.test.js`:
- Around line 254-255: Assert the response status before consuming the body:
move the expect(snap1.status).toBe(200) to occur immediately after obtaining
snap1 (before calling await snap1.json()), and do the same for the other
response (snap2) referenced around lines 276–277 so that status is checked prior
to calling .json() to avoid masking non-JSON errors; update the test blocks that
use the snap1 and snap2 variables accordingly.

Comment on lines 1075 to 1093
async restoreVersion() {
  if (!state.selectedVersion || !state.currentSlug || state.saving) return;
  const { sha } = state.selectedVersion;
  if (!confirm(`Restore version ${sha.slice(0, 7)}? This creates a new draft with the old content.`)) return;

  try {
    await api.restore({ slug: state.currentSlug, sha });
    toast('Version restored', 'success');
    // Close history section and reload article
    document.getElementById('historySection').open = false;
    await this.loadArticle(state.currentSlug);
  } catch (err) {
    if (err.code === 'invalid_state_transition') {
      toast('Cannot restore: unpublish the article first', 'error');
    } else {
      toast('Restore failed: ' + err.message, 'error');
    }
  }
},

⚠️ Potential issue | 🟠 Major

restoreVersion races with the autosave timer — the timer is not cleared before the async restore call.

The state.saving guard at line 1076 blocks restore while a save is in-flight, but the reverse isn't protected. Between the confirm() returning and loadArticle completing (which calls clearAutosave), the 3-second autosave timer can fire. Since restoreVersion never sets state.saving = true, the autosave's save() call proceeds and races the restore — both attempt to CAS-update the same draft ref, and the loser gets a confusing CAS conflict error.

Additionally, if the user had unsaved edits, loadArticle (line 1085) prompts again with "You have unsaved changes. Discard them?" — right after they already confirmed the restore. If they cancel this second dialog, the editor shows stale pre-restore content while the server already has the restored version.

Fix both by clearing autosave and resetting dirty state before the async work:

Proposed fix
       async restoreVersion() {
         if (!state.selectedVersion || !state.currentSlug || state.saving) return;
         const { sha } = state.selectedVersion;
         if (!confirm(`Restore version ${sha.slice(0, 7)}? This creates a new draft with the old content.`)) return;

+        // Prevent autosave from racing the restore and suppress
+        // the redundant "unsaved changes" prompt in loadArticle.
+        UI.clearAutosave();
+        state.dirty = false;
+
         try {
           await api.restore({ slug: state.currentSlug, sha });
           toast('Version restored', 'success');
           document.getElementById('historySection').open = false;
           await this.loadArticle(state.currentSlug);
         } catch (err) {
🤖 Prompt for AI Agents
In `@public/index.html` around lines 1075 - 1093, In restoreVersion, before any
awaits, clear the autosave timer and reset the editor dirty/saving flags to
prevent the autosave save() from racing the restore and from re-prompting;
specifically, call clearAutosave() (the existing autosave cleanup) and set
state.dirty = false and set state.saving = true immediately after the confirm
and before calling api.restore({ slug: state.currentSlug, sha }), then in the
try/finally restore state.saving = false (and restart autosave as appropriate)
so loadArticle() won't trigger a discard prompt or collide with an in-flight
autosave.

CI Bot added 4 commits February 14, 2026 01:13
Clear the autosave timer and reset dirty state after the user confirms
but before the async restore call. Prevents the 3-second autosave from
firing mid-restore (CAS conflict) and the redundant "unsaved changes"
prompt when loadArticle is called after the restore completes.
Defense-in-depth: clamp the caller-provided limit to
[1, HISTORY_WALK_LIMIT] inside the service method itself so future
direct callers (CLI, tests) can't trigger unbounded ancestry walks.
parseInt('0') || 50 evaluates to 50 because 0 is falsy. Use
Number.isNaN to correctly distinguish missing/invalid input from
an explicit zero (which gets clamped to 1 by Math.max).
Swap the order so SyntaxError from non-JSON error responses
doesn't mask the real status code failure during CI.
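The falsy-zero pitfall from the parseInt commit above, demonstrated side by side — `HISTORY_WALK_LIMIT` (200) and the 50 default come from the PR; the two helper names are illustrative:

```javascript
const HISTORY_WALK_LIMIT = 200;

// Before: `|| 50` treats a parsed 0 as "missing" because 0 is falsy.
function limitWithOr(raw) {
  return Math.max(1, Math.min(HISTORY_WALK_LIMIT, parseInt(raw, 10) || 50));
}

// After: Number.isNaN distinguishes "unparseable" from "parsed to 0".
function limitWithNaNCheck(raw) {
  const parsed = parseInt(raw, 10);
  return Math.max(1, Math.min(HISTORY_WALK_LIMIT, Number.isNaN(parsed) ? 50 : parsed));
}

console.log(limitWithOr('0'));       // 50 — explicit zero silently becomes the default
console.log(limitWithNaNCheck('0')); // 1  — zero is kept, then clamped up by Math.max
```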
@flyingrobots
Copy link
Owner Author

@coderabbitai review please 🥕

@coderabbitai
Copy link

coderabbitai bot commented Feb 14, 2026

@flyingrobots: Acknowledged. Initiating another GOD-TIER NIGHTMARE REVIEW pass of your version history implementation. Every line will face the crucible of pedantic scrutiny. Prepare for absolutely zero mercy. 🔥

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@flyingrobots flyingrobots merged commit f370e79 into main Feb 14, 2026
4 checks passed
@flyingrobots flyingrobots deleted the ce3-version-history branch February 14, 2026 09:26
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/server/index.js (1)

132-147: 🧹 Nitpick | 🔵 Trivial

GET /list and /show lack per-endpoint try-catch, unlike every other endpoint.

Lines 135–147 let errors from canonicalizeKind, canonicalizeSlug, and cms.readArticle bubble to the outer catch at line 300. This works (same logError + sendError path), but every new/refactored endpoint wraps its own try-catch. The inconsistency means a future maintainer adding code between the outer try and these handlers could inadvertently change error behavior. Not a bug today, but messy.

🤖 Fix all issues with AI agents
In `@public/index.html`:
- Around line 1005-1047: fetchHistory can be overwritten by a late response from
a previous slug; capture the slug at the start (e.g. const slug =
state.currentSlug) and, after awaiting api.history() (because versions may
return late), bail if state.currentSlug !== slug before mutating
state.historyVersions, rendering listEl, or touching state.selectedVersion;
implement the same staleness-guard pattern used in selectVersion so only the
most-recent fetch updates the DOM and state.

In `@src/lib/CmsService.js`:
- Around line 354-355: getArticleHistory currently computes effectiveLimit using
Math.min/Math.max which yields NaN for non-finite inputs (e.g. limit: NaN) and
makes the loop skip silently; fix by validating the incoming limit before
computing effectiveLimit: in getArticleHistory({ slug, limit = 50 }) check
Number.isFinite(limit) and that limit is >= 0 (or an integer if you prefer); if
the check fails throw a TypeError (or RangeError) explaining the limit must be a
finite number, otherwise compute effectiveLimit = Math.max(1, Math.min(limit,
HISTORY_WALK_LIMIT)) and proceed — this ensures callers get a clear error
instead of an empty result when they pass NaN or Infinity and references the
function name getArticleHistory, the variable effectiveLimit, HISTORY_WALK_LIMIT
and the while condition that compares versions.length to effectiveLimit.

In `@src/server/index.js`:
- Around line 209-222: The inner const named parsed shadows an outer parsed
(from url.parse) which can confuse readers; rename the inner variable to
parsedLimit (or similar) in the GET /api/cms/history handler where you compute
parseInt(rawLimit, 10) and then use parsedLimit in the subsequent limit
calculation (Math.max(1, Math.min(HISTORY_WALK_LIMIT, Number.isNaN(parsedLimit)
? 50 : parsedLimit))) before calling cms.getArticleHistory({ slug, limit });
ensure you only change the identifier (not behavior) in this block so functions
like canonicalizeSlug, send, and sendError remain unaffected.

In `@test/server.test.js`:
- Around line 329-340: The test "500 responses return generic message without
internal details" currently uses tautological checks on data.error; update it to
assert the response does not contain known internal details by inspecting the
full serialized response (e.g., use JSON.stringify(data)) and asserting it does
not match expected sensitive patterns such as the original slug path or
repository refs and stack traces (for example, ensure JSON.stringify(data) does
not match /refs\/_blog/ and does not match /Error: / or file-path patterns);
keep the existing status and generic error equality checks but replace the three
naive regex checks on data.error with these stronger assertions against the
serialized response (refer to the test case name and the variable data to locate
where to change assertions).
- Around line 294-298: The test does not assert the HTTP status for the
verification fetch (const show = await
fetch(`${baseUrl}/api/cms/show?slug=srv-restore`)); add an assertion on
show.status (e.g., expect(show.status).toBe(200)) before calling await
show.json(), so failures surface the real HTTP status instead of producing a
misleading JSON/property error when verifying article.title.

Comment on lines +1005 to +1047
/* Version History */
async fetchHistory() {
  if (!state.currentSlug) return;
  const listEl = document.getElementById('historyList');
  listEl.innerHTML = '<div class="skeleton" style="height:1.8em"></div>';
  document.getElementById('historyPreview').style.display = 'none';
  state.selectedVersion = null;

  try {
    const versions = await api.history(state.currentSlug);
    state.historyVersions = versions;
    listEl.innerHTML = '';

    versions.forEach((v, idx) => {
      const div = document.createElement('div');
      div.className = 'history-item';
      div.dataset.idx = idx;

      const shaSpan = document.createElement('span');
      shaSpan.className = 'hist-sha';
      shaSpan.textContent = v.sha.slice(0, 7);

      const titleSpan = document.createElement('span');
      titleSpan.className = 'hist-title';
      titleSpan.textContent = v.title;

      const statusSpan = document.createElement('span');
      statusSpan.className = 'hist-status';
      statusSpan.textContent = v.status;

      const dateSpan = document.createElement('span');
      dateSpan.className = 'hist-date';
      dateSpan.textContent = relTime(v.date);

      div.append(shaSpan, titleSpan, statusSpan, dateSpan);
      div.onclick = () => this.selectVersion(v.sha, idx);
      listEl.appendChild(div);
    });
  } catch (err) {
    toast('Failed to load history', 'error');
    listEl.innerHTML = '';
  }
},

⚠️ Potential issue | 🟠 Major

fetchHistory has no staleness guard — late-arriving response from a previous article silently overwrites the current article's history.

You correctly added a staleness guard in selectVersion (line 1064: if (state.selectedVersion?.sha !== sha) return), but fetchHistory has no equivalent. Scenario: panel is open, user clicks article A → fetchHistory fires for A → user quickly clicks article B → loadArticle resets state, fires fetchHistory for B → A's response arrives after B's reset but before B's response → A's versions are rendered and assigned to state.historyVersions for article B.

🐛 Proposed fix: capture slug at call time, bail if stale
       async fetchHistory() {
         if (!state.currentSlug) return;
+        const slug = state.currentSlug;
         const listEl = document.getElementById('historyList');
         listEl.innerHTML = '<div class="skeleton" style="height:1.8em"></div>';
         document.getElementById('historyPreview').style.display = 'none';
         state.selectedVersion = null;

         try {
-          const versions = await api.history(state.currentSlug);
+          const versions = await api.history(slug);
+          // Guard: article may have changed during fetch
+          if (state.currentSlug !== slug) return;
           state.historyVersions = versions;
           listEl.innerHTML = '';

           versions.forEach((v, idx) => {
🤖 Prompt for AI Agents
In `@public/index.html` around lines 1005 - 1047, fetchHistory can be overwritten
by a late response from a previous slug; capture the slug at the start (e.g.
const slug = state.currentSlug) and, after awaiting api.history() (because
versions may return late), bail if state.currentSlug !== slug before mutating
state.historyVersions, rendering listEl, or touching state.selectedVersion;
implement the same staleness-guard pattern used in selectVersion so only the
most-recent fetch updates the DOM and state.

Comment on lines +354 to +355
async getArticleHistory({ slug, limit = 50 }) {
  const effectiveLimit = Math.max(1, Math.min(limit, HISTORY_WALK_LIMIT));

🧹 Nitpick | 🔵 Trivial

effectiveLimit silently swallows NaN — returns empty array instead of erroring.

If a direct caller passes limit: NaN, then Math.min(NaN, 200) → NaN and Math.max(1, NaN) → NaN. The while condition versions.length < NaN is always false, so the method returns [] with no error. The server endpoint guards against this, but as a public API method, silently returning nothing for nonsensical input is surprising.

   async getArticleHistory({ slug, limit = 50 }) {
-    const effectiveLimit = Math.max(1, Math.min(limit, HISTORY_WALK_LIMIT));
+    const effectiveLimit = Math.max(1, Math.min(Number(limit) || 50, HISTORY_WALK_LIMIT));
🤖 Prompt for AI Agents
In `@src/lib/CmsService.js` around lines 354 - 355, getArticleHistory currently
computes effectiveLimit using Math.min/Math.max which yields NaN for non-finite
inputs (e.g. limit: NaN) and makes the loop skip silently; fix by validating the
incoming limit before computing effectiveLimit: in getArticleHistory({ slug,
limit = 50 }) check Number.isFinite(limit) and that limit is >= 0 (or an integer
if you prefer); if the check fails throw a TypeError (or RangeError) explaining
the limit must be a finite number, otherwise compute effectiveLimit =
Math.max(1, Math.min(limit, HISTORY_WALK_LIMIT)) and proceed — this ensures
callers get a clear error instead of an empty result when they pass NaN or
Infinity and references the function name getArticleHistory, the variable
effectiveLimit, HISTORY_WALK_LIMIT and the while condition that compares
versions.length to effectiveLimit.

Comment on lines +209 to +222
// GET /api/cms/history?slug=xxx&limit=50
if (req.method === 'GET' && pathname === '/api/cms/history') {
  try {
    const { slug: rawSlug, limit: rawLimit } = query;
    if (!rawSlug) return send(res, 400, { error: 'slug required' });
    const slug = canonicalizeSlug(rawSlug);
    const parsed = parseInt(rawLimit, 10);
    const limit = Math.max(1, Math.min(HISTORY_WALK_LIMIT, Number.isNaN(parsed) ? 50 : parsed));
    return send(res, 200, await cms.getArticleHistory({ slug, limit }));
  } catch (err) {
    logError(err);
    return sendError(res, err);
  }
}

🧹 Nitpick | 🔵 Trivial

Variable parsed on line 215 shadows the outer parsed from line 118.

Both declarations are const parsed — the inner one (parseInt(rawLimit, 10)) shadows the outer one (url.parse(req.url, true)). Functionally harmless due to block scoping, but a reader scanning this function will trip on the name collision. Rename to parsedLimit or similar.

-          const parsed = parseInt(rawLimit, 10);
-          const limit = Math.max(1, Math.min(HISTORY_WALK_LIMIT, Number.isNaN(parsed) ? 50 : parsed));
+          const parsedLimit = parseInt(rawLimit, 10);
+          const limit = Math.max(1, Math.min(HISTORY_WALK_LIMIT, Number.isNaN(parsedLimit) ? 50 : parsedLimit));
🤖 Prompt for AI Agents
In `@src/server/index.js` around lines 209 - 222, The inner const named parsed
shadows an outer parsed (from url.parse) which can confuse readers; rename the
inner variable to parsedLimit (or similar) in the GET /api/cms/history handler
where you compute parseInt(rawLimit, 10) and then use parsedLimit in the
subsequent limit calculation (Math.max(1, Math.min(HISTORY_WALK_LIMIT,
Number.isNaN(parsedLimit) ? 50 : parsedLimit))) before calling
cms.getArticleHistory({ slug, limit }); ensure you only change the identifier
(not behavior) in this block so functions like canonicalizeSlug, send, and
sendError remain unaffected.

Comment on lines +294 to +298
// Verify content was restored
const show = await fetch(`${baseUrl}/api/cms/show?slug=srv-restore`);
const article = await show.json();
expect(article.title).toBe('Original');
});

⚠️ Potential issue | 🟡 Minor

Missing status assertion on the verification show response.

Line 295 fetches /api/cms/show?slug=srv-restore and line 296 immediately parses JSON, but show.status is never asserted. If the show endpoint returns a non-200 (e.g., 500), the article.title assertion on line 297 would fail with a confusing error instead of surfacing the real HTTP status.

     const show = await fetch(`${baseUrl}/api/cms/show?slug=srv-restore`);
+    expect(show.status).toBe(200);
     const article = await show.json();
🤖 Prompt for AI Agents
In `@test/server.test.js` around lines 294 - 298, The test does not assert the
HTTP status for the verification fetch (const show = await
fetch(`${baseUrl}/api/cms/show?slug=srv-restore`)); add an assertion on
show.status (e.g., expect(show.status).toBe(200)) before calling await
show.json(), so failures surface the real HTTP status instead of producing a
misleading JSON/property error when verifying article.title.

Comment on lines +329 to +340
it('500 responses return generic message without internal details', async () => {
  // GET /api/cms/show with a non-existent slug throws a plain Error (not
  // CmsValidationError), which routes through sendError as a 500.
  const res = await fetch(`${baseUrl}/api/cms/show?slug=does-not-exist`);
  const data = await res.json();
  expect(res.status).toBe(500);
  expect(data.error).toBe('Internal server error');
  // Must NOT contain filesystem paths or internal details
  expect(data.error).not.toMatch(/\//);
  expect(data.error).not.toMatch(/node_modules/);
  expect(data.error).not.toMatch(/\.js/);
});

🧹 Nitpick | 🔵 Trivial

500-response test: assertions on data.error are tautological.

Lines 337–339 assert that "Internal server error" doesn't match /\//, /node_modules/, or /\.js/. These will always pass for any hardcoded string that doesn't contain those patterns — they don't actually prove the server suppressed internal details. A more meaningful assertion would check that the raw stack trace or the original error message (e.g., the filesystem path from the thrown Error) is absent. As-is, if someone changes sendError to leak details but not in those exact patterns, the test is blind.

Consider asserting against a known internal detail (e.g., the slug path or refs/) that would appear in the underlying error:

expect(JSON.stringify(data)).not.toMatch(/refs\/_blog/);
🤖 Prompt for AI Agents
In `@test/server.test.js` around lines 329 - 340, The test "500 responses return
generic message without internal details" currently uses tautological checks on
data.error; update it to assert the response does not contain known internal
details by inspecting the full serialized response (e.g., use
JSON.stringify(data)) and asserting it does not match expected sensitive
patterns such as the original slug path or repository refs and stack traces (for
example, ensure JSON.stringify(data) does not match /refs\/_blog/ and does not
match /Error: / or file-path patterns); keep the existing status and generic
error equality checks but replace the three naive regex checks on data.error
with these stronger assertions against the serialized response (refer to the
test case name and the variable data to locate where to change assertions).
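One way to make the leak check concrete is a small predicate over the serialized body. The patterns below are the reviewer's own examples (refs/_blog, stack-trace fragments, node_modules); the helper name is illustrative:

```javascript
// Scan the full serialized response for known-sensitive fragments that the
// underlying error would contain, rather than pattern-matching the
// hardcoded generic string.
function leaksInternalDetails(body) {
  const serialized = JSON.stringify(body);
  return [/refs\/_blog/, /Error: /, /node_modules/].some((re) => re.test(serialized));
}

// In the test, this replaces the tautological per-pattern checks:
//   expect(leaksInternalDetails(data)).toBe(false);
```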
