
Use canonical JSON for action hashing#80

Merged
leynos merged 2 commits into main from
codex/replace-manual-hashing-with-canonical-serialization
Aug 8, 2025

Conversation

@leynos
Owner

@leynos leynos commented Aug 8, 2025

Summary

  • serialize Action with serde and hash its canonical JSON
  • document and test new hashing approach

closes #35

Testing

  • make fmt
  • make lint
  • make test
  • make markdownlint
  • make nixie (fails: too many arguments for mermaid-cli)

https://chatgpt.com/codex/tasks/task_e_68953c19c0008322b8e2d6b8b6f8a876

Summary by Sourcery

Use canonical JSON serialization for action hashing to ensure stable, order-independent SHA-256 digests

Enhancements:

  • Replace custom Recipe and field-by-field hashing with serde_json canonical JSON serialization in ActionHasher
  • Derive Serialize on Action and configure optional fields to be skipped when absent

Documentation:

  • Update design documentation to reference canonical JSON-based action deduplication

Tests:

  • Update hasher unit tests and snapshots to use new digest values derived from canonical JSON
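The stability claim above can be illustrated with a std-only Rust sketch: sorting keys before serialisation makes the byte stream, and therefore the digest, independent of field insertion order. `BTreeMap` and `DefaultHasher` are stand-ins for the real serde and SHA-256 machinery, and the field values are hypothetical.

```rust
// Why canonical (key-sorted) serialisation yields stable hashes.
// BTreeMap orders keys; DefaultHasher stands in for SHA-256.
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

fn canonical_json(fields: &BTreeMap<&str, &str>) -> String {
    let body: Vec<String> = fields
        .iter()
        .map(|(k, v)| format!("\"{k}\":\"{v}\""))
        .collect();
    format!("{{{}}}", body.join(","))
}

fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

fn main() {
    // Insert fields in two different orders; BTreeMap canonicalises both.
    let mut a = BTreeMap::new();
    a.insert("recipe", "cc -o $out $in");
    a.insert("description", "compile");
    let mut b = BTreeMap::new();
    b.insert("description", "compile");
    b.insert("recipe", "cc -o $out $in");
    assert_eq!(canonical_json(&a), canonical_json(&b));
    assert_eq!(
        digest(canonical_json(&a).as_bytes()),
        digest(canonical_json(&b).as_bytes())
    );
    println!("{}", canonical_json(&a));
    // → {"description":"compile","recipe":"cc -o $out $in"}
}
```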

@sourcery-ai
Contributor

sourcery-ai Bot commented Aug 8, 2025

Reviewer's Guide

This PR refactors the Action hashing mechanism to use canonical JSON serialization via serde_json, replacing manual field-by-field hashing to ensure stable digests as the Action struct evolves. It also derives Serialize on Action and updates the documentation and tests to align with the new approach.

Class diagram for updated Action struct serialization

classDiagram
    class Action {
        +Recipe recipe
        +Option~String~ description
        +Option~String~ depfile
        +Option~String~ deps_format
        +Option~String~ pool
        +bool restat
    }
    Action : <<Serialize>>

Class diagram for refactored ActionHasher

classDiagram
    class ActionHasher {
        +hash(action: &Action) String
    }

File-Level Changes

Change Details Files
Switch action hashing to canonical JSON serialization
  • Remove custom manual hashing logic (hash_recipe, hash_optional_fields, etc.)
  • Serialize Action via serde_json::to_vec and compute SHA-256 digest
  • Panics if serialization fails
src/hasher.rs
Derive Serialize on Action and configure optional fields
  • Add serde::Serialize import and #[derive(Serialize)] on Action struct
  • Annotate optional fields with #[serde(skip_serializing_if = "Option::is_none")]
src/ir.rs
Update hasher tests with new expected digests
  • Replace old hard-coded hash values with new ones reflecting canonical JSON hashing
tests/hasher_tests.rs
Document canonical JSON approach in module and design docs
  • Update hasher.rs comments to describe canonical JSON serialization
  • Revise netsuke-design.md to mention canonical JSON in deduplication description
src/hasher.rs
docs/netsuke-design.md

Assessment against linked issues

Issue Objective Addressed Explanation
#35 Adopt a canonical serialization format (e.g., serde JSON canonical) for hashing Action objects to ensure hash stability across code changes.
#35 Refactor the Action hashing implementation to use the new serialization-based approach, replacing manual byte sequence construction.
#35 Document and test the new hashing approach to ensure maintainability and correctness.

Possibly linked issues


Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@coderabbitai
Contributor

coderabbitai Bot commented Aug 8, 2025

Summary by CodeRabbit

  • Documentation

    • Clarified that build action deduplication uses a SHA-256 hash of a canonical JSON serialisation of action properties.
  • Bug Fixes

    • Improved error handling for action serialisation failures during IR generation.
  • Refactor

    • Switched to canonical JSON serialisation for action hashing, replacing manual hashing logic.
    • Updated error variants to cover more specific IR generation and serialisation cases.
  • Tests

    • Updated expected hash outputs in action hashing tests to reflect the new hashing approach.
  • Chores

    • Added the serde_json_canonicalizer dependency.

Walkthrough

Update the Action hashing mechanism to use canonical JSON serialisation with SHA-256, replacing the previous manual field-by-field approach. Adjust the Action struct to support serialisation, update documentation and comments to reflect the new hashing method, and revise tests to match the new hash outputs.

Changes

Cohort / File(s) Change Summary
Hasher Implementation
src/hasher.rs
Replace manual hashing logic with canonical JSON serialisation and SHA-256; remove old helpers and update documentation/comments.
IR Struct Serialisation
src/ir.rs
Derive Serialize for Action; add #[serde(skip_serializing_if = "Option::is_none")] to optional fields; extend IrGenError with serialization error variant; propagate serialization errors in IR generation.
Documentation and Comments
docs/netsuke-design.md
Update documentation to describe new hashing method; clarify deduplication process and update code comments.
Hasher Tests
tests/hasher_tests.rs
Update expected hash values in test cases to match the new canonical JSON SHA-256 outputs; adjust test assertions to handle Result.
Dependency Management
Cargo.toml
Add serde_json_canonicalizer crate dependency for canonical JSON serialization.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant ActionHasher
    participant serde_json
    participant SHA256

    Caller->>ActionHasher: hash(&Action)
    ActionHasher->>serde_json: serialize(Action)
    serde_json-->>ActionHasher: canonical JSON bytes
    ActionHasher->>SHA256: compute digest(JSON bytes)
    SHA256-->>ActionHasher: hash digest
    ActionHasher-->>Caller: hex string of digest

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Assessment against linked issues

Objective Addressed Explanation
Adopt canonical serialisation format for Action hashing stability (#35)
Ensure hash stability, maintainability, and reduce collision risk via serialisation (#35)

Assessment against linked issues: Out-of-scope changes

No out-of-scope changes identified.

Possibly related PRs

Suggested reviewers

  • codescene-delta-analysis

Poem

In bytes and hashes, structure’s fate,
Now JSON guides what we create.
SHA-256 hums a stable tune,
No more fields to hand-prune.
Canonical, crisp, and neat,
Deduplication’s job’s complete!
🚀

Warning

Review ran into problems

🔥 Problems

Errors were encountered while retrieving linked issues.

Errors (1)
  • SHA-256: Entity not found: Issue - Could not find referenced Issue.

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fbb750a and f5629db.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • Cargo.toml (1 hunks)
  • docs/netsuke-design.md (3 hunks)
  • src/hasher.rs (2 hunks)
  • src/ir.rs (7 hunks)
  • tests/hasher_tests.rs (6 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
Cargo.toml

📄 CodeRabbit Inference Engine (AGENTS.md)

Cargo.toml: Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.
Mandate caret requirements for all dependencies: All crate versions specified in Cargo.toml must use SemVer-compatible caret requirements (e.g., some-crate = "1.2.3").
Prohibit unstable version specifiers: The use of wildcard (*) or open-ended inequality (>=) version requirements is strictly forbidden. Tilde requirements (~) should only be used where a dependency must be locked to patch-level updates for a specific, documented reason.

Files:

  • Cargo.toml
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.

Files:

  • tests/hasher_tests.rs
  • src/hasher.rs
  • src/ir.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • tests/hasher_tests.rs
  • src/hasher.rs
  • src/ir.rs
docs/**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

docs/**/*.md: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Proactively update the relevant file(s) in the docs/ directory to reflect the latest state when new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve.

Files:

  • docs/netsuke-design.md
**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.md: Documentation must use en-GB-oxendict spelling and grammar, except for the naming of the "LICENSE" file.
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Mermaid diagrams in Markdown files by running make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks in Markdown must be wrapped at 120 columns.
Tables and headings in Markdown must not be wrapped.
Use dashes (-) for list bullets in Markdown.
Use GitHub-flavoured Markdown footnotes ([^1]) for references and footnotes.

Files:

  • docs/netsuke-design.md

⚙️ CodeRabbit Configuration File

**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-GB-oxendict (-ize / -our) spelling and grammar
  • Headings must not be wrapped.
  • Documents must start with a level 1 heading
  • Headings must correctly increase or decrease by no more than one level at a time
  • Use GitHub-flavoured Markdown style for footnotes and endnotes.
  • Numbered footnotes must be numbered by order of appearance in the document.

Files:

  • docs/netsuke-design.md
🧬 Code Graph Analysis (2)
tests/hasher_tests.rs (1)
src/hasher.rs (1)
  • hash (56-64)
src/ir.rs (1)
src/hasher.rs (1)
  • hash (56-64)
🔍 MCP Research (1 server)

Deepwiki:

  • The documentation clarifies that the deduplication of build actions in the BuildGraph IR uses a SHA-256 hash computed over a canonical JSON serialization of the action's properties, rather than a generic hash of the action's properties. This ensures stable and unique action identifiers for Ninja build rules. (docs/netsuke-design.md)

  • The IrGenError enum in src/ir.rs includes new error variants MultipleRules, EmptyRule, DuplicateOutput, and ActionSerialisation(#[from] serde_json::Error) to handle serialization errors during action processing, while removing the DependencyNotFound variant. (src/ir.rs)

  • The Action struct in src/ir.rs derives Serialize and uses #[serde(skip_serializing_if = "Option::is_none")] on optional fields (description, depfile, deps_format, pool) to omit them from serialization when absent, supporting canonical JSON serialization. (src/ir.rs)

  • The ActionHasher in src/hasher.rs was refactored to serialize the entire Action struct into canonical JSON using serde_json_canonicalizer::to_writer and hash the resulting byte stream with SHA-256, replacing manual hashing of individual fields. The hash method now returns Result<String, serde_json::Error>. (src/hasher.rs)

⏰ Context from checks skipped due to timeout of 120000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-test
🔇 Additional comments (9)
Cargo.toml (2)

20-20: Add canonical JSON dependency — LGTM

The dependency addition aligns with the canonical JSON hashing design. The version requirement "0.3" is a valid caret range under Cargo semantics.


20-20: Confirm serde_json_canonicalizer at version “0.3.0” is up-to-date

The latest stable release on crates.io is 0.3.0, which satisfies the Cargo.toml requirement of "0.3". No version bump necessary. Run cargo audit to verify there are no security advisories.

tests/hasher_tests.rs (1)

76-79: Handle Result and assert digest — LGTM

The test unpacks the Result via expect and asserts the canonical digest. The cases exercise optional fields, special characters, empty strings, and ordering.

src/hasher.rs (1)

56-64: Canonical JSON hashing implementation — solid and deterministic

Serialise to canonical JSON directly into the hasher with sorted keys and compact output; return a lower-case hex digest. Error propagation via serde_json::Error is correct.

docs/netsuke-design.md (2)

916-918: Document canonical JSON-based action deduplication — LGTM

The text accurately specifies SHA-256 over canonical JSON for action IDs, matching the implementation.


1266-1287: Align error enum documentation with code — LGTM

The snippet now includes MultipleRules, EmptyRule, DuplicateOutput, CircularDependency, and ActionSerialisation with serde_json::Error, matching src/ir.rs. Maintain this in lockstep with code.

src/ir.rs (3)

44-56: Derive Serialize and skip absent optional fields — LGTM

Deriving Serialize and omitting None fields is required for stable canonical JSON and digest stability.


106-108: Add ActionSerialisation error and surface serde_json failures

Wrap serde_json::Error to preserve context when hashing fails. This complies with the error-handling guidelines.


192-208: Register actions with canonical hash and deduplicate — LGTM

Construct Action, compute its canonical JSON digest via ActionHasher, map errors to IrGenError, and deduplicate via the actions map keyed by hash.
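A std-only sketch of the deduplication step described above, assuming a map keyed by digest string. The names and values are hypothetical; in the PR the key is the SHA-256 hex digest produced by ActionHasher.

```rust
// Hash-keyed deduplication: identical actions share one map entry.
use std::collections::HashMap;

fn register<'a>(actions: &mut HashMap<String, &'a str>, digest: &str, action: &'a str) -> String {
    // entry().or_insert() keeps the first registration and ignores duplicates.
    actions.entry(digest.to_string()).or_insert(action);
    digest.to_string()
}

fn main() {
    let mut actions: HashMap<String, &str> = HashMap::new();
    let id1 = register(&mut actions, "abc123", "cc -o $out $in");
    let id2 = register(&mut actions, "abc123", "cc -o $out $in");
    assert_eq!(id1, id2); // both registrations resolve to the same action id
    assert_eq!(actions.len(), 1); // identical actions are stored once
}
```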

✨ Finishing Touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch codex/replace-manual-hashing-with-canonical-serialization

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and found some issues that need to be addressed.

  • Note that serde_json::to_vec does not guarantee canonical JSON key ordering for nested maps—use serde_json_canonicalizer for guaranteed determinism.
  • Instead of panicking on serialization errors with expect, consider changing hash to return a Result so callers can handle failures gracefully.
  • Serializing the entire Action into a Vec before hashing may incur extra allocations—consider streaming JSON directly into the hasher to reduce memory overhead on hot code paths.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Note that serde_json::to_vec does not guarantee canonical JSON key ordering for nested maps—consider using a dedicated canonical JSON library or explicit key sorting to ensure output stability.
- Instead of panicking on serialization errors with expect, consider changing hash to return a Result so callers can handle failures gracefully.
- Serializing the entire Action into a Vec<u8> before hashing may incur extra allocations—consider streaming JSON directly into the hasher to reduce memory overhead on hot code paths.

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 06ff2fd and fbb750a.

⛔ Files ignored due to path filters (1)
  • tests/snapshots/ninja/ninja_snapshot_tests__touch_manifest_ninja.snap is excluded by !**/*.snap
📒 Files selected for processing (4)
  • docs/netsuke-design.md (3 hunks)
  • src/hasher.rs (2 hunks)
  • src/ir.rs (1 hunks)
  • tests/hasher_tests.rs (6 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.

Files:

  • tests/hasher_tests.rs
  • src/hasher.rs
  • src/ir.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • tests/hasher_tests.rs
  • src/hasher.rs
  • src/ir.rs
docs/**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

docs/**/*.md: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Proactively update the relevant file(s) in the docs/ directory to reflect the latest state when new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve.

Files:

  • docs/netsuke-design.md
**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.md: Documentation must use en-GB-oxendict spelling and grammar, except for the naming of the "LICENSE" file.
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Mermaid diagrams in Markdown files by running make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks in Markdown must be wrapped at 120 columns.
Tables and headings in Markdown must not be wrapped.
Use dashes (-) for list bullets in Markdown.
Use GitHub-flavoured Markdown footnotes ([^1]) for references and footnotes.

Files:

  • docs/netsuke-design.md

⚙️ CodeRabbit Configuration File

**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-GB-oxendict (-ize / -our) spelling and grammar
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.
  • Documents must start with a level 1 heading
  • Headings must correctly increase or decrease by no more than one level at a time
  • Use GitHub-flavoured Markdown style for footnotes and endnotes.
  • Numbered footnotes must be numbered by order of appearance in the document.

Files:

  • docs/netsuke-design.md
🔍 MCP Research (1 server)

Deepwiki:

  • The Action struct now derives the Serialize trait with optional fields annotated to skip serialization if None, enabling stable canonical JSON serialization used for hashing. (src/ir.rs)

  • The ActionHasher implementation was simplified by replacing manual field-by-field hashing with serialization of the entire Action to canonical JSON and hashing the resulting bytes with SHA-256, producing a lowercase hex string. (src/hasher.rs)

  • The hash_action_is_stable test cases were updated to reflect new expected hash values consistent with the canonical JSON + SHA-256 hashing method, without changing test logic or Action parameters. (tests/hasher_tests.rs)

  • The documentation clarifies that build action deduplication in the BuildGraph IR uses SHA-256 hashes computed over canonical JSON serialization of Action properties, replacing previous generic hashing. (docs/netsuke-design.md)

🔇 Additional comments (5)
src/ir.rs (1)

42-55: Derive Serialize and skip absent optionals to stabilise hashes

Derive Serialize on Action and skip None optionals to keep the JSON surface stable. This aligns with the canonical JSON hashing approach and the Deepwiki decision record. LGTM.

tests/hasher_tests.rs (1)

18-18: Update expected digests to match canonical JSON + SHA-256

Updated expectations align with the new hashing method. LGTM.

Also applies to: 29-29, 40-40, 51-51, 62-62, 75-75

docs/netsuke-design.md (3)

916-918: Document deduplication via canonical JSON hash

State that actions keys use a SHA-256 over canonical JSON. This matches the implementation intent. LGTM.


1113-1116: Call out action deduplication behaviour

Clarify deduplication semantics in the design doc. LGTM.


86-99: Validate Mermaid diagrams with make nixie
Run make nixie locally to ensure all Mermaid blocks in docs/netsuke-design.md pass validation. Fix any “too many arguments for mermaid-cli” errors by:

  • Using fences of the form ```mermaid with no trailing tags
  • Removing stray backticks or inline comments inside the diagram block
  • Placing each diagram in its own isolated code fence

Apply this check to lines 86–99, 190–231 and 966–1007.

Comment thread docs/netsuke-design.md
Comment thread src/hasher.rs
@leynos
Copy link
Copy Markdown
Owner Author

leynos commented Aug 8, 2025

@coderabbitai Have the following now been resolved?

  • Note that serde_json::to_vec does not guarantee canonical JSON key ordering for nested maps—use serde_json_canonicalizer for guaranteed determinism.

  • Instead of panicking on serialization errors with expect, consider changing hash to return a Result so callers can handle failures gracefully.

  • Serializing the entire Action into a Vec before hashing may incur extra allocations—consider streaming JSON directly into the hasher to reduce memory overhead on hot code paths.
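The key-ordering concern above is easy to demonstrate. A Python sketch (serde_json preserves struct field order, but nested maps are where insertion order can leak into the digest):

```python
import hashlib
import json

a = {"vars": {"x": "1", "y": "2"}}
b = {"vars": {"y": "2", "x": "1"}}  # same content, different insertion order

def naive(obj: dict) -> str:
    # Serializes in insertion order, so logically equal maps can differ.
    return hashlib.sha256(json.dumps(obj).encode()).hexdigest()

def canonical(obj: dict) -> str:
    # Sorted keys make the byte stream, and hence the digest, deterministic.
    return hashlib.sha256(
        json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

assert naive(a) != naive(b)          # insertion order leaks into the digest
assert canonical(a) == canonical(b)  # canonical form is order-independent
```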

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Aug 8, 2025

🧩 Analysis chain

@leynos

Short answer:

  • Canonical JSON: Not resolved if the code still uses serde_json::to_vec; swap to serde_json_canonicalizer to guarantee key ordering.
  • Error handling: Not resolved; ActionHasher::hash still panics. Return Result instead.
  • Streaming: Not resolved; code allocates a Vec. Stream canonical JSON directly into the hasher.

Proposed changes:

  1. Use canonical JSON and stream into SHA-256
  • Add dependency:
# Cargo.toml
[dependencies]
serde_json_canonicalizer = "0.1"
sha2 = "0.10"
hex = "0.4"
  • Implement streaming hasher:
// src/hasher.rs
use crate::ir::Action;
use sha2::{Digest, Sha256};
use std::fmt;
use std::io::{self, Write};

#[derive(Debug)]
pub enum HashError {
    Canonicalize(io::Error),
    Serialize(serde_json::Error),
}

impl fmt::Display for HashError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            HashError::Canonicalize(e) => write!(f, "canonical JSON write failed: {e}"),
            HashError::Serialize(e) => write!(f, "serialization failed: {e}"),
        }
    }
}

impl std::error::Error for HashError {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        match self {
            HashError::Canonicalize(e) => Some(e),
            HashError::Serialize(e) => Some(e),
        }
    }
}

struct Sha256Writer {
    hasher: Sha256,
}

impl Sha256Writer {
    fn new() -> Self { Self { hasher: Sha256::new() } }
    fn finalize(self) -> String {
        let digest = self.hasher.finalize();
        hex::encode(digest)
    }
}

impl Write for Sha256Writer {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.hasher.update(buf);
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> { Ok(()) }
}

pub struct ActionHasher;

impl ActionHasher {
    pub fn hash(action: &Action) -> Result<String, HashError> {
        let mut writer = Sha256Writer::new();
        // Stream canonical JSON directly into the hasher-backed writer.
        // This guarantees deterministic key ordering for maps and avoids extra allocation.
        serde_json_canonicalizer::to_writer(&mut writer, action)
            .map_err(HashError::Serialize)?;
        Ok(writer.finalize())
    }
}
  2. Update call sites and tests to handle Result
  • Example update in tests:
let got = ActionHasher::hash(&action).expect("hashing must succeed");
assert_eq!(got, expected);

This addresses:

  • Deterministic canonicalization for nested maps via serde_json_canonicalizer.
  • No panics; errors bubble up as Result.
  • No intermediate Vec; JSON is streamed into the hasher writer.
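The streaming point generalizes beyond Rust: feeding serializer output into the hasher incrementally yields the same digest as buffering the whole payload first. A Python sketch of that equivalence (chunk size chosen arbitrarily for illustration):

```python
import hashlib
import json

payload = {"recipe": "cc -c $in -o $out", "restat": False}
canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

# Buffered: build the full byte string, then hash it in one call.
buffered = hashlib.sha256(canonical).hexdigest()

# Streamed: update the hasher chunk by chunk, never holding one big buffer.
h = hashlib.sha256()
for i in range(0, len(canonical), 8):
    h.update(canonical[i : i + 8])
streamed = h.hexdigest()

assert buffered == streamed
```

This is why a `Write` adapter over `Sha256` is safe: SHA-256 is defined over the concatenated byte stream, not over any particular buffering.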

If you prefer a simpler error surface, collapse HashError to anyhow::Result. I can also wire this into your existing error types if you point me to the preferred crate/pattern.

Would you like me to open a follow-up issue to track these refinements, or shall I draft a commit patch here?


🏁 Script executed:

#!/bin/bash
set -e
echo "--- Cargo.toml ---"
fd -a "Cargo.toml" | xargs -I{} sh -c 'echo ">>> {}"; sed -n "1,220p" "{}"'

echo "--- src/hasher.rs ---"
fd -a "hasher.rs" | xargs -I{} sh -c 'echo ">>> {}"; sed -n "1,220p" "{}"'

echo "--- src/ir.rs ---"
fd -a "ir.rs" | xargs -I{} sh -c 'echo ">>> {}"; sed -n "1,260p" "{}"'

echo "--- tests/hasher_tests.rs ---"
fd -a "hasher_tests.rs" | xargs -I{} sh -c 'echo ">>> {}"; sed -n "1,260p" "{}"'

Length of output: 14995


@leynos
Yes—all three are now resolved:

  • Canonical JSON determinism: Implemented. Cargo.toml adds serde_json_canonicalizer = "0.3", and src/hasher.rs uses serde_json_canonicalizer::to_writer(...) to produce canonical JSON with sorted keys (including nested maps) before hashing.

  • No panics; return Result: Implemented. src/hasher.rs: pub fn hash(action: &Action) -> Result<String, serde_json::Error>. src/ir.rs wires this through IrGenError::ActionSerialisation and propagates errors at call sites.

  • Streaming (no Vec allocation): Implemented. src/hasher.rs streams JSON directly into Sha256 via a DigestWriter that implements Write; no intermediate buffer is allocated.

Tiny follow-up: the example in src/hasher.rs still shows using hash without handling Result. Update it to unwrap/expect, e.g.:

let hash = ActionHasher::hash(&action).expect("hash action");
assert!(!hash.is_empty());

Want me to open a quick follow-up issue to track that doc tweak?

✅ Actions performed

Comments resolved and changes approved.

@leynos leynos merged commit 3503796 into main Aug 8, 2025
4 checks passed
@leynos leynos deleted the codex/replace-manual-hashing-with-canonical-serialization branch August 8, 2025 10:54

Development

Successfully merging this pull request may close these issues.

Consider canonical serialization format for Action hashing stability