
Move wrap tests to module #152

Merged
leynos merged 3 commits into main from
j0itgl-codex/move-tests-from-src/wrap.rs-to-tests/wrap.rs
Jul 31, 2025

Conversation

@leynos
Owner

@leynos leynos commented Jul 31, 2025

Summary

  • extract markdown tokenization helpers to src/wrap/tokenize.rs
  • move unit tests from src/wrap.rs to src/wrap/tests.rs
  • keep wrap.rs under 400 lines with new submodule

Testing

  • make fmt
  • make test

https://chatgpt.com/codex/tasks/task_e_688a894f52ec8322bf11eb49aa45d8ec

Summary by Sourcery

Extract markdown tokenization logic into a dedicated wrap/tokenize module and relocate wrap.rs unit tests to a separate tests file, reducing wrap.rs size.

Enhancements:

  • Extract Token enum and tokenization functions (inline, markdown, punctuation detection, line breaking) into src/wrap/tokenize.rs
  • Update wrap.rs to import and use the new tokenize submodule for all tokenization and line-breaking logic

Tests:

  • Move existing wrap_text unit tests from src/wrap.rs to src/wrap/tests.rs
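
For orientation, the wiring in src/wrap.rs after this change would look roughly like the sketch below. This is an illustrative layout rather than the literal file contents; the re-export visibility and the doc comment wording are assumptions.

```rust
//! Text wrapping that preserves Markdown code fences, spans, and links.
// Sketch only: the real src/wrap.rs differs in detail.

mod tokenize;

// Re-export the pieces the rest of the crate needs.
pub(crate) use tokenize::{tokenize_markdown, Token};

// ... wrap_text and wrap_preserving_code live here ...

#[cfg(test)]
mod tests;
```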

@sourcery-ai
Contributor

sourcery-ai Bot commented Jul 31, 2025

Reviewer's Guide

Refactors wrap.rs by extracting markdown tokenization logic into a new tokenize submodule, updating wrap.rs to import and use the submodule, and moving unit tests into a dedicated tests file.

Class diagram for refactored wrap module and tokenize submodule

classDiagram
    class wrap {
        +wrap_text(lines: &[String], width: usize) Vec<String>
        +wrap_preserving_code(text: &str, width: usize) Vec<String>
        +FENCE_RE: Regex
    }
    class tokenize {
        +Token<'a>
        +tokenize_markdown(input: &str) Vec<Token<'_>>
        +tokenize_inline(text: &str) Vec<String>
        +is_trailing_punctuation(c: char) bool
        +should_break_line(width: usize, current_width: usize, last_split: Option<usize>) bool
    }
    wrap --> tokenize : uses
    tokenize <|-- Token : enum
    class Token {
        Fence(&'a str)
        Code(&'a str)
        Text(&'a str)
        Newline
    }
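Read as Rust, the shapes in the diagram correspond roughly to the sketch below. It is illustrative only: the real definitions live in src/wrap/tokenize.rs, and the placeholder body and doc comments here are assumptions.

```rust
/// A lexical unit produced when scanning Markdown input (sketch).
pub(crate) enum Token<'a> {
    /// An opening or closing code-fence line (a line starting with three backticks).
    Fence(&'a str),
    /// A line inside a fenced code block, kept verbatim.
    Code(&'a str),
    /// Ordinary prose that may be re-wrapped.
    Text(&'a str),
    /// A hard line break in the input.
    Newline,
}

/// Split raw input into fence, code, text, and newline tokens.
pub(crate) fn tokenize_markdown(input: &str) -> Vec<Token<'_>> {
    // Placeholder body: the real implementation distinguishes fence and code
    // lines rather than treating every line as prose.
    input.lines().map(Token::Text).collect()
}
```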

File-Level Changes

Change: Extract tokenization logic into a dedicated submodule
  • Move the Token enum, parse_link_or_image, is_trailing_punctuation, tokenize_inline, tokenize_markdown, and should_break_line into the new file
  • Remove the corresponding definitions from wrap.rs
Files: src/wrap/tokenize.rs, src/wrap.rs

Change: Expose and integrate the new tokenize module from wrap.rs
  • Add mod tokenize and re-export Token and tokenize_markdown
  • Update wrap_preserving_code to call tokenize::tokenize_inline and tokenize::is_trailing_punctuation
  • Update wrap_text logic to use tokenize::should_break_line
Files: src/wrap.rs

Change: Move wrap tests into a dedicated tests.rs
  • Remove inline tests from wrap.rs
  • Create src/wrap/tests.rs and import the wrap module
  • Relocate all unit tests into the new tests.rs
Files: src/wrap.rs, src/wrap/tests.rs


@coderabbitai
Contributor

coderabbitai Bot commented Jul 31, 2025

Summary by CodeRabbit

  • Refactor

    • Improved the internal structure of markdown tokenisation and text wrapping, enhancing maintainability and clarity without changing user-facing behaviour.
  • Tests

    • Added comprehensive tests to ensure correct handling of hyphenated words, code spans, and links during text wrapping.

No changes to the user interface or visible features; text wrapping and markdown handling remain consistent.

Walkthrough

Move all inline Markdown tokenisation logic from src/wrap.rs into a new tokenize submodule. Update the wrapping logic to use the newly introduced module. Remove related test cases from src/wrap.rs and add a dedicated test module for wrapping behaviour. Export relevant tokenisation entities from the new submodule.

Changes

Tokenisation Refactor: src/wrap.rs
  Remove all inline tokenisation logic, helper functions, and tests. Introduce and re-export a tokenize submodule. Update the wrapping logic to use the new module.
Tokenisation Module: src/wrap/tokenize.rs
  Add a new module with the Token enum, tokenisation functions, and helpers for Markdown parsing and wrapping.
Wrapping Tests: src/wrap/tests.rs
  Add a new test module with comprehensive tests for wrap_text, covering code spans, links, and hyphenated words.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant wrap_preserving_code
    participant tokenize
    Caller->>wrap_preserving_code: Call with input text
    wrap_preserving_code->>tokenize: segment_inline(text)
    tokenize-->>wrap_preserving_code: Token segments
    wrap_preserving_code->>wrap_preserving_code: Check trailing punctuation inline
    wrap_preserving_code-->>Caller: Wrapped output

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Add wrap boundary code tests #62: Both PRs modify the same wrapping and tokenisation logic, refactoring and updating related functions.
  • Fix link wrapping #99: Both PRs alter the tokenisation logic and functions for Markdown inline tokenisation, indicating strong code-level overlap.
  • Fix link wrapping #128: Both PRs relocate and modify the tokenisation code, especially concerning handling punctuation and links.

Poem

Out with the old, in with the new,
Tokenisation’s got a shiny new view!
Wrapping code and links with care,
In tokenize now, logic we share.
Tests abound for every case,
Markdown’s tidier—what a pace!
🦀✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 37876fb and bf00f56.

📒 Files selected for processing (3)
  • src/wrap.rs (4 hunks)
  • src/wrap/tests.rs (1 hunks)
  • src/wrap/tokenize.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.

Files:

  • src/wrap/tests.rs
  • src/wrap/tokenize.rs
  • src/wrap.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • src/wrap/tests.rs
  • src/wrap/tokenize.rs
  • src/wrap.rs
🧬 Code Graph Analysis (2)
src/wrap/tests.rs (1)
src/wrap.rs (1)
  • wrap_text (269-370)
src/wrap.rs (1)
src/wrap/tokenize.rs (2)
  • tokenize_markdown (132-169)
  • segment_inline (73-122)
🔇 Additional comments (7)
src/wrap/tests.rs (2)

1-5: Module documentation is correctly implemented.

The module-level documentation properly explains the purpose and scope of the test module, satisfying the coding guidelines requirement.


8-111: Test suite provides comprehensive coverage.

The tests effectively cover key wrapping scenarios including hyphenated words, code spans, nested backticks, and links. Each test is focused on a single behaviour with clear assertions.

src/wrap.rs (3)

9-10: Module structure correctly implements the refactoring.

The tokenize submodule is properly declared and the necessary types are re-exported for crate-internal use.




372-373: Test extraction helps maintain module size limit.

Moving tests to a dedicated submodule keeps this file under the 400-line limit whilst maintaining test coverage.

src/wrap/tokenize.rs (2)

1-4: Module documentation properly explains the purpose.

The module-level documentation clearly describes the tokenization utilities and their role in preserving Markdown constructs during wrapping.


171-198: Test coverage validates key tokenization scenarios.

The tests effectively cover link parsing with trailing punctuation, nested parentheses in images, inline code fences, and unmatched backticks.


Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and found some issues that need to be addressed.

  • After moving the tests into src/wrap/tests.rs, add a #[cfg(test)] mod tests; declaration in wrap.rs so the new tests file is actually compiled.
  • Consider adding focused unit tests for the tokenize module itself to cover edge cases in link/image parsing and inline‐code fences.
  • The tokenize_inline helper operates on String tokens while tokenize_markdown uses a Token enum—consider unifying or renaming to clarify their distinct uses.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- After moving the tests into src/wrap/tests.rs, add a `#[cfg(test)] mod tests;` declaration in `wrap.rs` so the new tests file is actually compiled.
- Consider adding focused unit tests for the `tokenize` module itself to cover edge cases in link/image parsing and inline‐code fences.
- The `tokenize_inline` helper operates on `String` tokens while `tokenize_markdown` uses a `Token` enum—consider unifying or renaming to clarify their distinct uses.

## Individual Comments

### Comment 1
<location> `src/wrap/tokenize.rs:58` </location>
<code_context>
+    )
+}
+
+pub(super) fn tokenize_inline(text: &str) -> Vec<String> {
+    let mut tokens = Vec::new();
+    let chars: Vec<char> = text.chars().collect();
</code_context>

<issue_to_address>
Consider extracting repeated scanning and slice-collecting logic in `tokenize_inline` and `parse_link_or_image` into helper functions to reduce nesting and duplication.

```markdown
You can dramatically slim down the nesting in `tokenize_inline` (and even `parse_link_or_image`) by pulling out the “scan‐until” loops and the common “collect a slice” pattern. For example:

```rust
// add at top of file
fn scan_while<F>(chars: &[char], mut i: usize, cond: F) -> usize
where
    F: Fn(char) -> bool,
{
    while i < chars.len() && cond(chars[i]) {
        i += 1;
    }
    i
}

fn collect_range(chars: &[char], start: usize, end: usize) -> String {
    chars[start..end].iter().collect()
}
```

Then in `tokenize_inline` you can replace:

```rust
// original whitespace
let start = i;
while i < chars.len() && chars[i].is_whitespace() {
    i += 1;
}
tokens.push(chars[start..i].iter().collect());
```

with

```rust
let start = i;
i = scan_while(&chars, i, |c| c.is_whitespace());
tokens.push(collect_range(&chars, start, i));
```

And even your backtick‐span logic becomes:

```rust
} else if c == '`' {
    // scan opening fence
    let start = i;
    i = scan_while(&chars, i, |c| c == '`');
    let fence_len = i - start;

    // find matching fence
    let mut end = i;
    while end < chars.len() {
        let j = scan_while(&chars, end, |c| c == '`');
        if j - end == fence_len {
            end = j;
            break;
        }
        // Skip a non-matching backtick run in one step so part of a longer
        // run is never mistaken for the closing fence.
        end = if j > end { j } else { end + 1 };
    }

    if end >= chars.len() {
        // unmatched fence, emit as literal
        tokens.push(collect_range(&chars, start, start + fence_len));
        i = start + fence_len;
    } else {
        tokens.push(collect_range(&chars, start, end));
        i = end;
    }
}
```

You can apply the same idea to `parse_link_or_image` (e.g. a `scan_until` helper, or `scan_while(|c| c != ']')`, etc.). This collapses a lot of duplicated index arithmetic and makes each block focus on one thing.
</issue_to_address>
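
Extending the same idea, a hypothetical parse_link_or_image built on those helpers might look like the sketch below. The real function's signature and return type are not shown in this thread, and nested parentheses in URLs (which the actual implementation handles, per tests discussed later) are omitted here for brevity.

```rust
// Assumes the scan_while and collect_range helpers suggested above.
fn parse_link_or_image(chars: &[char], start: usize) -> Option<(String, usize)> {
    let mut i = start;
    if chars.get(i) == Some(&'!') {
        i += 1; // optional image marker
    }
    if chars.get(i) != Some(&'[') {
        return None;
    }
    // Scan the link label up to its closing bracket.
    let label_end = scan_while(chars, i + 1, |c| c != ']');
    if label_end >= chars.len() || chars.get(label_end + 1) != Some(&'(') {
        return None;
    }
    // Scan the URL up to its closing parenthesis (no nesting handled here).
    let url_end = scan_while(chars, label_end + 2, |c| c != ')');
    if url_end >= chars.len() {
        return None;
    }
    // Return the whole construct as one unbreakable token, plus the index
    // just past it so the caller can resume scanning.
    Some((collect_range(chars, start, url_end + 1), url_end + 1))
}
```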


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
src/wrap/tokenize.rs (1)

58-121: Extract repeated scanning logic to reduce complexity.

The function has high cyclomatic complexity with repeated index manipulation patterns that should be extracted into helper functions.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a007e15 and 13b2cfd.

📒 Files selected for processing (3)
  • src/wrap.rs (3 hunks)
  • src/wrap/tests.rs (1 hunks)
  • src/wrap/tokenize.rs (1 hunks)
🧬 Code Graph Analysis (1)
src/wrap/tests.rs (1)
src/wrap.rs (1)
  • wrap_text (262-363)
🔇 Additional comments (3)
src/wrap/tests.rs (1)

5-104: LGTM!

The tests comprehensively cover important edge cases for text wrapping including hyphenated words, code spans with various delimiters, and link preservation.

src/wrap.rs (2)

9-10: LGTM!

Clean module extraction with appropriate re-exports for the public API.


83-84: LGTM!

All references correctly updated to use the new tokenize module.

Also applies to: 90-91, 123-123

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a007e15 and 13b2cfd.

📒 Files selected for processing (3)
  • src/wrap.rs (3 hunks)
  • src/wrap/tests.rs (1 hunks)
  • src/wrap/tokenize.rs (1 hunks)
🧬 Code Graph Analysis (2)
src/wrap/tests.rs (1)
src/wrap.rs (1)
  • wrap_text (262-363)
src/wrap.rs (1)
src/wrap/tokenize.rs (4)
  • tokenize_markdown (131-168)
  • tokenize_inline (58-121)
  • is_trailing_punctuation (51-56)
  • should_break_line (170-176)
🔇 Additional comments (11)
src/wrap/tests.rs (4)

5-17: LGTM!

Test correctly verifies that hyphenated words are preserved intact during wrapping.


19-39: LGTM!

Good use of concat!() for long string literals as per coding guidelines. Test correctly validates hyphenated word preservation.


41-57: LGTM!

Test effectively validates that inline code spans are preserved during text wrapping.


59-103: LGTM!

Comprehensive test coverage for edge cases including multiple code spans, nested backticks, unmatched backticks, and link preservation. All tests are well-structured with clear assertions.

src/wrap.rs (2)

9-10: LGTM!

Clean module structure with appropriate visibility. The tokenize_markdown function is correctly exposed as pub(crate) for internal use.


83-123: LGTM!

Refactoring successfully delegates tokenization logic to the new submodule while maintaining the same functionality.

src/wrap/tokenize.rs (5)

1-5: LGTM!

Module-level documentation properly explains the purpose and utility of the module as required by coding guidelines.


8-19: LGTM!

Well-documented enum with clear variant descriptions following Rust documentation conventions.


58-121: LGTM!

Comprehensive tokenization logic that correctly handles all inline Markdown elements including code spans with matching delimiters, links with trailing punctuation, and whitespace preservation.


131-168: LGTM!

Well-documented function with clear tokenization logic for both fenced code blocks and inline code spans.


170-176: LGTM!

Clear predicate function following single responsibility principle.
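
The predicate itself is not reproduced in this thread. As a rough sketch of the shape such a function might take (an assumption; the actual condition at lines 170-176 of src/wrap/tokenize.rs may differ):

```rust
/// Decide whether to break the current line before appending the next token.
/// Illustrative assumption only, not the code under review.
pub(super) fn should_break_line(
    width: usize,
    current_width: usize,
    last_split: Option<usize>,
) -> bool {
    // Break once the line would exceed the target width, provided a split
    // point has been recorded.
    current_width > width && last_split.is_some()
}
```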

@leynos
Owner Author

leynos commented Jul 31, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 31, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@github-actions

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 31, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@leynos
Owner Author

leynos commented Jul 31, 2025

@coderabbitai Have the following now been resolved?

  • After moving the tests into src/wrap/tests.rs, add a #[cfg(test)] mod tests; declaration in wrap.rs so the new tests file is actually compiled.
  • Consider adding focused unit tests for the tokenize module itself to cover edge cases in link/image parsing and inline‐code fences.
  • The tokenize_inline helper operates on String tokens while tokenize_markdown uses a Token enum—consider unifying or renaming to clarify their distinct uses.

@coderabbitai
Contributor

coderabbitai Bot commented Jul 31, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@leynos Looking at the current state of the code, here's the status of each issue:

❌ Issue 1 - Test module declaration: Not resolved. The src/wrap.rs file is missing the #[cfg(test)] mod tests; declaration. Without this, the tests in src/wrap/tests.rs won't be compiled or executed. Add this line:

#[cfg(test)]
mod tests;

❌ Issue 2 - Focused unit tests for tokenize module: Not resolved. The src/wrap/tokenize.rs file contains implementation code but no unit tests. The existing tests in src/wrap/tests.rs only test the high-level wrap_text function, not the specific tokenization edge cases like:

  • Complex nested parentheses in links
  • Edge cases in parse_link_or_image
  • Inline code fence parsing edge cases
  • Various punctuation scenarios

✅ Issue 3 - Function naming consistency: Resolved. The naming has been clarified:

  • tokenize_markdown returns Token enum variants and handles the full markdown structure
  • segment_inline (previously tokenize_inline) returns String tokens and handles inline segmentation
  • The distinction is now clear in both naming and usage

Two of the three issues remain unresolved. Add the test module declaration and consider adding focused unit tests for the tokenize module to fully address the outstanding concerns.

ᕙ(▀̿̿Ĺ̯̿̿▀̿ ̿) ᕗ


🏁 Script executed:

#!/bin/bash
# Check if src/wrap.rs has the #[cfg(test)] mod tests; declaration
echo "=== Checking src/wrap.rs for test module declaration ==="
cat src/wrap.rs

echo -e "\n=== Checking src/wrap/tokenize.rs for unit tests ==="
cat src/wrap/tokenize.rs

echo -e "\n=== Checking if tests.rs exists and its content ==="
cat src/wrap/tests.rs

Length of output: 21505


@leynos I need to correct my previous assessment. Looking at the actual file contents, all three issues have been resolved:

✅ Issue 1 - Test module declaration: Resolved. The src/wrap.rs file contains #[cfg(test)] mod tests; at the bottom (line 318), so the tests in src/wrap/tests.rs will be compiled and executed.

✅ Issue 2 - Focused unit tests for tokenize module: Resolved. The src/wrap/tokenize.rs file now includes a comprehensive test suite with 4 unit tests covering the exact edge cases mentioned:

  • link_with_trailing_punctuation() - tests link parsing with trailing punctuation
  • image_with_nested_parentheses() - tests complex nested parentheses in image URLs
  • inline_code_fences() - tests inline code fence parsing
  • unmatched_backticks() - tests edge case handling of unmatched backticks
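
As a rough illustration of the shape such a test might take (the actual assertions in src/wrap/tokenize.rs are not reproduced in this thread, and the expected token boundary below is an assumption):

```rust
#[test]
fn link_with_trailing_punctuation() {
    // Assumes segment_inline keeps a Markdown link together as one token.
    let tokens = segment_inline("see [docs](https://example.com).");
    assert!(tokens.contains(&"[docs](https://example.com)".to_string()));
}
```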

✅ Issue 3 - Function naming consistency: Resolved. The naming is now clear and consistent:

  • tokenize_markdown() returns Token enum variants for full markdown structure parsing
  • segment_inline() (not tokenize_inline) returns String tokens for inline text segmentation
  • Each function has a distinct, well-defined purpose

All the issues you raised have been successfully addressed in this PR! The code is well-organized with proper test coverage and clear function naming.

(ノ◕ヮ◕)ノ*:・゚✧

@leynos leynos merged commit 028dcb4 into main Jul 31, 2025
2 checks passed
@leynos leynos deleted the j0itgl-codex/move-tests-from-src/wrap.rs-to-tests/wrap.rs branch July 31, 2025 15:34