
Refactor wrap prefixes and doc fix #147

Merged
leynos merged 6 commits into main from
codex/apply-lazy_regex-to-regex-statics-and-fix-typo
Aug 1, 2025

Conversation

@leynos leynos (Owner) commented Jul 30, 2025

Summary

  • fix spelling mistake in AGENTS.md
  • apply lazy_regex! to prefix regexes
  • simplify prefix handling in wrap_text

Testing

  • make fmt
  • make lint
  • make test
  • make markdownlint
  • make nixie (fails: too many arguments)

https://chatgpt.com/codex/tasks/task_e_68895b89df048322b7b16baac0ee37dc

Summary by Sourcery

Refactor regex initialization and prefix handling in wrap_text, and correct spelling in AGENTS.md.

Enhancements:

  • Use the lazy_regex! macro for initializing the FENCE_RE, BULLET_RE, FOOTNOTE_RE, and BLOCKQUOTE_RE statics (see the sketch below)
  • Remove the PrefixHandler abstraction and inline bullet, footnote, and blockquote prefix matching in wrap_text
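
A minimal sketch of the macro swap, assuming a representative bullet
pattern (the crate's actual expressions are not quoted in this summary):

use lazy_regex::{lazy_regex, Lazy, Regex};

// Before: lazy runtime initialisation whose unwrap() can only fail at
// first use.
// static BULLET_RE: std::sync::LazyLock<Regex> =
//     std::sync::LazyLock::new(|| Regex::new(r"^(\s*[-*+]\s+)(.*)$").unwrap());

// After: lazy_regex! validates the pattern at compile time, so a bad
// regex becomes a build error instead of a runtime panic.
static BULLET_RE: Lazy<Regex> = lazy_regex!(r"^(\s*[-*+]\s+)(.*)$");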

Documentation:

  • Fix spelling of "managable" to "manageable" in AGENTS.md

@coderabbitai coderabbitai Bot (Contributor) commented Jul 30, 2025

Summary by CodeRabbit

  • Refactor

    • Simplified and clarified text wrapping logic for handling line prefixes and regex usage.
    • Improved control flow in text wrapping for better readability and maintainability.
    • Minor code style improvement in HTML handling logic.
  • Tests

    • Added comprehensive unit tests to ensure correct handling of hyphenated words, code spans, and links during text wrapping.
  • Style

    • Minor documentation formatting update.

Walkthrough

Refactor regex initialisation and prefix handling in wrap_text by replacing the generic handler abstraction with explicit sequential regex matches. Introduce a helper function for code fence detection. Remove unused imports and update control flow for clarity. Add comprehensive unit tests for wrapping behaviour. Make a minor documentation change.

Changes

  • Regex Refactor and Prefix Handling (src/wrap.rs): Replace manual regex initialisation with the lazy_regex! macro; remove the PrefixHandler abstraction; extract prefix handling logic; introduce an is_fence helper; simplify control flow; remove unused imports.
  • Unit Tests for Wrapping (tests/wrap_unit.rs): Add new unit tests for wrap_text covering hyphenated words, code spans, nested/unmatched backticks, and links.
  • HTML Node Refactor (src/html.rs): Refactor contains_strong to combine the pattern match and tag check into a single if let statement.
  • Documentation Minor Edit (tests/common/mod.rs): Add a trailing space in a doc comment example line; no functional impact.
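
The is_fence helper mentioned above is a one-line wrapper; its shape,
reconstructed from the diff context quoted later in this thread, is:

// Returns true when the line opens or closes a fenced code block,
// i.e. optional indentation followed by a backtick or tilde fence.
#[doc(hidden)]
pub fn is_fence(line: &str) -> bool { FENCE_RE.is_match(line) }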

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant wrap_text
    participant Regexes
    participant handle_prefix_line

    Caller->>wrap_text: Call with input text
    wrap_text->>Regexes: Match BULLET_RE
    alt Match found
        wrap_text->>handle_prefix_line: Handle bullet prefix
        wrap_text-->>wrap_text: continue
    else No match
        wrap_text->>Regexes: Match FOOTNOTE_RE
        alt Match found
            wrap_text->>handle_prefix_line: Handle footnote prefix
            wrap_text-->>wrap_text: continue
        else No match
            wrap_text->>Regexes: Match BLOCKQUOTE_RE
            alt Match found
                wrap_text->>handle_prefix_line: Handle blockquote prefix
                wrap_text-->>wrap_text: continue
            else No match
                wrap_text->>is_fence: Check for code fence
                alt Fence found
                    wrap_text-->>wrap_text: continue
                else
                    wrap_text-->>wrap_text: Process line normally
                end
            end
        end
    end
    wrap_text-->>Caller: Return wrapped text

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~18 minutes


Poem

Refactored regex, handlers gone,
Prefixes clear, the flow marches on.
Code spans and links, now wrapped with care,
Hyphens unbroken, precision in air.
Tests now abound, to keep bugs at bay—
Ship these changes without delay! 🚀

✨ Finishing Touches

  • 📝 Generate Docstrings
  • 🧪 Generate unit tests
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch codex/apply-lazy_regex-to-regex-statics-and-fix-typo

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@sourcery-ai sourcery-ai Bot (Contributor) commented Jul 30, 2025

Reviewer's Guide

Refactors wrap_text by switching all static regex initializations to lazy_regex! macros and replacing the generic PrefixHandler/HANDLERS mechanism with explicit regex capture branches for bullets, footnotes, and blockquotes, plus a minor spelling correction in AGENTS.md.

File-Level Changes

  • Use lazy_regex! for static regex definitions (src/wrap.rs)
    • Replaced std::sync::LazyLock::new and Regex::new calls with lazy_regex!
    • Updated FENCE_RE, BULLET_RE, FOOTNOTE_RE, and BLOCKQUOTE_RE to use the macro
  • Streamline prefix handling in wrap_text (src/wrap.rs)
    • Removed the PrefixHandler struct and HANDLERS array
    • Replaced the labelled loop and handler iteration with sequential if let captures for each regex
    • Simplified control flow by eliminating the 'line_loop label
  • Fix typo and reflow content in AGENTS.md
    • Corrected spelling of “managable” to “manageable”
    • Merged split lines into a single coherent paragraph
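
A sketch of the sequential capture branches, reconstructed from the
diff hunks quoted in the review comments (the expect messages shown
here are illustrative):

if let Some(cap) = BULLET_RE.captures(line) {
    let prefix = cap.get(1).expect("bullet capture group 1").as_str();
    let rest = cap.get(2).expect("bullet capture group 2").as_str();
    handle_prefix_line(&mut out, &mut buf, &mut indent, width, prefix, rest, false);
    continue;
}
// FOOTNOTE_RE follows the same shape, building its prefix from groups 1
// and 2 with the remainder in group 3; BLOCKQUOTE_RE likewise, passing
// true so the quote prefix is repeated on continuation lines.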

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `src/wrap.rs:402` </location>
<code_context>
-                );
-                continue 'line_loop;
-            }
+        if let Some(cap) = BULLET_RE.captures(line) {
+            let prefix = cap.get(1).unwrap().as_str();
+            let rest = cap.get(2).unwrap().as_str();
+            handle_prefix_line(&mut out, &mut buf, &mut indent, width, prefix, rest, false);
+            continue;
+        }
+
</code_context>

<issue_to_address>
Explicitly unwrapping capture groups may panic on malformed input.

Using unwrap() on capture groups can cause runtime panics if the expected groups are missing. Use expect with a clear message or handle the None case to prevent unexpected crashes.
</issue_to_address>
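
One way to address this, as the later revision does, is to name the
invariant in the expectation (the message wording here is illustrative):

let prefix = cap
    .get(1)
    .expect("BULLET_RE matched, so capture group 1 must exist")
    .as_str();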

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment thread src/wrap.rs
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

🔭 Outside diff range comments (1)
src/wrap.rs (1)

68-102: Remove dead code from previous implementation.

The PrefixHandler struct, its implementation, and the HANDLERS array are no longer used after the refactoring. Remove this dead code.

Apply this diff:

-struct PrefixHandler {
-    re: &'static std::sync::LazyLock<Regex>,
-    is_bq: bool,
-    build_prefix: fn(&Captures) -> String,
-    rest_group: usize,
-}
-
-impl PrefixHandler {
-    fn build_bullet_prefix(cap: &Captures) -> String { cap[1].to_string() }
-
-    fn build_footnote_prefix(cap: &Captures) -> String { format!("{}{}", &cap[1], &cap[2]) }
-
-    fn build_blockquote_prefix(cap: &Captures) -> String { cap[1].to_string() }
-}
-
-static HANDLERS: &[PrefixHandler] = &[
-    PrefixHandler {
-        re: &BULLET_RE,
-        is_bq: false,
-        build_prefix: PrefixHandler::build_bullet_prefix,
-        rest_group: 2,
-    },
-    PrefixHandler {
-        re: &FOOTNOTE_RE,
-        is_bq: false,
-        build_prefix: PrefixHandler::build_footnote_prefix,
-        rest_group: 3,
-    },
-    PrefixHandler {
-        re: &BLOCKQUOTE_RE,
-        is_bq: true,
-        build_prefix: PrefixHandler::build_blockquote_prefix,
-        rest_group: 2,
-    },
-];
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e4948ab and bfa0e22.

📒 Files selected for processing (4)
  • src/lib.rs (1 hunks)
  • src/tokenize.rs (1 hunks)
  • src/wrap.rs (4 hunks)
  • tests/wrap_unit.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.
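
To illustrate the semantic-error guideline above, a minimal thiserror
sketch (the error type and variants are hypothetical, not from this
crate):

use thiserror::Error;

#[derive(Debug, Error)]
pub enum WrapError {
    #[error("line exceeds the configured width of {0}")]
    LineTooLong(usize),
    #[error("unterminated code fence starting at line {line}")]
    UnterminatedFence { line: usize },
}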

Files:

  • src/lib.rs
  • tests/wrap_unit.rs
  • src/tokenize.rs
  • src/wrap.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.
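
As an example of the predicate-function rule above, a conditional with
more than two branches can be given a name (the regex statics are those
from this PR; the function name is invented for illustration):

/// True when the line starts with a bullet, footnote, or blockquote prefix.
fn has_prefix(line: &str) -> bool {
    BULLET_RE.is_match(line)
        || FOOTNOTE_RE.is_match(line)
        || BLOCKQUOTE_RE.is_match(line)
}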

Files:

  • src/lib.rs
  • tests/wrap_unit.rs
  • src/tokenize.rs
  • src/wrap.rs
🧬 Code Graph Analysis (2)
tests/wrap_unit.rs (1)
src/wrap.rs (1)
  • wrap_text (297-408)
src/wrap.rs (1)
src/tokenize.rs (1)
  • is_fence (155-155)
🪛 GitHub Actions: CI
src/lib.rs

[warning] 30-30: Prettier formatting check failed: 'mod tokenize;' line is misplaced and needs correction.

🔇 Additional comments (6)
src/tokenize.rs (2)

1-3: Module documentation is present and follows guidelines.


154-155: LGTM!

Clean implementation of fence detection wrapper.

tests/wrap_unit.rs (3)

3-15: Well-structured test for hyphenated word preservation.

Test correctly validates that hyphenated words remain intact during wrapping.


17-35: Excellent test coverage and proper use of concat! macro.

Test properly validates space preservation and follows the guideline to use concat!() for long string literals.
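
For context, concat!() joins string literals into one &'static str at
compile time; a hypothetical fragment in the style of these tests:

let input = concat!(
    "a long paragraph that would otherwise require ",
    "backslash continuations to stay readable in source",
);
assert!(input.contains("backslash"));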


37-106: Comprehensive test coverage for code spans and links.

The test suite thoroughly covers:

  • Code span preservation across line breaks
  • Multiple code spans handling
  • Edge cases with nested and unmatched backticks
  • Link URL integrity preservation

All tests are well-structured and provide valuable coverage.

src/wrap.rs (1)

350-375: Excellent refactoring of prefix handling logic.

The explicit regex matching is clearer than the previous generic handler approach. Good use of expect() with descriptive error messages as requested in the previous review.

Comment thread src/lib.rs
mod tokenize;
pub mod textproc;
pub mod wrap;

Contributor


⚠️ Potential issue

Group internal modules together for better organization.

Move the mod tokenize; declaration to line 32 to group it with other internal modules (html and reflow). This maintains consistent module organization and resolves the formatting issue.

Apply this diff:

 pub mod table;
-mod tokenize;
 pub mod textproc;
 pub mod wrap;

Then insert at line 32:

 mod reflow;
+mod tokenize;
 pub mod table;
🤖 Prompt for AI Agents
In src/lib.rs at line 33, the module declaration `mod tokenize;` is not grouped
with other internal modules like `html` and `reflow`. Move the `mod tokenize;`
line up to line 32 to place it alongside these modules, ensuring consistent
organization and resolving the formatting issue.

Comment thread src/tokenize.rs Outdated
Comment on lines +52 to +53
let tok: String = chars[start..=start].iter().collect();
(tok, start + 1)
Contributor


⚠️ Potential issue

Fix incorrect fallback behavior for incomplete links.

When link parsing fails, the function returns only the first character at the start position. This should return the opening bracket (or exclamation mark + bracket for images) to preserve the original text.

Apply this diff:

-    let tok: String = chars[start..=start].iter().collect();
-    (tok, start + 1)
+    let end = if chars[start] == '!' { start + 2 } else { start + 1 };
+    let tok: String = chars[start..end].iter().collect();
+    (tok, end)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let tok: String = chars[start..=start].iter().collect();
(tok, start + 1)
let end = if chars[start] == '!' { start + 2 } else { start + 1 };
let tok: String = chars[start..end].iter().collect();
(tok, end)
🤖 Prompt for AI Agents
In src/tokenize.rs around lines 52 to 53, the fallback behavior for incomplete
links returns only the first character at the start position, which loses
important context. Modify the code to return the opening bracket or the
exclamation mark plus bracket for images instead, preserving the original text.
Adjust the slice and returned string accordingly to include these characters
when link parsing fails.

Comment thread src/tokenize.rs Outdated
Comment on lines +105 to +107
while i < chars.len() && !chars[i].is_whitespace() && chars[i] != '`' {
i += 1;
}
Contributor

@coderabbitai coderabbitai Bot Aug 1, 2025


⚠️ Potential issue

Add bracket detection to prevent merging adjacent links.

The loop should also check for brackets to properly tokenize adjacent links without spaces between them.

Apply this diff:

-            while i < chars.len() && !chars[i].is_whitespace() && chars[i] != '`' {
+            while i < chars.len() && !chars[i].is_whitespace() && chars[i] != '`' && chars[i] != '[' && !(chars[i] == '!' && i + 1 < chars.len() && chars[i + 1] == '[') {
                 i += 1;
             }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
while i < chars.len() && !chars[i].is_whitespace() && chars[i] != '`' {
i += 1;
}
while i < chars.len()
&& !chars[i].is_whitespace()
&& chars[i] != '`'
&& chars[i] != '['
&& !(chars[i] == '!' && i + 1 < chars.len() && chars[i + 1] == '[')
{
i += 1;
}
🤖 Prompt for AI Agents
In src/tokenize.rs around lines 105 to 107, the while loop that advances the
index should also check for bracket characters to avoid merging adjacent links
without spaces. Modify the loop condition to include a check that stops the loop
if the current character is an opening or closing bracket, ensuring proper
tokenization of adjacent links.

Owner Author


@coderabbitai new GitHub issue, please

Contributor


🧩 Analysis chain

I'll create a GitHub issue for the bracket detection enhancement in the tokenizer.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --title "Add bracket detection in tokenize_inline to prevent merging adjacent links" \
  --body "## Issue Description

The \`tokenize_inline\` function in \`src/tokenize.rs\` does not properly handle adjacent links without spaces between them. The current while loop condition on line 105-107 only checks for whitespace and backticks, but should also check for bracket characters to prevent merging adjacent markdown links.

## Current Behaviour
Adjacent links like \`[link1](url1)[link2](url2)\` may be incorrectly tokenised as a single text token instead of being recognised as separate link tokens.

## Proposed Solution
Modify the while loop condition to include bracket detection:

\`\`\`rust
while i < chars.len() 
    && !chars[i].is_whitespace() 
    && chars[i] != '\`' 
    && chars[i] != '[' 
    && !(chars[i] == '!' && i + 1 < chars.len() && chars[i + 1] == '[') 
{
    i += 1;
}
\`\`\`

## Context
- **File**: \`src/tokenize.rs\`
- **Lines**: 105-107
- **Function**: \`tokenize_inline\`

## References
- PR: https://github.com/leynos/mdtablefix/pull/147
- Comment: https://github.com/leynos/mdtablefix/pull/147#discussion_r2249003058
- Reported by: @leynos" \
  --assignee leynos

Length of output: 1258


@leynos I've created GitHub issue #175 for the bracket detection enhancement: #175

The issue documents the need to modify the tokenize_inline function's while loop condition to properly handle adjacent markdown links without spaces between them. This will ensure that links like [link1](url1)[link2](url2) are correctly tokenised as separate link tokens rather than being merged into a single text token.

Comment thread src/tokenize.rs Outdated
tokens
}

/// Split the input string into [`Token`]s by analysing whitespace and backtick delimiters.
Contributor


⚠️ Potential issue

Update documentation to match the actual function behavior.

The documentation mentions analyzing "whitespace and backtick delimiters" but this function actually processes entire Markdown documents including fenced code blocks.

Apply this diff:

-/// Split the input string into [`Token`]s by analysing whitespace and backtick delimiters.
+/// Tokenize Markdown text into [`Token`]s, handling fenced code blocks and inline elements.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/// Split the input string into [`Token`]s by analysing whitespace and backtick delimiters.
/// Tokenize Markdown text into [`Token`]s, handling fenced code blocks and inline elements.
🤖 Prompt for AI Agents
In src/tokenize.rs at line 114, update the function documentation to accurately
reflect its behavior. Replace the current description about splitting input by
whitespace and backtick delimiters with a statement that the function processes
entire Markdown documents, including handling fenced code blocks. Ensure the new
documentation clearly describes the function's scope and purpose.

Comment thread src/tokenize.rs Outdated
Comment on lines +150 to +151
out.pop();
out
Contributor

@coderabbitai coderabbitai Bot Aug 1, 2025


⚠️ Potential issue

Guard against empty output when removing final newline.

The unconditional pop() could panic or remove a non-newline token if the input doesn't end with a newline.

Apply this diff:

-    out.pop();
+    if matches!(out.last(), Some(Token::Newline)) {
+        out.pop();
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
out.pop();
out
if matches!(out.last(), Some(Token::Newline)) {
out.pop();
}
out
🤖 Prompt for AI Agents
In src/tokenize.rs around lines 150 to 151, the code unconditionally calls pop()
on the output vector, which can panic if the vector is empty or remove a
non-newline token if the input does not end with a newline. To fix this, add a
check to ensure the output is not empty and that the last token is a newline
before calling pop(). This guards against panics and only removes the final
newline token when appropriate.

Owner Author


@coderabbitai new GitHub issue, please

Contributor


🧩 Analysis chain

@leynos I'll create a GitHub issue for the potential panic in the tokenize module.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --title "Guard against panic in tokenize_markdown when removing final newline" \
  --body "## Issue Description

The \`tokenize_markdown\` function in \`src/tokenize.rs\` unconditionally calls \`out.pop()\` at line 150 to remove the final newline token. This could panic if the output vector is empty or incorrectly remove a non-newline token if the input doesn't end with a newline.

## Location
- File: \`src/tokenize.rs\`
- Lines: 150-151

## Problem
\`\`\`rust
out.pop();
out
\`\`\`

The unconditional \`pop()\` operation assumes:
1. The vector is not empty
2. The last token is always a newline

## Suggested Fix
Add a guard to only remove the final token if it's actually a newline:

\`\`\`rust
if matches!(out.last(), Some(Token::Newline)) {
    out.pop();
}
out
\`\`\`

## Context
- Identified during code review of PR #147
- Related PR: https://github.com/leynos/mdtablefix/pull/147
- Comment: https://github.com/leynos/mdtablefix/pull/147#discussion_r2249003061

## Requested by
@leynos" \
  --assignee leynos

Length of output: 1061


@leynos I've successfully created GitHub issue #176 to track the potential panic in the tokenize_markdown function. The issue includes the problem description, location details, suggested fix, and references to this PR and comment thread.

You can view the issue here: #176

  (\   /)
 ( ._. )
o_(")(")

thoughtful rabbit

Comment thread src/wrap.rs
Comment on lines 23 to 24
static FENCE_RE: std::sync::LazyLock<Regex> =
std::sync::LazyLock::new(|| Regex::new(r"^\s*(```|~~~).*").unwrap());
Contributor

@coderabbitai coderabbitai Bot Aug 1, 2025


⚠️ Potential issue

Remove duplicate fence detection code.

The file has duplicate fence detection logic. Remove the local FENCE_RE and is_fence function, and use the one from the tokenize module instead.

Apply this diff:

-static FENCE_RE: std::sync::LazyLock<Regex> =
-    std::sync::LazyLock::new(|| Regex::new(r"^\s*(```|~~~).*").unwrap());

 // ... later in file ...

-#[doc(hidden)]
-pub fn is_fence(line: &str) -> bool { FENCE_RE.is_match(line) }
+#[doc(hidden)]
+pub fn is_fence(line: &str) -> bool { crate::tokenize::is_fence(line) }

Also applies to: 213-213

🤖 Prompt for AI Agents
In src/wrap.rs around lines 23-24 and line 213, remove the local static FENCE_RE
and the is_fence function that uses it. Instead, update the is_fence function to
call and return the result of crate::tokenize::is_fence(line). This eliminates
duplicate fence detection logic by reusing the implementation from the tokenize
module.

Owner Author


@coderabbitai new GitHub issue, please

Contributor


🧩 Analysis chain

@leynos I'll create a new GitHub issue for the duplicate fence detection code.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --title "Remove duplicate fence detection code in src/wrap.rs" \
  --body "## Issue Description

There is duplicate fence detection logic in \`src/wrap.rs\` that should be consolidated to use the implementation from the \`tokenize\` module.

## Current Problem

The file \`src/wrap.rs\` contains:
- A local \`FENCE_RE\` static regex
- A local \`is_fence\` function that uses this regex

This duplicates the fence detection logic already implemented in the \`tokenize\` module.

## Proposed Solution

Remove the local \`FENCE_RE\` static and update the \`is_fence\` function to call \`crate::tokenize::is_fence(line)\` instead.

## References

- PR: https://github.com/leynos/mdtablefix/pull/147
- Comment: https://github.com/leynos/mdtablefix/pull/147#discussion_r2249003063
- Requested by: @leynos

## Code Location

- File: \`src/wrap.rs\`
- Lines: ~23-24 and ~213" \
  --assignee leynos

Length of output: 927


@leynos I've successfully created GitHub issue #177 for removing the duplicate fence detection code in src/wrap.rs.

Issue URL: #177

The issue has been assigned to you and includes:

  • A detailed description of the duplicate code problem
  • The proposed solution to consolidate the fence detection logic
  • References to this PR and comment thread
  • The specific file and line locations affected

🧬 Code Graph Analysis Results

Relevant snippets from src/wrap/tokenize.rs lines 71-120:

pub(super) fn segment_inline(text: &str) -> Vec<String> {
    let mut tokens = Vec::new();
    let chars: Vec<char> = text.chars().collect();
    let mut i = 0;
    while i < chars.len() {
        let c = chars[i];
        if c.is_whitespace() {
            let start = i;
            i = scan_while(&chars, i, char::is_whitespace);
            tokens.push(collect_range(&chars, start, i));
        } else if c == '`' {
            let start = i;
            let fence_end = scan_while(&chars, i, |ch| ch == '`');
            let fence_len = fence_end - start;
            i = fence_end;

            let mut end = i;
            while end < chars.len() {
                let j = scan_while(&chars, end, |ch| ch == '`');
                if j - end == fence_len {
                    end = j;
                    break;
                }
                end += 1;
            }

            if end >= chars.len() {
                tokens.push(collect_range(&chars, start, start + fence_len));
                i = start + fence_len;
            } else {
                tokens.push(collect_range(&chars, start, end));
                i = end;
            }
        } else if c == '[' || (c == '!' && i + 1 < chars.len() && chars[i + 1] == '[') {
            let (tok, mut new_i) = parse_link_or_image(&chars, i);
            tokens.push(tok);
            let punct_start = new_i;
            new_i = scan_while(&chars, new_i, is_trailing_punctuation);
            if new_i > punct_start {
                tokens.push(collect_range(&chars, punct_start, new_i));
            }
            i = new_i;
        } else {
            let start = i;
            i = scan_while(&chars, i, |ch| !ch.is_whitespace() && ch != '`');
            tokens.push(collect_range(&chars, start, i));
        }
    }
    tokens
}
  • This function segments a markdown inline text into tokens, preserving whitespace, inline code spans (backticks), links/images, and punctuation.
  • Returns a vector of string tokens.
  • Uses helper functions scan_while, collect_range, and parse_link_or_image (not shown).
  • Handles inline code spans by matching balanced backticks.
  • Recognizes links/images starting with [ or ![.
  • Whitespace sequences are grouped as single tokens.

Relevant snippet from src/wrap/tokenize.rs lines 158-192:

pub fn tokenize_markdown(source: &str) -> Vec<Token<'_>> {
    if source.is_empty() {
        return Vec::new();
    }

    let mut tokens = Vec::new();
    let lines: Vec<&str> = source.split('\n').collect();
    let last_idx = lines.len() - 1;
    let mut in_fence = false;

    for (i, line) in lines.iter().enumerate() {
        if super::is_fence(line) {
            tokens.push(Token::Fence(line));
            if i != last_idx {
                tokens.push(Token::Newline);
            }
            in_fence = !in_fence;
            continue;
        }

        if in_fence {
            tokens.push(Token::Fence(line));
            if i != last_idx {
                tokens.push(Token::Newline);
            }
            continue;
        }

        tokenize_inline(line, &mut |tok| tokens.push(tok));
        if i != last_idx {
            tokens.push(Token::Newline);
        }
    }
    tokens
}
  • Tokenizes a markdown source string into a vector of Token enums.
  • Handles fenced code blocks by toggling in_fence state and emitting Token::Fence.
  • For non-fenced lines, calls tokenize_inline to tokenize inline elements.
  • Inserts Token::Newline tokens between lines except after the last line.
  • Uses super::is_fence to detect fenced code block delimiters.

These two snippets are directly relevant because src/wrap.rs publicly re-exports tokenize::Token and tokenize::tokenize_markdown, and internally uses tokenize::segment_inline in the wrap_preserving_code function to split text into tokens for wrapping while preserving inline code spans and links.

No other snippets from the provided files appear directly relevant to understanding or addressing the contents of src/wrap.rs.

@leynos leynos merged commit 9ba5577 into main Aug 1, 2025
1 of 2 checks passed
@leynos leynos deleted the codex/apply-lazy_regex-to-regex-statics-and-fix-typo branch August 1, 2025 23:44
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bfa0e22 and bcad468.

📒 Files selected for processing (3)
  • src/html.rs (1 hunks)
  • src/wrap.rs (4 hunks)
  • tests/common/mod.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.

Files:

  • tests/common/mod.rs
  • src/html.rs
  • src/wrap.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • tests/common/mod.rs
  • src/html.rs
  • src/wrap.rs
🔇 Additional comments (10)
src/html.rs (1)

115-119: Excellent refactoring of pattern matching.

The combined if let with && condition reduces nesting and improves readability whilst maintaining identical logic. This is a clean improvement over the previous nested structure.
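
The combined form referred to is a let-chain guard; a sketch with
invented names, since the surrounding code is not quoted here:

// Before: an `if let` nested inside an `if` (or vice versa).
// After: one guard expressing both conditions.
if let Some(tag) = element.tag_name() && tag.eq_ignore_ascii_case("strong") {
    return true;
}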

src/wrap.rs (9)

11-11: Approve removal of unused import.

Removing the unused Captures import cleans up the code after refactoring away from capture group handling patterns.


26-29: Excellent use of lazy_regex! macro for BULLET_RE.

The lazy_regex! macro with compile-time assertion message is a significant improvement over the manual LazyLock::new pattern. This provides better error messages and compile-time validation.


31-34: Excellent use of lazy_regex! macro for FOOTNOTE_RE.

Consistent application of the lazy_regex! macro maintains the pattern established with BULLET_RE and provides the same benefits of compile-time validation and clear error messages.


36-39: Excellent use of lazy_regex! macro for BLOCKQUOTE_RE.

The pattern is consistently applied across all three regex declarations, creating a uniform and maintainable approach to regex initialization.


166-166: Verify fence detection logic consolidation.

The is_fence helper function encapsulates the regex check, but there's still duplicate fence detection logic as noted in past review comments. This function should call crate::tokenize::is_fence(line) instead of maintaining separate regex patterns.

The GitHub issue #177 has already been created to track this consolidation task.


256-264: Approve simplified fence detection in main loop.

The refactored code uses the is_fence helper function, which improves readability over direct regex matching. This change aligns with the overall simplification goals of the refactor.


303-308: Excellent explicit handling of bullet list prefixes.

The explicit regex matching with expect() calls provides clear error messages and eliminates the generic handler abstraction. This makes the code more readable and debuggable whilst maintaining the same functionality.


310-318: Excellent explicit handling of footnote prefixes.

The pattern matches the bullet handling approach, creating consistency in how different prefix types are processed. The explicit capture group access with descriptive expect() messages improves maintainability.


320-328: Excellent explicit handling of blockquote prefixes.

The final prefix handler completes the pattern established by bullets and footnotes. The repeat_prefix: true parameter correctly maintains blockquote formatting behaviour. This explicit approach is much clearer than the previous generic handler system.

Comment thread tests/common/mod.rs
/// Example:
/// ```
/// let input: Vec<String> = include_lines!("data/bold_header_input.txt");
/// let input: Vec<String> = include_lines!("data/bold_header_input.txt");
Contributor


🧹 Nitpick (assertive)

Remove trailing whitespace from documentation example.

The added trailing space serves no functional purpose and may trigger linting warnings.

-/// let input: Vec<String> = include_lines!("data/bold_header_input.txt"); 
+/// let input: Vec<String> = include_lines!("data/bold_header_input.txt");
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/// let input: Vec<String> = include_lines!("data/bold_header_input.txt");
/// let input: Vec<String> = include_lines!("data/bold_header_input.txt");
🤖 Prompt for AI Agents
In tests/common/mod.rs at line 22, remove the trailing whitespace from the
documentation example line containing
include_lines!("data/bold_header_input.txt"); to prevent linting warnings and
keep the code clean.
