Skip to content

Add token processing helper#140

Closed
leynos wants to merge 4 commits intomainfrom
codex/create-process_tokens-function-and-refactor
Closed

Add token processing helper#140
leynos wants to merge 4 commits intomainfrom
codex/create-process_tokens-function-and-refactor

Conversation

@leynos
Copy link
Copy Markdown
Owner

@leynos leynos commented Jul 29, 2025

Summary

  • share token-processing logic in process_tokens
  • use process_tokens in ellipsis and footnote modules
  • cover the new helper with a unit test

Testing

  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_6888f6ff67208322b8d20d3c1e042e70

Summary by Sourcery

Centralize markdown token processing by introducing a reusable helper and refactor existing modules to leverage it

Enhancements:

  • Extract common tokenization and output-assembly logic into a new process_tokens helper in a textproc module
  • Refactor ellipsis and footnotes modules to replace manual token loops with process_tokens

Tests:

  • Add unit test for process_tokens to verify identity transformation

@sourcery-ai
Copy link
Copy Markdown
Contributor

sourcery-ai Bot commented Jul 29, 2025

Reviewer's Guide

This PR introduces a new process_tokens helper to encapsulate common tokenize-and-transform logic, and refactors the ellipsis and footnote modules to use this helper, removing manual tokenization boilerplate and adding unit coverage for the new function.

Class diagram for the new process_tokens helper and its usage

classDiagram
    class textproc {
        +process_tokens(lines: &[String], f: FnMut(Token, &mut String)) Vec<String>
    }
    class wrap {
        <<module>>
        +Token
        +tokenize_markdown(input: &str) -> Iterator<Token>
    }
    class ellipsis {
        +replace_ellipsis(lines: &[String]) Vec<String>
    }
    class footnotes {
        +convert_footnotes(lines: &[String]) Vec<String>
    }
    textproc ..> wrap : uses Token, tokenize_markdown
    ellipsis ..> textproc : uses process_tokens
    footnotes ..> textproc : uses process_tokens
Loading

Class diagram for Token enum and its usage in process_tokens

classDiagram
    class Token {
        <<enum>>
        Text(&str)
        Code(&str)
        Fence(&str)
        Newline
    }
    textproc ..> Token : processes
    ellipsis ..> Token : matches
    footnotes ..> Token : matches
Loading

File-Level Changes

Change Details Files
Extract token processing logic into shared helper
  • Created textproc module with process_tokens function
  • Added doc comments and unit test for process_tokens
  • Registered textproc module in lib.rs
src/textproc.rs
src/lib.rs
Refactor ellipsis replacement to use shared helper
  • Replaced manual token loop with process_tokens call
  • Imported process_tokens and removed tokenize_markdown import
  • Dropped manual join/split and loop boilerplate
src/ellipsis.rs
Refactor footnote conversion to use shared helper
  • Swapped manual token loop for process_tokens invocation
  • Imported process_tokens and removed tokenize_markdown import
  • Eliminated manual join/split and loop boilerplate
src/footnotes.rs

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Jul 29, 2025

Summary by CodeRabbit

  • Documentation

    • Updated architecture documentation to reflect the introduction of a shared text processing module and clarified module relationships.
  • Refactor

    • Streamlined text normalisation and footnote conversion by using a new shared token-processing utility, improving maintainability and consistency.
  • New Features

    • Introduced a new internal module for token-level text transformations, preserving formatting and blank lines.
  • Chores

    • Removed outdated tokenisation logic no longer in use.

Walkthrough

Update the architecture documentation to include a new textproc module that provides a shared process_tokens function. Refactor both ellipsis and footnotes modules to use this utility for token-level transformations, replacing manual token handling. Introduce the new textproc module, update module declarations, and ensure preservation of trailing blank lines.

Changes

Cohort / File(s) Change Summary
Architecture Documentation
docs/architecture.md
Updated the module relationship diagram and accompanying text to introduce the textproc module, detail its responsibilities, and clarify its interactions with ellipsis, footnotes, process, and io modules.
Ellipsis Refactor
src/ellipsis.rs
Refactored replace_ellipsis to use process_tokens from the new textproc module, removing manual tokenisation and output assembly. Adjusted imports accordingly; preserved existing logic for ellipsis replacement.
Footnotes Refactor
src/footnotes.rs
Refactored convert_footnotes to use process_tokens for token processing, replacing manual string joining and token iteration. Updated imports and streamlined logic, maintaining original footnote conversion behaviour.
Module Declaration
src/lib.rs
Added mod textproc; to the internal module declarations. No other changes made.
New Shared Utility
src/textproc.rs
Introduced the textproc module with the process_tokens function for shared token-level transformations. Ensured preservation of trailing blank lines. Added tests for identity transformation, empty input, and blank line preservation.

Sequence Diagram(s)

sequenceDiagram
    participant Caller as ellipsis/footnotes
    participant TextProc as textproc::process_tokens
    participant Tokenizer as wrap::tokenize_markdown

    Caller->>TextProc: process_tokens(lines, closure)
    TextProc->>Tokenizer: tokenize_markdown(joined_lines)
    Tokenizer-->>TextProc: Tokens
    loop For each token
        TextProc->>TextProc: Apply closure to token, append to output buffer
    end
    TextProc-->>Caller: Output lines (with trailing blanks preserved)
Loading

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~18 minutes

Possibly related PRs

Poem

In code’s embrace, a helper appears,
textproc unites what once brought tears.
Ellipsis and footnotes now dance in line,
Tokens transformed, their logic refined.
Trailing blanks kept, the tests all pass—
Raise a toast to shared code at last!
🦾✨

Note

⚡️ Unit Test Generation is now available in beta!

Learn more here, or try it out under "Finishing Touches" below.

✨ Finishing Touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch codex/create-process_tokens-function-and-refactor

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Copy link
Copy Markdown
Contributor

@sourcery-ai sourcery-ai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Hey @leynos - I've reviewed your changes - here's some feedback:

  • Add a unit test for the empty input case in process_tokens to confirm it returns an empty vector.
  • Be mindful that joining lines and splitting on '\n' may change trailing blank lines—consider adding tests or handling for trailing newlines to preserve original line structure.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Add a unit test for the empty input case in process_tokens to confirm it returns an empty vector.
- Be mindful that joining lines and splitting on '\n' may change trailing blank lines—consider adding tests or handling for trailing newlines to preserve original line structure.

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 35b50b8 and 25bba64.

📒 Files selected for processing (5)
  • docs/architecture.md (2 hunks)
  • src/ellipsis.rs (1 hunks)
  • src/footnotes.rs (2 hunks)
  • src/lib.rs (1 hunks)
  • src/textproc.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.

Files:

  • src/lib.rs
  • src/textproc.rs
  • src/footnotes.rs
  • src/ellipsis.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • src/lib.rs
  • src/textproc.rs
  • src/footnotes.rs
  • src/ellipsis.rs
docs/**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

docs/**/*.md: Reference: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Update: When new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve, proactively update the relevant file(s) in the docs/ directory to reflect the latest state. Ensure the documentation remains accurate and current.

Files:

  • docs/architecture.md
**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.md: Documentation must use en-GB-oxendict spelling and grammar. (EXCEPTION: the naming of the "LICENSE" file, which is to be left unchanged for community consistency.)
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Mermaid diagrams in Markdown files by running make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks must be wrapped at 120 columns.
Tables and headings must not be wrapped.
Use dashes (-) for list bullets.
Use GitHub-flavoured Markdown footnotes ([^1]) for references and footnotes.

Files:

  • docs/architecture.md

⚙️ CodeRabbit Configuration File

**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-GB-oxendict (-ize / -our) spelling and grammar
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.
  • Documents must start with a level 1 heading
  • Headings must correctly increase or decrease by no more than one level at a time
  • Use GitHub-flavoured Markdown style for footnotes and endnotes.
  • Numbered footnotes must be numbered by order of appearance in the document.

Files:

  • docs/architecture.md
🧠 Learnings (1)
src/footnotes.rs (1)

Learnt from: CR
PR: leynos/mdtablefix#0
File: docs/architecture.md:0-0
Timestamp: 2025-07-29T12:45:00.466Z
Learning: Applies to docs/src/footnotes.rs : The convert_footnotes function should convert bare numeric references into GitHub-flavoured Markdown footnotes, ignoring numbers inside inline code or parentheses, and only process the final contiguous list of numeric references.

🧬 Code Graph Analysis (2)
src/textproc.rs (1)
src/wrap.rs (1)
  • tokenize_markdown (140-177)
src/ellipsis.rs (1)
src/textproc.rs (1)
  • process_tokens (39-58)
🔇 Additional comments (12)
src/textproc.rs (4)

1-7: Excellent module documentation follows Rust guidelines.

The module-level documentation clearly explains the purpose and utility, adhering to the coding guideline requiring every module to begin with a //! comment. The explanation of how the helper works (join, tokenise, transform, split) is clear and helpful.


8-8: Import statement is correct and properly scoped.

The import correctly brings in the required Token type and tokenize_markdown function from the wrap module.


10-42: Comprehensive function documentation with useful example.

The Rustdoc comment thoroughly documents the function's behaviour, parameters, and includes a working example. The #[must_use] attribute is appropriately applied since ignoring the return value would be meaningless.


39-58: Verify trailing blank line preservation
Run the generated test stub in test_trailing_blanks.rs to confirm that process_tokens preserves trailing blanks:

  • Compile and execute the test:
    rustc test_trailing_blanks.rs && ./test_trailing_blanks
  • Ensure input.len() == result.len() and that the count of trailing empty lines in the result matches the input.
    Fix the blank-line handling in src/textproc.rs:39-58 if the assertions fail.
src/lib.rs (1)

25-25: Module declaration is correctly positioned and scoped.

The mod textproc; declaration is properly placed in alphabetical order and correctly marked as private for internal use, consistent with the other internal modules like html and reflow.

docs/architecture.md (3)

228-231: Module diagram correctly represents the new textproc module.

The addition of the textproc class with its process_tokens() function accurately represents the new module's interface.


256-256: Dependency arrows correctly show textproc usage.

The dotted arrows from ellipsis and footnotes to textproc accurately represent the "uses process_tokens" relationships established by the refactoring.

Also applies to: 263-263


268-272: Text description accurately explains module responsibilities.

The updated description clearly delineates the responsibilities: ellipsis for text normalisation, footnotes for bare reference conversion, and textproc for shared token-processing helpers.

src/footnotes.rs (2)

18-18: Import correctly updated for the refactoring.

The import statement properly replaces the direct use of tokenize_markdown with process_tokens from the new textproc module, maintaining access to the Token type.


96-108: Refactoring successfully simplifies token processing logic.

The conversion to process_tokens eliminates the manual joining, tokenisation, iteration, and splitting while preserving the exact same token handling logic. The inline footnote conversion via convert_inline(t) and the unchanged token matching patterns ensure behavioural equivalence.

src/ellipsis.rs (2)

10-10: Import correctly updated for process_tokens usage.

The import statement properly brings in process_tokens from the textproc module while maintaining access to the Token type needed for pattern matching.


18-36: Refactoring maintains identical ellipsis replacement behaviour.

The conversion to process_tokens preserves the exact regex replacement logic for Token::Text and unchanged handling of code spans, fenced blocks, and newlines. The streamlined approach eliminates manual string manipulation while maintaining behavioural equivalence.

Comment thread src/textproc.rs
@leynos
Copy link
Copy Markdown
Owner Author

leynos commented Jul 29, 2025

@coderabbitai review

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Jul 29, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 25bba64 and 17ee047.

📒 Files selected for processing (1)
  • src/textproc.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • src/textproc.rs
🧬 Code Graph Analysis (1)
src/textproc.rs (1)
src/wrap.rs (1)
  • tokenize_markdown (140-177)
🔇 Additional comments (2)
src/textproc.rs (2)

55-59: Handle the case where the transformation intentionally appends additional trailing blanks.

The current logic only pads when out_blanks < trailing_blanks. If the closure adds more trailing blank lines than the input had, leave them as-is to respect the caller’s intent. Document this behaviour explicitly in the doc comment.

Confirm that this is the desired contract; otherwise adjust the loop to truncate when out_blanks > trailing_blanks.


38-43: Move #[must_use] directly above the function signature.

Place the attribute immediately before fn process_tokens (after the doc comment) to satisfy the style rule “function attributes after doc comments”.

-#[must_use]
-pub(crate) fn process_tokens<F>(lines: &[String], mut f: F) -> Vec<String>
+#[must_use]
+pub(crate) fn process_tokens<F>(lines: &[String], mut f: F) -> Vec<String>

Likely an incorrect or invalid review comment.

Comment thread src/textproc.rs Outdated
Comment thread src/textproc.rs
Comment thread src/textproc.rs
@leynos
Copy link
Copy Markdown
Owner Author

leynos commented Jul 30, 2025

@coderabbitai Have these now been resolved?

  • Add a unit test for the empty input case in process_tokens to confirm it returns an empty vector.
  • Be mindful that joining lines and splitting on '\n' may change trailing blank lines—consider adding tests or handling for trailing newlines to preserve original line structure.

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Jul 30, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@leynos Yes, both of these issues have been resolved in the current implementation:

✅ Empty input test case

The test_empty_input test has been added (lines 65-69):

#[test]
fn test_empty_input() {
    let input: Vec<String> = vec![];
    let result = process_tokens(&input, |_token, _output| {});
    assert_eq!(result, Vec::<String>::new());
}

✅ Trailing blank lines handling

The process_tokens function includes specific logic to preserve trailing blank lines (lines 17-28):

// Count trailing blank lines in input
let trailing_blanks_input = lines.iter().rev().take_while(|line| line.is_empty()).count();

// ... processing logic ...

// Preserve trailing blank lines from input
let trailing_blanks_output = result.iter().rev().take_while(|line| line.is_empty()).count();
if trailing_blanks_input > trailing_blanks_output {
    for _ in 0..(trailing_blanks_input - trailing_blanks_output) {
        result.push(String::new());
    }
}

Additionally, there's a dedicated test test_trailing_blank_lines (lines 81-90) that verifies this behavior works correctly.

Both concerns have been thoroughly addressed with proper implementation and test coverage.


🏁 Script executed:

#!/bin/bash
# Examine the textproc.rs file to check the implementation and tests
cat src/textproc.rs

Length of output: 4907


@leynos Yes, both issues have been thoroughly resolved:

✅ Empty input test case

The test empty_input_returns_empty_vector (lines 99-103) specifically tests this:

#[test]
fn empty_input_returns_empty_vector() {
    let lines: Vec<String> = Vec::new();
    let out = process_tokens(&lines, |_tok, _out| unreachable!());
    assert!(out.is_empty());
}

✅ Trailing blank lines handling

The implementation includes comprehensive logic to preserve trailing blank lines:

  1. Early empty check (line 42): Returns empty vector for empty input
  2. All-blanks handling (lines 44-47): Preserves structure when all lines are blank
  3. Trailing blank preservation (lines 82-85): Compares input vs output trailing blanks and adds empty strings as needed

Additionally, there are two dedicated tests for trailing blank line scenarios:

  • preserves_trailing_blank_lines: Tests mixed content with trailing blanks
  • blanks_only_are_preserved: Tests the edge case of all-blank input

Both concerns have been comprehensively addressed with robust implementation and thorough test coverage.

@leynos leynos force-pushed the codex/create-process_tokens-function-and-refactor branch from 6e3357a to 191360c Compare July 30, 2025 18:46
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 4

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 17ee047 and 191360c.

📒 Files selected for processing (6)
  • docs/architecture.md (2 hunks)
  • src/ellipsis.rs (1 hunks)
  • src/footnotes.rs (2 hunks)
  • src/lib.rs (1 hunks)
  • src/textproc.rs (1 hunks)
  • src/wrap.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.

Files:

  • src/lib.rs
  • src/footnotes.rs
  • src/wrap.rs
  • src/textproc.rs
  • src/ellipsis.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • src/lib.rs
  • src/footnotes.rs
  • src/wrap.rs
  • src/textproc.rs
  • src/ellipsis.rs
docs/**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

docs/**/*.md: Reference: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Update: When new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve, proactively update the relevant file(s) in the docs/ directory to reflect the latest state. Ensure the documentation remains accurate and current.

Files:

  • docs/architecture.md
**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.md: Documentation must use en-GB-oxendict spelling and grammar. (EXCEPTION: the naming of the "LICENSE" file, which is to be left unchanged for community consistency.)
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Mermaid diagrams in Markdown files by running make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks must be wrapped at 120 columns.
Tables and headings must not be wrapped.
Use dashes (-) for list bullets.
Use GitHub-flavoured Markdown footnotes ([^1]) for references and footnotes.

Files:

  • docs/architecture.md

⚙️ CodeRabbit Configuration File

**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-GB-oxendict (-ize / -our) spelling and grammar
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.
  • Documents must start with a level 1 heading
  • Headings must correctly increase or decrease by no more than one level at a time
  • Use GitHub-flavoured Markdown style for footnotes and endnotes.
  • Numbered footnotes must be numbered by order of appearance in the document.

Files:

  • docs/architecture.md
🧠 Learnings (1)
src/footnotes.rs (1)

Learnt from: CR
PR: leynos/mdtablefix#0
File: docs/architecture.md:0-0
Timestamp: 2025-07-29T12:45:00.466Z
Learning: Applies to docs/src/footnotes.rs : The convert_footnotes function should convert bare numeric references into GitHub-flavoured Markdown footnotes, ignoring numbers inside inline code or parentheses, and only process the final contiguous list of numeric references.

🧬 Code Graph Analysis (2)
src/footnotes.rs (1)
src/textproc.rs (1)
  • process_tokens (40-105)
src/textproc.rs (1)
src/wrap.rs (1)
  • is_fence (266-266)
🔇 Additional comments (11)
src/textproc.rs (4)

1-8: Module documentation follows Rust conventions well.

The module-level documentation provides a clear summary and explains the module's purpose, utility, and key behaviour like trailing blank line preservation.


9-9: Import is appropriate and follows crate structure.

Using the wrap module's Token enum and is_fence function aligns with the existing architecture.


11-46: Function documentation is comprehensive and includes examples.

The documentation clearly explains the tokenisation behaviour, parameter expectations, and return value. The example demonstrates proper usage patterns.


107-163: Test coverage addresses key scenarios effectively.

The tests verify identity transformation, empty input handling, content removal, and trailing blank line preservation. The test suite appears comprehensive based on the past review comments indicating edge cases were already addressed.

src/lib.rs (1)

32-32: Module declaration follows established patterns.

The textproc module is correctly declared as internal alongside other private modules like html and reflow. This maintains the clean separation between public and internal APIs.

docs/architecture.md (3)

228-231: Architecture diagram accurately reflects the new module structure.

The addition of the textproc module with its process_tokens() function properly documents the new shared utility.


255-255: Dependency relationships correctly updated.

The diagram now shows ellipsis and footnotes depending on textproc instead of directly on wrap for tokenisation, which accurately reflects the refactoring.

Also applies to: 262-262


267-271: Module descriptions align with implementation.

The textual description correctly identifies textproc as containing shared token-processing helpers used by both ellipsis and footnotes modules.

src/wrap.rs (1)

74-74: Token enum documentation updated appropriately.

The documentation comment correctly reflects that tokens are now emitted by "token-processing helpers" rather than the specific tokenize_markdown function that was removed.

src/footnotes.rs (2)

21-21: Import updated correctly to use new textproc module.

The change from tokenize_markdown to process_tokens and adding textproc:: import aligns with the architectural refactoring.


99-108: Token processing refactoring preserves functionality effectively.

The refactored code using process_tokens maintains the same token handling logic while eliminating manual line joining/splitting. The closure properly handles all token types (Text, Code, Fence, Newline) with equivalent behaviour to the previous implementation.

Comment thread src/ellipsis.rs
use regex::Regex;

use crate::wrap::{Token, tokenize_markdown};
use crate::{textproc::process_tokens, wrap::Token};
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick (assertive)

Reduce coupling to wrap by re-exporting Token via textproc.

Importing Token directly from wrap keeps ellipsis tied to an implementation detail that the new textproc helper is trying to abstract away. Re-export Token from the textproc module (e.g. pub use crate::wrap::Token;) and import it here via crate::textproc::Token instead. This makes future refactors (or a replacement tokenizer) localised to textproc.

🤖 Prompt for AI Agents
In src/ellipsis.rs at line 12, the import statement directly imports Token from
the wrap module, creating tight coupling. To fix this, modify the textproc
module to re-export Token by adding 'pub use crate::wrap::Token;' there, then
change the import in ellipsis.rs to import Token from crate::textproc instead of
crate::wrap. This reduces coupling and localizes future tokenizer changes to
textproc.

Comment thread src/ellipsis.rs
Comment on lines +19 to +27
process_tokens(lines, |token, out| match token {
Token::Text(t) => {
let replaced = DOT_RE.replace_all(t, |caps: &regex::Captures<'_>| {
let len = caps[0].len();
let ellipses = "…".repeat(len / 3);
let leftover = ".".repeat(len % 3);
format!("{ellipses}{leftover}")
});
out.push_str(&replaced);
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick (assertive)

Avoid unnecessary allocation when no replacement is needed.

Regex::replace_all always allocates, even if the input contains no match. For the common case where a Token::Text chunk has no "...", exit early to spare an allocation and copy:

 Token::Text(t) => {
-    let replaced = DOT_RE.replace_all(t, |caps: &regex::Captures<'_>| {
+    if !DOT_RE.is_match(t) {
+        out.push_str(t);
+        return;
+    }
+
+    let replaced = DOT_RE.replace_all(t, |caps: &regex::Captures<'_>| {
         let len = caps[0].len();
         let ellipses = "…".repeat(len / 3);
         let leftover = ".".repeat(len % 3);
         format!("{ellipses}{leftover}")
     });
     out.push_str(&replaced);
 }

The early return keeps the hot path allocation-free and silences a forthcoming clippy replaced_with_default lint.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
process_tokens(lines, |token, out| match token {
Token::Text(t) => {
let replaced = DOT_RE.replace_all(t, |caps: &regex::Captures<'_>| {
let len = caps[0].len();
let ellipses = "…".repeat(len / 3);
let leftover = ".".repeat(len % 3);
format!("{ellipses}{leftover}")
});
out.push_str(&replaced);
process_tokens(lines, |token, out| match token {
Token::Text(t) => {
if !DOT_RE.is_match(t) {
out.push_str(t);
return;
}
let replaced = DOT_RE.replace_all(t, |caps: &regex::Captures<'_>| {
let len = caps[0].len();
let ellipses = "…".repeat(len / 3);
let leftover = ".".repeat(len % 3);
format!("{ellipses}{leftover}")
});
out.push_str(&replaced);
🤖 Prompt for AI Agents
In src/ellipsis.rs around lines 19 to 27, the code uses Regex::replace_all which
always allocates even if no matches are found. To avoid unnecessary allocation,
first check if the input text contains the pattern "..." using a quick contains
or is_match call. If no match is found, push the original text directly to the
output and return early. Only call replace_all if a match is present to minimize
allocations and improve performance.

Comment thread src/ellipsis.rs
Comment on lines +30 to +35
out.push('`');
out.push_str(c);
out.push('`');
}
Token::Fence(f) => out.push_str(f),
Token::Newline => out.push('\n'),
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

Push a newline after fences to preserve line structure.

Token::Fence(f) writes the raw fence text but omits the trailing newline that was present in the original lines slice. This breaks round-tripping for inputs where the fence is on its own line ("```"). Append '\n' to align with the handling in wrap::tokenize_markdown:

-        Token::Fence(f) => out.push_str(f),
+        Token::Fence(f) => {
+            out.push_str(f);
+            out.push('\n');
+        },

Verify the existing tests still pass and add one that checks for exact line preservation around fenced blocks.

🤖 Prompt for AI Agents
In src/ellipsis.rs around lines 30 to 35, the Token::Fence(f) arm currently
pushes the fence string without a trailing newline, which breaks line structure
preservation. Modify this arm to append a newline character after pushing the
fence string to maintain the original line breaks. After making this change, run
existing tests to ensure no regressions and add a new test case that verifies
exact line preservation for inputs with fenced code blocks on their own lines.

Comment thread src/textproc.rs
Comment on lines +47 to +105

let trailing_blanks = lines.iter().rev().take_while(|l| l.is_empty()).count();
if trailing_blanks == lines.len() {
return vec![String::new(); lines.len()];
}

let mut out = String::new();
let mut in_fence = false;
let last_idx = lines.len() - 1;
for (i, line) in lines.iter().enumerate() {
let trimmed = line.as_str();
if is_fence(trimmed) {
f(Token::Fence(trimmed), &mut out);
if i < last_idx {
f(Token::Newline, &mut out);
}
in_fence = !in_fence;
continue;
}
if in_fence {
f(Token::Fence(trimmed), &mut out);
if i < last_idx {
f(Token::Newline, &mut out);
}
continue;
}
let mut rest = trimmed;
while let Some(pos) = rest.find('`') {
if pos > 0 {
f(Token::Text(&rest[..pos]), &mut out);
}
if let Some(end) = rest[pos + 1..].find('`') {
f(Token::Code(&rest[pos + 1..pos + 1 + end]), &mut out);
rest = &rest[pos + end + 2..];
} else {
f(Token::Text(&rest[pos..]), &mut out);
rest = "";
break;
}
}
if !rest.is_empty() {
f(Token::Text(rest), &mut out);
}
if i < last_idx {
f(Token::Newline, &mut out);
}
}

if out.is_empty() {
return Vec::new();
}

let mut result: Vec<String> = out.split('\n').map(str::to_string).collect();
let out_blanks = result.iter().rev().take_while(|l| l.is_empty()).count();
for _ in out_blanks..trailing_blanks {
result.push(String::new());
}
result
}
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Verification agent

🧩 Analysis chain

Token processing logic is sound but verify fence state handling.

The implementation correctly handles empty input, preserves trailing blank lines, and processes tokens appropriately. However, verify that the fence state tracking correctly handles nested or malformed fence sequences.


🏁 Script executed:

#!/bin/bash
# Description: Search for fence-related logic in tests to ensure comprehensive coverage
# Expected: Find tests that verify fence state handling edge cases

echo "Searching for fence-related test cases:"
rg -A 10 -B 2 "fence" src/textproc.rs

echo -e "\nSearching for fence validation logic in wrap module:"
rg -A 5 -B 2 "is_fence" src/wrap.rs

Length of output: 1703


Add fence state handling tests

No existing tests verify transitions of in_fence or malformed fence sequences. Add unit tests covering:

  • Opening and closing code fences ( is_fence toggle)
  • Nested backticks inside a fence (should remain in_fence)
  • Malformed fences (unmatched backticks, language identifiers)

Target locations:

  • src/textproc.rs: process_tokens loop around lines 47–105
  • src/wrap.rs: is_fence regex in FENCE_RE

Write tests in tests/textproc.rs (or equivalent) to confirm correct token emission and fence state tracking.

🤖 Prompt for AI Agents
In src/textproc.rs between lines 47 and 105, add unit tests in tests/textproc.rs
to verify the fence state handling in the process_tokens function. Write tests
that check the toggling of in_fence when encountering opening and closing fences
using is_fence, ensure nested backticks inside fences do not toggle the fence
state, and handle malformed fences such as unmatched backticks or fences with
language identifiers. Also, include tests that confirm the correct tokens are
emitted during these scenarios to validate fence state tracking and token
processing.

@leynos
Copy link
Copy Markdown
Owner Author

leynos commented Jul 30, 2025

Superseded by #150

@leynos leynos closed this Jul 30, 2025
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant