
Refactor tokenize_markdown to iterate lazily #191

Merged
leynos merged 3 commits into main from codex/refactor-tokenize_markdown-for-lazy-iteration on Aug 19, 2025

Conversation

@leynos (Owner) commented Aug 18, 2025

Summary

  • avoid collecting input lines in tokenize_markdown by iterating lazily

closes #63

Testing

  • make fmt
  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_68a3a4ba57c08322979aa9fe8a3af687

Summary by Sourcery

Refactor tokenize_markdown to iterate lazily over input lines using a Peekable iterator, removing the intermediate Vec and index tracking.

Enhancements:

  • Refactor tokenize_markdown to use a Peekable iterator over split lines instead of collecting them into a Vec
  • Use peek() to determine the presence of subsequent lines rather than tracking indices
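The shape of that change can be sketched as follows. This is a hypothetical, minimal tokenizer illustrating only the pattern; the real function in src/wrap/tokenize.rs has more token kinds and fence handling.

```rust
// Minimal sketch of the lazy-iteration pattern; `Token` and `tokenize`
// here are simplified stand-ins, not the project's actual definitions.
#[derive(Debug, PartialEq)]
enum Token<'a> {
    Text(&'a str),
    Newline,
}

fn tokenize(source: &str) -> Vec<Token<'_>> {
    let mut tokens = Vec::new();
    // Iterate lazily instead of collecting into an intermediate Vec<&str>.
    let mut lines = source.split('\n').peekable();
    while let Some(line) = lines.next() {
        tokens.push(Token::Text(line));
        // peek() replaces the old index comparison against a last_idx.
        if lines.peek().is_some() {
            tokens.push(Token::Newline);
        }
    }
    tokens
}

fn main() {
    assert_eq!(
        tokenize("a\nb"),
        vec![Token::Text("a"), Token::Newline, Token::Text("b")]
    );
}
```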

@sourcery-ai bot (Contributor) commented Aug 18, 2025

Reviewer's Guide (collapsed on small PRs)

The PR refactors the tokenize_markdown function to use a peekable iterator over input lines instead of collecting them into a Vec, and updates newline insertion logic to rely on lookahead rather than index comparisons.

Class diagram for the refactored tokenize_markdown function

classDiagram
    class TokenizeMarkdown {
        +tokenize_markdown(source: &str) Vec<Token<'_>>
    }
    class Token {
        <<enum>>
        Fence(line: &str)
        Newline
        // ... other variants
    }
    TokenizeMarkdown --> Token
    class PeekableIterator {
        +next() &str
        +peek() Option<&str>
    }
    TokenizeMarkdown ..> PeekableIterator : uses
    class TokenizeInline {
        +tokenize_inline(line: &str, callback: Fn(Token))
    }
    TokenizeMarkdown ..> TokenizeInline : calls

File-Level Changes

Refactor tokenize_markdown to use lazy, peekable iteration over input lines (src/wrap/tokenize.rs)
  • Replaced the Vec<&str> collection with split('\n').peekable()
  • Converted the indexed for (i, line) loop into while let Some(line) = lines.next()
  • Removed the last_idx variable and index-based loop
Update newline emission logic to use peekable lookahead (src/wrap/tokenize.rs)
  • Replaced i != last_idx checks with lines.peek().is_some() for fence handling
  • Replaced index-based newline checks in inline tokenisation with peek().is_some()

Assessment against linked issues

  • #63: Refactor tokenize_markdown to avoid collecting all input lines into a Vec, and instead use lazy iteration over lines.


@coderabbitai bot (Contributor) commented Aug 18, 2025

Summary by CodeRabbit

  • New Features
    • Processed output now preserves trailing blank lines, ensuring exact retention at the end of documents.
  • Bug Fixes
    • More reliable newline handling, including proper preservation of a final newline.
    • Improved compatibility with Windows line endings (CRLF), producing correct line breaks.
    • Consistent line demarcation across fenced code blocks and inline text.
  • Tests
    • Added tests covering trailing newline preservation and CRLF handling.
    • Expanded integration tests to verify correct retention of trailing blank lines.

Walkthrough

Introduce line-by-line Peekable iteration and a private newline helper in src/wrap/tokenize.rs to change when Newline tokens are emitted; add a public process_text helper in src/textproc.rs to split output while preserving specified trailing blank lines; include tests for newline and trailing-blank behaviour.

Changes

  • Tokenizer refactor & newline handling (src/wrap/tokenize.rs): Replace Vec-based iteration with a Peekable line iterator; add push_newline_if_needed to centralise Newline emission using source.ends_with('\n'); adjust fence and inline flows to call the helper and avoid double-processing; preserve the public Token enum and function signatures.
  • Text processing helper (src/textproc.rs): Add pub fn process_text(out: &str, trailing_blanks: usize) -> Vec<String> to split lines while preserving trailing blanks; refactor process_tokens to call this helper; add tests and documentation.
  • Tests (tests/*, src/*): Add unit/integration tests for trailing newline, CRLF handling, and trailing-blank preservation.
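A hedged sketch of the described process_text helper, with its behaviour inferred from the summary above and the doctest quoted later in this thread; the real implementation in src/textproc.rs may differ in detail.

```rust
// Sketch: split processed output into lines while preserving trailing
// blank lines. Behaviour inferred from the PR summary; illustrative only.
pub fn process_text(out: &str, trailing_blanks: usize) -> Vec<String> {
    if out.is_empty() {
        return Vec::new();
    }
    let mut result: Vec<String> = out.lines().map(String::from).collect();
    // lines() swallows the final empty line, so restore it when the
    // output ended with a newline.
    if out.ends_with('\n') {
        result.push(String::new());
    }
    // Top up to the requested number of trailing blank lines.
    let existing = result.iter().rev().take_while(|l| l.is_empty()).count();
    if existing < trailing_blanks {
        result.extend(std::iter::repeat(String::new()).take(trailing_blanks - existing));
    }
    result
}

fn main() {
    assert_eq!(process_text("a\nb\n", 0), vec!["a", "b", ""]);
    assert_eq!(process_text("a", 2), vec!["a", "", ""]);
}
```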

Sequence Diagram(s)

sequenceDiagram
  participant Caller
  participant Tokenizer
  participant Lines as Peekable<lines()>

  Caller->>Tokenizer: tokenize_markdown(source)
  Tokenizer->>Lines: create peekable iterator
  loop per line
    Lines->>Tokenizer: next() (line)
    alt fence start/end
      Tokenizer->>Tokenizer: push Fence/Code tokens
      Tokenizer->>Lines: peek() to decide newline emission
      Tokenizer->>Tokenizer: push_newline_if_needed()
      Tokenizer->>Tokenizer: continue (skip re-processing)
    else regular or inline
      Tokenizer->>Tokenizer: push Text/Code tokens
      Tokenizer->>Lines: peek()
      Tokenizer->>Tokenizer: push_newline_if_needed()
    end
  end
  Tokenizer-->>Caller: Vec<Token>
sequenceDiagram
  participant Renderer
  participant ProcessText

  Renderer->>Renderer: render tokens -> out (String)
  Renderer->>ProcessText: process_text(&out, trailing_blanks)
  ProcessText->>ProcessText: split lines, preserve trailing newline, append blanks as needed
  ProcessText-->>Renderer: Vec<String> (lines)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Assessment against linked issues

  • Avoid Vec allocation in tokenize_markdown by using lazy iteration (issue #63): changes are in src/wrap/tokenize.rs and modify line iteration and newline emission; they do not remove or refactor the Vec<char> allocation in src/lib.rs referenced by the issue.

Assessment against linked issues: Out-of-scope changes

  • Tokeniser refactor and newline helper (src/wrap/tokenize.rs): issue #63 specifically targets the Vec<char> allocation in src/lib.rs; this change modifies line iteration and newline emission but does not touch that allocation site.
  • New public helper process_text (src/textproc.rs): issue #63 concerns memory allocation in tokenisation; adding and exporting a line-splitting helper is unrelated to that memory-allocation objective.

Poem

Peek where lines will go, and call the bell,
Emit the newline only when there’s more to tell.
Split the text, keep blanks that quietly plead,
Tidy tokens march, and trailing lines succeed.
Ship the change; let parsers read.



@sourcery-ai bot (Contributor) left a comment


Hey there - I've reviewed your changes and found some issues that need to be addressed.

  • Extract the repeated peek-and-push-newline logic into a small helper function to reduce code duplication.
  • Consider using the standard lines() iterator (or split_terminator) instead of split('\n') for more idiomatic handling of line boundaries and trailing newlines.
  • Add a targeted test case for inputs ending with a newline to ensure the refactored function preserves the original trailing-newline behavior.
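The difference behind the second suggestion, split('\n') versus lines(), can be observed directly. This is a standalone illustration, not project code.

```rust
// Why lines() is more idiomatic here: it strips CRLF and drops the final
// empty item, whereas split('\n') keeps '\r' and yields a trailing "".
fn by_split(input: &str) -> Vec<&str> {
    input.split('\n').collect()
}

fn by_lines(input: &str) -> Vec<&str> {
    input.lines().collect()
}

fn main() {
    let input = "a\r\nb\n";
    assert_eq!(by_split(input), vec!["a\r", "b", ""]);
    // lines() loses the trailing-newline information, which is why a
    // refactor that adopts it must track that separately
    // (e.g. via source.ends_with('\n')).
    assert_eq!(by_lines(input), vec!["a", "b"]);
}
```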


@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • Jira integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between cd5dd25 and 9570957.

📒 Files selected for processing (1)
  • src/wrap/tokenize.rs (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums: Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary: Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library: Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.

Files:

  • src/wrap/tokenize.rs

⚙️ CodeRabbit Configuration File

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Where code is only used by specific features, it must be conditionally compiled or a conditional expectation for unused_code applied.

  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • src/wrap/tokenize.rs
🧬 Code Graph Analysis (1)
src/wrap/tokenize.rs (1)
src/wrap.rs (1)
  • is_fence (173-173)
🔇 Additional comments (1)
src/wrap/tokenize.rs (1)

287-287: No-op change.

Skip reviewing this line; it introduces no functional change.

Comment thread src/wrap/tokenize.rs Outdated
@leynos (Owner, Author) commented Aug 19, 2025

@coderabbitai Have the following now been resolved?

  • Extract the repeated peek-and-push-newline logic into a small helper function to reduce code duplication.
  • Consider using the standard lines() iterator (or split_terminator) instead of split('\n') for more idiomatic handling of line boundaries and trailing newlines.
  • Add a targeted test case for inputs ending with a newline to ensure the refactored function preserves the original trailing-newline behavior.

@coderabbitai bot (Contributor) commented Aug 19, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, let's look at the current implementation of tokenize_markdown
echo "=== Current tokenize_markdown implementation ==="
cat src/wrap/tokenize.rs

echo -e "\n=== Checking for usage of split vs lines ==="
# Check if the code uses split('\n') or lines()
rg -n "split\('\\\\n'\)" src/wrap/tokenize.rs
rg -n "\.lines\(\)" src/wrap/tokenize.rs

echo -e "\n=== Looking for repeated peek-and-push patterns ==="
# Look for patterns where newline tokens are pushed after peeking
rg -n -A2 -B2 "peek.*is_some" src/wrap/tokenize.rs
rg -n -A2 -B2 "push.*Newline" src/wrap/tokenize.rs

echo -e "\n=== Checking test files for trailing newline tests ==="
# Find test files and check for tests related to trailing newlines
fd -e rs -x echo "File: {}" \; -x grep -l "test" {} \;
echo -e "\n=== Looking for trailing newline test cases ==="
fd -e rs -x rg -l "trailing.*newline\|newline.*trailing\|ending.*newline\|newline.*ending" {} \;

Length of output: 13539


🏁 Script executed:

#!/bin/bash
# Let's look more specifically at the test files to check for trailing newline tests
echo "=== Examining tokenize_markdown test file ==="
cat tests/wrap/tokenize_markdown.rs

echo -e "\n=== Looking for any newline-related tests in tokenize.rs itself ==="
rg -n -C3 "trailing.*newline\|newline.*trailing\|ends_with.*newline\|tokenize.*newline" src/wrap/tokenize.rs

echo -e "\n=== Checking if there are multiple places where newline tokens are pushed ==="
rg -n -C2 "push_newline_if_needed" src/wrap/tokenize.rs

echo -e "\n=== Looking at the helper function implementation ==="
rg -n -A10 -B2 "fn push_newline_if_needed" src/wrap/tokenize.rs

Length of output: 3686


@leynos Yes, all three issues have been resolved in the current implementation:

✅ 1. Extract repeated peek-and-push-newline logic into helper function

Resolved - A push_newline_if_needed helper function has been created (lines 254-264) that encapsulates the logic for conditionally pushing newline tokens. This helper is called consistently in three places:

  • Line 280: After fence tokens
  • Line 287: After lines inside fenced blocks
  • Line 292: After regular text tokens

The helper eliminates the code duplication and centralizes the newline emission logic.

✅ 2. Use standard lines() iterator instead of split('\n')

Resolved - Line 274 shows the code now uses source.lines().peekable() instead of split('\n'). This provides more idiomatic handling of line boundaries and properly handles different line ending formats (LF, CRLF).

✅ 3. Add targeted test case for inputs ending with newline

Resolved - There are existing test cases that cover trailing newline behavior:

  • tokenize_marks_trailing_newline (lines 299-301): Tests input ending with \n
  • tokenize_handles_crlf (lines 304-309): Tests CRLF line endings

The implementation properly preserves trailing newline behavior using the had_trailing_newline flag (line 273) which tracks whether the original input ended with a newline.

All three improvements have been successfully implemented! 🐰✨

leynos added 3 commits on August 19, 2025 at 20:16
  • Avoid collecting all lines up front to reduce memory usage when processing large Markdown sources.

@leynos force-pushed the codex/refactor-tokenize_markdown-for-lazy-iteration branch from ac25c53 to 96ae005 on August 19, 2025 at 19:26
@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/textproc.rs (1)

110-118: Drop MSRV concern; preserve or optionally simplify repeat_n use

The repository pins a nightly toolchain (nightly-2025-06-26) and defines no explicit MSRV in Cargo.toml, so std::iter::repeat_n is fully supported. Leave the code as is, or apply this optional refactor to reduce iterator plumbing:

• Refactor in src/textproc.rs around lines 112–114:

-    if out_blanks < trailing_blanks {
-        result.extend(std::iter::repeat_n(
-            String::new(),
-            trailing_blanks - out_blanks,
-        ));
-    }
+    if out_blanks < trailing_blanks {
+        let missing = trailing_blanks - out_blanks;
+        result.resize(result.len() + missing, String::new());
+    }
♻️ Duplicate comments (1)
src/wrap/tokenize.rs (1)

277-284: Adopt lines().peekable() with a trailing-newline sentinel — this fixes CRLF handling and silences Clippy

Switching to source.lines() and tracking had_trailing_newline matches the desired lazy iteration goal, handles \r\n correctly, and addresses the manual_split_lines lint.
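A minimal sketch of that sentinel approach (simplified; the real push_newline_if_needed helper also threads the token vector through, and "<NL>" here merely stands in for the Newline token):

```rust
// Sketch: lines() plus a trailing-newline sentinel, so CRLF input and a
// final '\n' are both handled. Illustrative, not the project's code.
fn mark_lines(source: &str) -> Vec<String> {
    let had_trailing_newline = source.ends_with('\n');
    let mut out = Vec::new();
    let mut lines = source.lines().peekable();
    while let Some(line) = lines.next() {
        out.push(line.to_string());
        // Emit a break if another line follows, or if the original input
        // ended with a newline.
        if lines.peek().is_some() || had_trailing_newline {
            out.push("<NL>".to_string());
        }
    }
    out
}

fn main() {
    assert_eq!(mark_lines("foo\n"), vec!["foo", "<NL>"]);
    assert_eq!(mark_lines("foo\r\nbar"), vec!["foo", "<NL>", "bar"]);
}
```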

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • Jira integration is disabled

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9570957 and 96ae005.

📒 Files selected for processing (2)
  • src/textproc.rs (2 hunks)
  • src/wrap/tokenize.rs (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

Files:

  • src/textproc.rs
  • src/wrap/tokenize.rs
🧬 Code Graph Analysis (2)
src/textproc.rs (1)
src/html.rs (1)
  • lines (179-182)
src/wrap/tokenize.rs (1)
src/wrap/fence.rs (1)
  • is_fence (31-38)
🔇 Additional comments (3)
src/wrap/tokenize.rs (1)

285-301: Centralise newline emission — helper usage is clean and correct

Route all paths (fence open/close, in-fence lines, and inline tokenisation) through push_newline_if_needed. This removes duplication and keeps trailing-newline semantics consistent.

src/textproc.rs (2)

85-86: Delegate line splitting to process_text — simplifies process_tokens and keeps concerns separated

This improves readability and makes trailing-blank behaviour explicit in one place.


104-109: Preserve trailing newline by appending an empty line — LGTM

This matches the tokeniser’s semantics and avoids dropping the final empty line.

Comment thread src/textproc.rs
Comment on lines +88 to +99
/// Split processed output into lines while preserving trailing blanks.
///
/// # Examples
///
/// ```rust
/// use mdtablefix::textproc::process_text;
///
/// let lines = process_text("a\nb\n", 0);
/// assert_eq!(lines, vec!["a".to_string(), "b".to_string(), String::new()]);
/// ```
#[must_use]
pub fn process_text(out: &str, trailing_blanks: usize) -> Vec<String> {

🧹 Nitpick (assertive)

Define behaviour for out.is_empty() with non-zero trailing_blanks

Either document the current behaviour (returns an empty Vec regardless of trailing_blanks), or return exactly trailing_blanks empty lines to mirror the process_tokens blanks-only branch. Implement the latter for consistency.

 #[must_use]
 pub fn process_text(out: &str, trailing_blanks: usize) -> Vec<String> {
-    if out.is_empty() {
-        return Vec::new();
-    }
+    if out.is_empty() {
+        return vec![String::new(); trailing_blanks];
+    }

Add a test:

#[test]
fn process_text_empty_out_preserves_requested_blanks() {
    let lines = process_text("", 3);
    assert_eq!(lines, vec![String::new(), String::new(), String::new()]);
}

Comment thread src/wrap/tokenize.rs
Comment on lines +254 to +268
fn push_newline_if_needed<I>(
tokens: &mut Vec<Token<'_>>,
lines: &mut std::iter::Peekable<I>,
had_trailing_newline: bool,
) where
I: Iterator,
{
// Emit a newline token if another line follows or when the
// original input ended with a trailing newline. The peek avoids
// prematurely allocating for the final newline when it isn't
// necessary.
if lines.peek().is_some() || (had_trailing_newline && lines.peek().is_none()) {
tokens.push(Token::Newline);
}
}

🧹 Nitpick (assertive)

Remove the double peek to avoid redundant work and borrow churn

Call peek() once and branch on a local boolean. This is cheaper and makes the borrow scope obvious.

 fn push_newline_if_needed<I>(
     tokens: &mut Vec<Token<'_>>,
     lines: &mut std::iter::Peekable<I>,
     had_trailing_newline: bool,
 ) where
     I: Iterator,
 {
-    // Emit a newline token if another line follows or when the
-    // original input ended with a trailing newline. The peek avoids
-    // prematurely allocating for the final newline when it isn't
-    // necessary.
-    if lines.peek().is_some() || (had_trailing_newline && lines.peek().is_none()) {
-        tokens.push(Token::Newline);
-    }
+    // Emit a newline token if another line follows or when the original input
+    // ended with a trailing newline.
+    let at_end = lines.peek().is_none();
+    if !at_end || (had_trailing_newline && at_end) {
+        tokens.push(Token::Newline);
+    }
 }

Comment thread src/wrap/tokenize.rs
Comment on lines +333 to +346
#[test]
fn tokenize_marks_trailing_newline() {
let tokens = tokenize_markdown("foo\n");
assert_eq!(tokens, vec![Token::Text("foo"), Token::Newline]);
}

#[test]
fn tokenize_handles_crlf() {
let tokens = tokenize_markdown("foo\r\nbar");
assert_eq!(
tokens,
vec![Token::Text("foo"), Token::Newline, Token::Text("bar")]
);
}

🧹 Nitpick (assertive)

Augment tests with more edge cases for trailing blanks and fences

Good additions for trailing newline and CRLF. Add explicit coverage for:

  • Multiple trailing blanks: "a\n\n" should yield Text("a"), Newline, Text(""), Newline
  • Final newline inside a fenced block

Example tests to add:

#[test]
fn tokenize_multiple_trailing_blanks() {
    let tokens = tokenize_markdown("a\n\n");
    assert_eq!(
        tokens,
        vec![Token::Text("a"), Token::Newline, Token::Text(""), Token::Newline]
    );
}

#[test]
fn tokenize_fence_at_eof_without_trailing_newline() {
    let tokens = tokenize_markdown("```\ncode\n```");
    assert_eq!(
        tokens,
        vec![
            Token::Fence("```"),
            Token::Newline,
            Token::Fence("code"),
            Token::Newline,
            Token::Fence("```"),
        ]
    );
}

@leynos leynos merged commit 449a998 into main Aug 19, 2025
1 of 2 checks passed
@leynos leynos deleted the codex/refactor-tokenize_markdown-for-lazy-iteration branch August 19, 2025 19:54


Development

Successfully merging this pull request may close these issues.

Memory allocation concern in tokenize_markdown function

1 participant