
Simplify token processing#161

Merged
leynos merged 4 commits into main from codex/refactor-process_tokens-to-use-tokenizer
Jul 31, 2025

Conversation


@leynos leynos commented Jul 31, 2025

Summary

  • provide a reusable tokenizer via wrap::tokenize_markdown
  • refactor process_tokens to use that tokenizer (sketched below)
  • update architecture notes
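
As a rough sketch of the approach (simplified signatures; the real
wrap::tokenize_markdown in this PR also emits inline-code tokens):

```rust
/// Minimal illustration of a reusable, fence-aware tokenizer.
#[derive(Debug)]
enum Token<'a> {
    Text(&'a str),
    Fence(&'a str),
    Newline,
}

fn tokenize_markdown(source: &str) -> Vec<Token<'_>> {
    let mut tokens = Vec::new();
    let mut in_fence = false;
    for line in source.lines() {
        let is_fence = line.trim_start().starts_with("```");
        if is_fence || in_fence {
            // Everything inside a fence is passed through untouched.
            tokens.push(Token::Fence(line));
            if is_fence {
                in_fence = !in_fence;
            }
        } else {
            tokens.push(Token::Text(line));
        }
        tokens.push(Token::Newline);
    }
    tokens
}

fn main() {
    for tok in tokenize_markdown("prose\n```\ncode\n```\n") {
        println!("{tok:?}");
    }
}
```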

Testing

  • make fmt
  • make lint
  • make test
  • make markdownlint
  • make nixie (fails: too many arguments)

https://chatgpt.com/codex/tasks/task_e_688bb7f516308322b96f22d672c6d6b7

Summary by Sourcery

Introduce a reusable Markdown tokenizer via tokenize_markdown, refactor the text-processing pipeline to use the new function, and update the architecture notes accordingly.

New Features:

  • Add public tokenize_markdown function for reusable Markdown tokenization

Enhancements:

  • Re-export tokenize_markdown alongside Token in the wrap module
  • Refactor text processing in textproc.rs to use tokenize_markdown and remove inline/fence parsing logic
  • Simplify trailing blank line handling using repeat_n
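
A minimal sketch of the repeat_n idiom (values are hypothetical; the real code
derives the counts from the processed output):

```rust
use std::iter::repeat_n;

fn main() {
    let mut result: Vec<String> = vec!["wrapped line".to_owned()];
    let trailing_blanks = 2;

    // Extend with empty strings in one call instead of a manual push loop.
    result.extend(repeat_n(String::new(), trailing_blanks));
    assert_eq!(result.len(), 3);
}
```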

Documentation:

  • Update architecture documentation to describe the new tokenization approach


sourcery-ai Bot commented Jul 31, 2025

Reviewer's Guide

This PR centralizes Markdown tokenization by introducing a reusable tokenize_markdown function in wrap::tokenize, refactors the existing process_tokens implementation to use this tokenizer, and updates related exports and documentation to reflect the new architecture.

Class diagram for the new reusable tokenizer and refactored process_tokens

classDiagram
    class Token {
        <<enum>>
        +Text(&str)
        +Code(&str)
        +Fence(&str)
        +Newline
    }
    class tokenize_markdown {
        +tokenize_markdown(source: &str) Token[]
    }
    class process_tokens {
        +process_tokens(lines: &[String], f: FnMut(Token, &mut String)) Vec<String>
    }
    Token <.. tokenize_markdown : emits
    tokenize_markdown <.. process_tokens : used by

Flow diagram for centralized markdown tokenization

flowchart TD
    Input[Markdown Source]
    Tokenizer[tokenize_markdown]
    Processor[process_tokens]
    Output[Processed Lines]
    Input --> Tokenizer --> Processor --> Output
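
A usage sketch based on the signatures in the class diagram (the closure type
and token variants are taken from the diagram, so details may differ from the
actual code):

```rust
// Hypothetical caller: upper-case prose while leaving code intact.
fn shout_prose(lines: &[String]) -> Vec<String> {
    process_tokens(lines, |token, out| match token {
        Token::Text(t) => out.push_str(&t.to_uppercase()),
        Token::Code(c) => {
            out.push('`');
            out.push_str(c);
            out.push('`');
        }
        Token::Fence(f) => out.push_str(f),
        Token::Newline => out.push('\n'),
    })
}
```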

File-Level Changes

Extracted inline tokenization and created a general-purpose Markdown tokenizer (src/wrap/tokenize.rs):

  • Moved tokenize_inline from textproc.rs into wrap/tokenize.rs
  • Introduced tokenize_markdown to emit a full stream of Token values
  • Handled code fences, inline code, and newlines uniformly in the new tokenizer

Refactored process_tokens to use the new tokenizer (src/textproc.rs):

  • Replaced manual line-by-line handle_line logic with a loop over tokenize_markdown output
  • Simplified output assembly by splitting on \n and extending trailing blank lines using repeat_n

Updated module exports to expose the new tokenizer (src/wrap.rs, src/textproc.rs):

  • Adjusted wrap.rs to re-export tokenize_markdown alongside Token
  • Replaced the prior pub use crate::wrap::Token import in textproc.rs with pub use crate::wrap::{Token, tokenize_markdown}

Revised architecture documentation to describe the new tokenization flow (docs/architecture.md):

  • Noted that wrap::tokenize_markdown replaces the inline FSM in process_tokens
  • Clarified tokenization responsibilities in docs/architecture.md



coderabbitai Bot commented Jul 31, 2025

Warning

Rate limit exceeded

@leynos has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 0 minutes and 53 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between 31c0f68 and 55e2335.

📒 Files selected for processing (1)
  • tests/wrap/tokenize_markdown.rs (1 hunks)

Summary by CodeRabbit

  • New Features
    • Introduced a new Markdown tokenisation function that accurately handles fenced code blocks and inline code spans.
  • Bug Fixes
    • Improved handling of unclosed and malformed code fences during Markdown tokenisation.
  • Documentation
    • Updated architecture documentation to clarify the new tokenisation process and module responsibilities.
  • Tests
    • Added unit tests to verify correct behaviour for unclosed and malformed code fences in Markdown input.

Walkthrough

Update the Markdown architecture documentation to clarify the role of tokenisation, removing outdated references to internal state machines. Refactor src/textproc.rs to delegate all tokenisation to the new wrap::tokenize_markdown function, eliminating manual fence and inline token handling. Add tokenize_inline and tokenize_markdown to wrap/tokenize.rs, and re-export tokenize_markdown in wrap.rs.

Changes

  • Documentation Update (docs/architecture.md): Clarified the description of the textproc module, specifying that tokenisation is now handled by wrap::tokenize_markdown, and removed references to the old state machine.
  • Delegation of Tokenisation in Text Processing (src/textproc.rs): Removed internal inline tokenisation and fence handling; now re-exports and delegates to wrap::tokenize_markdown. Simplified process_tokens and improved handling of trailing blank lines.
  • Public API Update in Wrap (src/wrap.rs): Added public re-export of tokenize_markdown alongside Token.
  • New Tokenisation Logic (src/wrap/tokenize.rs): Introduced tokenize_inline and tokenize_markdown functions for structured Markdown tokenisation.
  • Unit Tests for Tokenisation (tests/wrap/tokenize_markdown.rs): Added tests verifying handling of unclosed and malformed fenced code blocks by the new tokenizer.
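
A sketch of what such a test might look like (crate path and assertions are
illustrative; the actual tests live in tests/wrap/tokenize_markdown.rs):

```rust
// Assumed import path; substitute the real crate name.
use crate_under_test::wrap::{tokenize_markdown, Token};

#[test]
fn unclosed_fence_keeps_remaining_lines_as_fence_tokens() {
    // The opening fence is never closed, so the tokenizer should not
    // re-tokenize the following line as prose or inline code.
    let tokens = tokenize_markdown("```\nlet x = 1;\n");
    assert!(tokens
        .iter()
        .all(|t| matches!(t, Token::Fence(_) | Token::Newline)));
}
```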

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant textproc
    participant wrap
    participant tokenize

    Client->>textproc: process_tokens(lines, f)
    textproc->>wrap: tokenize_markdown(joined_lines)
    wrap->>tokenize: tokenize_markdown(source)
    tokenize-->>wrap: Vec<Token>
    wrap-->>textproc: Vec<Token>
    textproc->>textproc: Apply transformation f to each Token
    textproc-->>Client: Vec<String> (processed lines)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes

Possibly related PRs

  • Move wrap tests to module #152: Further simplifies src/textproc.rs by delegating tokenisation to wrap::tokenize_markdown, building on the modularisation introduced here.
  • Improve tokenization docs #115: Refactors tokenisation logic by moving it to wrap::tokenize_markdown and updates documentation, both PRs focus on the same function.
  • Fix link wrapping #99: Both PRs modify tokenisation logic, with this PR centralising it in wrap::tokenize_markdown and the other extending inline token handling for links and images.

Poem

In lines of code, the fences fell,
Token streams now cast their spell.
Old state machines have bid adieu,
While wrap brings token magic through.
Markdown’s secrets, parsed anew—
The docs now tell the story true!
🦾✨



@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and found some issues that need to be addressed.

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `src/textproc.rs:60` </location>
<code_context>
-    for _ in out_blanks..trailing_blanks {
-        result.push(String::new());
-    }
+    let mut result: Vec<String> = out.split('\n').map(ToOwned::to_owned).collect();
+    result.extend(std::iter::repeat_n(
+        String::new(),
+        trailing_blanks.saturating_sub(result.len()),
</code_context>

<issue_to_address>
Using repeat_n with saturating_sub may not always match the original trailing blank line count.

The new logic may add extra blank lines when result.len() is less than trailing_blanks. Please verify this aligns with the intended behavior for all cases.
</issue_to_address>
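
A self-contained illustration of the two edge cases with hypothetical numbers
(the surrounding context in textproc.rs may rule some of these out):

```rust
use std::iter::repeat_n;

fn main() {
    let trailing_blanks = 3;

    // Short output: padded up to a *total* of three lines, so only two
    // blanks are appended rather than three.
    let mut short: Vec<String> = vec!["a".to_owned()];
    short.extend(repeat_n(String::new(), trailing_blanks.saturating_sub(short.len())));
    assert_eq!(short.len(), 3);

    // Long output: saturating_sub yields zero, so all trailing blanks are
    // silently dropped.
    let mut long: Vec<String> = vec!["a".into(), "b".into(), "c".into(), "d".into()];
    long.extend(repeat_n(String::new(), trailing_blanks.saturating_sub(long.len())));
    assert_eq!(long.len(), 4);
}
```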

### Comment 2
<location> `src/wrap/tokenize.rs:150` </location>
<code_context>
+
+/// Tokenize a block of Markdown into [`Token`]s.
+#[must_use]
+pub fn tokenize_markdown(source: &str) -> Vec<Token<'_>> {
+    if source.is_empty() {
+        return Vec::new();
</code_context>

<issue_to_address>
Consider refactoring both `tokenize_markdown` and `tokenize_inline` to use streaming iterators and avoid intermediate allocations for improved clarity and efficiency.

You can collapse the two `for`-loops + `Vec<&str>` + `last_idx` logic in `tokenize_markdown` into a single streaming pass over `source.split_inclusive('\n')`. That removes one allocation and unifies your fence/newline handling:

```rust
pub fn tokenize_markdown(source: &str) -> Vec<Token<'_>> {
    let mut tokens = Vec::new();
    let mut in_fence = false;

    for segment in source.split_inclusive('\n') {
        let line = segment.trim_end_matches('\n');
        let is_fence = super::is_fence(line);

        if is_fence || in_fence {
            tokens.push(Token::Fence(line));
            if is_fence { in_fence = !in_fence; }
        } else {
            tokenize_inline(line, &mut |t| tokens.push(t));
        }

        tokens.push(Token::Newline);
    }

    // drop the trailing newline if source didn’t end with '\n'
    if !source.ends_with('\n') {
        tokens.pop();
    }
    tokens
}
```

Similarly, you can simplify `tokenize_inline` by turning it into an explicit iterator, removing manual `find` + `.repeat(…)` calls and flattening its internal loops:

```rust
struct InlineTok<'a> { rest: &'a str }

impl<'a> InlineTok<'a> {
    fn new(s: &'a str) -> Self { InlineTok { rest: s } }
}

impl<'a> Iterator for InlineTok<'a> {
    type Item = Token<'a>;

    fn next(&mut self) -> Option<Self::Item> {
        if self.rest.is_empty() {
            return None;
        }
        if let Some(pos) = self.rest.find('`') {
            if pos > 0 {
                let (head, tail) = self.rest.split_at(pos);
                self.rest = tail;
                return Some(Token::Text(head));
            }
            // count backticks
            let delim_len = self.rest.chars().take_while(|&c| c=='`').count();
            let (d, after) = self.rest.split_at(delim_len);
            if let Some(idx) = after.find(d) {
                let (code, rest) = after.split_at(idx);
                self.rest = &rest[d.len()..];
                return Some(Token::Code(code));
            }
        }
        // no more backticks, emit remainder
        let rem = self.rest;
        self.rest = "";
        Some(Token::Text(rem))
    }
}

fn tokenize_inline<'a, F>(text: &'a str, mut emit: F)
where F: FnMut(Token<'a>) {
    for tok in InlineTok::new(text) {
        emit(tok);
    }
}
```

These changes preserve all current behavior but reduce nesting, eliminate the temporary `Vec<&str>`, and avoid manual string-repeat allocations.
</issue_to_address>

### Comment 3
<location> `src/wrap/tokenize.rs:122` </location>
<code_context>
-pub use crate::wrap::Token;
-use crate::wrap::is_fence;
-
-fn tokenize_inline<'a, F>(text: &'a str, emit: &mut F)
-where
-    F: FnMut(Token<'a>),
</code_context>

<issue_to_address>
Module is missing a `//!` comment at the top.

Please add a `//!` module-level doc comment at the very top of this file to describe its purpose, as required by the review instructions.
</issue_to_address>
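
For example, a doc comment along these lines (wording is only a suggestion)
would satisfy the rule:

```rust
//! Markdown tokenization.
//!
//! Provides `tokenize_markdown`, which splits Markdown source into `Token`
//! values for text, inline code, fences, and newlines, plus the
//! `tokenize_inline` helper for code spans within a line.
```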



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 05ce23e and 3b3f05b.

📒 Files selected for processing (4)
  • docs/architecture.md (1 hunks)
  • src/textproc.rs (2 hunks)
  • src/wrap.rs (1 hunks)
  • src/wrap/tokenize.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
docs/**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

docs/**/*.md: Reference: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Update: When new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve, proactively update the relevant file(s) in the docs/ directory to reflect the latest state. Ensure the documentation remains accurate and current.

Files:

  • docs/architecture.md
**/*.md

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.md:

  • Documentation must use en-GB-oxendict spelling and grammar. (EXCEPTION: the naming of the "LICENSE" file, which is to be left unchanged for community consistency.)
  • Validate Markdown files using make markdownlint.
  • Run make fmt after any documentation changes to format all Markdown files and fix table markup.
  • Validate Mermaid diagrams in Markdown files by running make nixie.
  • Markdown paragraphs and bullet points must be wrapped at 80 columns.
  • Code blocks must be wrapped at 120 columns.
  • Tables and headings must not be wrapped.
  • Use dashes (-) for list bullets.
  • Use GitHub-flavoured Markdown footnotes ([^1]) for references and footnotes.

Files:

  • docs/architecture.md

⚙️ CodeRabbit Configuration File

**/*.md:

  • Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-GB-oxendict (-ize / -our) spelling and grammar
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.
  • Documents must start with a level 1 heading
  • Headings must correctly increase or decrease by no more than one level at a time
  • Use GitHub-flavoured Markdown style for footnotes and endnotes.
  • Numbered footnotes must be numbered by order of appearance in the document.

Files:

  • docs/architecture.md
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs:

  • Clippy warnings MUST be disallowed.
  • Fix any warnings emitted during tests in the code itself rather than silencing them.
  • Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
  • Where a function has too many parameters, group related parameters in meaningfully named structs.
  • Where a function returns a large error, consider using Arc to reduce the amount of data returned.
  • Write unit and behavioural tests for new functionality. Run both before and after making any change.
  • Every module must begin with a module-level (//!) comment explaining the module's purpose and utility.
  • Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
  • Prefer immutable data and avoid unnecessary mut bindings.
  • Handle errors with the Result type instead of panicking where feasible.
  • Avoid unsafe code unless absolutely necessary and document any usage clearly.
  • Place function attributes after doc comments.
  • Do not use return in single-line functions.
  • Use predicate functions for conditional criteria with more than two branches.
  • Lints must not be silenced except as a last resort.
  • Lint rule suppressions must be tightly scoped and include a clear reason.
  • Prefer expect over allow.
  • Prefer .expect() over .unwrap().
  • Use concat!() to combine long string literals rather than escaping newlines with a backslash.
  • Prefer semantic error enums: derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
  • Use an opaque error only at the app boundary: use eyre::Report for human-readable logs; these should not be exposed in public APIs.
  • Never export the opaque type from a library: convert to domain enums at API boundaries, and to eyre only in the main() entrypoint or top-level async task.

Files:

  • src/wrap.rs
  • src/textproc.rs
  • src/wrap/tokenize.rs

⚙️ CodeRabbit Configuration File

**/*.rs:

  • Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • src/wrap.rs
  • src/textproc.rs
  • src/wrap/tokenize.rs
🧬 Code Graph Analysis (3)
src/wrap.rs (1)
src/wrap/tokenize.rs (1)
  • tokenize_markdown (150-184)
src/textproc.rs (1)
src/wrap/tokenize.rs (1)
  • tokenize_markdown (150-184)
src/wrap/tokenize.rs (1)
src/wrap.rs (1)
  • is_fence (186-186)
🔇 Additional comments (5)
src/wrap/tokenize.rs (1)

122-146: Function correctly implements inline code span tokenization.

The tokenize_inline function properly handles backtick-delimited code spans with matching fence lengths. The logic correctly handles unmatched delimiters by treating them as text.
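
A self-contained sketch of the matching-length rule (simplified from the real
implementation in wrap/tokenize.rs):

```rust
/// Find the first code span delimited by runs of backticks of equal length.
/// Returns (text before, span contents, text after), or None if unmatched.
fn find_code_span(text: &str) -> Option<(&str, &str, &str)> {
    let start = text.find('`')?;
    let delim_len = text[start..].chars().take_while(|&c| c == '`').count();
    let delim = &text[start..start + delim_len];
    let after = &text[start + delim_len..];
    let end = after.find(delim)?;
    Some((&text[..start], &after[..end], &after[end + delim_len..]))
}

fn main() {
    // A double-backtick span may contain a single backtick.
    assert_eq!(find_code_span("a ``b ` c`` d"), Some(("a ", "b ` c", " d")));
    // An unmatched delimiter is left for the caller to treat as plain text.
    assert_eq!(find_code_span("no span here"), None);
}
```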

src/wrap.rs (1)

11-11: LGTM!

The re-export correctly makes tokenize_markdown available alongside Token for external use.

docs/architecture.md (1)

269-273: Documentation accurately reflects the architectural changes.

The update correctly describes that tokenization is now handled by wrap::tokenize_markdown instead of the previous state machine approach in process_tokens.

src/textproc.rs (2)

9-9: LGTM!

The import correctly uses the new tokenize_markdown function from the wrap module.


50-54: Refactoring correctly delegates tokenization logic.

The simplification properly uses tokenize_markdown to handle the entire source string instead of processing line by line. This aligns with the PR objective of centralising tokenization.

@leynos leynos merged commit 539b572 into main Jul 31, 2025
1 of 2 checks passed
@leynos leynos deleted the codex/refactor-process_tokens-to-use-tokenizer branch July 31, 2025 19:41