Add helper to reemit original tokens #182

Merged
leynos merged 1 commit into main from codex/add-push_original_token-function
Aug 7, 2025

Conversation

@leynos
Owner

leynos commented Aug 7, 2025

Summary

  • add push_original_token helper to write tokens back unchanged
  • reuse helper in replace_ellipsis and convert_footnotes

Testing

  • make fmt
  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_6893db40a43883228b1df1f3b86e2ed1

Summary by Sourcery

Introduce a helper to reconstruct original tokens and refactor existing transformations to leverage it, reducing duplication and ensuring consistent token output.

New Features:

  • Add push_original_token helper to reemit original tokens without modification

Enhancements:

  • Refactor replace_ellipsis and convert_footnotes to use the new helper instead of manual token reconstruction

Tests:

  • Add unit tests to verify that push_original_token correctly round-trips all Token variants

@sourcery-ai
Contributor

sourcery-ai Bot commented Aug 7, 2025

Reviewer's Guide

This PR adds a new inline helper to reemit original Markdown tokens and refactors existing transformations to delegate unmodified tokens to this helper, reducing duplication and ensuring consistent token output.

Class diagram for Token and push_original_token helper

classDiagram
    class Token {
        <<enum>>
        Text(&'a str)
        Code(&'a str)
        Fence(&'a str)
        Newline
    }
    class textproc {
        +push_original_token(token: &Token, out: &mut String)
    }
    Token <.. textproc : uses
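In code, the shape implied by the diagram is roughly the following. This is a minimal sketch, not the repository's confirmed source: the single-backtick wrapping of Token::Code matches the behaviour described in the CodeRabbit review further down, while the Fence and Newline renderings are assumptions.

pub enum Token<'a> {
    Text(&'a str),
    Code(&'a str),
    Fence(&'a str),
    Newline,
}

/// Append `token` to `out` exactly as it appeared in the source.
#[inline]
pub fn push_original_token(token: &Token<'_>, out: &mut String) {
    match token {
        // Plain text passes through unchanged.
        Token::Text(t) => out.push_str(t),
        // Inline code is re-wrapped in single backticks; see the review
        // note below about spans whose content itself contains backticks.
        Token::Code(c) => {
            out.push('`');
            out.push_str(c);
            out.push('`');
        }
        // Assumed: the fence line is stored verbatim and reemitted as-is.
        Token::Fence(f) => out.push_str(f),
        Token::Newline => out.push('\n'),
    }
}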

Class diagram for refactored replace_ellipsis and convert_footnotes functions

classDiagram
    class textproc {
        +process_tokens(lines: &[String], f: Fn(Token, &mut String))
        +push_original_token(token: &Token, out: &mut String)
    }
    class ellipsis {
        +replace_ellipsis(lines: &[String]) : Vec<String>
    }
    class footnotes {
        +convert_footnotes(lines: &[String]) : Vec<String>
    }
    textproc <.. ellipsis : uses
    textproc <.. footnotes : uses
    ellipsis ..> textproc : calls process_tokens, push_original_token
    footnotes ..> textproc : calls process_tokens, push_original_token
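At the call sites, that delegation looks roughly like the sketch below, assuming process_tokens returns the processed lines and using an illustrative "..." to "…" substitution in place of the module's real text handling:

use crate::textproc::{process_tokens, push_original_token, Token};

pub fn replace_ellipsis(lines: &[String]) -> Vec<String> {
    process_tokens(lines, |tok, out| match tok {
        // Only text tokens are transformed.
        Token::Text(t) => out.push_str(&t.replace("...", "…")),
        // Every other variant is reemitted unchanged by the helper,
        // replacing the per-variant match arms this PR removes.
        _ => push_original_token(&tok, out),
    })
}

convert_footnotes follows the same pattern with its own Token::Text branch.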

File-Level Changes

Introduce push_original_token helper to reemit tokens unchanged (src/textproc.rs)
  • Implement inline function matching all Token variants and appending original Markdown text
  • Add documentation with example usage snippet (a sketch follows this list)
  • Add unit test covering round-trip for all token variants
Refactor replace_ellipsis to use the new helper (src/ellipsis.rs)
  • Import push_original_token into ellipsis module
  • Replace manual code/fence/newline match arms with wildcard delegating to helper
Refactor convert_footnotes to use the new helper (src/footnotes.rs)
  • Import push_original_token into footnotes module
  • Replace manual token forwarding logic with wildcard branch using helper
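The usage snippet referenced in the first change above is not reproduced on this page; a hypothetical sketch of what it might contain:

let mut out = String::new();
// Assumed rendering: a code token regains its single-backtick delimiters.
push_original_token(&Token::Code("x"), &mut out);
assert_eq!(out, "`x`");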


@coderabbitai
Contributor

coderabbitai Bot commented Aug 7, 2025

Summary by CodeRabbit

  • New Features

    • Introduced a helper function to accurately reconstruct original Markdown tokens.
  • Refactor

    • Simplified token processing in ellipsis replacement and footnote conversion, centralising logic for handling non-text tokens.
  • Tests

    • Added unit tests to ensure correct reconstruction of all token variants.

Walkthrough

Refactor token handling in both replace_ellipsis and convert_footnotes to delegate all non-text token output to a new helper function, push_original_token, now publicly available in textproc. Add a unit test to ensure this helper correctly reconstructs all token variants to their original Markdown forms.

Changes

Ellipsis token handling refactor (src/ellipsis.rs)
  Replace manual non-text token reconstruction in replace_ellipsis with calls to push_original_token.
Footnote token handling refactor (src/footnotes.rs)
  Simplify non-text token handling in convert_footnotes using push_original_token.
Helper function addition and test (src/textproc.rs)
  Add public push_original_token function and corresponding unit test for all token variants.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant Ellipsis/FN
    participant textproc

    Caller->>Ellipsis/FN: Call replace_ellipsis/convert_footnotes(lines)
    Ellipsis/FN->>Ellipsis/FN: Tokenise lines
    loop For each token
        alt Token::Text
            Ellipsis/FN->>Ellipsis/FN: Process text token
        else Non-Text Token
            Ellipsis/FN->>textproc: push_original_token(token, out)
            textproc-->>Ellipsis/FN: Append original Markdown
        end
    end
    Ellipsis/FN-->>Caller: Return processed lines

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs

  • Add streaming token processing #150: Refactors token handling using the new push_original_token helper, directly relating to the streaming token processing API and helper usage introduced here.
  • Add ellipsis command-line option #95: Refactors replace_ellipsis to use push_original_token, directly tying into the ellipsis replacement and token processing improvements.

Poem

Tokens march in tidy rows,
No more manual, tangled prose.
A helper rises, strong and keen,
To keep the Markdown crisp and clean.
With tests in tow, the code feels bright—
Refactored logic, pure delight!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0e613ac and 4340a40.

📒 Files selected for processing (3)
  • src/ellipsis.rs (2 hunks)
  • src/footnotes.rs (2 hunks)
  • src/textproc.rs (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs:

  • Clippy warnings MUST be disallowed.
  • Fix any warnings emitted during tests in the code itself rather than silencing them.
  • Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
  • Where a function has too many parameters, group related parameters in meaningfully named structs.
  • Where a function returns a large error, consider using Arc to reduce the amount of data returned.
  • Every module must begin with a module-level (//!) comment explaining the module's purpose and utility.
  • Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
  • Prefer immutable data and avoid unnecessary mut bindings.
  • Handle errors with the Result type instead of panicking where feasible.
  • Avoid unsafe code unless absolutely necessary and document any usage clearly.
  • Place function attributes after doc comments.
  • Do not use return in single-line functions.
  • Use predicate functions for conditional criteria with more than two branches.
  • Lints must not be silenced except as a last resort.
  • Lint rule suppressions must be tightly scoped and include a clear reason.
  • Prefer expect over allow.
  • Prefer .expect() over .unwrap().
  • Use concat!() to combine long string literals rather than escaping newlines with a backslash.
  • Prefer semantic error enums: derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
  • Use an opaque error only at the app boundary: use eyre::Report for human-readable logs; these should not be exposed in public APIs.
  • Never export the opaque type from a library: convert to domain enums at API boundaries, and to eyre only in the main() entrypoint or top-level async task.

Files:

  • src/textproc.rs
  • src/footnotes.rs
  • src/ellipsis.rs

⚙️ CodeRabbit Configuration File

**/*.rs:

  • Seek to keep the cyclomatic complexity of functions to no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

  • src/textproc.rs
  • src/footnotes.rs
  • src/ellipsis.rs
🧬 Code Graph Analysis (1)
src/footnotes.rs (1)
src/textproc.rs (2)
  • process_tokens (66-98)
  • push_original_token (27-38)
🔇 Additional comments (7)
src/textproc.rs (3)

11-26: Excellent documentation and API design.

The rustdoc comments follow proper conventions with a clear description, usage guidance, and a practical example. The function signature is well-designed, taking a token reference and mutable string buffer.


205-223: Comprehensive test coverage for all token variants.

The test thoroughly verifies that push_original_token correctly reconstructs all token variants to their original Markdown forms. The use of buf.clear() between assertions is clean and efficient.
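A sketch of what such a round-trip test might look like, under the variant renderings assumed earlier (the real test sits at src/textproc.rs lines 205-223):

#[test]
fn push_original_token_round_trips_all_variants() {
    let mut buf = String::new();

    // Text passes through unchanged.
    push_original_token(&Token::Text("plain"), &mut buf);
    assert_eq!(buf, "plain");
    buf.clear();

    // Assumed rendering: inline code regains its backticks.
    push_original_token(&Token::Code("x + y"), &mut buf);
    assert_eq!(buf, "`x + y`");
    buf.clear();

    // Assumed rendering: the fence line is reemitted verbatim.
    push_original_token(&Token::Fence("```rust"), &mut buf);
    assert_eq!(buf, "```rust");
    buf.clear();

    // Newlines are reemitted as-is.
    push_original_token(&Token::Newline, &mut buf);
    assert_eq!(buf, "\n");
}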


27-38: Verify push_original_token preserves multi-backtick code spans
Update Token::Code reconstruction to handle multi-backtick code spans correctly. The current implementation always wraps content in a single backtick, so a span whose content itself contains backticks (one originally written with double-backtick delimiters, say) is reemitted in a form that no longer parses as a single code span.

• File: src/textproc.rs, lines 27–38
• Add logic to detect the longest backtick run in the code content and emit a delimiter that is one backtick longer.
• Add a round-trip test: feed a backtick-containing code span through the tokenizer and push_original_token, and assert the output equals the original string.
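One way to implement the suggested fix, sketched here on the assumption that Token::Code stores the raw span content, is to size the delimiter against the longest backtick run inside the content:

/// Wrap `code` in a backtick delimiter long enough that no run of
/// backticks inside the content can close the span early.
fn push_code_span(code: &str, out: &mut String) {
    // Longest consecutive run of backticks within the content.
    let longest_run = code.split(|c| c != '`').map(str::len).max().unwrap_or(0);
    // A delimiter one backtick longer than that run is always safe.
    let delim = "`".repeat(longest_run + 1);
    out.push_str(&delim);
    out.push_str(code);
    out.push_str(&delim);
}

A fully CommonMark-correct emitter would also pad the content with a space when it begins or ends with a backtick; the sketch omits that detail for brevity.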

src/footnotes.rs (2)

21-21: Import correctly updated for new helper function.

The addition of push_original_token to the import statement enables the refactoring in the token processing closure.


99-102: Clean refactoring eliminates code duplication.

The catch-all pattern _ => push_original_token(&tok, out) effectively replaces the previous explicit matching for Token::Code, Token::Fence, and Token::Newline. This centralises the token reconstruction logic and reduces maintenance burden.

src/ellipsis.rs (2)

12-12: Import correctly updated for new helper function.

The addition of push_original_token to the import statement supports the refactoring in the token processing logic.


19-35: Effective refactoring reduces code duplication.

The replacement of explicit token matching with _ => push_original_token(&tok, out) eliminates the manual reconstruction logic for non-text tokens whilst preserving the core ellipsis replacement functionality for text tokens. The variable name change to tok provides consistency with other modules.



Contributor

sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and they look great!



leynos merged commit e792432 into main Aug 7, 2025
2 checks passed
leynos deleted the codex/add-push_original_token-function branch August 7, 2025 21:52
