
Fix empty backtick edge case #171

Merged

leynos merged 3 commits into main from codex/handle-empty-delimiter-case-in-tokenization on Aug 3, 2025

Conversation

@leynos (Owner) commented Aug 1, 2025

Summary

  • handle cases with empty inline code delimiters
  • add regression test for unmatched inline code

Testing

  • RUSTUP_TOOLCHAIN=nightly-2025-07-22 make fmt
  • RUSTUP_TOOLCHAIN=nightly-2025-07-22 make lint
  • RUSTUP_TOOLCHAIN=nightly-2025-07-22 make test
  • RUSTUP_TOOLCHAIN=nightly-2025-07-22 make nixie (fails: too many arguments)

https://chatgpt.com/codex/tasks/task_e_688d32f6a54483229b3f0e212a78abb2

Summary by Sourcery

Treat unmatched or empty backtick delimiters as plain text when tokenizing markdown, and add a regression test to cover this scenario.

Bug Fixes:

  • Return a single character as text when the string does not start with a backtick delimiter, instead of consuming the entire string
  • Treat unmatched inline code delimiters as text tokens rather than code spans

Tests:

  • Add regression test for unmatched inline code delimiters to ensure they are tokenized as text

@sourcery-ai bot (Contributor) commented Aug 1, 2025

Reviewer's Guide

Fix inline code parsing to treat empty and unmatched backtick delimiters as plain text and add a regression test for unmatched code spans.

Class diagram for updated Tokenizer logic in tokenize.rs

classDiagram
    class Token {
        <<enum>>
        +Text(&str)
        +Code(&str)
    }
    class Tokenizer {
        +next_token(s: &str) -> Option<(Token, usize)>
    }
    Tokenizer --> Token

File-Level Changes

  • Handle empty and unmatched backtick delimiters in the tokenizer (src/wrap/tokenize.rs); see the sketch after this list:
    • Return only the first character as Text when delim_len == 0
    • Fall back to returning the backtick sequence as Text if no closing delimiter is found
  • Add a regression test for unmatched inline code (tests/wrap/tokenize_markdown.rs):
    • Introduce the unmatched_inline_code_is_text test case
    • Assert that input with an unmatched backtick yields sequential Text tokens
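To make those bullets concrete, here is a minimal sketch of the resulting tokenizer shape. The Token variants and the next_token signature follow the class diagram above; the delim_len == 0 branch follows the refined segment-to-next-backtick behaviour described in the walkthrough further down; all internals are illustrative assumptions, not the project's actual code:

```rust
/// Illustrative sketch only; not the project's code.
#[derive(Debug, PartialEq)]
enum Token<'a> {
    Text(&'a str),
    Code(&'a str),
}

fn next_token(s: &str) -> Option<(Token<'_>, usize)> {
    if s.is_empty() {
        return None;
    }
    // Count the run of leading backticks to get the opening delimiter length.
    let delim_len = s.chars().take_while(|&c| c == '`').count();
    if delim_len == 0 {
        // No opening delimiter: emit text up to the next backtick, if any.
        let end = s.find('`').unwrap_or(s.len());
        return Some((Token::Text(&s[..end]), end));
    }
    // Look for a closing backtick run of the same length after the opener.
    let closing = &s[..delim_len];
    if let Some(end) = s[delim_len..].find(closing) {
        let code = &s[delim_len..delim_len + end];
        return Some((Token::Code(code), delim_len + end + delim_len));
    }
    // Unmatched opening delimiter: degrade to a plain text token.
    Some((Token::Text(closing), delim_len))
}
```

Because the unmatched branch consumes only the opening run of backticks, the next call re-scans the remainder as ordinary text, which is what lets unmatched delimiters degrade gracefully.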


@coderabbitai bot (Contributor) commented Aug 1, 2025

Summary by CodeRabbit

  • Bug Fixes
    • Improved handling of unmatched inline code delimiters, ensuring they are correctly treated as plain text.
  • Tests
    • Added tests to verify correct tokenisation of unmatched backticks in markdown.

Walkthrough

Refine the tokenisation logic in next_token to segment text more precisely before backticks and treat unmatched backtick delimiter sequences as text tokens. Add two new tests to verify that unmatched inline code delimiters are tokenised as plain text rather than code spans.

Changes

  • Tokenisation logic refinement (src/wrap/tokenize.rs): update next_token to segment text before backticks more accurately and treat unmatched backtick delimiter sequences as text tokens.
  • Test coverage for unmatched backticks (tests/wrap/tokenize_markdown.rs): add tests to ensure unmatched inline code delimiters are tokenised as text, not code spans.

Sequence Diagram(s)

sequenceDiagram
    participant Input as Input String
    participant Tokenizer as next_token
    participant Output as Token Stream

    Input->>Tokenizer: Provide markdown string
    Tokenizer->>Tokenizer: Count leading backticks (delim_len)
    alt delim_len == 0
        Tokenizer->>Tokenizer: Search for next backtick
        alt Backtick found
            Tokenizer->>Output: Emit Token::Text up to backtick
        else No backtick found
            Tokenizer->>Output: Emit Token::Text (whole string)
        end
    else delim_len > 0
        Tokenizer->>Tokenizer: Search for matching closing delimiter of same length
        alt Closing delimiter found
            Tokenizer->>Output: Emit Token::Code with enclosed content
        else No closing delimiter
            Tokenizer->>Output: Emit Token::Text (opening delimiter only)
        end
    end
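A hedged sketch of a regression test driving this flow, reusing the unmatched_inline_code_is_text name from the reviewer's guide above; the exact token boundaries asserted here are an assumption rather than the project's actual expectations:

```rust
#[test]
fn unmatched_inline_code_is_text() {
    let mut rest = "a `b";
    let mut tokens = Vec::new();
    // Drive next_token across the whole input, as a consumer would.
    while !rest.is_empty() {
        let Some((token, consumed)) = next_token(rest) else { break };
        tokens.push(token);
        rest = &rest[consumed..];
    }
    // The unmatched backtick degrades to text rather than opening a code span.
    assert_eq!(tokens, [Token::Text("a "), Token::Text("`"), Token::Text("b")]);
}
```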

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~7 minutes

Possibly related PRs

  • Move wrap tests to module #152: Refactors tokenisation logic into the tokenize module, upon which the current PR builds by directly modifying the tokenisation behaviour.

Poem

Backticks wander, unmatched and free,
No longer code, just text they’ll be.
Tokeniser sharpens its keen new eye,
Splitting segments as it goes by.
With tests to prove its parsing might,
Markdown tokens fall in line—just right!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a31c3f8 and 23139a7.

📒 Files selected for processing (1)
  • src/wrap/tokenize.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

📄 CodeRabbit Inference Engine (AGENTS.md)

**/*.rs:

  • Clippy warnings MUST be disallowed.
  • Fix any warnings emitted during tests in the code itself rather than silencing them.
  • Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
  • Where a function has too many parameters, group related parameters in meaningfully named structs.
  • Where a function is returning a large error, consider using Arc to reduce the amount of data returned.
  • Every module must begin with a module-level (//!) comment explaining the module's purpose and utility.
  • Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
  • Prefer immutable data and avoid unnecessary mut bindings.
  • Handle errors with the Result type instead of panicking where feasible.
  • Avoid unsafe code unless absolutely necessary and document any usage clearly.
  • Place function attributes after doc comments.
  • Do not use return in single-line functions.
  • Use predicate functions for conditional criteria with more than two branches.
  • Lints must not be silenced except as a last resort.
  • Lint rule suppressions must be tightly scoped and include a clear reason.
  • Prefer expect over allow.
  • Prefer .expect() over .unwrap().
  • Use concat!() to combine long string literals rather than escaping newlines with a backslash.
  • Prefer semantic error enums: derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
  • Use an opaque error only at the app boundary: use eyre::Report for human-readable logs; these should not be exposed in public APIs.
  • Never export the opaque type from a library: convert to domain enums at API boundaries, and to eyre only in the main() entrypoint or top-level async task.

Files:

  • src/wrap/tokenize.rs

⚙️ CodeRabbit Configuration File

**/*.rs:

  • Seek to keep the cyclomatic complexity of functions to no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

  • Files must not exceed 400 lines in length

    • Large modules must be decomposed
    • Long match statements or dispatch tables should be decomposed by domain and collocated with targets
    • Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.
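
As an illustration of the narrowly scoped suppression this rule set permits (a hypothetical lint and reason, not taken from the repository):

```rust
// Hypothetical example of a tightly scoped suppression with a reason.
#[expect(clippy::too_many_lines, reason = "FIXME: decomposition planned in a follow-up")]
fn render_report() {
    // ... long but cohesive rendering logic ...
}
```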

Files:

  • src/wrap/tokenize.rs
🔇 Additional comments (2)
src/wrap/tokenize.rs (2)

126-132: LGTM! Improved tokenisation logic handles edge cases correctly.

The delimiter counting approach using take_while is idiomatic Rust, and the logic correctly returns meaningful text segments rather than single characters when no backticks are present at the start.


134-140: Excellent handling of matched and unmatched delimiter sequences.

The logic correctly implements markdown inline code parsing by searching for matching closing delimiters and gracefully degrading unmatched sequences to plain text. Position calculations properly account for delimiter lengths.
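
As a worked illustration of that arithmetic (an assumed trace, not quoted from the source): for the input "``x``", delim_len = 2, the closing run is found at offset end = 1 within the remainder, so the code span is "x" and the token consumes delim_len + end + delim_len = 5 bytes, exactly the input's length.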




@sourcery-ai bot left a comment


Hey @leynos - I've reviewed your changes and found some issues that need to be addressed.

  • The delim_len == 0 branch now slices &s[..1], which can panic on multi-byte UTF-8 chars; use the first char's actual byte length instead of hard-coding 1 (sketched after this list).
  • Rather than emitting a single-byte Text token for non-backtick characters, consider consuming up to the next backtick so you don’t fragment text into one-character tokens.
  • You might also add a regression test for multiple unmatched backticks (e.g. "``bad code") to verify the new fallback behavior handles that correctly.
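
A hedged sketch of the byte-length fix the first bullet suggests, reusing the Token enum from the earlier sketch (illustrative only; the project may have resolved it differently):

```rust
/// Emit exactly one character as text, respecting UTF-8 character
/// boundaries instead of hard-coding a one-byte slice such as `&s[..1]`.
fn first_char_as_text(s: &str) -> Option<(Token<'_>, usize)> {
    // len_utf8 gives the character's encoded byte length (1 to 4),
    // so the slice below always ends on a character boundary.
    let first_len = s.chars().next()?.len_utf8();
    Some((Token::Text(&s[..first_len]), first_len))
}
```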
Individual Comments

Comment 1: src/wrap/tokenize.rs:132

         let delim_len = s.chars().take_while(|&c| c == '`').count();
         if delim_len == 0 {
-            return Some((Token::Text(s), s.len()));
+            return Some((Token::Text(&s[..1]), 1));
         }
         let closing = &s[..delim_len];

Issue: Returning only the first character as Token::Text may cause unexpected tokenization.

Returning only the first character may result in single-character tokens, which could be inefficient and disrupt consumers expecting larger text segments. Please confirm if this behavior is intentional or consider returning the full string as before.
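
For illustration (an assumed trace of this intermediate version): a plain input such as "abc" would tokenize as Text("a"), Text("b"), Text("c") rather than a single Text("abc"), which is the fragmentation this comment warns about.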

Comment 2: src/wrap/tokenize.rs:139

             let code = &s[delim_len..delim_len + end];
             return Some((Token::Code(code), delim_len + end + delim_len));
         }
+        return Some((Token::Text(&s[..delim_len]), delim_len));
     }
     Some((Token::Text(s), s.len()))

Issue: Returning Token::Text for unmatched opening backticks may not align with expected Markdown parsing.

Verify that returning only the opening backticks as Token::Text is intentional, as this may differ from standard Markdown handling. Also, consider if this could cause issues with consecutive unmatched backticks.
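
For illustration (an assumed trace): an input such as "``bad code" would emit Text("``") for the unmatched opening run and then continue with the remainder as ordinary text, so consecutive unmatched runs degrade one run at a time.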


Two comment threads on src/wrap/tokenize.rs (outdated)
leynos merged commit a93a3bc into main on Aug 3, 2025
2 checks passed
leynos deleted the codex/handle-empty-delimiter-case-in-tokenization branch on August 3, 2025 at 22:11