
Implement logos-based tokenizer #7

Merged
leynos merged 8 commits into main from codex/build-differential-datalog-tokenizer on Jun 25, 2025

Conversation

@leynos
Owner

@leynos leynos commented Jun 24, 2025

Summary

  • integrate logos lexer for tokenisation
  • expose tokenize API and add tests
  • document lexer decision in parser plan

Testing

  • make fmt
  • make lint
  • make test
  • make markdownlint

https://chatgpt.com/codex/tasks/task_e_685aced31d6883229a9c4a110e425f6b

Summary by Sourcery

Implement lexing via the logos crate, introduce a tokenize API producing (SyntaxKind, Span) streams, update documentation to reflect the new lexer design, and add end-to-end tests for the tokenizer.

New Features:

  • Introduce a Logos-based tokenizer module that emits (SyntaxKind, Span) pairs via a tokenize() API
  • Expose the tokenize function and Span type in the library public API

Enhancements:

  • Use a static HashMap for keyword lookup to map identifiers to SyntaxKind in O(1) time
  • Update parser-plan.md to document the logos-based lexing flow with a sequence diagram

Build:

  • Add logos as a dependency and rstest as a dev-dependency in Cargo.toml

Documentation:

  • Annotate supported number literal formats in the DDlog design roadmap
  • Document lexer integration and token preservation in the parser plan

Tests:

  • Add comprehensive integration tests for the tokenizer covering keywords, literals, trivia, punctuation, operators, error cases, and token spans

@sourcery-ai
Contributor

sourcery-ai Bot commented Jun 24, 2025

Reviewer's Guide

This PR integrates a declarative, logos-based lexer by introducing a new tokenizer module with a public tokenize API and Span type, updates documentation to describe the lexer design, adds the necessary dependencies, and delivers a suite of tests covering token kinds, spans, and error cases.
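To make the API shape concrete, here is a minimal usage sketch based on the description above. The example source snippet and the assumption that SyntaxKind derives Debug are illustrative only, not taken from the PR:

```rust
use ddlint::{tokenize, Span, SyntaxKind};

fn main() {
    // Illustrative DDlog snippet; any source text works.
    let src = "input relation Edge(from: string, to: string)";
    // tokenize() is described as returning (SyntaxKind, Span) pairs,
    // where Span is a byte range into the original source text.
    let tokens: Vec<(SyntaxKind, Span)> = tokenize(src);
    for (kind, span) in &tokens {
        // Slicing the source with the span recovers the token text.
        println!("{kind:?} -> {:?}", &src[span.clone()]);
    }
}
```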

Sequence diagram for the new logos-based tokenization process

sequenceDiagram
    participant Client
    participant Tokenizer
    participant LogosLexer
    participant SyntaxKind
    Client->>Tokenizer: tokenize(src: &str)
    Tokenizer->>LogosLexer: Token::lexer(src)
    loop for each token
        LogosLexer-->>Tokenizer: next() -> Token + span
        Tokenizer->>SyntaxKind: map Token to SyntaxKind
        Tokenizer-->>Tokenizer: collect (SyntaxKind, Span)
    end
    Tokenizer-->>Client: Vec<(SyntaxKind, Span)>

Class diagram for the new tokenizer module and public API

classDiagram
    class Tokenizer {
        +tokenize(src: &str) Vec<(SyntaxKind, Span)>
    }
    class Token {
        <<enum>>
        Whitespace
        Comment
        Ident
        Number
        String
        LParen
        RParen
        LBrace
        RBrace
        LBracket
        RBracket
        Semi
        Comma
        Dot
        ColonColon
        Colon
        Pipe
        Amp
        EqEq
        Eq
        Implies
        Percent
        Star
        Slash
        Plus
        Minus
        Arrow
        FatArrow
        Lte
        Spaceship
        Gte
        Lt
        Gt
        Neq
        Shr
        Shl
        Tilde
        At
        Hash
        Apostrophe
    }
    class SyntaxKind {
        <<enum>>
        T_WHITESPACE
        T_COMMENT
        T_IDENT
        T_NUMBER
        T_STRING
        T_LPAREN
        T_RPAREN
        T_LBRACE
        T_RBRACE
        T_LBRACKET
        T_RBRACKET
        T_SEMI
        T_COMMA
        T_DOT
        T_COLON_COLON
        T_COLON
        T_PIPE
        T_AMP
        T_EQEQ
        T_EQ
        T_IMPLIES
        T_PERCENT
        T_STAR
        T_SLASH
        T_PLUS
        T_MINUS
        T_ARROW
        T_FAT_ARROW
        T_LTE
        T_SPACESHIP
        T_GTE
        T_LT
        T_GT
        T_NEQ
        T_SHR
        T_SHL
        T_TILDE
        T_AT
        T_HASH
        T_APOSTROPHE
        N_ERROR
        ...
    }
    class Span {
        <<type>>
        std::ops::Range<usize>
    }
    Tokenizer --> Token : uses
    Tokenizer --> SyntaxKind : maps to
    Tokenizer --> Span : returns
    Token --> SyntaxKind : mapped to

Class diagram for keyword mapping in the tokenizer

classDiagram
    class KEYWORDS {
        <<static HashMap<&'static str, SyntaxKind>>>
    }
    class Tokenizer {
        +keyword_kind(ident: &str) Option<SyntaxKind>
    }
    KEYWORDS <.. Tokenizer : used by

File-Level Changes

Introduce a logos-based tokenizer module and public tokenize API
(src/tokenizer.rs, src/lib.rs, Cargo.toml)
  • Add src/tokenizer.rs with a Token enum (using logos), a keyword lookup map, and a tokenize() function producing (SyntaxKind, Span) pairs
  • Modify src/lib.rs to expose tokenizer module and re-export Span and tokenize
  • Update Cargo.toml to include logos as a dependency and rstest as a dev-dependency

Update documentation to reflect lexer integration
(docs/parser-plan.md, docs/ddlint-design-and-road-map.md)
  • Enhance docs/parser-plan.md to explain the choice of a logos lexer and include a mermaid sequence diagram
  • Add an inline comment for decimal, floating-point, hex, binary, and octal number support in docs/ddlint-design-and-road-map.md

Add comprehensive tests for the tokenizer
(tests/tokenizer.rs)
  • Create tests/tokenizer.rs with parameterized rstest cases for keywords, literals, punctuation, operators, trivia, spans, and error scenarios
  • Cover edge cases such as negative numbers, escaped and unterminated strings/comments, and empty input

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


@coderabbitai
Contributor

coderabbitai Bot commented Jun 24, 2025

Summary by CodeRabbit

  • New Features
    • Introduced a public tokenizer module for DDlog source code, allowing conversion of source text into token and span pairs.
    • Exposed new public API items for tokenisation directly from the crate root.
  • Documentation
    • Expanded documentation to detail the new tokenisation approach, supported numeric literal formats, and included a workflow diagram.
  • Tests
    • Added comprehensive integration tests covering tokenisation accuracy, span mapping, literals, trivia, errors, and complex expressions.
  • Chores
    • Updated dependencies to include required libraries for tokenisation and testing.

Summary by CodeRabbit

  • New Features
    • Introduced a tokenizer for DDlog source code, enabling conversion of source text into token streams with precise span information.
    • Exposed new public API items: the tokenizer module, a Span type, and a tokenize function for external use.
  • Documentation
    • Updated documentation to clarify the choice and implementation details of the tokenizer.
  • Tests
    • Added new tests to verify token classification and span accuracy using the rstest framework.
  • Chores
    • Added logos as a runtime dependency with specific features enabled, and rstest as a development dependency.

Walkthrough

A new tokenizer module for DDlog source code was introduced, utilising the logos crate for lexical analysis and exposing tokenize and Span from the crate root. Documentation was updated to clarify the lexer choice, and tests were added using rstest to verify tokenisation and span extraction.

Changes

  • Cargo.toml: Added logos as a runtime dependency (with the export_derive feature), phf with the macros feature, and rstest as a dev-dependency.
  • docs/parser-plan.md: Expanded documentation to specify use of a logos lexer, including a Mermaid sequence diagram illustrating the tokenisation workflow.
  • docs/ddlint-design-and-road-map.md: Added a comment listing supported numeric literal formats for the T_NUMBER variant.
  • src/lib.rs: Added and publicly exposed the tokenizer module; re-exported Span and tokenize.
  • src/tokenizer.rs: New module implementing lexical analysis with logos, exposing tokenize and Span.
  • tests/tokenizer.rs: Added tests for the tokenizer using rstest for token kinds, span validation, and error cases.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Caller (e.g. Parser or Test)
    participant Tokenizer as tokenizer::tokenize
    participant Logos as logos::Lexer

    Client->>Tokenizer: tokenize(src: &str)
    Tokenizer->>Logos: create Lexer from src
    loop For each token
        Logos-->>Tokenizer: (Token, Span)
        Tokenizer->>Tokenizer: Map Token to SyntaxKind (check for keywords)
    end
    Tokenizer-->>Client: Vec<(SyntaxKind, Span)>

Possibly related PRs

Poem

A rabbit with code on its mind,
Hopped through the tokens it’d find.
With logos it parsed,
Each span neatly starred,
Now DDlog’s syntax is outlined!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 373ae27 and 6844bf6.

📒 Files selected for processing (4)
  • Cargo.toml (1 hunks)
  • docs/parser-plan.md (1 hunks)
  • src/tokenizer.rs (1 hunks)
  • tests/tokenizer.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
`Cargo.toml`: Use explicit version ranges in `Cargo.toml` and keep dependencies up-to-date.

  • Cargo.toml
docs/**/*.md: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Proactively update the relevant file(s) in the docs/ directory to reflect the latest state when new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve.
Documentation in docs/ must use en-GB-oxendict spelling and grammar, except for the word 'license'.
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Markdown Mermaid diagrams using the make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks in Markdown must be wrapped at 120 columns.
Tables and headings in Markdown must not be wrapped.

  • docs/parser-plan.md
**/*.rs: Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.
Use rstest fixtures for shared setup.
Replace duplicated tests with #[rstest(...)] parameterised cases.
Prefer mockall for mocks/stubs.
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.

  • tests/tokenizer.rs
  • src/tokenizer.rs
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build-test
🔇 Additional comments (11)
Cargo.toml (1)

10-14: Well-structured dependency additions for tokenizer implementation.

The explicit version ranges and feature selections align with the coding guidelines and support the tokenizer implementation effectively. The version-specification concerns raised in previous reviews are now resolved.

docs/parser-plan.md (1)

36-55: Excellent documentation of tokenizer design and workflow.

The updated documentation clearly explains the logos lexer choice and phf::Map usage for keyword lookup. The Mermaid sequence diagram effectively illustrates the tokenization process, enhancing understanding for future maintainers.

src/tokenizer.rs (4)

1-13: Well-documented module with clear public API.

The module documentation clearly explains the purpose and approach. The Span type alias provides a clean interface for byte ranges.


15-98: Comprehensive token definitions with robust regex patterns.

The Token enum covers all necessary DDlog syntax elements. The number regex on line 24 now properly supports hexadecimal, binary, octal, floating-point, and scientific notation formats, addressing previous feedback effectively.
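For illustration, a logos token definition along these lines would cover the literal formats named here. The exact pattern in src/tokenizer.rs may differ; this is an assumption, not the shipped regex, and it presumes a recent logos release where no explicit error variant is required:

```rust
use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
enum NumberToken {
    // Hex, binary, and octal prefixes, then decimal with an optional
    // fraction and exponent. Underscore digit separators are allowed here.
    #[regex(r"0[xX][0-9a-fA-F_]+|0[bB][01_]+|0[oO][0-7_]+|[0-9][0-9_]*(\.[0-9]+)?([eE][+-]?[0-9]+)?")]
    Number,

    // Whitespace is kept as a token so trivia can be preserved downstream.
    #[regex(r"[ \t\r\n]+")]
    Whitespace,
}
```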


105-181: Efficient keyword lookup implementation.

The static phf::Map provides O(1) keyword lookup performance, which is much better than the previous large match statement approach. The comprehensive keyword coverage supports DDlog's syntax requirements.
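A sketch of what such a compile-time map might look like follows. The keyword subset and the SyntaxKind variant names are assumptions (the real map covers the full DDlog keyword set), and it presumes SyntaxKind is Copy:

```rust
use phf::phf_map;

// Hypothetical excerpt of the keyword table; built entirely at compile time.
static KEYWORDS: phf::Map<&'static str, SyntaxKind> = phf_map! {
    "input" => SyntaxKind::K_INPUT,
    "output" => SyntaxKind::K_OUTPUT,
    "relation" => SyntaxKind::K_RELATION,
    "function" => SyntaxKind::K_FUNCTION,
};

// O(1) lookup at runtime with no startup cost for building the map.
fn keyword_kind(ident: &str) -> Option<SyntaxKind> {
    KEYWORDS.get(ident).copied()
}
```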


187-245: Robust tokenization function with good performance characteristics.

The implementation properly handles error cases, pre-allocates vector capacity for performance, and maintains span information for CST construction. The logic correctly maps tokens to SyntaxKind variants and preserves trivia tokens.
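A rough sketch of the shape this function is described as having is shown below. It builds on the Token enum and keyword map from the diagrams above; the helper name syntax_kind_for, the capacity heuristic, and the use of logos 0.13's Result-based iterator are assumptions, not copied from the source:

```rust
use logos::Logos;

/// Byte range into the original source, as described in the reviewer's guide.
pub type Span = std::ops::Range<usize>;

pub fn tokenize(src: &str) -> Vec<(SyntaxKind, Span)> {
    let mut lexer = Token::lexer(src);
    // Pre-allocate on a rough guess to limit reallocation.
    let mut tokens = Vec::with_capacity(src.len() / 2);
    while let Some(result) = lexer.next() {
        let span = lexer.span();
        let kind = match result {
            // Identifiers are re-checked against the keyword table so that
            // keywords get their own SyntaxKind instead of T_IDENT.
            Ok(Token::Ident) => keyword_kind(&src[span.clone()]).unwrap_or(SyntaxKind::T_IDENT),
            Ok(token) => syntax_kind_for(token),
            // Unrecognised input becomes an error token rather than a panic.
            Err(_) => SyntaxKind::N_ERROR,
        };
        // Trivia (whitespace, comments) is pushed too, preserving every byte.
        tokens.push((kind, span));
    }
    tokens
}
```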

tests/tokenizer.rs (5)

1-11: Well-structured test module with appropriate setup.

The module documentation clearly describes the test scope, and the clippy expectation for .expect() usage in tests is properly justified and scoped.


17-40: Good foundation tests for basic tokenization.

The single token tests and span verification provide solid coverage of fundamental tokenizer behaviour. The fixture usage demonstrates good test organisation practices.


55-68: Excellent coverage of extended numeric formats.

These tests verify support for hexadecimal, binary, octal, and scientific notation number formats, ensuring the comprehensive regex pattern works correctly across all supported numeric literals.
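For example, parameterised cases of roughly this shape would exercise those formats. The literal values are illustrative assumptions, and the assertions presume SyntaxKind implements Clone, PartialEq, and Debug:

```rust
use ddlint::{tokenize, SyntaxKind};
use rstest::rstest;

#[rstest]
#[case("0x1F")]
#[case("0b1010")]
#[case("0o755")]
#[case("6.022e23")]
fn extended_numeric_literals_tokenize_as_numbers(#[case] src: &str) {
    let tokens = tokenize(src);
    let (kind, span) = tokens.first().cloned().expect("expected a token");
    // The whole literal should be one T_NUMBER token spanning the input.
    assert_eq!(kind, SyntaxKind::T_NUMBER);
    assert_eq!(span, 0..src.len());
}
```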


86-107: Robust error handling test coverage.

The tests properly verify that unknown characters and unterminated strings produce error tokens, ensuring the tokenizer handles invalid input gracefully.
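A sketch of one such error case is shown below; the real tests may assert more, and the N_ERROR variant name is taken from the class diagram above:

```rust
use ddlint::{tokenize, SyntaxKind};

#[test]
fn unterminated_string_produces_an_error_token() {
    // The closing quote is missing, so the string pattern cannot match.
    let tokens = tokenize("\"unterminated");
    assert!(
        tokens.iter().any(|(kind, _)| *kind == SyntaxKind::N_ERROR),
        "expected at least one error token"
    );
}
```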


216-224: Valuable integration test for real-world usage.

The complex expression test ensures the tokenizer works correctly on realistic DDlog code, verifying that the implication operator and punctuation are properly recognised without producing errors.
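A sketch of this kind of end-to-end check follows; the rule text and the assumption that `:-` maps to T_IMPLIES are illustrative, not copied from tests/tokenizer.rs:

```rust
use ddlint::{tokenize, SyntaxKind};

#[test]
fn complex_rule_tokenizes_without_errors() {
    let src = "Path(x, y) :- Edge(x, y).";
    let tokens = tokenize(src);
    // Every character should be recognised, so no error tokens are expected,
    // and the implication operator should appear exactly as one token.
    assert!(tokens.iter().all(|(kind, _)| *kind != SyntaxKind::N_ERROR));
    assert!(tokens.iter().any(|(kind, _)| *kind == SyntaxKind::T_IMPLIES));
}
```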


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai auto-generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and they look great!


Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

🔭 Outside diff range comments (1)
src/lib.rs (1)

1-3: Update the crate documentation to reflect expanded scope.

The module comment states "Currently exposes only the parser language definitions" but now also exposes tokenizer functionality.

 //! Library crate for ddlint.
 //!
-//! Currently exposes only the parser language definitions.
+//! Exposes parser language definitions and lexical analysis functionality.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ade219 and 21ac3ff.

📒 Files selected for processing (5)
  • Cargo.toml (1 hunks)
  • docs/parser-plan.md (1 hunks)
  • src/lib.rs (1 hunks)
  • src/tokenizer.rs (1 hunks)
  • tests/tokenizer.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
docs/**/*.md: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Proactively update the relevant file(s) in the docs/ directory to reflect the latest state when new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve.
Documentation in docs/ must use en-GB-oxendict spelling and grammar, except for the word 'license'.
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Markdown Mermaid diagrams using the make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks in Markdown must be wrapped at 120 columns.
Tables and headings in Markdown must not be wrapped.

  • docs/parser-plan.md
`Cargo.toml`: Use explicit version ranges in `Cargo.toml` and keep dependencies up-to-date.

  • Cargo.toml
**/*.rs: Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.
Use rstest fixtures for shared setup.
Replace duplicated tests with #[rstest(...)] parameterised cases.
Prefer mockall for mocks/stubs.
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.

  • src/lib.rs
  • tests/tokenizer.rs
  • src/tokenizer.rs
🔇 Additional comments (7)
docs/parser-plan.md (1)

36-39: LGTM! Clear explanation of the lexer design choice.

The documentation effectively explains why the logos lexer was chosen over chumsky's text utilities, highlighting the benefits of declarative token definitions whilst maintaining parser interoperability.

src/lib.rs (1)

8-8: LGTM! Clean module export structure.

The public module declaration and re-exports properly expose the tokenizer functionality at the crate root.

Also applies to: 11-11

tests/tokenizer.rs (1)

1-17: LGTM! Good use of rstest for parameterised testing.

The test structure follows the coding guidelines well, using rstest fixtures and parameterised cases as recommended.

src/tokenizer.rs (4)

1-6: LGTM! Excellent module documentation.

The module documentation clearly explains the purpose and approach, following the coding guidelines for module-level comments.


180-182: LGTM! Good use of #[must_use] attribute.

The function signature correctly uses the #[must_use] attribute and follows naming conventions.


187-187: Replace unwrap_or with expect per coding guidelines.

The coding guidelines prefer .expect() over .unwrap() methods for better error context.

-        let text = src.get(span.clone()).unwrap_or("");
+        let text = src.get(span.clone()).expect("lexer span should be valid for source");
⛔ Skipped due to learnings
Learnt from: CR
PR: leynos/wireframe#0
File: AGENTS.md:0-0
Timestamp: 2025-06-24T16:40:23.802Z
Learning: Prefer `.expect()` over `.unwrap()` in Rust.
Learnt from: CR
PR: leynos/ddlint#0
File: AGENTS.md:0-0
Timestamp: 2025-06-24T16:39:54.809Z
Learning: Prefer `.expect()` over `.unwrap()` in Rust.

25-26: Review string literal pattern for completeness.

The string regex r#""([^"\\]|\\.)*""# handles basic escaped characters but may not cover all DDlog string literal requirements. Verify it handles:

  • Escaped quotes: "\""
  • Escaped backslashes: "\\"
  • Unicode escapes: "\u{1234}"
  • Other escape sequences: "\n", "\t", etc.
#!/bin/bash
# Test the string regex against various cases
python3 << 'EOF'
import re
pattern = r'"([^"\\]|\\.)*"'
test_cases = [
    '""',           # empty string
    '"hello"',      # simple string
    '"don\'t"',     # apostrophe
    '"\""',         # escaped quote
    '"\\\\"',       # escaped backslash
    '"line1\\nline2"', # escaped newline
    '"\\u{1234}"',  # unicode escape
]

for case in test_cases:
    match = re.fullmatch(pattern, case)
    print(f"{case:15} -> {'✓' if match else '✗'}")
EOF

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (1)
tests/tokenizer.rs (1)

1-72: Add required module-level documentation.

The coding guidelines require that every module must begin with a module-level (//!) comment explaining the module's purpose and utility.

Add a module-level comment at the beginning of the file:

+//! Integration tests for the tokenizer module.
+//! 
+//! Tests verify that the logos-based lexer correctly tokenises DDlog source code
+//! into (SyntaxKind, Span) pairs, covering keywords, literals, trivia, and error cases.
+
 use ddlint::{SyntaxKind, tokenize};
♻️ Duplicate comments (3)
tests/tokenizer.rs (3)

9-17: Expand test coverage for comprehensive tokenizer validation.

The current tests only cover a few keywords and identifiers. Consider adding test cases for all token types, operators, punctuation, edge cases, and complex expressions to ensure comprehensive validation of the tokenizer functionality.


23-23: Improve error handling by using .expect() instead of .unwrap_or().

The coding guidelines prefer .expect() over .unwrap() methods for better error messages.

Apply this diff:

-        let text = simple_input.get(span.clone()).unwrap_or("");
+        let text = simple_input.get(span.clone()).expect("span should be valid for input");

53-57: Replace panic with .expect() for clearer error handling.

Same issue as the previous test function - use .expect() instead of the panic pattern.

Apply this diff:

-    let first = tokens
-        .first()
-        .cloned()
-        .unwrap_or_else(|| panic!("no token"));
+    let first = tokens
+        .first()
+        .cloned()
+        .expect("tokenizer should produce at least one token");
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 21ac3ff and cf35d9f.

📒 Files selected for processing (1)
  • tests/tokenizer.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.rs`: Document public APIs using Rustdoc comments (`///`) so documentation can be generated with cargo doc. Every module must begin with a module level (`//!`) comment explain...

**/*.rs: Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.
Use rstest fixtures for shared setup.
Replace duplicated tests with #[rstest(...)] parameterised cases.
Prefer mockall for mocks/stubs.
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.

  • tests/tokenizer.rs

coderabbitai Bot added a commit that referenced this pull request Jun 24, 2025
Docstrings generation was requested by @leynos.

* #7 (comment)

The following files were modified:

* `src/tokenizer.rs`
* `tests/tokenizer.rs`
@coderabbitai
Contributor

coderabbitai Bot commented Jun 24, 2025

Note

Generated docstrings for this pull request at #8

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (5)
Cargo.toml (1)

10-13: Dependency version ranges properly addressed.

The explicit version ranges for both logos and rstest dependencies now follow the coding guidelines and address the concerns raised in previous reviews.

tests/tokenizer.rs (4)

29-29: Replace .unwrap_or("") with .expect() for better error handling.

This unresolved issue from previous reviews violates the coding guidelines which prefer .expect() over .unwrap() methods.

Apply this diff:

-        let text = simple_input.get(span.clone()).unwrap_or("");
+        let text = simple_input.get(span.clone()).expect("span should be valid for input");

44-47: Replace .unwrap_or_else(|| panic!()) with .expect() for clearer error handling.

This unresolved issue from previous reviews should use .expect() as preferred by the coding guidelines.

Apply this diff:

-    let first = tokens
-        .first()
-        .cloned()
-        .unwrap_or_else(|| panic!("no token"));
+    let first = tokens
+        .first()
+        .cloned()
+        .expect("tokenizer should produce at least one token");

59-62: Replace .unwrap_or_else(|| panic!()) with .expect() for clearer error handling.

Consistent with previous suggestions, use .expect() for better error messages.

Apply this diff:

-    let first = tokens
-        .first()
-        .cloned()
-        .unwrap_or_else(|| panic!("no token"));
+    let first = tokens
+        .first()
+        .cloned()
+        .expect("tokenizer should produce at least one token");

72-75: Replace .unwrap_or_else(|| panic!()) with .expect() for clearer error handling.

This unresolved issue from previous reviews needs to be addressed for consistency with coding guidelines.

Apply this diff:

-    let first = tokens
-        .first()
-        .cloned()
-        .unwrap_or_else(|| panic!("no token"));
+    let first = tokens
+        .first()
+        .cloned()
+        .expect("tokenizer should produce at least one token");
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cf35d9f and 9625b8c.

📒 Files selected for processing (4)
  • Cargo.toml (1 hunks)
  • docs/parser-plan.md (1 hunks)
  • src/lib.rs (1 hunks)
  • tests/tokenizer.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
`Cargo.toml`: Use explicit version ranges in `Cargo.toml` and keep dependencies up-to-date.

  • Cargo.toml
**/*.rs: Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.
Use rstest fixtures for shared setup.
Replace duplicated tests with #[rstest(...)] parameterised cases.
Prefer mockall for mocks/stubs.
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.

  • tests/tokenizer.rs
  • src/lib.rs
docs/**/*.md: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Proactively update the relevant file(s) in the docs/ directory to reflect the latest state when new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve.
Documentation in docs/ must use en-GB-oxendict spelling and grammar, except for the word 'license'.
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Markdown Mermaid diagrams using the make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks in Markdown must be wrapped at 120 columns.
Tables and headings in Markdown must not be wrapped.

  • docs/parser-plan.md
🔇 Additional comments (5)
src/lib.rs (2)

3-3: Documentation update appropriately reflects expanded functionality.

The crate documentation accurately describes the addition of lexical analysis capabilities.


8-11: Clean integration of tokenizer module into public API.

The tokenizer module declaration and re-exports follow Rust conventions and provide a clean public interface for the lexical analysis functionality.

docs/parser-plan.md (1)

36-55: Excellent documentation of tokenizer design and workflow.

The addition clearly explains the rationale for choosing logos over chumsky text utilities, and the sequence diagram effectively illustrates the tokenization process from client request through to returning (SyntaxKind, Span) pairs.

tests/tokenizer.rs (2)

1-8: Excellent module documentation and imports.

The module-level documentation clearly explains the purpose and scope of the tokenizer tests, and the imports are appropriate for the testing framework used.


97-111: Comprehensive test coverage demonstrates robust tokenizer implementation.

The test suite effectively covers edge cases, error conditions, and complex expressions, providing good confidence in the tokenizer's correctness across various input scenarios.

@leynos
Owner Author

leynos commented Jun 25, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jun 25, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@leynos
Owner Author

leynos commented Jun 25, 2025

@sourcery-ai review

Repository owner deleted a comment from coderabbitai Bot Jun 25, 2025
Repository owner deleted a comment from coderabbitai Bot Jun 25, 2025
Repository owner deleted a comment from coderabbitai Bot Jun 25, 2025
Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes - here's some feedback:

  • The raw‐string regex for number literals uses \\. which matches a backslash+dot rather than the intended literal dot; switch to r"[0-9]+(?:\.[0-9]+)?(?:[eE][+-]?[0-9]+)?" (or split parts) to correctly match decimals.
  • Consider replacing the runtime HashMap in KEYWORDS with a compile‐time phf::Map (or equivalent) to eliminate startup overhead and speed up keyword lookups.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The raw‐string regex for number literals uses `\\.` which matches a backslash+dot rather than the intended literal dot; switch to `r"[0-9]+(?:\.[0-9]+)?(?:[eE][+-]?[0-9]+)?"` (or split parts) to correctly match decimals.
- Consider replacing the runtime `HashMap` in `KEYWORDS` with a compile‐time `phf::Map` (or equivalent) to eliminate startup overhead and speed up keyword lookups.

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 23a5207 and 373ae27.

📒 Files selected for processing (1)
  • tests/tokenizer.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs: Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.
Use rstest fixtures for shared setup.
Replace duplicated tests with #[rstest(...)] parameterised cases.
Prefer mockall for mocks/stubs.
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.

  • tests/tokenizer.rs
🔇 Additional comments (2)
tests/tokenizer.rs (2)

1-15: Excellent module setup and adherence to testing guidelines.

The module documentation clearly explains the purpose, and the use of rstest fixtures aligns perfectly with the coding guidelines. The expect attribute with a clear reason is appropriate for test code.


17-216: Comprehensive test coverage successfully addresses previous feedback.

The test suite now provides excellent coverage of the tokenizer functionality, addressing the expansion suggestions from previous reviews. The tests cover all major token types, error conditions, edge cases, and complex expressions effectively.

Comment thread tests/tokenizer.rs Outdated
Comment thread tests/tokenizer.rs
@leynos leynos merged commit f842e97 into main Jun 25, 2025
2 checks passed
@leynos leynos deleted the codex/build-differential-datalog-tokenizer branch June 25, 2025 02:07