
Refactor parser span handling#48

Merged
leynos merged 5 commits into main from codex/refactor-build_green_tree-parameters
Jul 4, 2025

Conversation

@leynos
Owner

@leynos leynos commented Jul 4, 2025

Summary

  • collapse parser span arguments into a ParsedSpans struct
  • adjust parse and build_green_tree to use new struct
  • update documentation for parser plan

Testing

  • make fmt
  • make lint
  • make test
  • make markdownlint
  • make nixie

https://chatgpt.com/codex/tasks/task_e_6867382d39a083228a9dab0ba898535c

Summary by Sourcery

Refactor parser span handling by consolidating individual span lists into a unified ParsedSpans struct, updating core functions and documentation to use the new struct, and adding validation for sorted spans.

Enhancements:

  • Introduce ParsedSpans struct to encapsulate all parser span collections
  • Simplify parse_tokens and build_green_tree signatures to accept a single ParsedSpans argument
  • Add assert_sorted method on ParsedSpans to enforce sorted and non-overlapping spans (sketched below)
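
A minimal sketch of the consolidated shape (field names and derives follow the review below; the Span alias is assumed to be the std::ops::Range<usize> produced by the tokenizer, and the commented signatures are illustrative):

use std::ops::Range;

/// Assumed alias: the tokenizer yields (SyntaxKind, Span) pairs keyed by byte range.
type Span = Range<usize>;

/// One struct in place of six parallel span lists.
#[derive(Debug, Default, Clone, PartialEq)]
pub struct ParsedSpans {
    pub imports: Vec<Span>,
    pub typedefs: Vec<Span>,
    pub relations: Vec<Span>,
    pub indexes: Vec<Span>,
    pub functions: Vec<Span>,
    pub rules: Vec<Span>,
}

// Before: build_green_tree(tokens, src, imports, typedefs, relations, indexes, functions, rules)
// After:  build_green_tree(tokens, src, spans: &ParsedSpans)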

Documentation:

  • Update parser-plan.md and inline doc comments to reference ParsedSpans instead of individual span lists

Tests:

  • Adapt tests for build_green_tree to construct and use ParsedSpans in panic scenarios

@sourcery-ai
Contributor

sourcery-ai Bot commented Jul 4, 2025

Reviewer's Guide

Encapsulate parser span collections into a ParsedSpans struct and update parsing and green-tree construction interfaces to use it, simplifying span management and enforcing sorted invariants.

Sequence diagram for updated parser span handling

sequenceDiagram
    participant Parser
    participant SpanCollector
    participant CSTBuilder
    participant ASTRoot
    Parser->>SpanCollector: parse_tokens(tokens, src)
    SpanCollector-->>Parser: (ParsedSpans, errors)
    Parser->>CSTBuilder: build_green_tree(tokens, src, ParsedSpans)
    CSTBuilder-->>ASTRoot: Root::from_green(green)
    ASTRoot->>ASTRoot: functions() -> Vec<Function>

Class diagram for ParsedSpans and parser refactor

classDiagram
    class ParsedSpans {
        +Vec<Span> imports
        +Vec<Span> typedefs
        +Vec<Span> relations
        +Vec<Span> indexes
        +Vec<Span> functions
        +Vec<Span> rules
        +assert_sorted()
    }
    class Parsed {
        +GreenNode green
        +Vec<Simple<SyntaxKind>> errors
    }
    class GreenNode
    class Simple
    class SyntaxKind
    class Span

    ParsedSpans --* Span : contains
    Parsed --* GreenNode : has
    Parsed --* Simple : has errors
    Simple --* SyntaxKind : for

File-Level Changes

Change Details Files
Introduce ParsedSpans struct to encapsulate span categories
  • Define ParsedSpans with imports, typedefs, relations, indexes, functions, and rules fields
  • Derive Debug, Default, Clone, PartialEq for ParsedSpans
  • Implement assert_sorted method using ensure_span_lists_sorted
src/parser/mod.rs
Refactor parse_tokens to return ParsedSpans
  • Change parse_tokens signature to return (ParsedSpans, errors)
  • Update doc comments and examples to reference ParsedSpans
  • Wrap individual span vectors into a ParsedSpans instance for return
src/parser/mod.rs
Update parse and build_green_tree to use ParsedSpans
  • Update parse() so its call to parse_tokens destructures a single (spans, errors) tuple
  • Modify build_green_tree signature to accept ParsedSpans instead of multiple lists
  • Replace manual span-sorting checks with spans.assert_sorted()
src/parser/mod.rs
Update tests and documentation for ParsedSpans usage
  • Adjust unit tests to construct and pass ParsedSpans to build_green_tree
  • Change panic tests to call assert_sorted on a ParsedSpans instance
  • Revise parser-plan.md sequence diagram to reference spans parameter
src/parser/mod.rs
docs/parser-plan.md

Possibly linked issues

  • #0: PR implements issue's solution by creating ParsedSpans struct and updating build_green_tree/parse_tokens.

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

Summary by CodeRabbit

  • Documentation

    • Updated sequence diagram to reflect a parameter rename for clarity.
  • New Features

    • Introduced a unified structure to group all parsed statement categories, improving organisation and future extensibility.
  • Refactor

    • Simplified function signatures by replacing multiple span vectors with a single grouped struct.
    • Updated related documentation and tests to align with the new structure.

Summary by CodeRabbit

  • Documentation

    • Updated sequence diagram to reflect a parameter name change for clarity.
  • Refactor

    • Improved code organisation by grouping related parsed statement spans into a single struct, simplifying the parsing interface. No changes to parsing logic or error handling.

Walkthrough

The parser module was refactored to introduce a ParsedSpans struct, consolidating vectors of statement spans into a single structure. Function signatures for parse_tokens and build_green_tree were updated to use this struct. Documentation and a related sequence diagram in the docs were updated to reflect the parameter renaming and new struct usage.

Changes

File(s) Change Summary
src/parser/mod.rs Introduced public ParsedSpans struct; updated parse_tokens and build_green_tree signatures to use it; refactored internal logic and documentation accordingly.
docs/parser-plan.md Updated sequence diagram to rename parameter from function_spans to spans in build_green_tree call.

Sequence Diagram(s)

sequenceDiagram
    participant Parser
    participant CSTBuilder

    Parser->>Parser: parse_tokens(tokens, src)
    Parser->>Parser: returns (ParsedSpans, errors)
    Parser->>CSTBuilder: build_green_tree(tokens, src, spans: ParsedSpans)
    CSTBuilder-->>Parser: GreenNode

Poem

A bundle of spans in a struct so neat,
No more tuples crowding our parser’s seat!
With functions and rules all tucked in one place,
The code hops along at a tidier pace.
🐇✨

“Spans together, we parse with glee—
Refactored and ready, as clean as can be!”

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 Clippy (1.86.0)
Updating crates.io index

warning: failed to write cache, path: /usr/local/registry/index/index.crates.io-1949cf8c6b5b557f/.cache/ch/um/chumsky, error: Permission denied (os error 13)
Locking 60 packages to latest Rust 1.86.0 compatible versions
Adding chumsky v0.9.3 (available: v0.10.1)
Adding logos v0.13.0 (available: v0.15.0)
Adding phf v0.11.3 (available: v0.12.1)
Adding rowan v0.15.16 (available: v0.16.1)
Downloading crates ...
Downloaded ahash v0.8.12
error: failed to create directory /usr/local/registry/cache/index.crates.io-1949cf8c6b5b557f

Caused by:
Permission denied (os error 13)


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai auto-generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes - here's some feedback:

  • You’ve got six repetitive assert_spans_sorted(&spans.<field>) calls in build_green_tree – consider adding an assert_sorted(&self) method on ParsedSpans or using a small macro/iterator to DRY that up.
  • Deriving or implementing Default (and possibly Clone) for ParsedSpans could improve ergonomics when you need an empty or duplicated spans struct in tests or future code paths.
  • You might add a method on ParsedSpans to iterate over all Span vectors (e.g. all_spans(&self)) for any generic processing, rather than accessing each field manually; a possible shape is sketched below.
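
A possible shape for both of these suggestions; a sketch only, assuming the Span alias is std::ops::Range<usize>, with illustrative labels rather than the merged code:

impl ParsedSpans {
    /// Yield every category with a label, so generic checks need no field list.
    fn all_spans(&self) -> impl Iterator<Item = (&'static str, &[Span])> + '_ {
        [
            ("imports", self.imports.as_slice()),
            ("typedefs", self.typedefs.as_slice()),
            ("relations", self.relations.as_slice()),
            ("indexes", self.indexes.as_slice()),
            ("functions", self.functions.as_slice()),
            ("rules", self.rules.as_slice()),
        ]
        .into_iter()
    }

    /// Panic if any list is unsorted or overlapping, naming the offending category.
    fn assert_sorted(&self) {
        for (name, list) in self.all_spans() {
            assert!(
                list.windows(2).all(|w| w[0].end <= w[1].start),
                "{name} spans must be sorted and non-overlapping"
            );
        }
    }
}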
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- You’ve got six repetitive `assert_spans_sorted(&spans.<field>)` calls in `build_green_tree` – consider adding an `assert_sorted(&self)` method on ParsedSpans or using a small macro/iterator to DRY that up.
- Deriving or implementing `Default` (and possibly `Clone`) for ParsedSpans could improve ergonomics when you need an empty or duplicated spans struct in tests or future code paths.
- You might add a method on ParsedSpans to iterate over all Span vectors (e.g. `all_spans(&self)`) for any generic processing, rather than accessing each field manually.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 61186b1 and 73efb52.

📒 Files selected for processing (2)
  • docs/parser-plan.md (1 hunks)
  • src/parser/mod.rs (5 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
`docs/**/*.md`: Documentation must use en-GB-oxendict spelling and grammar (with the exception of 'license' which is to be left unchanged for community consistency).

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • docs/parser-plan.md
`**/*.md`: Markdown paragraphs and bullet points must be wrapped at 80 columns. Code blocks must be wrapped at 120 columns. Tables and headings must not be wrapped.


📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • docs/parser-plan.md

**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-oxendict spelling and grammar.
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • docs/parser-plan.md

**/*.rs: Comment why, not what. Explain assumptions, edge cases, trade-offs, or complexity. Don't echo the obvious.
Comments must use en-GB-oxendict spelling and grammar.
Function documentation must include clear examples.
Name things precisely. Use clear, descriptive variable and function names. For booleans, prefer names with is, has, or should.
Each file should encapsulate a coherent module. Group related code (e.g., models + utilities + fixtures) close together.
Group by feature, not layer. Colocate views, logic, fixtures, and helpers related to a domain concept rather than splitting by type.
Every module must begin with a module level (//! ) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Place function attributes after doc comments.
Do not use return in single-line functions.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Clippy warnings MUST be disallowed.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Use predicate functions for conditional criteria with more than two branches.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Prefer .expect() over .unwrap().

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • src/parser/mod.rs

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments must use en-GB-oxendict spelling and grammar.

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • src/parser/mod.rs
🔇 Additional comments (4)
src/parser/mod.rs (3)

242-244: LGTM!

The function correctly uses the new ParsedSpans struct, maintaining the same functionality whilst improving code organisation.


257-258: Comprehensive update to use the new struct.

The documentation, function signature, and return value construction are all correctly updated to use ParsedSpans. The examples in documentation remain clear and helpful.

Also applies to: 263-264, 269-269, 283-290


832-833: Excellent API simplification.

The function signature is much cleaner with a single ParsedSpans parameter instead of multiple individual slice parameters. All usages are correctly updated to access the struct fields, and the validation logic remains intact.

Also applies to: 835-841, 845-850

docs/parser-plan.md (1)

78-78: Documentation correctly updated to reflect code changes.

The parameter name change from function_spans to spans accurately reflects the new ParsedSpans struct that contains all span types, not just function spans.

Comment thread src/parser/mod.rs
@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai resume

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Reviews resumed.

@leynos
Owner Author

leynos commented Jul 4, 2025

@sourcery-ai review

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes - here's some feedback:

  • The doc comment on build_green_tree only mentions spans.imports and spans.typedefs—please update it to list all of the ParsedSpans fields (imports, typedefs, relations, indexes, functions, rules) for clarity.
  • Consider renaming ParsedSpans::assert_sorted to something like debug_assert_sorted and using debug_assert! internally so that ordering checks are only active in debug builds (see the sketch after this list).
  • Since you already derive PartialEq for ParsedSpans, you can also derive Eq to fully support equality semantics.
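
A sketch of the debug-only variant suggested above, using the ParsedSpans fields from this PR; debug_assert! compiles to nothing in release builds, so the scan costs nothing in production:

impl ParsedSpans {
    /// Ordering check that runs only in debug and test builds.
    fn debug_assert_sorted(&self) {
        debug_assert!(
            [
                &self.imports,
                &self.typedefs,
                &self.relations,
                &self.indexes,
                &self.functions,
                &self.rules,
            ]
            .iter()
            .all(|list| list.windows(2).all(|w| w[0].end <= w[1].start)),
            "span lists must be sorted and non-overlapping"
        );
    }
}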
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The doc comment on `build_green_tree` only mentions `spans.imports` and `spans.typedefs`—please update it to list all of the ParsedSpans fields (imports, typedefs, relations, indexes, functions, rules) for clarity.
- Consider renaming `ParsedSpans::assert_sorted` to something like `debug_assert_sorted` and using `debug_assert!` internally so that ordering checks are only active in debug builds.
- Since you already derive `PartialEq` for `ParsedSpans`, you can also derive `Eq` to fully support equality semantics.

## Individual Comments

### Comment 1
<location> `src/parser/mod.rs:850` </location>
<code_context>
+/// that tokens are wrapped into well-formed nodes during tree construction.
+/// Spans are checked with debug assertions.
+fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode {
+    spans.assert_sorted();
     let mut builder = GreenNodeBuilder::new();
     builder.start_node(DdlogLanguage::kind_to_raw(SyntaxKind::N_DATALOG_PROGRAM));
</code_context>

<issue_to_address>
assert_sorted is always called, which may impact performance in release builds.

Consider replacing with debug_assert! or guarding with a feature flag if this check is only needed during development.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
    spans.assert_sorted();
=======
    debug_assert!({
        spans.assert_sorted();
        true
    });
>>>>>>> REPLACE

</suggested_fix>

### Comment 2
<location> `src/parser/mod.rs:197` </location>
<code_context>
     errors: Vec<Simple<SyntaxKind>>,
 }

+/// Spans for each parsed statement category.
+#[derive(Debug, Default, Clone, PartialEq)]
+pub struct ParsedSpans {
</code_context>

<issue_to_address>
Module is missing a `//!` module-level comment at the top.

Please add a `//!` doc comment at the very top of the module to describe its purpose, as required by the review instructions.
</issue_to_address>


Comment thread src/parser/mod.rs
/// that tokens are wrapped into well-formed nodes during tree construction.
/// Spans are checked with debug assertions.
fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode {
    spans.assert_sorted();
Contributor


suggestion (performance): assert_sorted is always called, which may impact performance in release builds.

Consider replacing with debug_assert! or guarding with a feature flag if this check is only needed during development.

Suggested change
-    spans.assert_sorted();
+    debug_assert!({
+        spans.assert_sorted();
+        true
+    });

Comment thread src/parser/mod.rs
    errors: Vec<Simple<SyntaxKind>>,
}

/// Spans for each parsed statement category.
Contributor


issue (review_instructions): Module is missing a //! module-level comment at the top.

Please add a //! doc comment at the very top of the module to describe its purpose, as required by the review instructions.

Review instructions:

Path patterns: **/*.rs

Instructions:
Every module must begin with a //! comment.
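
For illustration, a module header of the requested form; the wording is a hypothetical example, not the project's actual comment:

//! Parser for DDlog source files.
//!
//! Tokenises input, collects statement spans into `ParsedSpans`, and
//! builds the rowan green tree consumed by the typed AST layer.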

@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
src/parser/mod.rs (1)

214-226: Runtime cost of always calling assert_sorted.

This check executes in every build, even release, which may become significant for large files. Replacing it with debug_assert! (or an opt-in feature flag) avoids the cost in production while still catching bugs in debug/test builds.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 73efb52 and 2d1f73d.

📒 Files selected for processing (2)
  • docs/parser-plan.md (1 hunks)
  • src/parser/mod.rs (7 hunks)
🧰 Additional context used
📓 Path-based instructions (2)

**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-oxendict spelling and grammar.
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • docs/parser-plan.md

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments must use en-GB-oxendict spelling and grammar.

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • src/parser/mod.rs
🧬 Code Graph Analysis (1)
src/parser/mod.rs (3)
src/parser/token_stream.rs (3)
  • tokens (105-107)
  • src (118-120)
  • new (41-47)
src/parser/span_collector.rs (1)
  • new (44-50)
src/language.rs (1)
  • kind_to_raw (185-190)
🔇 Additional comments (1)
docs/parser-plan.md (1)

76-79: Documentation line LGTM.

The sequence diagram now matches the new ParsedSpans parameter.

Comment thread src/parser/mod.rs
Comment on lines +846 to 860
/// `spans.imports` and `spans.typedefs` must be sorted and non-overlapping so
/// that tokens are wrapped into well-formed nodes during tree construction.
/// Spans are checked with debug assertions.
fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode {
    spans.assert_sorted();
    let mut builder = GreenNodeBuilder::new();
    builder.start_node(DdlogLanguage::kind_to_raw(SyntaxKind::N_DATALOG_PROGRAM));

-    let mut import_iter = imports.iter().peekable();
-    let mut typedef_iter = typedefs.iter().peekable();
-    let mut relation_iter = relations.iter().peekable();
-    let mut index_iter = indexes.iter().peekable();
-    let mut function_iter = functions.iter().peekable();
-    let mut rule_iter = rules.iter().peekable();
+    let mut import_iter = spans.imports.iter().peekable();
+    let mut typedef_iter = spans.typedefs.iter().peekable();
+    let mut relation_iter = spans.relations.iter().peekable();
+    let mut index_iter = spans.indexes.iter().peekable();
+    let mut function_iter = spans.functions.iter().peekable();
+    let mut rule_iter = spans.rules.iter().peekable();

Contributor

@coderabbitai coderabbitai Bot Jul 4, 2025


🧹 Nitpick (assertive)

Pass tokens by slice to avoid an unnecessary move/clone.

build_green_tree consumes a whole Vec, but it only iterates. Accepting a slice keeps ownership with the caller and removes one move:

-fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode {
+fn build_green_tree(tokens: &[(SyntaxKind, Span)], src: &str, spans: &ParsedSpans) -> GreenNode {

Call-site in parse:

let green = build_green_tree(&tokens, src, &spans);

(Only the borrow changes; the rest of the code compiles unchanged.)

🤖 Prompt for AI Agents
In src/parser/mod.rs around lines 846 to 860, the function build_green_tree
currently takes ownership of a Vec of tokens, causing an unnecessary move.
Change the function signature to accept a slice of tokens instead of a Vec by
replacing Vec<(SyntaxKind, Span)> with &[(SyntaxKind, Span)]. Update all uses
inside the function accordingly to work with the slice. Also, update the call
site in the parse function to pass a reference to the tokens vector rather than
moving it, i.e., call build_green_tree(&tokens, src, &spans). This avoids
cloning or moving the vector and keeps ownership with the caller.

Owner Author


@coderabbitai New GitHub issue for this, please

Contributor


🧩 Analysis chain

@leynos I'll create a GitHub issue for this optimization suggestion.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --title "Optimise build_green_tree to accept token slice instead of Vec" \
  --body "## Summary

The \`build_green_tree\` function currently takes ownership of a \`Vec<(SyntaxKind, Span)>\` but only iterates over the tokens. This causes an unnecessary move operation.

## Proposed Change

Change the function signature from:
\`\`\`rust
fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode
\`\`\`

To:
\`\`\`rust
fn build_green_tree(tokens: &[(SyntaxKind, Span)], src: &str, spans: &ParsedSpans) -> GreenNode
\`\`\`

And update the call site in \`parse\` function:
\`\`\`rust
let green = build_green_tree(&tokens, src, &spans);
\`\`\`

## Benefits

- Avoids unnecessary move/clone operations
- Keeps ownership with the caller
- More idiomatic Rust (prefer borrowing over moving when possible)

## References

- Original PR: https://github.com/leynos/ddlint/pull/48
- Code review comment: https://github.com/leynos/ddlint/pull/48#discussion_r2186173998" \
  --assignee leynos

Length of output: 1047


Perfect! I've successfully created GitHub issue #51 for this optimization:

Optimise build_green_tree to accept token slice instead of Vec

The issue captures the technical details of the proposed change, explains the benefits of avoiding unnecessary move operations, and includes backlinks to this PR and comment for context. It's been assigned to you as requested.


🧬 Code Graph Analysis Results

Relevant snippets from src/parser/token_stream.rs (lines 41-47, 59-61, 76-78, 89-93, 105-107, 118-120, 158-171, 214-229)

pub(crate) fn new(tokens: &'a [(SyntaxKind, Span)], src: &'a str) -> Self {
    Self {
        tokens,
        src,
        cursor: 0,
    }
}

pub(crate) fn cursor(&self) -> usize {
    self.cursor
}

pub(crate) fn peek(&self) -> Option<&(SyntaxKind, Span)> {
    self.tokens.get(self.cursor)
}

pub(crate) fn advance(&mut self) {
    if self.cursor < self.tokens.len() {
        self.cursor += 1;
    }
}

pub(crate) fn tokens(&self) -> &[(SyntaxKind, Span)] {
    self.tokens
}

pub(crate) fn src(&self) -> &str {
    self.src
}

/// Returns the position immediately after the end of the current line starting at token index `start`.
/// If the token at `start` is out of bounds, returns the length of the source string.
/// Iterates tokens from `start` forward until a token containing a newline is found, returning its end offset.
pub(crate) fn line_end(&self, start: usize) -> usize {
    let mut end = self.tokens.get(start).map_or(self.src.len(), |t| t.1.end);
    for tok in self.tokens.iter().skip(start) {
        end = tok.1.end;
        if self
            .src
            .get(tok.1.clone())
            .is_some_and(|text| text.contains('\n'))
        {
            break;
        }
    }
    end
}

/// Peeks the next token after the current cursor, skipping inline whitespace and comment tokens that do not contain newlines.
/// Returns the next non-whitespace/comment token or None if none found.
pub(crate) fn peek_after_ws_inline(&self) -> Option<&(SyntaxKind, Span)> {
    let mut idx = self.cursor + 1;
    while let Some(tok) = self.tokens.get(idx) {
        if matches!(tok.0, SyntaxKind::T_WHITESPACE | SyntaxKind::T_COMMENT)
            && self
                .src
                .get(tok.1.clone())
                .is_some_and(|text| !text.contains('\n'))
        {
            idx += 1;
        } else {
            break;
        }
    }
    self.tokens.get(idx)
}

Relevant snippet from src/parser/span_collector.rs (lines 44-50)

/// Creates a new `SpanCollector` with the given tokens, source string, and extra data.
/// Initializes the internal `TokenStream` and an empty vector for collected spans.
pub(crate) fn new(tokens: &'a [(SyntaxKind, Span)], src: &'a str, extra: Extra) -> Self {
    Self {
        stream: TokenStream::new(tokens, src),
        spans: Vec::new(),
        extra,
    }
}

Relevant snippet from src/parser/ast/parse_utils.rs (lines 17-45 and 93-124)

/// Parses name/type pairs from an iterator over syntax elements.
/// Skips tokens until the first '(' is found, then collects pairs of identifiers and types.
/// Returns a vector of (name, type) string tuples.
pub(super) fn parse_name_type_pairs<I>(mut iter: I) -> Vec<(String, String)>
where
    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
    // Skip to the first '(' to handle leading trivia.
    for e in &mut iter {
        if e.kind() == SyntaxKind::T_LPAREN {
            break;
        }
    }

    let mut pairs = Vec::new();
    let mut buf = String::new();
    let mut name: Option<String> = None;
    let mut depth = 0usize;

    for e in iter {
        match e {
            NodeOrToken::Token(t) => {
                if handle_token(&t, &mut buf, &mut name, &mut pairs, &mut depth) {
                    break;
                }
            }
            NodeOrToken::Node(n) => buf.push_str(&n.text().to_string()),
        }
    }

    pairs
}

/// Parses a type annotation following a colon from a peekable iterator over syntax elements.
/// Skips whitespace and comments, expects a colon token, then collects tokens until a terminating token or newline.
/// Returns the type as a trimmed string or None if no type found.
pub(super) fn parse_type_after_colon<I>(iter: &mut std::iter::Peekable<I>) -> Option<String>
where
    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
    skip_whitespace_and_comments(iter);
    if !matches!(
        iter.peek().map(SyntaxElement::kind),
        Some(SyntaxKind::T_COLON)
    ) {
        return None;
    }
    iter.next();

    let mut buf = String::new();
    for e in iter {
        match e {
            NodeOrToken::Token(t) => match t.kind() {
                SyntaxKind::T_LBRACE | SyntaxKind::T_SEMI => break,
                SyntaxKind::T_WHITESPACE if t.text().contains('\n') => break,
                _ => buf.push_str(t.text()),
            },
            NodeOrToken::Node(n) => buf.push_str(&n.text().to_string()),
        }
    }

    let text = buf.trim();
    if text.is_empty() {
        None
    } else {
        Some(text.to_string())
    }
}

Relevant snippet from src/tokenizer.rs (lines 191-247)

/// Tokenizes the input source string into a vector of (SyntaxKind, Span) tuples.
/// Uses a lexer to identify tokens and map them to SyntaxKind variants.
/// On lexer errors, inserts N_ERROR tokens with the error span.
/// Supports keywords, identifiers, literals, punctuation, and operators.
pub fn tokenize(src: &str) -> Vec<(SyntaxKind, Span)> {
    let mut lexer = Token::lexer(src);
    let estimated_tokens = src.len() >> 2; // roughly four chars per token
    let mut out = Vec::with_capacity(estimated_tokens);
    while let Some(result) = lexer.next() {
        let span = lexer.span();
        let text = src.get(span.clone()).unwrap_or("");
        let Ok(token) = result else {
            out.push((SyntaxKind::N_ERROR, span));
            continue;
        };
        let kind = match token {
            Token::Whitespace => SyntaxKind::T_WHITESPACE,
            Token::Comment => SyntaxKind::T_COMMENT,
            Token::Ident => keyword_kind(text).unwrap_or(SyntaxKind::T_IDENT),
            Token::Number => SyntaxKind::T_NUMBER,
            Token::String => SyntaxKind::T_STRING,
            Token::LParen => SyntaxKind::T_LPAREN,
            Token::RParen => SyntaxKind::T_RPAREN,
            Token::LBrace => SyntaxKind::T_LBRACE,
            Token::RBrace => SyntaxKind::T_RBRACE,
            Token::LBracket => SyntaxKind::T_LBRACKET,
            Token::RBracket => SyntaxKind::T_RBRACKET,
            Token::Semi => SyntaxKind::T_SEMI,
            Token::Comma => SyntaxKind::T_COMMA,
            Token::Dot => SyntaxKind::T_DOT,
            Token::ColonColon => SyntaxKind::T_COLON_COLON,
            Token::Colon => SyntaxKind::T_COLON,
            Token::Pipe => SyntaxKind::T_PIPE,
            Token::Amp => SyntaxKind::T_AMP,
            Token::EqEq => SyntaxKind::T_EQEQ,
            Token::Eq => SyntaxKind::T_EQ,
            Token::Implies => SyntaxKind::T_IMPLIES,
            Token::Percent => SyntaxKind::T_PERCENT,
            Token::Star => SyntaxKind::T_STAR,
            Token::Slash => SyntaxKind::T_SLASH,
            Token::Plus => SyntaxKind::T_PLUS,
            Token::Minus => SyntaxKind::T_MINUS,
            Token::Arrow => SyntaxKind::T_ARROW,
            Token::FatArrow => SyntaxKind::T_FAT_ARROW,
            Token::Lte => SyntaxKind::T_LTE,
            Token::Spaceship => SyntaxKind::T_SPACESHIP,
            Token::Gte => SyntaxKind::T_GTE,
            Token::Lt => SyntaxKind::T_LT,
            Token::Gt => SyntaxKind::T_GT,
            Token::Neq => SyntaxKind::T_NEQ,
            Token::Shr => SyntaxKind::T_SHR,
            Token::Shl => SyntaxKind::T_SHL,
            Token::Tilde => SyntaxKind::T_TILDE,
            Token::At => SyntaxKind::T_AT,
            Token::Hash => SyntaxKind::T_HASH,
            Token::Apostrophe => SyntaxKind::T_APOSTROPHE,
        };
        out.push((kind, span));
    }
    out
}

Relevant snippet from src/language.rs (lines 185-190)

/// Converts a SyntaxKind variant to a raw rowan::SyntaxKind wrapper.
/// This is used to interface with the rowan syntax tree builder.
fn kind_to_raw(kind: Self::Kind) -> RowanSyntaxKind {
    RowanSyntaxKind(
        kind.to_u16()
            .unwrap_or_else(|| unreachable!("all SyntaxKind variants map to u16")),
    )
}

Relevant snippet from src/parser/mod.rs (macro token_dispatch!)

/// Macro to iterate over the token stream and dispatch handlers by SyntaxKind.
/// Expects a parsing context `ctx` with a `stream` field.
/// Loops until the stream is exhausted, invoking the handler associated with each recognized kind.
/// Handlers must advance the stream to consume tokens they process.
/// Any unhandled kind is skipped.
macro_rules! token_dispatch {
    ( $ctx:ident, {
        $( $kind:path => $handler:ident ),* $(,)?
    } ) => {{
        while let Some(&(kind, ref span_ref)) = $ctx.stream.peek() {
            let span = span_ref.clone();
            match kind {
                $( $kind => $handler(&mut $ctx, span.clone()), )*
                _ => $ctx.stream.advance(),
            }
        }
    }};
}

Summary of SpanCollector usage and role

  • SpanCollector is a helper struct that wraps a TokenStream and collects spans of recognized statements.
  • It is constructed with tokens, source string, and an extra data container (often a vector for errors).
  • It provides methods to parse spans of statements using chumsky parsers and to advance or skip tokens.
  • It is used extensively in functions like collect_import_spans, collect_typedef_spans, collect_relation_spans, collect_index_spans, collect_function_spans, and collect_rule_spans to identify statement spans and collect parse errors.

These snippets provide the core token stream handling, span collection, tokenization, and AST utilities necessary to understand and work with the src/parser/mod.rs file and its parsing logic.

Comment thread src/parser/mod.rs
Comment on lines +197 to +212
/// Spans for each parsed statement category.
#[derive(Debug, Default, Clone, PartialEq)]
pub struct ParsedSpans {
    /// `import` statement spans.
    pub imports: Vec<Span>,
    /// `typedef` statement spans.
    pub typedefs: Vec<Span>,
    /// `relation` declaration spans.
    pub relations: Vec<Span>,
    /// `index` declaration spans.
    pub indexes: Vec<Span>,
    /// `function` definition spans.
    pub functions: Vec<Span>,
    /// Rule spans.
    pub rules: Vec<Span>,
}
Contributor

@coderabbitai coderabbitai Bot Jul 4, 2025


🧹 Nitpick (assertive)

Restrict the public surface of ParsedSpans (and future-proof it).

ParsedSpans is exported with all fields pub, allowing external crates to mutate internal invariants and making it painful to extend the struct without a breaking change.

Consider:

-#[derive(Debug, Default, Clone, PartialEq)]
-pub struct ParsedSpans {
+#[derive(Debug, Default, Clone, PartialEq, Eq)]
+#[non_exhaustive]
+pub struct ParsedSpans {

and either (a) keep the fields private and expose a new() / builder API or (b) provide getter methods only. This guards invariants (e.g. sorted + non-overlapping) and lets you add more collections later without semver pain.
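
One way option (b) could look, sketched for a single category; the other five lists would follow the same pattern, and the Span alias assumption from earlier sketches applies:

#[derive(Debug, Default, Clone, PartialEq, Eq)]
#[non_exhaustive]
pub struct ParsedSpans {
    imports: Vec<Span>,
}

impl ParsedSpans {
    /// Taking ownership at construction lets invariants be checked once.
    pub fn new(imports: Vec<Span>) -> Self {
        Self { imports }
    }

    /// Read-only access keeps the sorted invariant inside the crate.
    pub fn imports(&self) -> &[Span] {
        &self.imports
    }
}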

🤖 Prompt for AI Agents
In src/parser/mod.rs around lines 197 to 212, the ParsedSpans struct currently
exposes all its fields as public, which risks external mutation and breaks
encapsulation. To fix this, make all fields private by removing the pub keyword,
then add a public constructor method new() to initialize the struct and provide
public getter methods for each field to allow read-only access. This approach
protects internal invariants and allows future extension without breaking
changes.

Owner Author


@coderabbitai New GitHub issue for this, please

Contributor


🧩 Analysis chain

@leynos I'll create a GitHub issue for the ParsedSpans encapsulation improvement.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --title "Improve encapsulation of ParsedSpans struct" \
  --body "The \`ParsedSpans\` struct currently exposes all fields as public, which allows external mutation and breaks encapsulation principles. This makes it difficult to maintain invariants (such as sorted and non-overlapping spans) and complicates future extensions without breaking changes.

The struct should be refactored to:
- Make fields private
- Provide a public constructor method
- Expose read-only access through getter methods
- Consider adding \`#[non_exhaustive]\` for future extensibility

This issue was raised during code review and requested to be tracked separately.

**References:**
- Original PR: https://github.com/leynos/ddlint/pull/48
- Comment: https://github.com/leynos/ddlint/pull/48#discussion_r2186173999
- Requested by: @leynos" \
  --assignee leynos

Length of output: 887


Perfect! I've created GitHub issue #52 for the ParsedSpans encapsulation improvement: #52

The issue tracks the need to improve encapsulation by making the struct fields private, adding proper constructor and getter methods, and considering #[non_exhaustive] for future extensibility. It's assigned to you and includes proper backlinks to this PR and comment thread.


🧬 Code Graph Analysis Results

Relevant snippets from src/parser/token_stream.rs (lines 41-47, 59-61, 76-78, 89-93, 105-107, 118-120, 158-171, 214-229)

pub(crate) fn new(tokens: &'a [(SyntaxKind, Span)], src: &'a str) -> Self {
    Self {
        tokens,
        src,
        cursor: 0,
    }
}

pub(crate) fn cursor(&self) -> usize {
    self.cursor
}

pub(crate) fn peek(&self) -> Option<&(SyntaxKind, Span)> {
    self.tokens.get(self.cursor)
}

pub(crate) fn advance(&mut self) {
    if self.cursor < self.tokens.len() {
        self.cursor += 1;
    }
}

pub(crate) fn tokens(&self) -> &[(SyntaxKind, Span)] {
    self.tokens
}

pub(crate) fn src(&self) -> &str {
    self.src
}

/// Returns the position immediately after the end of the current line starting at `start`.
/// If `start` is out of bounds, returns the length of the source string.
/// Iterates tokens from `start` forward until a token containing a newline is found.
pub(crate) fn line_end(&self, start: usize) -> usize {
    let mut end = self.tokens.get(start).map_or(self.src.len(), |t| t.1.end);
    for tok in self.tokens.iter().skip(start) {
        end = tok.1.end;
        if self
            .src
            .get(tok.1.clone())
            .is_some_and(|text| text.contains('\n'))
        {
            break;
        }
    }
    end
}

/// Peeks the next token after the current cursor, skipping inline whitespace and comments.
/// Inline whitespace/comments are those that do not contain a newline.
/// Returns the next non-whitespace/comment token or None if none found.
pub(crate) fn peek_after_ws_inline(&self) -> Option<&(SyntaxKind, Span)> {
    let mut idx = self.cursor + 1;
    while let Some(tok) = self.tokens.get(idx) {
        if matches!(tok.0, SyntaxKind::T_WHITESPACE | SyntaxKind::T_COMMENT)
            && self
                .src
                .get(tok.1.clone())
                .is_some_and(|text| !text.contains('\n'))
        {
            idx += 1;
        } else {
            break;
        }
    }
    self.tokens.get(idx)
}

Relevant snippet from src/parser/span_collector.rs (lines 44-50)

pub(crate) fn new(tokens: &'a [(SyntaxKind, Span)], src: &'a str, extra: Extra) -> Self {
    Self {
        stream: TokenStream::new(tokens, src),
        spans: Vec::new(),
        extra,
    }
}

Relevant snippet from src/parser/ast/parse_utils.rs (lines 17-45, 93-124)

/// Parses name/type pairs from an iterator of syntax elements.
/// Returns a vector of (name, type) string pairs.
/// Skips tokens until the first '(' is found, then parses pairs separated by commas.
/// Handles nested parentheses and collects tokens into strings.
pub(super) fn parse_name_type_pairs<I>(mut iter: I) -> Vec<(String, String)>
where
    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
    // Skip to the first '(' to handle leading trivia.
    for e in &mut iter {
        if e.kind() == SyntaxKind::T_LPAREN {
            break;
        }
    }

    let mut pairs = Vec::new();
    let mut buf = String::new();
    let mut name: Option<String> = None;
    let mut depth = 0usize;

    for e in iter {
        match e {
            NodeOrToken::Token(t) => {
                if handle_token(&t, &mut buf, &mut name, &mut pairs, &mut depth) {
                    break;
                }
            }
            NodeOrToken::Node(n) => buf.push_str(&n.text().to_string()),
        }
    }

    pairs
}

/// Parses a type annotation after a colon token from a peekable iterator of syntax elements.
/// Returns the type as a string if present, or None if no colon or empty type.
/// Stops parsing at '{', ';', or newline whitespace tokens.
pub(super) fn parse_type_after_colon<I>(iter: &mut std::iter::Peekable<I>) -> Option<String>
where
    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
    skip_whitespace_and_comments(iter);
    if !matches!(
        iter.peek().map(SyntaxElement::kind),
        Some(SyntaxKind::T_COLON)
    ) {
        return None;
    }
    iter.next();

    let mut buf = String::new();
    for e in iter {
        match e {
            NodeOrToken::Token(t) => match t.kind() {
                SyntaxKind::T_LBRACE | SyntaxKind::T_SEMI => break,
                SyntaxKind::T_WHITESPACE if t.text().contains('\n') => break,
                _ => buf.push_str(t.text()),
            },
            NodeOrToken::Node(n) => buf.push_str(&n.text().to_string()),
        }
    }

    let text = buf.trim();
    if text.is_empty() {
        None
    } else {
        Some(text.to_string())
    }
}

Relevant snippet from src/tokenizer.rs (lines 191-247)

/// Tokenizes the source string into a vector of (SyntaxKind, Span) tuples.
/// Uses a lexer to produce tokens, mapping lexer tokens to SyntaxKind variants.
/// On lexer errors, inserts N_ERROR tokens with the error span.
/// Supports keywords, identifiers, literals, punctuation, and operators.
pub fn tokenize(src: &str) -> Vec<(SyntaxKind, Span)> {
    let mut lexer = Token::lexer(src);
    let estimated_tokens = src.len() >> 2; // roughly four chars per token
    let mut out = Vec::with_capacity(estimated_tokens);
    while let Some(result) = lexer.next() {
        let span = lexer.span();
        let text = src.get(span.clone()).unwrap_or("");
        let Ok(token) = result else {
            out.push((SyntaxKind::N_ERROR, span));
            continue;
        };
        let kind = match token {
            Token::Whitespace => SyntaxKind::T_WHITESPACE,
            Token::Comment => SyntaxKind::T_COMMENT,
            Token::Ident => keyword_kind(text).unwrap_or(SyntaxKind::T_IDENT),
            Token::Number => SyntaxKind::T_NUMBER,
            Token::String => SyntaxKind::T_STRING,
            Token::LParen => SyntaxKind::T_LPAREN,
            Token::RParen => SyntaxKind::T_RPAREN,
            Token::LBrace => SyntaxKind::T_LBRACE,
            Token::RBrace => SyntaxKind::T_RBRACE,
            Token::LBracket => SyntaxKind::T_LBRACKET,
            Token::RBracket => SyntaxKind::T_RBRACKET,
            Token::Semi => SyntaxKind::T_SEMI,
            Token::Comma => SyntaxKind::T_COMMA,
            Token::Dot => SyntaxKind::T_DOT,
            Token::ColonColon => SyntaxKind::T_COLON_COLON,
            Token::Colon => SyntaxKind::T_COLON,
            Token::Pipe => SyntaxKind::T_PIPE,
            Token::Amp => SyntaxKind::T_AMP,
            Token::EqEq => SyntaxKind::T_EQEQ,
            Token::Eq => SyntaxKind::T_EQ,
            Token::Implies => SyntaxKind::T_IMPLIES,
            Token::Percent => SyntaxKind::T_PERCENT,
            Token::Star => SyntaxKind::T_STAR,
            Token::Slash => SyntaxKind::T_SLASH,
            Token::Plus => SyntaxKind::T_PLUS,
            Token::Minus => SyntaxKind::T_MINUS,
            Token::Arrow => SyntaxKind::T_ARROW,
            Token::FatArrow => SyntaxKind::T_FAT_ARROW,
            Token::Lte => SyntaxKind::T_LTE,
            Token::Spaceship => SyntaxKind::T_SPACESHIP,
            Token::Gte => SyntaxKind::T_GTE,
            Token::Lt => SyntaxKind::T_LT,
            Token::Gt => SyntaxKind::T_GT,
            Token::Neq => SyntaxKind::T_NEQ,
            Token::Shr => SyntaxKind::T_SHR,
            Token::Shl => SyntaxKind::T_SHL,
            Token::Tilde => SyntaxKind::T_TILDE,
            Token::At => SyntaxKind::T_AT,
            Token::Hash => SyntaxKind::T_HASH,
            Token::Apostrophe => SyntaxKind::T_APOSTROPHE,
        };
        out.push((kind, span));
    }
    out
}

Summary of token_dispatch! macro (lines ~40-60)

A macro that iterates over a token stream in a parsing context ctx with a stream field. For each token, it matches the token kind against provided handlers. If a handler matches, it calls the handler function with the context and token span. Handlers must advance the stream to consume tokens. Unhandled tokens are skipped by advancing the stream.


Summary of collect_*_spans functions

  • collect_import_spans: Parses import statements, collects their spans, and recovers on errors by skipping to line end.
  • collect_typedef_spans: Collects spans for typedef and extern type declarations by skipping to line ends.
  • collect_relation_spans: Collects spans for relation declarations, including optional input/output keywords and primary key clauses.
  • collect_index_spans: Parses index declarations with syntax validation, collects spans and errors.
  • collect_function_spans: Parses function and extern function declarations, collects spans and errors.
  • collect_rule_spans: Parses rule declarations starting at line beginnings, collects spans and errors.

Summary of CST construction functions

  • build_green_tree: Constructs a rowan::GreenNode CST from tokens and statement spans. Uses iterators over sorted spans to start and finish syntax nodes for each statement type.
  • advance_span_iter: Advances a peekable span iterator past spans that end before a given position (sketched below).
  • maybe_start and maybe_finish: Start or finish syntax nodes in the builder when token positions match span boundaries.
  • push_token: Adds tokens to the CST builder, wrapping error tokens in error nodes.
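
A minimal sketch of the advance_span_iter pattern described above, assuming Span is std::ops::Range<usize>; the real signature may differ:

use std::iter::Peekable;
use std::ops::Range;

type Span = Range<usize>;

/// Skip spans ending at or before `pos`, leaving the iterator peeked at the
/// first span that could still cover the current token position.
fn advance_span_iter<'a, I>(iter: &mut Peekable<I>, pos: usize)
where
    I: Iterator<Item = &'a Span>,
{
    while iter.peek().is_some_and(|span| span.end <= pos) {
        iter.next();
    }
}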

Summary of Parsed and ParsedSpans structs

  • Parsed: Holds the CST green node, typed AST root, and parse errors.
  • ParsedSpans: Holds vectors of spans for each statement category (imports, typedefs, relations, indexes, functions, rules). Has a method to assert spans are sorted and non-overlapping.

Summary of ast module

Defines typed wrappers around rowan syntax nodes for root, import, typedef, relation, index, rule, and function declarations. Provides methods to access syntax nodes, extract names, aliases, parameters, columns, primary keys, and other relevant information from the CST.


These snippets and summaries provide comprehensive context on the parsing process, token stream management, span collection, CST construction, and typed AST wrappers in src/parser/mod.rs. They are relevant for understanding the file and addressing issues related to parsing, token handling, span management, and AST construction.

@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai new GitHub issue advising that ParsedSpans::assert_sorted should be renamed ParsedSpans::ensure_sorted, please

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@leynos, I'll create a GitHub issue for this suggestion.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --title "Rename ParsedSpans::assert_sorted to ParsedSpans::ensure_sorted" \
  --body "This issue suggests renaming the \`ParsedSpans::assert_sorted\` method to \`ParsedSpans::ensure_sorted\` for better clarity and consistency with naming conventions.

The method was introduced in PR #48 and is used to verify that all span lists are sorted and non-overlapping. The name \`ensure_sorted\` would be more descriptive as it better conveys the method's purpose of ensuring/verifying the sorting invariant.

**Backlinks:**
- PR: https://github.com/leynos/ddlint/pull/48
- Comment: https://github.com/leynos/ddlint/pull/48#issuecomment-[comment_id]

**Requested by:** @leynos" \
  --assignee leynos

Length of output: 742


Perfect! I've successfully created a GitHub issue for your suggestion. The issue has been created at:

#53

The issue suggests renaming ParsedSpans::assert_sorted to ParsedSpans::ensure_sorted for better clarity and naming consistency. It's been assigned to you and includes backlinks to this PR and your comment.

@leynos leynos merged commit 9161ed3 into main Jul 4, 2025
2 checks passed
@leynos leynos deleted the codex/refactor-build_green_tree-parameters branch July 4, 2025 23:03