
Add index declaration parser #36

Merged
leynos merged 6 commits into main from
codex/implement-index-parser-for-declarations
Jun 29, 2025

Conversation

@leynos
Owner

@leynos leynos commented Jun 29, 2025

Summary

  • support the index and on keywords in the tokenizer
  • record index spans during token scan and build CST N_INDEX nodes
  • expose parsed indexes via new Index AST wrapper
  • implement parsing logic and add tests for index declarations
  • refactor index span parser and simplify AST columns method

Testing

  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_6861737ff3f08322a0b0a42d8dfb9f39

Summary by Sourcery

Add parsing support for index declarations, integrate them into the CST and AST, and expose them via a new Index wrapper.

New Features:

  • Support tokenization of index and on keywords
  • Parse index declarations into dedicated CST N_INDEX nodes
  • Expose index declarations in the AST with a new Index wrapper and Root.indexes() method

Enhancements:

  • Integrate index span collection into the overall parsing and CST construction flow
  • Introduce an inline_ws combinator for handling whitespace and comments

Documentation:

  • Update grammar documentation to include index declarations and nested parentheses support

Tests:

  • Add parser tests for single-column, multi-column, nested-function, and error cases for index declarations

@sourcery-ai
Contributor

sourcery-ai Bot commented Jun 29, 2025

Reviewer's Guide

This PR introduces full support for index <name> on <relation>(<columns>) declarations by extending tokenization with new keywords, enhancing the parser to collect index spans and build N_INDEX CST nodes, exposing indexes in the AST through a new Index wrapper, and covering all scenarios with dedicated tests.
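
For context, declarations matching this shape might look as follows. These
examples are illustrative of the index <name> on <relation>(<columns>)
pattern only; the names are hypothetical and are not taken from the PR's
test fixtures.

```plaintext
index Idx_edge on Edge(src)
index Idx_path on Path(src, dst)
index Idx_expr on Node(lower(name), id)
```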

ER diagram for index declaration CST node integration

erDiagram
    DATALOG_PROGRAM ||--o{ N_INDEX : contains
    N_INDEX {
        name string
        relation string
        columns string
    }

Class diagram for new Index AST wrapper and Root::indexes method

classDiagram
    class Root {
        +Vec<Index> indexes()
    }
    class Index {
        +syntax: SyntaxNode
        +syntax() SyntaxNode
        +name() Option<String>
        +relation() Option<String>
        +columns() Vec<String>
    }
    Root --> "*" Index

File-Level Changes

Change Details Files
Extend tokenizer and language definitions for new keywords
  • Added K_INDEX and K_ON variants to the SyntaxKind enum
  • Inserted "index" and "on" entries into the keyword map
src/language.rs
src/tokenizer.rs
Enhance parser to collect and parse index declarations
  • Updated parse and parse_tokens to return index_spans
  • Extended build_green_tree to wrap N_INDEX nodes around index spans
  • Added inline_ws combinator and collect_index_spans with index_decl and index_columns parsers
  • Hooked token_dispatch! to invoke index span collection
src/parser/mod.rs
Expose parsed indexes in the AST
  • Added Root::indexes method to gather N_INDEX nodes
  • Introduced Index struct with name, relation, and columns accessors
src/parser/mod.rs
Add comprehensive tests for index declarations
  • Created fixtures and rstest functions for single/multi-column, nested functions, and error cases
  • Asserted correct name, relation, columns, and error reporting
tests/parser.rs
Update documentation for index grammar support
  • Documented index production in the Haskell-parser guide
  • Noted nested parentheses support within the column list
docs/haskell-parser-analysis.md


@coderabbitai
Contributor

coderabbitai Bot commented Jun 29, 2025

Summary by CodeRabbit

  • New Features

    • Added support for parsing and recognising index declarations, including nested expressions in column lists.
    • Users can now define indexes with the syntax index <name> on <relation>(<columns>).
  • Bug Fixes

    • Improved error handling for invalid or malformed index declarations.
  • Tests

    • Introduced new tests to verify correct parsing and error detection for various index declaration scenarios.
  • Documentation

    • Updated documentation to clarify grammar support for index declarations and nested parentheses.

Walkthrough

The changes introduce support for index declarations in the parser, tokenizer, and typed AST. New keyword tokens (K_INDEX and K_ON) are added, the parser recognises and collects index statement spans, the CST and AST process index nodes, and dedicated tests verify correct parsing and error handling for index syntax.

Changes

File(s) Change Summary
src/language.rs, src/tokenizer.rs Added K_INDEX and K_ON keyword variants to SyntaxKind and updated the keyword map to recognise them.
src/parser/mod.rs Extended parser, CST builder, and AST to support index declarations; added new Index struct and related methods.
tests/parser.rs Added test fixtures and tests for single-column, multi-column, and invalid index declarations.
docs/haskell-parser-analysis.md Expanded documentation to clarify grammar for index declarations, including nested parentheses support.
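
The tokenizer change amounts to extending a keyword lookup so that "index"
and "on" classify as keyword tokens rather than plain identifiers. A
minimal stdlib-only sketch of that idea follows; the K_INDEX and K_ON
names mirror the PR, while the enum shape, keyword_map, and classify are
hypothetical stand-ins for the real SyntaxKind and tokenizer.

```rust
use std::collections::HashMap;

// Hypothetical subset of the SyntaxKind enum; the real enum lives in
// src/language.rs and has many more variants.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SyntaxKind {
    KIndex,
    KOn,
    KRelation,
    Ident,
}

// Build the keyword lookup, including the two new entries.
fn keyword_map() -> HashMap<&'static str, SyntaxKind> {
    HashMap::from([
        ("index", SyntaxKind::KIndex),
        ("on", SyntaxKind::KOn),
        ("relation", SyntaxKind::KRelation),
    ])
}

// Classify a word as a keyword if present in the map, else an identifier.
fn classify(word: &str, keywords: &HashMap<&'static str, SyntaxKind>) -> SyntaxKind {
    keywords.get(word).copied().unwrap_or(SyntaxKind::Ident)
}

fn main() {
    let kw = keyword_map();
    assert_eq!(classify("index", &kw), SyntaxKind::KIndex);
    assert_eq!(classify("on", &kw), SyntaxKind::KOn);
    assert_eq!(classify("Edge", &kw), SyntaxKind::Ident);
    println!("ok");
}
```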

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Tokenizer
    participant Parser
    participant CSTBuilder
    participant AST

    User->>Tokenizer: Provide source with 'index' declaration
    Tokenizer->>Parser: Token stream (incl. K_INDEX, K_ON)
    Parser->>Parser: Collect spans for imports, typedefs, relations, indexes
    Parser->>CSTBuilder: Pass tokens and spans (incl. indexes)
    CSTBuilder->>AST: Build syntax tree with index nodes
    AST->>User: Expose index nodes and fields via API

Possibly related PRs

  • Add SyntaxKind and language wrapper #6: Adds the SyntaxKind enum and DdlogLanguage wrapper, which are directly built upon by the introduction of the K_INDEX and K_ON variants and their use in parsing index declarations.
  • Implement relation parser #35: Implements parsing, CST, and AST support for relation declarations, similar in approach to the new index declaration support but for different syntax elements.

Poem

In the garden of code, a new root takes hold,
With "index" and "on", the syntax grows bold.
Parsers now gather these seeds in their beds,
While tests check the blooms for their columns and heads.
A hop and a skip—now indexes thrive,
In the language, alive!
🥕


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b90efd and 2287540.

📒 Files selected for processing (1)
  • docs/haskell-parser-analysis.md (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)

docs/**/*.md: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Proactively update the relevant file(s) in the docs/ directory to reflect the latest state when new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve.
Documentation in docs/ must use en-GB-oxendict spelling and grammar, except for the word 'license'.
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Markdown Mermaid diagrams using the make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks in Markdown must be wrapped at 120 columns.
Tables and headings in Markdown must not be wrapped.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • docs/haskell-parser-analysis.md

docs/**/*.md: Use British English spelling based on the Oxford English Dictionary, except retain US spelling for API terms (e.g., 'color').
Use the Oxford comma in lists.
Write headings in sentence case and use Markdown heading levels in order without skipping.
Follow markdownlint recommendations for Markdown formatting.
Always use fenced code blocks with a language identifier; use 'plaintext' for non-code text.
Use '-' as the first level bullet and renumber lists when items change.
Prefer inline links using text or angle brackets around the URL.
Expand any uncommon acronym on first use, e.g., Continuous Integration (CI).
Wrap paragraphs at 80 columns and code at 120 columns; do not wrap tables.
Use footnotes referenced with [^label].
When embedding figures, use 'alt text' and provide concise alt text describing the content.
Add a short description before each Mermaid diagram so screen readers can understand it.

📄 Source: CodeRabbit Inference Engine (docs/documentation-style-guide.md)

List of files the instruction was applied to:

  • docs/haskell-parser-analysis.md

**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-oxendic spelling and grammar.
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • docs/haskell-parser-analysis.md


Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes - here's some feedback:

  • Refactor the repeated filter(...).ignored() whitespace/comment parser into a shared combinator to reduce duplication in index_columns and index_decl.
  • The manual buffer logic in Index::columns is brittle; consider iterating over child identifier tokens directly or using rowan queries to collect column names more robustly.
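
The second suggestion, iterating over child identifier tokens instead of
buffering characters by hand, can be sketched as below. The Token enum is
a hypothetical stand-in for rowan syntax tokens; the real code would walk
the children of an N_INDEX node, and this simplified version collects only
bare identifiers rather than full column expressions.

```rust
// Hypothetical token kinds standing in for rowan syntax tokens.
#[derive(Debug, Clone, PartialEq)]
enum Token {
    Ident(String),
    Comma,
    Whitespace,
}

// Collect column names by filtering identifier tokens directly, with no
// manual character buffering.
fn collect_columns(tokens: &[Token]) -> Vec<String> {
    tokens
        .iter()
        .filter_map(|tok| match tok {
            Token::Ident(name) => Some(name.clone()),
            _ => None,
        })
        .collect()
}

fn main() {
    let tokens = vec![
        Token::Ident("a".into()),
        Token::Comma,
        Token::Whitespace,
        Token::Ident("b".into()),
    ];
    assert_eq!(
        collect_columns(&tokens),
        vec!["a".to_string(), "b".to_string()]
    );
    println!("ok");
}
```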

@leynos
Owner Author

leynos commented Jun 29, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jun 29, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@leynos
Owner Author

leynos commented Jun 29, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jun 29, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c5dccb5 and afe3bb1.

📒 Files selected for processing (4)
  • src/language.rs (2 hunks)
  • src/parser/mod.rs (7 hunks)
  • src/tokenizer.rs (2 hunks)
  • tests/parser.rs (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments must use en-GB-oxendict spelling and grammar.

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • src/language.rs
  • src/tokenizer.rs
  • tests/parser.rs
  • src/parser/mod.rs
🔇 Additional comments (7)
src/tokenizer.rs (1)

138-138: LGTM!

The new keyword mappings are correctly added and maintain alphabetical ordering.

Also applies to: 149-149

src/language.rs (1)

95-95: LGTM!

The new syntax kind variants are properly integrated and maintain alphabetical ordering.

Also applies to: 106-106

tests/parser.rs (1)

75-88: LGTM!

Comprehensive test coverage for index parsing including valid single/multi-column cases and error handling for invalid syntax.

Also applies to: 411-454

src/parser/mod.rs (4)

95-113: LGTM!

The integration of index span collection follows the established pattern for other statement types.

Also applies to: 116-154


524-585: LGTM!

The CST construction correctly integrates index nodes following the established pattern.


740-748: LGTM!

The method correctly follows the established pattern for collecting typed AST nodes.


1066-1150: LGTM!

The Index AST wrapper correctly extracts all required information from the syntax tree.

Comment thread src/parser/mod.rs
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/parser/mod.rs (1)

457-548: Robust index declaration parser with proper error recovery.

The implementation correctly handles the index <name> on <relation>(<columns>) pattern with proper support for nested parentheses in column expressions. The past review comment about handling nested parentheses has been addressed through the depth tracking mechanism.
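
The depth-tracking idea referred to here can be illustrated with a
self-contained sketch: split a column list at commas that sit at
parenthesis depth zero, so nested calls such as f(a, g(b)) survive as a
single column expression. This is an illustration of the technique under
stated assumptions, not the PR's actual parser code.

```rust
// Split a comma-separated column list, ignoring commas nested inside
// parentheses by tracking the current nesting depth.
fn split_columns(list: &str) -> Vec<String> {
    let mut depth = 0i32;
    let mut current = String::new();
    let mut columns = Vec::new();
    for ch in list.chars() {
        match ch {
            '(' => {
                depth += 1;
                current.push(ch);
            }
            ')' => {
                depth -= 1;
                current.push(ch);
            }
            // Only a depth-zero comma separates columns.
            ',' if depth == 0 => {
                columns.push(current.trim().to_string());
                current.clear();
            }
            _ => current.push(ch),
        }
    }
    if !current.trim().is_empty() {
        columns.push(current.trim().to_string());
    }
    columns
}

fn main() {
    assert_eq!(
        split_columns("f(a, g(b)), c"),
        vec!["f(a, g(b))".to_string(), "c".to_string()]
    );
    println!("ok");
}
```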

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c9744e and d4b24b0.

📒 Files selected for processing (1)
  • src/parser/mod.rs (8 hunks)
🧰 Additional context used
📓 Path-based instructions (2)

**/*.rs: Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.
Use rstest fixtures for shared setup.
Replace duplicated tests with #[rstest(...)] parameterised cases.
Prefer mockall for mocks/stubs.
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • src/parser/mod.rs
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build-test
🔇 Additional comments (5)
src/parser/mod.rs (5)

59-74: Well-structured whitespace parser.

The inline_ws function is properly documented and follows the established patterns in the codebase. The implementation correctly matches whitespace and comment tokens.
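
The actual inline_ws is a parser combinator over tokens inside the
parser; as a rough, stdlib-only illustration of the same idea, the sketch
below skips runs of inline whitespace and line comments in plain text.
Every name here is hypothetical, and the "//" comment syntax is an
assumption for the example.

```rust
// Advance past any run of spaces, tabs, and "//" line comments, returning
// the remaining input.
fn skip_inline_ws(input: &str) -> &str {
    let mut rest = input;
    loop {
        // Trim leading inline whitespace.
        let trimmed = rest.trim_start_matches(|c| c == ' ' || c == '\t');
        if let Some(after) = trimmed.strip_prefix("//") {
            // Drop the comment body up to and including the newline.
            rest = after.split_once('\n').map_or("", |(_, tail)| tail);
        } else if trimmed.len() != rest.len() {
            rest = trimmed;
        } else {
            return trimmed;
        }
    }
}

fn main() {
    assert_eq!(skip_inline_ws("  \t value"), "value");
    assert_eq!(skip_inline_ws("// note\nfact."), "fact.");
    assert_eq!(skip_inline_ws("plain"), "plain");
    println!("ok");
}
```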


112-122: Proper integration of index parsing.

The changes correctly integrate index span collection and CST building into the main parse flow, following the established pattern for other statement types.


132-171: Correct extension of parse_tokens for index support.

The function properly extends the parsing pipeline to include index declarations. Good use of #[expect] with clear reasoning, and proper error merging from multiple parsing stages.


561-561: Consistent CST construction for index nodes.

The modifications to build_green_tree properly integrate index span handling, maintaining consistency with the existing pattern for other statement types.

Also applies to: 566-566, 573-573, 579-579, 599-604, 611-611


771-779: Consistent AST accessor for index declarations.

The indexes() method follows the established pattern for accessing parsed statement types from the root AST node.

Comment thread src/parser/mod.rs Outdated
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d4b24b0 and 3b90efd.

📒 Files selected for processing (3)
  • docs/haskell-parser-analysis.md (1 hunks)
  • src/parser/mod.rs (8 hunks)
  • tests/parser.rs (2 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
`docs/**/*.md`: Use the markdown files within the `docs/` directory as a knowled...

docs/**/*.md: Use the markdown files within the docs/ directory as a knowledge base and source of truth for project requirements, dependency choices, and architectural decisions.
Proactively update the relevant file(s) in the docs/ directory to reflect the latest state when new decisions are made, requirements change, libraries are added/removed, or architectural patterns evolve.
Documentation in docs/ must use en-GB-oxendict spelling and grammar, except for the word 'license'.
Validate Markdown files using make markdownlint.
Run make fmt after any documentation changes to format all Markdown files and fix table markup.
Validate Markdown Mermaid diagrams using the make nixie.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks in Markdown must be wrapped at 120 columns.
Tables and headings in Markdown must not be wrapped.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • docs/haskell-parser-analysis.md
`docs/**/*.md`: Use British English spelling based on the Oxford English Diction...

docs/**/*.md: Use British English spelling based on the Oxford English Dictionary, except retain US spelling for API terms (e.g., 'color').
Use the Oxford comma in lists.
Write headings in sentence case and use Markdown heading levels in order without skipping.
Follow markdownlint recommendations for Markdown formatting.
Always use fenced code blocks with a language identifier; use 'plaintext' for non-code text.
Use '-' as the first level bullet and renumber lists when items change.
Prefer inline links using text or angle brackets around the URL.
Expand any uncommon acronym on first use, e.g., Continuous Integration (CI).
Wrap paragraphs at 80 columns and code at 120 columns; do not wrap tables.
Use footnotes referenced with [^label].
When embedding figures, use 'alt text' and provide concise alt text describing the content.
Add a short description before each Mermaid diagram so screen readers can understand it.

📄 Source: CodeRabbit Inference Engine (docs/documentation-style-guide.md)

List of files the instruction was applied to:

  • docs/haskell-parser-analysis.md
`**/*.md`: * Avoid 2nd person or 1st person pronouns ("I", "you", "we") * Use en...

**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-oxendic spelling and grammar.
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • docs/haskell-parser-analysis.md
`**/*.rs`: Document public APIs using Rustdoc comments (`///`) so documentation ...

**/*.rs: Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.
Use rstest fixtures for shared setup.
Replace duplicated tests with #[rstest(...)] parameterised cases.
Prefer mockall for mocks/stubs.
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • tests/parser.rs
  • src/parser/mod.rs

**/*.rs: Seek to keep the cyclomatic complexity of functions to no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments must use en-GB-oxendict spelling and grammar.

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/
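The narrowly scoped suppression style required above can be illustrated with a small sketch; the function and lint choice are hypothetical, and the `#[expect]` attribute needs Rust 1.81 or later:

```rust
// A tightly scoped suppression: one lint, one item, with a reason.
// If the lint stops firing, rustc warns via unfulfilled_lint_expectations,
// which is exactly why expect is preferred over allow.
#[expect(unused_variables, reason = "`ctx` is kept for API symmetry in this sketch")]
fn handle(ctx: &str) -> usize {
    42
}
```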

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • tests/parser.rs
  • src/parser/mod.rs
🧬 Code Graph Analysis (1)
tests/parser.rs (1)
src/parser/mod.rs (10)
  • parse (110-130)
  • errors (99-101)
  • indexes (773-779)
  • name (846-852)
  • name (923-934)
  • name (1112-1123)
  • relation (1127-1138)
  • columns (954-1009)
  • columns (1146-1194)
  • root (93-95)
🪛 LanguageTool
docs/haskell-parser-analysis.md

[typographical] ~176-~176: Consider using an em dash in dialogues and enumerations.
Context: ...turning a Transformer node. - index – defines an index on a relation. The Rus...

(DASH_RULE)

🔇 Additional comments (7)
tests/parser.rs (1)

75-485: Comprehensive test coverage for index parsing.

The test additions provide excellent coverage for the new index parsing functionality:

  • Valid cases: single-column and multi-column indexes, nested function calls
  • Error cases: missing 'on' keyword, unbalanced parentheses
  • Proper assertions for parsed structure and error detection

The tests follow the existing patterns and conventions in the file.

src/parser/mod.rs (6)

59-74: Well-designed whitespace parser combinator.

The inline_ws() function provides a clean, reusable way to handle whitespace and comments in parsers. Good documentation and follows established patterns.
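The PR's actual `inline_ws()` combinator is not shown in this review; as a rough stand-in for the idea, a hand-rolled skipper over spaces, tabs, and `//` line comments might look like this (hypothetical, stdlib-only):

```rust
// Hypothetical stand-in for inline_ws(): consume leading spaces, tabs,
// and `//` line comments, returning the remainder of the input.
fn skip_inline_ws(mut input: &str) -> &str {
    loop {
        let trimmed = input.trim_start_matches([' ', '\t']);
        if let Some(rest) = trimmed.strip_prefix("//") {
            // A line comment runs to the end of the line.
            input = rest.split_once('\n').map_or("", |(_, tail)| tail);
        } else if trimmed.len() != input.len() {
            input = trimmed;
        } else {
            return trimmed;
        }
    }
}
```

A real combinator would also produce trivia tokens for the CST rather than discarding them.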


110-171: Correct integration of index parsing into the pipeline.

The updates to parse and parse_tokens properly integrate index span collection:

  • Index spans are collected alongside imports, typedefs, and relations
  • Errors from index parsing are correctly aggregated with other parse errors
  • The function signature uses #[expect] attribute appropriately for the complex return type

457-548: Robust index parsing with proper nested parenthesis handling.

The collect_index_spans implementation correctly parses index declarations:

  • index_columns parser properly tracks parenthesis depth to handle nested expressions like lower(name)
  • Error recovery skips to line end on parse failure
  • Follows the established pattern for span collection

This addresses previous concerns about handling nested parentheses in column expressions.
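The depth-tracking approach described above can be sketched independently of the PR's code; `split_columns` below is a hypothetical helper that splits a column list on top-level commas only, so a nested call such as `lower(name)` stays intact:

```rust
// Split "username, lower(email), age" on commas at parenthesis depth 0,
// leaving nested calls such as lower(email) intact.
fn split_columns(list: &str) -> Vec<String> {
    let mut depth = 0usize;
    let mut current = String::new();
    let mut columns = Vec::new();
    for ch in list.chars() {
        match ch {
            '(' => {
                depth += 1;
                current.push(ch);
            }
            ')' => {
                depth = depth.saturating_sub(1);
                current.push(ch);
            }
            ',' if depth == 0 => {
                columns.push(current.trim().to_string());
                current.clear();
            }
            _ => current.push(ch),
        }
    }
    if !current.trim().is_empty() {
        columns.push(current.trim().to_string());
    }
    columns
}
```

The same depth counter doubles as an unbalanced-parenthesis check: a non-zero depth at end of input signals an error.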


561-611: Consistent CST building for index nodes.

The updates to build_green_tree properly integrate index span handling:

  • Assertion for sorted spans maintains invariants
  • Iterator management follows the same pattern as other node types
  • N_INDEX nodes are created at appropriate positions

1097-1195: Well-implemented Index AST with proper column expression handling.

The Index struct and its methods provide clean access to parsed index information:

  • name() and relation() correctly extract identifiers from appropriate positions
  • columns() properly handles comma-separated column expressions with nested parentheses
  • The implementation preserves complex expressions like lower(username) as complete strings

This addresses previous review comments about proper column extraction.


771-779: Consistent API addition for accessing index nodes.

The indexes() method follows the established pattern for collecting typed nodes from the root, maintaining API consistency with imports(), type_defs(), and relations().

Comment thread docs/haskell-parser-analysis.md Outdated
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@leynos
Owner Author

leynos commented Jun 29, 2025

@sourcery-ai review

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `tests/parser.rs` </location>
<code_context>
+#[fixture]
+fn index_single_column() -> &'static str {
</code_context>

<issue_to_address>
Consider adding a test for whitespace variations in index declarations.

Adding such a test will help ensure the parser handles formatting variations like extra spaces, tabs, or newlines between tokens.
</issue_to_address>


Comment thread tests/parser.rs

#[fixture]
fn index_single_column() -> &'static str {
"index Idx_User_username on User(username)"
Contributor


suggestion (testing): Consider adding a test for whitespace variations in index declarations.

Adding such a test will help ensure the parser handles formatting variations like extra spaces, tabs, or newlines between tokens.
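A stdlib-only sketch of the suggested cases follows; a real test would feed the variants to the crate's `parse()` via `#[rstest]` parameterisation, and `tokens` here is a hypothetical stand-in rather than the PR's tokenizer:

```rust
// Hypothetical whitespace-insensitive tokenization for illustration only.
fn tokens(input: &str) -> Vec<String> {
    input
        .replace('(', " ( ")
        .replace(')', " ) ")
        .split_whitespace()
        .map(str::to_string)
        .collect()
}

// Each whitespace variant of the declaration should yield the same tokens.
fn variants_tokenize_identically() -> bool {
    let variants = [
        "index Idx_User_username on User(username)",
        "index  Idx_User_username\ton User (username)",
        "index Idx_User_username\n  on User(username)",
    ];
    let expected = tokens(variants[0]);
    variants.iter().all(|v| tokens(v) == expected)
}
```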

@leynos leynos merged commit d264b13 into main Jun 29, 2025
2 checks passed
@leynos leynos deleted the codex/implement-index-parser-for-declarations branch June 29, 2025 22:09