
Refactor parse_utils module #97

Merged
leynos merged 9 commits into main from codex/refactor-parse_utils-into-submodules
Jul 20, 2025

Conversation

@leynos
Owner

@leynos leynos commented Jul 20, 2025

Summary

  • split parsing helpers into focused submodules
  • re-export APIs in new parse_utils module
  • adjust tests and update uses

Testing

  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_687c6271d1708322a061878921a1aefe

Summary by Sourcery

Refactor the parsing utilities by reorganizing parse_utils into focused submodules, standardizing error handling, and updating exports and tests to match the new structure.

Enhancements:

  • Split the monolithic parse_utils into separate delimiter, errors, token_utils, and type_parsing modules
  • Introduce structured ParseError and DelimiterError types with a DelimStack for improved error reporting
  • Extract delimiter-related helpers (extract_parenthesized, paren_block_span) into a dedicated module
  • Adjust visibility of core parsing functions (parse_name_type_pairs, parse_output_list, parse_type_expr, parse_type_after_colon) to pub(crate)
  • Re-export key utilities from parse_utils/mod.rs for a consolidated public API
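
The consolidated-API pattern described above can be sketched with a string-based stand-in (the names are illustrative; the real `delimiter` module walks rowan syntax elements rather than characters):

```rust
// Hypothetical layout mirroring the submodule split described above.
mod parse_utils {
    pub mod delimiter {
        // Returns the text between the first balanced pair of parentheses.
        pub fn extract_parenthesized(s: &str) -> Option<&str> {
            let start = s.find('(')?;
            let mut depth = 0usize;
            for (i, c) in s[start..].char_indices() {
                match c {
                    '(' => depth += 1,
                    ')' => {
                        depth -= 1;
                        if depth == 0 {
                            return Some(&s[start + 1..start + i]);
                        }
                    }
                    _ => {}
                }
            }
            None
        }
    }

    // Re-exports in mod.rs give callers one flat path into the submodules.
    pub use self::delimiter::extract_parenthesized;
}
```

Callers then write `parse_utils::extract_parenthesized(...)` without knowing which submodule owns the helper, which is the point of the consolidation.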

Tests:

  • Update unit tests to match the new module structure and annotate expect calls with #[expect(clippy::expect_used)] for clearer failures

@sourcery-ai
Contributor

sourcery-ai Bot commented Jul 20, 2025

Reviewer's Guide

This PR refactors the parse_utils module by extracting parsing helpers into dedicated submodules (type_parsing, delimiter, errors, token_utils), adjusting function visibility and imports, cleaning up redundant comments, introducing enhanced parameter parsing with error handling, and updating tests to use clippy expect annotations.

Class diagram for refactored parse_utils module structure

classDiagram
    class parse_utils {
        <<module>>
    }
    class delimiter {
        +extract_parenthesized()
        +paren_block_span()
    }
    class errors {
        +Delim
        +DelimStack
        +DelimiterError
        +ParseError
    }
    class token_utils {
        <<module>>
    }
    class type_parsing {
        +parse_name_type_pairs()
        +parse_output_list()
        +parse_type_after_colon()
        +parse_type_expr()
    }

    parse_utils o-- delimiter
    parse_utils o-- errors
    parse_utils o-- token_utils
    parse_utils o-- type_parsing
    delimiter ..> errors : uses
    type_parsing ..> errors : uses
    type_parsing ..> delimiter : uses
    delimiter ..> token_utils : uses (via lexer_helpers)

Class diagram for new error handling types in errors.rs

classDiagram
    class Delim {
        <<enum>>
        Paren
        Angle
        Bracket
        Brace
    }
    class DelimStack {
        +open()
        +close()
        +is_empty()
        -Vec<Delim>
    }
    class DelimiterError {
        +expected: Delim
        +found: SyntaxKind
        +span: TextRange
    }
    class ParseError {
        <<enum>>
        Delimiter(DelimiterError)
        UnclosedDelimiter
        MissingColon
        MissingName
        MissingType
    }

    DelimStack *-- Delim
    DelimiterError *-- Delim
    ParseError *-- DelimiterError
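
A minimal sketch of the stack type in the diagram above (the method bodies are assumptions; the real types also carry `SyntaxKind` and `TextRange` spans from rowan):

```rust
// Hypothetical, simplified version of the DelimStack in errors.rs.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Delim { Paren, Angle, Bracket, Brace }

#[derive(Debug, Default)]
struct DelimStack(Vec<Delim>);

impl DelimStack {
    // Push `count` copies of an opening delimiter.
    fn open(&mut self, delim: Delim, count: usize) {
        self.0.extend(std::iter::repeat(delim).take(count));
    }

    // Pop up to `count` matching delimiters; returns how many were
    // actually closed, so callers can detect mismatches.
    fn close(&mut self, delim: Delim, count: usize) -> usize {
        let mut closed = 0;
        while closed < count && self.0.last() == Some(&delim) {
            self.0.pop();
            closed += 1;
        }
        closed
    }

    fn is_empty(&self) -> bool {
        self.0.is_empty()
    }
}
```

A non-empty stack at end of input signals an `UnclosedDelimiter` error; a `close` that returns fewer than `count` signals a mismatch.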

File-Level Changes

Change Details Files
Modularize parse_utils into submodules
  • Created new modules: delimiter.rs, errors.rs, token_utils.rs, type_parsing.rs
  • Added parse_utils/mod.rs to re-export helper functions
  • Moved parsing logic out of the monolithic parse_utils.rs into focused files
src/parser/ast/parse_utils/mod.rs
src/parser/ast/parse_utils/errors.rs
src/parser/ast/parse_utils/delimiter.rs
src/parser/ast/parse_utils/token_utils.rs
src/parser/ast/parse_utils/type_parsing.rs
Adjust function visibility and imports
  • Changed parsing helpers from pub(super) to pub(crate)
  • Replaced super imports with crate and rowan for consistency
src/parser/ast/parse_utils/type_parsing.rs
Clean up redundant comments and documentation
  • Removed inline comments preceding helper logic
  • Deleted outdated doc comments on parsing functions
src/parser/ast/parse_utils/type_parsing.rs
Introduce enhanced parameter parsing helpers
  • Added collect_parameter_name and finalise_parameter for name-type extraction
  • Integrated ParseError variants for missing colon, name, and type
src/parser/ast/parse_utils/type_parsing.rs
Revise tests with clippy expect annotations and remove outdated cases
  • Annotated expect calls with #[expect(clippy::expect_used)]
  • Removed deprecated parameterized test for parse_name_type_pairs
src/parser/ast/parse_utils/type_parsing.rs

Possibly linked issues

  • Implement function parser #40: The PR refactors parse_utils module, splitting parse_name_type_pairs and parse_type_after_colon into submodules, as requested by the issue.


@coderabbitai
Contributor

coderabbitai Bot commented Jul 20, 2025

Summary by CodeRabbit

  • New Features

    • Introduced advanced parsing utilities for handling balanced delimiters and extracting enclosed text, with robust support for nested structures and detailed error reporting.
    • Added utilities for parsing type expressions, parameter name-type pairs, and output lists, including comprehensive error handling for unmatched or unclosed delimiters.
  • Bug Fixes

    • Improved error handling in key list extraction and parenthesised content parsing to preserve partially collected results on error.
  • Documentation

    • Extended documentation with a new section and diagram explaining the hierarchy and structure of parsing error types related to delimiters.

Walkthrough

Introduce new modules for delimiter and error handling utilities within the parser, centralise delimiter-aware parsing logic, and refactor type parsing to delegate delimiter and error management to shared helpers. Update module exports and visibility to enable reuse across AST parsing components, and remove redundant local implementations. Modify error handling in relation key extraction and preserve partial parsing output on errors in syntax utilities.

Changes

File(s) Change Summary
src/parser/ast/parse_utils/delimiter.rs Add balanced delimiter parsing and text extraction utilities; implement paren_block_span and extract_parenthesized.
src/parser/ast/parse_utils/errors.rs Add delimiter stack management, structured parse error types, and display/error trait implementations.
src/parser/ast/parse_utils/mod.rs Create shared parse_utils module; re-export key functions from delimiter and type_parsing modules.
src/parser/ast/parse_utils/token_utils.rs Add token-level delimiter stack utilities and parse error reporting helpers.
src/parser/ast/parse_utils/type_parsing.rs Remove local delimiter/error logic; delegate to new utilities; update visibility and imports.
src/parser/ast/relation.rs Change extract_key_list to convert extract_parenthesized errors into None instead of propagating.
src/syntax_utils.rs Modify extract_parenthesized to return partial collected text on error instead of default empty string.
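
The behaviour change in src/syntax_utils.rs — returning partially collected text on error rather than an empty default — can be illustrated with a string-based stand-in (the real function iterates rowan tokens, and the error type here is a placeholder):

```rust
// Stand-in: collect the content of a parenthesized block, returning
// Err with whatever was gathered if the closing ')' never arrives.
fn extract_parenthesized(s: &str) -> Result<String, String> {
    let start = match s.find('(') {
        Some(i) => i,
        None => return Err(String::new()),
    };
    let mut depth = 1usize;
    let mut buf = String::new();
    for c in s[start + 1..].chars() {
        match c {
            '(' => { depth += 1; buf.push(c); }
            ')' => {
                depth -= 1;
                if depth == 0 {
                    return Ok(buf);
                }
                buf.push(c);
            }
            _ => buf.push(c),
        }
    }
    // Unclosed: preserve the partial content instead of dropping it.
    Err(buf)
}
```

Preserving the partial buffer lets downstream diagnostics show what was parsed before the delimiter went unmatched.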

Sequence Diagram(s)

sequenceDiagram
    participant Parser
    participant DelimiterUtils
    participant TokenUtils
    participant ErrorUtils

    Parser->>DelimiterUtils: extract_parenthesized(iter, open, close)
    DelimiterUtils->>TokenUtils: advance_to_delimiter
    DelimiterUtils->>DelimiterUtils: collect_balanced_content (track nesting)
    DelimiterUtils->>TokenUtils: extract_element_text
    DelimiterUtils-->>Parser: Option<String> (enclosed content)

    Parser->>ErrorUtils: Report delimiter or parse errors as needed

Possibly related PRs

Poem

🦀
Delimiters balanced, errors now neat,
Parsing utilities share a common seat.
No more tangled, nested dread—
Helpers and errors, centralised instead.
Parentheses, brackets, braces—oh my!
The parser now sings, as tokens fly by.
🦀


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between da1ecc1 and b1a4c15.

📒 Files selected for processing (1)
  • src/parser/ast/parse_utils/type_parsing.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

Instructions used from:

Sources:
📄 CodeRabbit Inference Engine

  • AGENTS.md

⚙️ CodeRabbit Configuration File

🔇 Additional comments (8)
src/parser/ast/parse_utils/type_parsing.rs (8)

25-87: Well-structured parameter parsing with comprehensive error handling

The function effectively handles delimiter matching and provides detailed error reporting for mismatched delimiters.


89-102: Good defensive programming with saturating arithmetic

Excellent use of saturating_sub to prevent underflow when tracking delimiter depth.
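
The pattern being praised here — clamping depth at zero so a stray closing delimiter cannot underflow the counter — looks roughly like this (a character-level sketch, not the reviewed function itself):

```rust
// Sketch: compute net delimiter depth without underflowing on a
// stray closer.
fn net_depth(s: &str) -> usize {
    let mut depth = 0usize;
    for c in s.chars() {
        match c {
            '<' => depth += 1,
            // saturating_sub keeps an unmatched '>' from panicking in
            // debug builds or wrapping around in release builds.
            '>' => depth = depth.saturating_sub(1),
            _ => {}
        }
    }
    depth
}
```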


104-132: Clean implementation with good helper reuse

The function effectively parses output lists with clear logic flow.


134-192: Successfully refactored to reduce complexity

The function now has appropriate cyclomatic complexity through effective delegation to helper functions and use of predicates.


194-241: Well-designed delimiter handling with appropriate error recovery

The separation between opening and closing delimiter handlers provides clear logic flow and proper error handling.


272-305: Clear parsing logic with appropriate termination conditions

The function correctly handles different termination scenarios including newlines within whitespace tokens.


370-428: Comprehensive parameter validation with detailed error reporting

The function provides excellent error messages and handles all edge cases for missing names, types, and colons.
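
The validation described here — distinguishing a missing colon, name, or type — can be sketched with a hypothetical string-level helper (the real `finalise_parameter` works on token buffers and richer `ParseError` variants with spans):

```rust
// Hypothetical finalise_parameter-style check: validate a "name: type"
// fragment and report exactly which piece is missing.
#[derive(Debug, PartialEq)]
enum ParamError { MissingColon, MissingName, MissingType }

fn finalise_parameter(raw: &str) -> Result<(String, String), ParamError> {
    // No colon at all: cannot split into name and type.
    let (name, ty) = raw.split_once(':').ok_or(ParamError::MissingColon)?;
    let (name, ty) = (name.trim(), ty.trim());
    if name.is_empty() {
        return Err(ParamError::MissingName);
    }
    if ty.is_empty() {
        return Err(ParamError::MissingType);
    }
    Ok((name.to_string(), ty.to_string()))
}
```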


430-601: Exemplary test coverage with parameterized tests

The tests comprehensively cover success cases, error scenarios, and edge cases using rstest effectively.



@codescene-delta-analysis codescene-delta-analysis Bot left a comment


Code Health Improved (1 files improve in Code Health)

Gates Failed
Enforce critical code health rules (1 file with Deep, Nested Complexity)
Enforce advisory code health rules (1 file with Excess Number of Function Arguments)

Gates Passed
4 Quality Gates Passed

See analysis details in CodeScene

Reason for failure
Enforce critical code health rules Violations Code Health Impact
delimiter.rs 1 critical rule 10.00 → 9.39 Suppress
Enforce advisory code health rules Violations Code Health Impact
token_utils.rs 1 advisory rule 10.00 → 9.69 Suppress
View Improvements
File Code Health Impact Categories Improved
type_parsing.rs 6.00 → 7.18 Low Cohesion, Overall Code Complexity, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt
Want more control? Customize Code Health rules or catch issues early with our IDE extension and CLI tool.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `src/parser/ast/parse_utils/delimiter.rs:39` </location>
<code_context>
+    None
+}
+
+fn collect_balanced_content<I>(
+    iter: &mut std::iter::Peekable<I>,
+    open_kind: SyntaxKind,
</code_context>

<issue_to_address>
collect_balanced_content returns None if the end of the iterator is reached without closing.

Consider returning the partially collected buffer or emitting an error for unclosed delimiters to improve diagnostics.

Suggested implementation:

```rust
#[derive(Debug)]
pub struct UnclosedDelimiterError {
    pub collected: String,
    pub expected: SyntaxKind,
}

fn collect_balanced_content<I>(
    iter: &mut std::iter::Peekable<I>,
    open_kind: SyntaxKind,
    close_kind: SyntaxKind,
) -> Result<String, UnclosedDelimiterError>
where
    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
    let mut buffer = String::new();
    let mut depth = 1;

    while let Some(e) = iter.next() {
        if e.kind() == open_kind {
            depth += 1;
        } else if e.kind() == close_kind {
            depth -= 1;
            if depth == 0 {
                return Ok(buffer);
            }
        }
        buffer.push_str(&e.text());
    }

    Err(UnclosedDelimiterError {
        collected: buffer,
        expected: close_kind,
    })
}

```

```rust
fn advance_to_delimiter<I>(iter: &mut std::iter::Peekable<I>, open_kind: SyntaxKind) -> Option<()>

```

- Update all call sites of `collect_balanced_content` to handle the new `Result<String, UnclosedDelimiterError>` return type instead of `Option<String>`.
- If you want to emit diagnostics, you can use the `UnclosedDelimiterError` struct to provide more context.
- You may want to move the `UnclosedDelimiterError` definition to a common error module if it will be reused elsewhere.
</issue_to_address>

### Comment 2
<location> `src/parser/ast/parse_utils/delimiter.rs:15` </location>
<code_context>
+}
+
+#[must_use]
+pub fn extract_parenthesized<I>(
+    iter: &mut std::iter::Peekable<I>,
+    open_kind: SyntaxKind,
</code_context>

<issue_to_address>
Consider collapsing the multiple small helper functions into a single loop within extract_parenthesized to streamline the logic.

```suggestion
// You can collapse `advance_to_delimiter`, `collect_balanced_content`, 
// `handle_*`, and `extract_element_text` into one small loop and remove
// the extra helpers. This keeps the same behavior with less boilerplate:

pub fn extract_parenthesized<I>(
    iter: &mut std::iter::Peekable<I>,
    open_kind: SyntaxKind,
    close_kind: SyntaxKind,
) -> Option<String>
where
    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
    // drop until the first opening delimiter
    let mut depth = 0;
    let mut buf = String::new();
    while let Some(elem) = iter.next() {
        if elem.kind() == open_kind {
            // found the opener
            buf.push_str(elem.text()); // .text() works on both Token & Node
            depth = 1;
            break;
        }
    }
    if depth == 0 {
        return None;
    }

    // collect until depth goes back to zero
    while let Some(elem) = iter.next() {
        let txt = elem.text();
        match elem.kind() {
            k if k == open_kind => {
                depth += 1;
                buf.push_str(txt);
            }
            k if k == close_kind => {
                depth -= 1;
                buf.push_str(txt);
                if depth == 0 {
                    return Some(buf);
                }
            }
            _ => buf.push_str(txt),
        }
    }
    None
}

// Now you can delete:
//   - advance_to_delimiter
//   - collect_balanced_content
//   - handle_opening_delimiter
//   - handle_closing_delimiter
//   - extract_element_text
```

This keeps the same one-pass depth counting and string accumulation, but collapses all the tiny helpers into a single focused function.
</issue_to_address>


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 14

🔭 Outside diff range comments (5)
src/parser/ast/parse_utils/type_parsing.rs (5)

11-73: Add function documentation

Document this key parsing function with examples.

Add documentation:

+/// Parses parameter name-type pairs from a token stream.
+///
+/// Extracts `(name: type, ...)` patterns from function or relation declarations,
+/// handling nested type expressions with balanced delimiters.
+///
+/// # Returns
+/// A tuple of successfully parsed name-type pairs and any parse errors encountered.
+///
+/// # Example
+/// ```
+/// // For input "(x: u32, y: Vec<string>)", returns:
+/// // (vec![("x", "u32"), ("y", "Vec<string>")], vec![])
+/// ```
pub(crate) fn parse_name_type_pairs<I>(iter: I) -> (Vec<(String, String)>, Vec<ParseError>)

90-118: Add function documentation

Document the output list parsing function.

Add documentation:

+/// Parses a comma-separated list of identifiers after a colon.
+///
+/// Used to parse output declarations like `: foo, bar, baz`.
+///
+/// # Example
+/// ```
+/// // For input "function f(): a, b", returns vec!["a", "b"]
+/// ```
pub(crate) fn parse_output_list<I>(iter: I) -> Vec<String>

120-216: Add function documentation

Document the type expression parser.

Add documentation:

+/// Parses a type expression with balanced delimiters.
+///
+/// Handles complex nested types like `Vec<Map<string, Vec<u8>>>`,
+/// tracking delimiter balance and generating errors for mismatches.
+///
+/// # Returns
+/// A tuple of the parsed type string and any delimiter errors.
pub(crate) fn parse_type_expr<I>(iter: &mut std::iter::Peekable<I>) -> (String, Vec<ParseError>)

218-251: Add function documentation

Document the return type parsing function.

Add documentation:

+/// Parses a type annotation after a colon in function signatures.
+///
+/// Extracts the return type from declarations like `function f(): bool {}`.
+/// Stops at braces, semicolons, or newlines.
+///
+/// # Returns
+/// The parsed type string, or None if no type is specified.
pub(crate) fn parse_type_after_colon<I>(iter: &mut std::iter::Peekable<I>) -> Option<String>

202-213: Improve error handling for unclosed delimiters

The span for unclosed delimiters is always empty at position 0, which provides poor error location information.

Track the position of opened delimiters to provide accurate error spans:

-        errors.push(ParseError::UnclosedDelimiter {
-            delimiter: ch,
-            span: TextRange::empty(0.into()),
-        });
+        // TODO: Track delimiter positions in DelimStack to provide accurate spans
+        errors.push(ParseError::UnclosedDelimiter {
+            delimiter: ch,
+            span: TextRange::empty(0.into()), // FIXME: Use actual delimiter position
+        });
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 483545f and f5dee98.

📒 Files selected for processing (5)
  • src/parser/ast/parse_utils/delimiter.rs (1 hunks)
  • src/parser/ast/parse_utils/errors.rs (1 hunks)
  • src/parser/ast/parse_utils/mod.rs (1 hunks)
  • src/parser/ast/parse_utils/token_utils.rs (1 hunks)
  • src/parser/ast/parse_utils/type_parsing.rs (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

Instructions used from:

Sources:
📄 CodeRabbit Inference Engine

  • AGENTS.md

⚙️ CodeRabbit Configuration File

🧬 Code Graph Analysis (1)
src/parser/ast/parse_utils/delimiter.rs (1)
src/parser/lexer_helpers.rs (1)
  • balanced_block (187-192)
🔇 Additional comments (3)
src/parser/ast/parse_utils/mod.rs (1)

1-18: Well-structured module with clear documentation

The module is properly documented with a //! comment explaining its purpose, and the re-exports are appropriately scoped. The organization into focused submodules aligns well with the PR objectives.

src/parser/ast/parse_utils/delimiter.rs (2)

27-37: Document internal helper functions

Add brief documentation for clarity.

Add documentation:

+/// Advances the iterator until the specified opening delimiter is found.
fn advance_to_delimiter<I>(iter: &mut std::iter::Peekable<I>, open_kind: SyntaxKind) -> Option<()>

Likely an incorrect or invalid review comment.


39-62: Document helper function logic

Add documentation explaining the balanced content collection.

Add documentation:

+/// Collects text between balanced delimiters, tracking nesting depth.
+///
+/// Starts with depth 1 (after opening delimiter) and returns when depth
+/// reaches 0 (matching closing delimiter found).
fn collect_balanced_content<I>(

Likely an incorrect or invalid review comment.

Comment thread src/parser/ast/parse_utils/delimiter.rs
Comment thread src/parser/ast/parse_utils/delimiter.rs
Comment thread src/parser/ast/parse_utils/delimiter.rs Outdated
Comment thread src/parser/ast/parse_utils/errors.rs
Comment thread src/parser/ast/parse_utils/errors.rs Outdated
Comment on lines +5 to +14
pub(crate) fn open_and_push(
token: &rowan::SyntaxToken<DdlogLanguage>,
buf: &mut String,
stack: &mut DelimStack,
delim: Delim,
count: usize,
) {
stack.open(delim, count);
buf.push_str(token.text());
}

🛠️ Refactor suggestion

Add function documentation

Document public APIs using Rustdoc comments.

Add documentation:

+/// Opens a delimiter and pushes its token text to the buffer.
+///
+/// Updates the delimiter stack to track the opened delimiter(s) and
+/// appends the token's text representation to the output buffer.
pub(crate) fn open_and_push(
📝 Committable suggestion


Suggested change
pub(crate) fn open_and_push(
token: &rowan::SyntaxToken<DdlogLanguage>,
buf: &mut String,
stack: &mut DelimStack,
delim: Delim,
count: usize,
) {
stack.open(delim, count);
buf.push_str(token.text());
}
/// Opens a delimiter and pushes its token text to the buffer.
///
/// Updates the delimiter stack to track the opened delimiter(s) and
/// appends the token's text representation to the output buffer.
pub(crate) fn open_and_push(
token: &rowan::SyntaxToken<DdlogLanguage>,
buf: &mut String,
stack: &mut DelimStack,
delim: Delim,
count: usize,
) {
stack.open(delim, count);
buf.push_str(token.text());
}
🤖 Prompt for AI Agents
In src/parser/ast/parse_utils/token_utils.rs around lines 5 to 14, the public
function open_and_push lacks Rustdoc comments. Add a Rustdoc comment above the
function describing its purpose, parameters, and behavior to improve code
clarity and maintainability.

Comment on lines +16 to +26
pub(crate) fn close_and_push(
token: &rowan::SyntaxToken<DdlogLanguage>,
buf: &mut String,
stack: &mut DelimStack,
delim: Delim,
count: usize,
) -> usize {
let closed = stack.close(delim, count);
buf.push_str(token.text());
closed
}

🛠️ Refactor suggestion

Add function documentation

Document the return value behaviour.

Add documentation:

+/// Closes a delimiter and pushes its token text to the buffer.
+///
+/// Attempts to close the specified number of delimiters on the stack.
+/// Returns the actual number of delimiters successfully closed.
pub(crate) fn close_and_push(
📝 Committable suggestion


Suggested change
pub(crate) fn close_and_push(
token: &rowan::SyntaxToken<DdlogLanguage>,
buf: &mut String,
stack: &mut DelimStack,
delim: Delim,
count: usize,
) -> usize {
let closed = stack.close(delim, count);
buf.push_str(token.text());
closed
}
/// Closes a delimiter and pushes its token text to the buffer.
///
/// Attempts to close the specified number of delimiters on the stack.
/// Returns the actual number of delimiters successfully closed.
pub(crate) fn close_and_push(
token: &rowan::SyntaxToken<DdlogLanguage>,
buf: &mut String,
stack: &mut DelimStack,
delim: Delim,
count: usize,
) -> usize {
let closed = stack.close(delim, count);
buf.push_str(token.text());
closed
}
🤖 Prompt for AI Agents
In src/parser/ast/parse_utils/token_utils.rs around lines 16 to 26, the function
close_and_push lacks documentation describing its return value. Add a doc
comment above the function explaining what the function does and specifically
detail the meaning and behavior of the returned usize value to clarify its
purpose for future readers.

Add module-level documentation to newly created submodules and document the delimiter stack is_empty helper. Refactor collect_balanced_content to delegate token handling for lower nesting.

@codescene-delta-analysis codescene-delta-analysis Bot left a comment


Code Health Improved (1 files improve in Code Health)

Gates Failed
Enforce advisory code health rules (2 files with Excess Number of Function Arguments)

Gates Passed
5 Quality Gates Passed

See analysis details in CodeScene

Reason for failure
Enforce advisory code health rules Violations Code Health Impact
delimiter.rs 1 advisory rule 10.00 → 9.69 Suppress
token_utils.rs 1 advisory rule 10.00 → 9.69 Suppress
View Improvements
File Code Health Impact Categories Improved
type_parsing.rs 6.00 → 7.18 Low Cohesion, Overall Code Complexity, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt
Want more control? Customize Code Health rules or catch issues early with our IDE extension and CLI tool.


@codescene-delta-analysis codescene-delta-analysis Bot left a comment


Code Health Improved (1 files improve in Code Health)

Gates Failed
Enforce critical code health rules (1 file with Bumpy Road Ahead, Deep, Nested Complexity)
Enforce advisory code health rules (1 file with Complex Method)

Gates Passed
4 Quality Gates Passed

See analysis details in CodeScene

Reason for failure
Enforce critical code health rules:
  File          Violations         Code Health Impact
  delimiter.rs  2 critical rules   10.00 → 9.24
Enforce advisory code health rules:
  File             Violations        Code Health Impact
  type_parsing.rs  1 advisory rule   6.00 → 7.18
View Improvements
  File             Code Health Impact  Categories Improved
  type_parsing.rs  6.00 → 7.18         Low Cohesion, Overall Code Complexity, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt


@codescene-delta-analysis codescene-delta-analysis Bot left a comment


Code Health Improved (1 files improve in Code Health)

Gates Failed
Enforce critical code health rules (1 file with Bumpy Road Ahead, Deep, Nested Complexity)
Enforce advisory code health rules (1 file with Complex Method)

Gates Passed
4 Quality Gates Passed

See analysis details in CodeScene

Reason for failure
Enforce critical code health rules:
  File          Violations         Code Health Impact
  delimiter.rs  2 critical rules   10.00 → 9.24
Enforce advisory code health rules:
  File             Violations        Code Health Impact
  type_parsing.rs  1 advisory rule   6.00 → 7.14
View Improvements
  File             Code Health Impact  Categories Improved
  type_parsing.rs  6.00 → 7.14         Low Cohesion, Overall Code Complexity, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
src/parser/ast/parse_utils/type_parsing.rs (1)

126-232: Address increasing complexity in parse_type_expr.

Whilst the refactoring successfully delegates to utility modules, the function remains overly complex with high cyclomatic complexity. Extract meaningful helper functions for the different delimiter handling branches to reduce complexity and improve maintainability.

Extract delimiter-specific handlers:

+fn handle_paren_delimiters<I>(
+    token: &rowan::SyntaxToken<DdlogLanguage>,
+    ctx: &mut TokenParseContext,
+    iter: &mut std::iter::Peekable<I>,
+    is_opening: bool,
+) -> bool
+where
+    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
+{
+    if is_opening {
+        open_delimiter(&mut *ctx.stack, Delim::Paren, 1);
+        push(token, ctx);
+        iter.next();
+        false
+    } else {
+        let should_break = ctx.stack.close(Delim::Paren, 1) == 0;
+        push(token, ctx);
+        iter.next();
+        should_break
+    }
+}
+
+fn handle_angle_delimiters<I>(
+    token: &rowan::SyntaxToken<DdlogLanguage>,
+    ctx: &mut TokenParseContext,
+    iter: &mut std::iter::Peekable<I>,
+    errors: &mut Vec<ParseError>,
+    kind: SyntaxKind,
+) where
+    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
+{
+    match kind {
+        SyntaxKind::T_LT => open_delimiter(&mut *ctx.stack, Delim::Angle, 1),
+        SyntaxKind::T_SHL => open_delimiter(&mut *ctx.stack, Delim::Angle, 2),
+        SyntaxKind::T_GT => {
+            if close_delimiter(&mut *ctx.stack, Delim::Angle, 1) < 1 {
+                push_error(errors, Delim::Angle, token);
+            }
+        }
+        SyntaxKind::T_SHR => {
+            if close_delimiter(&mut *ctx.stack, Delim::Angle, 2) < 2 {
+                push_error(errors, Delim::Angle, token);
+            }
+        }
+        _ => unreachable!(),
+    }
+    push(token, ctx);
+    iter.next();
+}
♻️ Duplicate comments (3)
src/parser/ast/parse_utils/errors.rs (3)

19-20: Encapsulate the public field.

Make the field private and provide necessary methods:

#[derive(Default, Debug)]
-pub(crate) struct DelimStack(pub(crate) Vec<Delim>);
+pub(crate) struct DelimStack(Vec<Delim>);

Add a method for controlled access:

impl DelimStack {
    /// Returns an iterator over unclosed delimiters for error reporting.
    pub(crate) fn unclosed(&mut self) -> impl Iterator<Item = Delim> + '_ {
        self.0.drain(..)
    }
}
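As an illustration of how the suggested accessor behaves, here is a minimal, self-contained sketch; `Delim` and `DelimStack` below are simplified stand-ins for the crate's types in `errors.rs`, not the real definitions:

```rust
// Simplified stand-ins for the crate's types (names assumed for illustration).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Delim {
    Paren,
    Angle,
}

#[derive(Default, Debug)]
struct DelimStack(Vec<Delim>);

impl DelimStack {
    fn open(&mut self, delim: Delim) {
        self.0.push(delim);
    }

    /// Drains every still-open delimiter, yielding them for error reporting.
    fn unclosed(&mut self) -> impl Iterator<Item = Delim> + '_ {
        self.0.drain(..)
    }
}

fn unclosed_demo() -> (usize, bool) {
    let mut stack = DelimStack::default();
    stack.open(Delim::Paren);
    stack.open(Delim::Angle);
    // Collect the leftovers; the drain empties the stack as a side effect.
    let drained = stack.unclosed().count();
    (drained, stack.0.is_empty())
}

fn main() {
    assert_eq!(unclosed_demo(), (2, true));
}
```

Because `drain(..)` both yields and removes the elements, callers get each unclosed delimiter exactly once and the stack is left clean for reuse.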

87-92: Add method documentation.

Document the behaviour of delimiter stack operations:

impl DelimStack {
+    /// Opens one or more delimiters of the same type.
+    ///
+    /// Pushes `count` instances of `delim` onto the stack.
    pub(super) fn open(&mut self, delim: Delim, count: usize) {

94-107: Add method documentation and improve logic.

Document the closing behaviour:

+    /// Attempts to close delimiters of the specified type.
+    ///
+    /// Pops up to `count` matching delimiters from the stack.
+    /// Returns the actual number of delimiters closed. Stops early
+    /// if a non-matching delimiter is encountered.
    pub(super) fn close(&mut self, delim: Delim, count: usize) -> usize {

Consider checking the top element before popping to avoid unnecessary push/pop operations.
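A minimal sketch of that peek-before-pop variant, again using simplified stand-in types rather than the crate's actual `Delim` and `DelimStack`:

```rust
// Stand-in types for illustration only.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Delim {
    Paren,
    Angle,
}

#[derive(Default, Debug)]
struct DelimStack(Vec<Delim>);

impl DelimStack {
    fn open(&mut self, delim: Delim, count: usize) {
        self.0.extend(std::iter::repeat(delim).take(count));
    }

    /// Pops up to `count` matching delimiters, peeking at the top first so a
    /// non-matching delimiter is never popped and pushed back.
    fn close(&mut self, delim: Delim, count: usize) -> usize {
        let mut closed = 0;
        while closed < count {
            match self.0.last() {
                Some(&top) if top == delim => {
                    self.0.pop();
                    closed += 1;
                }
                _ => break, // stop early on mismatch or empty stack
            }
        }
        closed
    }
}

fn close_demo() -> (usize, usize) {
    let mut stack = DelimStack::default();
    stack.open(Delim::Paren, 2);
    stack.open(Delim::Angle, 1);
    let angles = stack.close(Delim::Angle, 2); // only one angle is open
    let parens = stack.close(Delim::Paren, 2);
    (angles, parens)
}

fn main() {
    assert_eq!(close_demo(), (1, 2));
}
```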

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a641da and 22d626d.

📒 Files selected for processing (6)
  • src/parser/ast/parse_utils/delimiter.rs (1 hunks)
  • src/parser/ast/parse_utils/errors.rs (1 hunks)
  • src/parser/ast/parse_utils/token_utils.rs (1 hunks)
  • src/parser/ast/parse_utils/type_parsing.rs (9 hunks)
  • src/parser/ast/relation.rs (1 hunks)
  • src/syntax_utils.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

Instructions used from:

Sources:
📄 CodeRabbit Inference Engine

  • AGENTS.md

⚙️ CodeRabbit Configuration File

🔇 Additional comments (4)
src/parser/ast/relation.rs (1)

146-147: LGTM: Correct adaptation to new error handling pattern.

The change from direct error propagation to .ok()? correctly adapts to the new Result<String, UnclosedDelimiterError> return type from extract_parenthesized, maintaining the existing API contract whilst gracefully handling parsing errors.
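The `.ok()?` adaptation can be shown with a self-contained sketch; the error type and the toy `extract_parenthesized` below are stand-ins, not the crate's real implementation:

```rust
// Stand-in error type; the real one carries more context.
#[derive(Debug)]
struct UnclosedDelimiterError {
    collected: String,
}

// Toy stand-in: succeeds only when the input is properly closed.
fn extract_parenthesized(src: &str) -> Result<String, UnclosedDelimiterError> {
    if src.ends_with(')') {
        Ok(src.trim_start_matches('(').trim_end_matches(')').to_string())
    } else {
        Err(UnclosedDelimiterError {
            collected: src.trim_start_matches('(').to_string(),
        })
    }
}

/// The caller keeps its `Option`-based contract: `.ok()?` converts a parse
/// error into `None` instead of changing the function's return type.
fn relation_columns(src: &str) -> Option<String> {
    let inner = extract_parenthesized(src).ok()?;
    Some(inner)
}

fn main() {
    assert_eq!(relation_columns("(a: u32)"), Some("a: u32".to_string()));
    assert_eq!(relation_columns("(a: u32"), None);
}
```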

src/syntax_utils.rs (1)

41-41: Excellent improvement to error resilience.

Preserving the partially collected text from UnclosedDelimiterError instead of defaulting to an empty string provides much better parsing resilience and maintains useful partial results when delimiters are unclosed.
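A small sketch of that recovery pattern, with a stand-in error type (the real `UnclosedDelimiterError` carries more fields):

```rust
// Stand-in error type for illustration.
#[derive(Debug)]
struct UnclosedDelimiterError {
    collected: String,
}

// Before, an `unwrap_or_default()` discarded the partial text; recovering it
// from the error keeps whatever was parsed before the delimiter went missing.
fn balanced_text(result: Result<String, UnclosedDelimiterError>) -> String {
    result.unwrap_or_else(|err| err.collected)
}

fn main() {
    let err = UnclosedDelimiterError {
        collected: "partial input".to_string(),
    };
    assert_eq!(balanced_text(Err(err)), "partial input");
    assert_eq!(balanced_text(Ok("full".to_string())), "full");
}
```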

src/parser/ast/parse_utils/token_utils.rs (1)

1-50: Well-structured module addressing previous concerns.

The introduction of TokenParseContext successfully reduces function parameter counts whilst maintaining clear separation of concerns. The module provides clean abstractions for delimiter stack management and error recording.
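The parameter-grouping idea can be sketched as follows; the field names here are assumptions for illustration, not the actual `TokenParseContext` definition:

```rust
// Hypothetical context struct bundling the state that helpers previously
// took as separate arguments.
struct TokenParseContext<'a> {
    buf: &'a mut String,
    depth: &'a mut usize,
}

// Instead of `push(buf, depth, text)` with a growing argument list,
// each helper takes the context as a single parameter.
fn push(ctx: &mut TokenParseContext<'_>, text: &str) {
    ctx.buf.push_str(text);
}

fn open(ctx: &mut TokenParseContext<'_>) {
    *ctx.depth += 1;
}

fn context_demo() -> (String, usize) {
    let mut buf = String::new();
    let mut depth = 0usize;
    let mut ctx = TokenParseContext {
        buf: &mut buf,
        depth: &mut depth,
    };
    open(&mut ctx);
    push(&mut ctx, "Vec<");
    (buf, depth)
}

fn main() {
    assert_eq!(context_demo(), ("Vec<".to_string(), 1));
}
```

Adding a new piece of shared state then means adding one field rather than threading an extra parameter through every helper signature.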

src/parser/ast/parse_utils/delimiter.rs (1)

51-91: Address remaining complexity issues from previous reviews.

Whilst the core functionality correctly implements error handling with partial content preservation, the nested complexity and multiple small helper functions flagged by previous reviewers remain unaddressed. Collapse the helper functions into a single streamlined loop within extract_parenthesized to reduce boilerplate and complexity.

Apply the previously suggested refactoring to combine the logic into a single function:

 pub fn extract_parenthesized<I>(
     iter: &mut std::iter::Peekable<I>,
     open_kind: SyntaxKind,
     close_kind: SyntaxKind,
 ) -> Result<String, UnclosedDelimiterError>
 where
     I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
 {
-    // Skip to the first opening delimiter.
-    for e in iter.by_ref() {
-        if e.kind() == open_kind {
-            break;
-        }
-    }
-
-    let mut depth = 1usize;
-    let mut buf = String::new();
-
-    for e in iter.by_ref() {
-        let text = element_text(&e);
-        match e.kind() {
-            k if k == open_kind => {
-                depth += 1;
-                buf.push_str(&text);
-            }
-            k if k == close_kind => {
-                depth -= 1;
-                if depth == 0 {
-                    return Ok(buf);
-                }
-                buf.push_str(&text);
-            }
-            _ => buf.push_str(&text),
-        }
-    }
-
-    Err(UnclosedDelimiterError {
-        collected: buf,
-        expected: close_kind,
-    })
+    let mut depth = 0;
+    let mut buf = String::new();
+    
+    while let Some(elem) = iter.next() {
+        let txt = element_text(&elem);
+        match elem.kind() {
+            k if k == open_kind => {
+                depth += 1;
+                buf.push_str(&txt);
+                if depth == 1 {
+                    break; // Found the first opener
+                }
+            }
+            _ if depth == 0 => continue, // Skip until first opener
+            _ => {
+                buf.push_str(&txt);
+            }
+        }
+    }
+    
+    if depth == 0 {
+        return Err(UnclosedDelimiterError {
+            collected: String::new(),
+            expected: open_kind,
+        });
+    }
+    
+    while let Some(elem) = iter.next() {
+        let txt = element_text(&elem);
+        match elem.kind() {
+            k if k == open_kind => {
+                depth += 1;
+                buf.push_str(&txt);
+            }
+            k if k == close_kind => {
+                depth -= 1;
+                if depth == 0 {
+                    return Ok(buf);
+                }
+                buf.push_str(&txt);
+            }
+            _ => buf.push_str(&txt),
+        }
+    }
+    
+    Err(UnclosedDelimiterError {
+        collected: buf,
+        expected: close_kind,
+    })
 }

Likely an incorrect or invalid review comment.


@codescene-delta-analysis codescene-delta-analysis Bot left a comment


Code Health Improved (1 files improve in Code Health)

Gates Failed
Enforce critical code health rules (1 file with Bumpy Road Ahead)
Enforce advisory code health rules (1 file with Excess Number of Function Arguments)

Gates Passed
4 Quality Gates Passed

See analysis details in CodeScene

Reason for failure
Enforce critical code health rules:
  File          Violations        Code Health Impact
  delimiter.rs  1 critical rule   10.00 → 9.54
Enforce advisory code health rules:
  File          Violations        Code Health Impact
  delimiter.rs  1 advisory rule   10.00 → 9.54
View Improvements
  File             Code Health Impact  Categories Improved
  type_parsing.rs  6.00 → 7.32         Low Cohesion, Complex Method, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt


@codescene-delta-analysis codescene-delta-analysis Bot left a comment


Code Health Improved (1 files improve in Code Health)

Gates Failed
Enforce critical code health rules (1 file with Bumpy Road Ahead)
Enforce advisory code health rules (1 file with Excess Number of Function Arguments)

Gates Passed
4 Quality Gates Passed

See analysis details in CodeScene

Reason for failure
Enforce critical code health rules:
  File          Violations        Code Health Impact
  delimiter.rs  1 critical rule   10.00 → 9.54
Enforce advisory code health rules:
  File          Violations        Code Health Impact
  delimiter.rs  1 advisory rule   10.00 → 9.54
View Improvements
  File             Code Health Impact  Categories Improved
  type_parsing.rs  6.00 → 7.32         Low Cohesion, Complex Method, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt

@leynos
Owner Author

leynos commented Jul 20, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 20, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
src/parser/ast/parse_utils/type_parsing.rs (1)

126-232: Refactor to reduce cyclomatic complexity below the threshold of 12.

The parse_type_expr function has excessive cyclomatic complexity due to the large match statement with many delimiter handling branches. Extract delimiter handling logic into helper functions.

+/// Handles an opening delimiter token.
+fn handle_opening_delimiter(
+    kind: SyntaxKind,
+    token: &rowan::SyntaxToken<DdlogLanguage>,
+    ctx: &mut TokenParseContext,
+) {
+    let (delim, count) = match kind {
+        SyntaxKind::T_LPAREN => (Delim::Paren, 1),
+        SyntaxKind::T_LT => (Delim::Angle, 1),
+        SyntaxKind::T_SHL => (Delim::Angle, 2),
+        SyntaxKind::T_LBRACKET => (Delim::Bracket, 1),
+        SyntaxKind::T_LBRACE => (Delim::Brace, 1),
+        _ => return,
+    };
+    open_delimiter(&mut *ctx.stack, delim, count);
+    push(token, ctx);
+}
+
+/// Handles a closing delimiter token.
+fn handle_closing_delimiter(
+    kind: SyntaxKind,
+    token: &rowan::SyntaxToken<DdlogLanguage>,
+    ctx: &mut TokenParseContext,
+    errors: &mut Vec<ParseError>,
+) -> bool {
+    let (delim, count, should_continue) = match kind {
+        SyntaxKind::T_RPAREN => (Delim::Paren, 1, ctx.stack.close(Delim::Paren, 1) > 0),
+        SyntaxKind::T_GT => (Delim::Angle, 1, true),
+        SyntaxKind::T_SHR => (Delim::Angle, 2, true),
+        SyntaxKind::T_RBRACKET => (Delim::Bracket, 1, true),
+        SyntaxKind::T_RBRACE => (Delim::Brace, 1, true),
+        _ => return true,
+    };
+    
+    if kind != SyntaxKind::T_RPAREN && close_delimiter(&mut *ctx.stack, delim, count) < count {
+        push_error(errors, delim, token);
+    }
+    
+    if should_continue {
+        push(token, ctx);
+    }
+    
+    should_continue
+}
+
 pub(crate) fn parse_type_expr<I>(iter: &mut std::iter::Peekable<I>) -> (String, Vec<ParseError>)
 where
     I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
 {
     use rowan::NodeOrToken;
 
     let mut buf = String::new();
     let mut errors = Vec::new();
     let mut depth = DelimStack::default();
     let mut ctx = TokenParseContext::new(&mut buf, &mut depth);
 
     while let Some(e) = iter.peek() {
         match e {
             NodeOrToken::Token(t) => match t.kind() {
-                SyntaxKind::T_LPAREN => {
-                    open_delimiter(&mut *ctx.stack, Delim::Paren, 1);
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_RPAREN => {
-                    if ctx.stack.close(Delim::Paren, 1) == 0 {
-                        break;
-                    }
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_LT => {
-                    open_delimiter(&mut *ctx.stack, Delim::Angle, 1);
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_SHL => {
-                    open_delimiter(&mut *ctx.stack, Delim::Angle, 2);
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_GT => {
-                    if close_delimiter(&mut *ctx.stack, Delim::Angle, 1) < 1 {
-                        push_error(&mut errors, Delim::Angle, t);
-                    }
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_SHR => {
-                    if close_delimiter(&mut *ctx.stack, Delim::Angle, 2) < 2 {
-                        push_error(&mut errors, Delim::Angle, t);
-                    }
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_LBRACKET => {
-                    open_delimiter(&mut *ctx.stack, Delim::Bracket, 1);
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_RBRACKET => {
-                    if close_delimiter(&mut *ctx.stack, Delim::Bracket, 1) < 1 {
-                        push_error(&mut errors, Delim::Bracket, t);
-                    }
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_LBRACE => {
-                    open_delimiter(&mut *ctx.stack, Delim::Brace, 1);
-                    push(t, &mut ctx);
-                    iter.next();
-                }
-                SyntaxKind::T_RBRACE => {
-                    if close_delimiter(&mut *ctx.stack, Delim::Brace, 1) < 1 {
-                        push_error(&mut errors, Delim::Brace, t);
-                    }
-                    push(t, &mut ctx);
-                    iter.next();
-                }
+                kind if is_opening_delimiter(kind) => {
+                    handle_opening_delimiter(kind, t, &mut ctx);
+                    iter.next();
+                }
+                kind if is_closing_delimiter(kind) => {
+                    if !handle_closing_delimiter(kind, t, &mut ctx, &mut errors) {
+                        break;
+                    }
+                    iter.next();
+                }
                 SyntaxKind::T_COMMA if ctx.stack.is_empty() => break,
                 _ => {
                     push(t, &mut ctx);
                     iter.next();
                 }
             },
             NodeOrToken::Node(n) => {
                 let text = n.text().to_string();
                 let is_whitespace = text.chars().all(char::is_whitespace);
                 let is_comment = n.kind() == SyntaxKind::T_COMMENT;
                 if !is_whitespace && !is_comment {
                     ctx.buf.push_str(&text);
                 }
                 iter.next();
             }
         }
     }
     
     // ... rest of function remains the same

Add these predicate functions:

/// Checks if a token kind is an opening delimiter.
fn is_opening_delimiter(kind: SyntaxKind) -> bool {
    matches!(
        kind,
        SyntaxKind::T_LPAREN
            | SyntaxKind::T_LT
            | SyntaxKind::T_SHL
            | SyntaxKind::T_LBRACKET
            | SyntaxKind::T_LBRACE
    )
}

/// Checks if a token kind is a closing delimiter.
fn is_closing_delimiter(kind: SyntaxKind) -> bool {
    matches!(
        kind,
        SyntaxKind::T_RPAREN
            | SyntaxKind::T_GT
            | SyntaxKind::T_SHR
            | SyntaxKind::T_RBRACKET
            | SyntaxKind::T_RBRACE
    )
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 22d626d and 24020fd.

📒 Files selected for processing (5)
  • docs/function-parsing-design.md (1 hunks)
  • src/parser/ast/parse_utils/delimiter.rs (1 hunks)
  • src/parser/ast/parse_utils/errors.rs (1 hunks)
  • src/parser/ast/parse_utils/token_utils.rs (1 hunks)
  • src/parser/ast/parse_utils/type_parsing.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.md

Instructions used from:

Sources:
⚙️ CodeRabbit Configuration File

**/*.rs

Instructions used from:

Sources:
⚙️ CodeRabbit Configuration File

🔇 Additional comments (8)
docs/function-parsing-design.md (1)

46-49: Fix line wrapping to comply with 80-column limit.

Lines 46-49 exceed the 80-column limit. Wrap these lines appropriately.

-A hierarchy of error types supports rich diagnostics when delimiters do not
-match or names and types are missing. The following diagram shows the
-relationships between these types.
+A hierarchy of error types supports rich diagnostics when delimiters do not
+match or names and types are missing. The following diagram shows the
+relationships between these types.

Likely an incorrect or invalid review comment.

src/parser/ast/parse_utils/token_utils.rs (7)

1-6: Excellent module documentation.

The module documentation clearly explains the purpose and utility, properly uses British English, and follows the required //! format.


7-9: Clean import organisation.

The imports are well-structured and appropriately scoped.


11-22: Well-designed context struct.

The TokenParseContext effectively groups related parameters and provides a clean API with proper lifetime management.


24-27: Clean single-responsibility function.

The function properly focuses on delimiter opening operations and maintains a simple, clear interface.


29-32: Excellent delimiter closing function.

The function maintains single responsibility and provides clear return value semantics.


34-37: Simple and effective utility function.

The push function maintains consistency with the context pattern and has a single, clear responsibility.


39-53: Comprehensive error recording function.

The function provides excellent error handling with clear documentation and proper error construction including all necessary context.


@codescene-delta-analysis codescene-delta-analysis Bot left a comment


Code Health Improved (1 files improve in Code Health)

Gates Passed
6 Quality Gates Passed

See analysis details in CodeScene

View Improvements
  File             Code Health Impact  Categories Improved
  type_parsing.rs  6.00 → 7.32         Low Cohesion, Complex Method, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt

@leynos
Owner Author

leynos commented Jul 20, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 20, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 24020fd and 58404e7.

📒 Files selected for processing (2)
  • src/parser/ast/parse_utils/delimiter.rs (1 hunks)
  • src/parser/ast/parse_utils/errors.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs

Instructions used from:

Sources:
⚙️ CodeRabbit Configuration File

🔇 Additional comments (1)
src/parser/ast/parse_utils/delimiter.rs (1)

1-170: Excellent modular design and comprehensive documentation!

The delimiter parsing utilities are well-structured with proper error handling, comprehensive documentation including examples, and good separation of concerns. The refactoring to reduce complexity and parameter counts has been effectively implemented using helper functions and context structs.


@codescene-delta-analysis codescene-delta-analysis Bot left a comment


Code Health Improved (1 files improve in Code Health)

Gates Failed
Enforce advisory code health rules (1 file with Code Duplication)

Gates Passed
5 Quality Gates Passed

See analysis details in CodeScene

Reason for failure
Enforce advisory code health rules:
  File             Violations        Code Health Impact
  type_parsing.rs  1 advisory rule   6.00 → 6.62
View Improvements
  File             Code Health Impact  Categories Improved
  type_parsing.rs  6.00 → 6.62         Low Cohesion, Complex Method, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt

@leynos
Owner Author

leynos commented Jul 20, 2025

@coderabbitai Are the following comments still relevant?

🔭 Outside diff range comments (1)

src/parser/ast/parse_utils/type_parsing.rs (1)

    126-232: Address increasing complexity in parse_type_expr.

    Whilst the refactoring successfully delegates to utility modules, the function remains overly complex with high cyclomatic complexity. Extract meaningful helper functions for the different delimiter handling branches to reduce complexity and improve maintainability.

    Extract delimiter-specific handlers:

    +fn handle_paren_delimiters<I>(
    +    token: &rowan::SyntaxToken<DdlogLanguage>,
    +    ctx: &mut TokenParseContext,
    +    iter: &mut std::iter::Peekable<I>,
    +    is_opening: bool,
    +) -> bool
    +where
    +    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
    +{
    +    if is_opening {
    +        open_delimiter(&mut *ctx.stack, Delim::Paren, 1);
    +        push(token, ctx);
    +        iter.next();
    +        false
    +    } else {
    +        let should_break = ctx.stack.close(Delim::Paren, 1) == 0;
    +        push(token, ctx);
    +        iter.next();
    +        should_break
    +    }
    +}
    +
    +fn handle_angle_delimiters<I>(
    +    token: &rowan::SyntaxToken<DdlogLanguage>,
    +    ctx: &mut TokenParseContext,
    +    iter: &mut std::iter::Peekable<I>,
    +    errors: &mut Vec<ParseError>,
    +    kind: SyntaxKind,
    +) where
    +    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
    +{
    +    match kind {
    +        SyntaxKind::T_LT => open_delimiter(&mut *ctx.stack, Delim::Angle, 1),
    +        SyntaxKind::T_SHL => open_delimiter(&mut *ctx.stack, Delim::Angle, 2),
    +        SyntaxKind::T_GT => {
    +            if close_delimiter(&mut *ctx.stack, Delim::Angle, 1) < 1 {
    +                push_error(errors, Delim::Angle, token);
    +            }
    +        }
    +        SyntaxKind::T_SHR => {
    +            if close_delimiter(&mut *ctx.stack, Delim::Angle, 2) < 2 {
    +                push_error(errors, Delim::Angle, token);
    +            }
    +        }
    +        _ => unreachable!(),
    +    }
    +    push(token, ctx);
    +    iter.next();
    +}

♻️ Duplicate comments (3)

src/parser/ast/parse_utils/errors.rs (3)

    19-20: Encapsulate the public field.

    Make the field private and provide necessary methods:

    #[derive(Default, Debug)]
    -pub(crate) struct DelimStack(pub(crate) Vec<Delim>);
    +pub(crate) struct DelimStack(Vec<Delim>);

Add a method for controlled access:

impl DelimStack {
    /// Returns an iterator over unclosed delimiters for error reporting.
    pub(crate) fn unclosed(&mut self) -> impl Iterator<Item = Delim> + '_ {
        self.0.drain(..)
    }
}

87-92: Add method documentation.

Document the behaviour of delimiter stack operations:

impl DelimStack {
+    /// Opens one or more delimiters of the same type.
+    ///
+    /// Pushes `count` instances of `delim` onto the stack.
     pub(super) fn open(&mut self, delim: Delim, count: usize) {

94-107: Add method documentation and improve logic.

Document the closing behaviour:

+    /// Attempts to close delimiters of the specified type.
+    ///
+    /// Pops up to `count` matching delimiters from the stack.
+    /// Returns the actual number of delimiters closed. Stops early
+    /// if a non-matching delimiter is encountered.
     pub(super) fn close(&mut self, delim: Delim, count: usize) -> usize {

Consider checking the top element before popping to avoid unnecessary push/pop operations.

@coderabbitai
Contributor

coderabbitai Bot commented Jul 20, 2025


🧩 Analysis chain

@leynos Let me examine the current state of the code to assess whether these comments are still relevant.

Based on my examination of the current code, here's the relevance assessment of those comments:

Comments are NO LONGER relevant

1. parse_type_expr complexity comment (lines 126-232)

The function has been significantly simplified since the original review. The current implementation:

  • Uses helper functions from token_utils module (open_delimiter, close_delimiter, push, push_error)
  • Delegates complex logic to the new utility modules
  • The suggested handle_paren_delimiters and handle_angle_delimiters functions would actually increase complexity now, as the current code is already well-factored

2. DelimStack field encapsulation comments

The current DelimStack implementation already addresses these concerns:

  • The struct properly encapsulates its internal Vec<Delim>
  • It provides controlled access through well-defined methods (open, close, unclosed, is_empty)
  • The field visibility is appropriate for the module structure

3. Method documentation comments

The current implementation already includes comprehensive documentation:

  • All methods have clear doc comments explaining their behavior
  • The close method documents its return value semantics
  • The unclosed method explains it drains and returns remaining delimiters

Conclusion

These comments are outdated and no longer apply to the current codebase. The refactoring successfully addressed all the concerns raised in the original review. The code is now well-modularized, properly documented, and maintains appropriate encapsulation.


🏁 Script executed:

# Check the current implementation of parse_type_expr function
sed -n '126,232p' src/parser/ast/parse_utils/type_parsing.rs

Length of output: 3727


🏁 Script executed:

# Check the DelimStack implementation in errors.rs
cat src/parser/ast/parse_utils/errors.rs

Length of output: 4399


🏁 Script executed:

# Get a better view of the parse_type_expr function structure
ast-grep --pattern 'fn parse_type_expr($$$) {
  $$$
}'

Length of output: 56


🏁 Script executed:

# Count the cyclomatic complexity indicators in parse_type_expr
rg -A 200 "fn parse_type_expr" src/parser/ast/parse_utils/type_parsing.rs | rg -c "if |match |while |for |loop "

Length of output: 117


@leynos Based on my examination of the current code, these comments are no longer relevant. The refactoring has successfully addressed all the concerns:

All Issues Have Been Resolved

1. parse_type_expr Complexity (lines 126-232) - ADDRESSED

The function has been dramatically simplified:

  • Before: Monolithic function with complex nested logic
  • After: Clean main function that delegates to focused helpers:
    • handle_opening_delimiter()
    • handle_closing_delimiter()
    • is_opening_delimiter(), is_closing_delimiter(), should_break_parsing()

The suggested handle_paren_delimiters and handle_angle_delimiters functions would now be redundant; the current design is cleaner and more maintainable.
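The loop shape described above can be sketched like this. Note this is an illustrative toy over `&str` input; the real helpers in `type_parsing.rs` operate on proc-macro token trees, and the helper names are borrowed from the summary rather than the crate's API.

```rust
// Illustrative sketch of a delimiter-aware dispatch loop; not the crate's
// actual parse_type_expr, which works on token streams rather than chars.
fn is_opening_delimiter(c: char) -> bool {
    matches!(c, '(' | '<')
}

fn is_closing_delimiter(c: char) -> bool {
    matches!(c, ')' | '>')
}

fn should_break_parsing(c: char, depth: usize) -> bool {
    // Stop at a top-level comma; nested commas belong to the type.
    depth == 0 && c == ','
}

/// Collects one type expression, tracking delimiter depth, and returns
/// the consumed text plus whether all delimiters were balanced.
fn parse_type_expr(input: &str) -> (String, bool) {
    let mut depth = 0usize;
    let mut out = String::new();
    for c in input.chars() {
        if should_break_parsing(c, depth) {
            break;
        }
        if is_opening_delimiter(c) {
            depth += 1;
        } else if is_closing_delimiter(c) {
            depth = depth.saturating_sub(1);
        }
        out.push(c);
    }
    (out, depth == 0)
}

fn main() {
    let (ty, balanced) = parse_type_expr("Vec<u8>, i32");
    assert_eq!(ty, "Vec<u8>");
    assert!(balanced);
    println!("parsed type: {ty}");
}
```

Keeping the predicates (`is_opening_delimiter`, `should_break_parsing`) separate from the loop is what keeps the main function's cyclomatic complexity low after the refactor.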

2. DelimStack Encapsulation - ALREADY IMPLEMENTED

The current implementation already follows best practices:

pub(crate) struct DelimStack(Vec<Delim>);  // inner Vec stays private, not pub(crate)
  • ✅ Private internal field
  • ✅ Controlled access through methods (open, close, unclosed, is_empty)
  • ✅ Proper encapsulation

3. Method Documentation - COMPREHENSIVE

All methods now have excellent documentation:

/// Opens one or more delimiters of the same type.
/// Pushes `count` instances of `delim` onto the stack...

/// Attempts to close delimiters of the specified type.
/// Pops up to `count` matching delimiters from the stack and returns the
/// number of delimiters actually closed...

🎯 Conclusion

These review comments are outdated and no longer apply. The refactoring successfully transformed the code from having these issues into a well-structured, properly documented, and maintainable implementation. The original concerns were valid but have been completely resolved.


@codescene-delta-analysis Bot left a comment


Code Health Improved (1 file improved in Code Health)

Gates Passed
6 Quality Gates Passed

See analysis details in CodeScene

View Improvements
File: type_parsing.rs
Code Health Impact: 6.00 → 7.03
Categories Improved: Low Cohesion, Complex Method, Deep, Nested Complexity, Excess Number of Function Arguments

Quality Gate Profile: Pay Down Tech Debt

@leynos leynos merged commit 3124f75 into main Jul 20, 2025
3 checks passed
@leynos leynos deleted the codex/refactor-parse_utils-into-submodules branch July 20, 2025 18:00
