
Implement import parsing #14

Open
leynos wants to merge 1 commit into main from
codex/add-parse_import-function-and-integrate-with-parser

Conversation

@leynos (Owner) commented Jun 25, 2025

Summary

  • add N_IMPORT_STMT syntax kind
  • parse simple import statements with whitespace support
  • expose import AST items
  • add parser unit tests and update integration tests

Testing

  • make fmt (fails: markdownlint errors)
  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_685c87f279b0832280865bab2c84a4cc

Summary by Sourcery

Implement import statement parsing by introducing an Import AST node, updating the parser to collect and expose top-level items, and adjusting syntax kinds and tests accordingly.

New Features:

  • Add support for parsing top-level import statements

Enhancements:

  • Introduce Import AST node and wrap it in a new Item enum for parsed items
  • Expose parsed items via a new Parsed.items() method
  • Update parser to return a list of AST items instead of raw syntax kinds
  • Rename SyntaxKind variant from N_IMPORT to N_IMPORT_STMT

Tests:

  • Add unit tests for successful import parsing and missing-semicolon error recovery
  • Add integration test to verify import items are collected in the parser output
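
As a rough sketch of the resulting public surface (crate paths and exact accessor signatures here are assumptions, not the PR's verbatim API):

```rust
use crate::parser::{ast::Item, parse}; // module paths assumed

fn demo() {
    // Parse a single top-level import statement.
    let parsed = parse("import foo;");

    // `items()` is the new accessor exposing typed AST items.
    let items = parsed.items();
    assert_eq!(items.len(), 1);
    if let Item::Import(import) = &items[0] {
        assert_eq!(import.module, "foo");
    } else {
        panic!("expected an import item");
    }
}
```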

sourcery-ai Bot commented Jun 25, 2025

Reviewer's Guide

This PR implements import statement parsing by extending the parser to collect AST items, introducing a new import combinator and top-level declaration parser, updating the AST with an Import type and Item enum, renaming the import syntax kind, and covering all changes with unit and integration tests.

File-Level Changes

Expose parsed AST items in the public API (src/parser/mod.rs)
  • Added an items Vec to the Parsed struct
  • Introduced the Parsed::items() accessor
  • Updated parse to collect and return parsed items

Refactor parse_tokens to emit AST items instead of raw kinds (src/parser/mod.rs)
  • Changed the signature to return Vec<ast::Item> and errors
  • Replaced the combinator pipeline to map parsed items into a flat Vec
  • Adjusted the invocation in parse to unpack items

Implement import parsing combinators (src/parser/mod.rs)
  • Added parse_import to recognize import <ident>; with whitespace
  • Created a decl parser to integrate imports into top-level items
  • Boxed and combined the parsers in decl

Extend AST model with import support (src/parser/mod.rs)
  • Defined the ast::Import struct with a module field
  • Added the ast::Item::Import variant

Rename import syntax kind (src/language.rs)
  • Changed N_IMPORT to N_IMPORT_STMT in SyntaxKind

Add tests for import parsing (src/parser/mod.rs, tests/parser.rs)
  • Wrote unit tests for valid and invalid import statements
  • Added an integration test fixture checking parsed.items
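
To make the combinator wiring concrete, here is a minimal sketch of the decl shape described above (token kinds and helper names follow the guide; the real code in src/parser/mod.rs may differ):

```rust
use chumsky::prelude::*;

// Sketch: top-level declaration parser. An import yields Some(Item::Import);
// any other token is consumed as None, so unrecognised input is skipped
// rather than aborting the parse.
fn decl(
    src: &str,
) -> impl Parser<SyntaxKind, Option<ast::Item>, Error = Simple<SyntaxKind>> + '_ {
    parse_import(src)
        .map(|import| Some(ast::Item::Import(import)))
        .or(any().map(|_| None))
        .boxed()
}
```

With that shape, parse_tokens can run decl(src).repeated() over the token stream and flatten() the resulting Vec<Option<Item>> into the flat Vec<Item> mentioned above.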


coderabbitai Bot commented Jun 25, 2025

Summary by CodeRabbit

  • New Features
    • Added support for parsing import statements as structured items in the language parser.
  • Bug Fixes
    • Improved parser handling to correctly recognise and construct import items.
  • Tests
    • Introduced unit tests to verify correct parsing of import statements.


sourcery-ai Bot left a comment

Hey @leynos - I've reviewed your changes and they look great!

Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments

### Comment 1
<location> `src/parser/mod.rs:114` </location>
<code_context>
     builder.finish()
 }

+fn parse_import(
+    src: &str,
+) -> impl Parser<SyntaxKind, ast::Import, Error = Simple<SyntaxKind>> + Clone + '_ {
</code_context>

<issue_to_address>
parse_import assumes module names are single identifiers.

To handle complex import paths (e.g., with dots or slashes), the parser will need to be updated. Currently, it only supports single identifier module names.
</issue_to_address>
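
For illustration, one possible extension to dotted paths, assuming the lexer emits a dot token (`T_DOT` is a hypothetical kind; otherwise this mirrors `parse_import`):

```rust
// Hypothetical sketch: parse `foo.bar.baz` as a dotted module path.
fn parse_module_path(
    src: &str,
) -> impl Parser<SyntaxKind, String, Error = Simple<SyntaxKind>> + Clone + '_ {
    select!(|span| SyntaxKind::T_IDENT => span)
        .separated_by(just(SyntaxKind::T_DOT))
        .at_least(1)
        .map(move |spans: Vec<Span>| {
            // Rebuild the dotted path from each segment's source text.
            spans
                .iter()
                .filter_map(|s| src.get(s.clone()))
                .collect::<Vec<_>>()
                .join(".")
        })
}
```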

### Comment 2
<location> `src/parser/mod.rs:72` </location>
<code_context>
     }
 }

 fn parse_tokens(
     tokens: &[(SyntaxKind, Span)],
     len: usize,
</code_context>

<issue_to_address>
Consider simplifying import extraction by walking the AST after parsing instead of using nested parser combinators.

Here’s a simpler approach that drops the nested `Parser` combinators entirely and just walks your AST to find imports. You can:

1. Revert `parse_tokens` back to only return `(Vec<SyntaxKind>, Vec<Simple<SyntaxKind>>)`.
2. Remove your `parse_import`/`decl` helpers.
3. In `parse(...)`, after you build `root`, do a one‐pass walk over its syntax tree to collect imports.

```rust
pub fn parse(src: &str) -> Parsed {
    let tokens = tokenize(src);
    let (parsed_kinds, errors) = parse_tokens(&tokens, src.len());
    debug_assert_eq!(
        parsed_kinds.len(),
        tokens.len(),
        "parser output token count differs from lexer"
    );

    let green = build_green_tree(tokens, src);
    let root = ast::Root::from_green(green.clone());

    // --- new: extract imports in one pass over the AST ---
    let items = root
        .syntax()
        .descendants()
        .filter_map(ast::Import::cast)       // see helper below
        .map(ast::Item::Import)
        .collect();

    Parsed { green, root, errors, items }
}
```

Then in your `ast` module add a tiny helper to turn an `import` node into `ast::Import`:

```rust
impl Import {
    pub fn cast(node: SyntaxNode) -> Option<Import> {
        // Note: the PR renames this kind, so the check must use N_IMPORT_STMT.
        if node.kind() != SyntaxKind::N_IMPORT_STMT {
            return None;
        }
        // e.g. input: AST node for `import foo;`
        // skip the `import` token and the trailing `;`
        let text = node.text();
        let module = text
            .trim_start_matches("import")
            .trim_end_matches(';')
            .trim();
        Some(Import { module: module.to_string() })
    }
}
```

And your enum stays the same:

```rust
pub enum Item {
    Import(Import),
    // … future items
}
```

This removes all of the `map`/`or`/`boxed` boilerplate and delegates import-finding to a single AST walk, while preserving full functionality and error recovery.
</issue_to_address>


coderabbitai Bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 656f35b and 63d2152.

📒 Files selected for processing (3)
  • src/language.rs (1 hunks)
  • src/parser/mod.rs (6 hunks)
  • tests/parser.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

`**/*.rs`:
  • Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
  • Every module must begin with a module-level (//!) comment explaining the module's purpose and utility.
  • Place function attributes after doc comments.
  • Do not use return in single-line functions.
  • Use predicate functions for conditional criteria with more than two branches.
  • Lints must not be silenced except as a last resort.
  • Lint rule suppressions must be tightly scoped and include a clear reason.
  • Prefer expect over allow.
  • Prefer .expect() over .unwrap().
  • Prefer immutable data and avoid unnecessary mut bindings.
  • Handle errors with the Result type instead of panicking where feasible.
  • Avoid unsafe code unless absolutely necessary, and document any usage clearly.
  • Use explicit version ranges in Cargo.toml and keep dependencies up to date.
  • Use rstest fixtures for shared setup.
  • Replace duplicated tests with #[rstest(...)] parameterised cases.
  • Prefer mockall for mocks and stubs.
  • Clippy warnings MUST be disallowed.
  • Fix any warnings emitted during tests in the code itself rather than silencing them.
  • Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
  • Where a function has too many parameters, group related parameters in meaningfully named structs.
  • Where a function returns a large error type, consider using Arc to reduce the amount of data returned.
  • Write unit and behavioural tests for new functionality, and run both before and after making any change.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • src/language.rs
  • tests/parser.rs
  • src/parser/mod.rs
🧬 Code Graph Analysis (1)
tests/parser.rs (1)
src/parser/mod.rs (2)
  • parse (57-70)
  • items (46-48)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build-test
🔇 Additional comments (7)
src/language.rs (1)

165-165: LGTM! Clear and descriptive naming.

The rename from N_IMPORT to N_IMPORT_STMT better reflects that this syntax kind represents a complete import statement node in the AST, aligning with the naming pattern of other statement-level constructs.

tests/parser.rs (1)

77-89: Well-structured test for import parsing.

The test follows best practices by using rstest fixtures and provides clear verification of the parser's ability to construct typed import AST items. The pattern matching assertion is precise and readable.

src/parser/mod.rs (5)

22-22: Good addition of typed items to the parser output.

The new items field and its accessor method follow the established patterns in the Parsed struct, with proper documentation and consistent use of #[must_use].

Also applies to: 44-48


59-59: Correct integration of items parsing.

The changes properly thread the source string to parse_tokens and correctly initialise the new items field in the Parsed struct.

Also applies to: 68-68


72-85: Clean implementation of the token parsing logic.

The updated parse_tokens function correctly uses the decl parser to extract AST items whilst allowing unrecognised tokens to be skipped. The use of flatten() to filter out None values is idiomatic.


194-206: Well-designed AST types for import declarations.

The Import struct and Item enum are appropriately minimal and extensible, with clear documentation and suitable trait derivations. The design allows for future expansion to other declaration types.
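
Spelled out, the minimal types described here would look roughly like this (the derives are assumptions based on "suitable trait derivations"):

```rust
/// A parsed `import foo;` statement.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Import {
    /// The imported module name.
    pub module: String,
}

/// A top-level item produced by the parser.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Item {
    Import(Import),
}
```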


209-236: Comprehensive unit tests for the import parser.

The tests provide excellent coverage of both the happy path and error cases. Testing the parser combinator directly at the unit level is a good practice for ensuring correctness before integration.
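
In that vein, a parameterised sketch using rstest (per the repository guidelines above; the errors() accessor name is an assumption, not confirmed by the diff):

```rust
use rstest::rstest;

// Sketch: exercise the happy path and missing-semicolon recovery
// through the public parse API.
#[rstest]
#[case("import foo;", true)]
#[case("import foo", false)] // missing semicolon: parser recovers with an error
fn import_parsing(#[case] src: &str, #[case] ok: bool) {
    let parsed = parse(src);
    assert_eq!(parsed.errors().is_empty(), ok);
}
```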

Comment thread src/parser/mod.rs
Comment on lines +114 to +128
fn parse_import(
    src: &str,
) -> impl Parser<SyntaxKind, ast::Import, Error = Simple<SyntaxKind>> + Clone + '_ {
    just(SyntaxKind::K_IMPORT)
        .then_ignore(select! { SyntaxKind::T_WHITESPACE => () }.repeated())
        .ignore_then(select!(|span| SyntaxKind::T_IDENT => span))
        .then_ignore(just(SyntaxKind::T_SEMI))
        .map(move |span: Span| {
            let text = src.get(span.clone()).unwrap_or("");
            ast::Import {
                module: text.to_string(),
            }
        })
        .boxed()
}

🧹 Nitpick (assertive)

Consider improving error handling for invalid spans.

The import parser implementation is well-structured, but the unwrap_or("") on line 122 could silently produce empty module names if the span is invalid. Consider logging a warning or handling this case more explicitly.

 .map(move |span: Span| {
-    let text = src.get(span.clone()).unwrap_or("");
+    let text = src.get(span.clone()).unwrap_or_else(|| {
+        warn!("Invalid span {:?} for import module name", span);
+        ""
+    });
     ast::Import {
         module: text.to_string(),
     }
 })

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/parser/mod.rs around lines 114 to 128, the parse_import function uses
unwrap_or("") on a span that might be invalid, which can silently produce empty
module names. To fix this, replace unwrap_or with explicit error handling by
checking if the span is valid before extracting the text. If invalid, log a
warning or return a parse error instead of defaulting to an empty string,
ensuring invalid spans are handled explicitly and do not produce empty module
names silently.
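
A variant in that direction, sketched with chumsky's try_map (the structure mirrors the PR's parse_import; the details are assumptions):

```rust
// Sketch: fail the parse on an invalid span instead of defaulting to "".
fn parse_import_strict(
    src: &str,
) -> impl Parser<SyntaxKind, ast::Import, Error = Simple<SyntaxKind>> + Clone + '_ {
    just(SyntaxKind::K_IMPORT)
        .then_ignore(select! { SyntaxKind::T_WHITESPACE => () }.repeated())
        .ignore_then(select!(|span| SyntaxKind::T_IDENT => span))
        .then_ignore(just(SyntaxKind::T_SEMI))
        .try_map(move |span: Span, err_span| {
            src.get(span.clone())
                .map(|text| ast::Import { module: text.to_string() })
                .ok_or_else(|| Simple::custom(err_span, "invalid span for import module"))
        })
        .boxed()
}
```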

