Conversation
## Reviewer's Guide

This PR implements import statement parsing by extending the parser to collect AST items, introducing a new import combinator and top-level declaration parser, updating the AST with an `Import` type and an `Item` enum, renaming the import syntax kind, and covering all changes with unit and integration tests.
Hey @leynos - I've reviewed your changes and they look great!
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `src/parser/mod.rs:114` </location>
<code_context>
builder.finish()
}
+fn parse_import(
+ src: &str,
+) -> impl Parser<SyntaxKind, ast::Import, Error = Simple<SyntaxKind>> + Clone + '_ {
</code_context>
<issue_to_address>
parse_import assumes module names are single identifiers.
To handle complex import paths (e.g., with dots or slashes), the parser will need to be updated. Currently, it only supports single identifier module names.
</issue_to_address>
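A dependency-free sketch of what multi-segment path support might look like, using stand-in token kinds rather than the crate's real `SyntaxKind` and chumsky combinators (the names here are illustrative, not from the PR):

```rust
// Stand-in token kinds; the real parser would use SyntaxKind variants.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Tok {
    Ident,
    Dot,
    Semi,
}

/// Collect a dotted module path such as `foo.bar.baz;`, returning the
/// joined path, or None if the shape is not `ident (. ident)* ;`.
fn module_path(tokens: &[(Tok, &str)]) -> Option<String> {
    let mut iter = tokens.iter().peekable();
    let mut parts: Vec<&str> = Vec::new();

    // The path must start with an identifier.
    match iter.next() {
        Some((Tok::Ident, text)) => parts.push(*text),
        _ => return None,
    }
    // Then zero or more `.ident` pairs.
    while matches!(iter.peek(), Some((Tok::Dot, _))) {
        iter.next(); // consume the dot
        match iter.next() {
            Some((Tok::Ident, text)) => parts.push(*text),
            _ => return None,
        }
    }
    // The statement must end with a semicolon.
    matches!(iter.next(), Some((Tok::Semi, _))).then(|| parts.join("."))
}
```

In chumsky terms this would correspond to repeating the identifier selector separated by a dot token, but the grammar shape is the same either way.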
### Comment 2
<location> `src/parser/mod.rs:72` </location>
<code_context>
}
}
fn parse_tokens(
tokens: &[(SyntaxKind, Span)],
len: usize,
</code_context>
<issue_to_address>
Consider simplifying import extraction by walking the AST after parsing instead of using nested parser combinators.
Here’s a simpler approach that drops the nested `Parser` combinators entirely and just walks your AST to find imports. You can:
1. Revert `parse_tokens` back to only return `(Vec<SyntaxKind>, Vec<Simple<SyntaxKind>>)`.
2. Remove your `parse_import`/`decl` helpers.
3. In `parse(...)`, after you build `root`, do a one‐pass walk over its syntax tree to collect imports.
```rust
pub fn parse(src: &str) -> Parsed {
let tokens = tokenize(src);
let (parsed_kinds, errors) = parse_tokens(&tokens, src.len());
debug_assert_eq!(
parsed_kinds.len(),
tokens.len(),
"parser output token count differs from lexer"
);
let green = build_green_tree(tokens, src);
let root = ast::Root::from_green(green.clone());
// --- new: extract imports in one pass over the AST ---
let items = root
.syntax()
.descendants()
.filter_map(ast::Import::cast) // see helper below
.map(ast::Item::Import)
.collect();
Parsed { green, root, errors, items }
}
```
Then in your `ast` module add a tiny helper to turn an `import` node into `ast::Import`:
```rust
impl Import {
pub fn cast(node: SyntaxNode) -> Option<Import> {
if node.kind() != SyntaxKind::N_IMPORT {
return None;
}
// e.g. input: AST node for `import foo;`
// skip the `import` token and the trailing `;`
let text = node.text();
let module = text
.trim_start_matches("import")
.trim_end_matches(';')
.trim();
Some(Import { module: module.to_string() })
}
}
```
And your enum stays the same:
```rust
pub enum Item {
Import(Import),
// … future items
}
```
This removes all of the `map`/`or`/`boxed` boilerplate and delegates import-finding to a single AST walk, while preserving full functionality and error recovery.
</issue_to_address>
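If the AST-walk route is taken, extracting the module name by inspecting the node's child tokens may be more robust than trimming `import`/`;` off the node text, since it does not couple the cast to the exact spelling of the statement. A dependency-free sketch with hypothetical stand-in types — the real code would walk rowan's `SyntaxNode` children:

```rust
// Hypothetical stand-ins for the crate's syntax types.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokKind {
    Keyword,
    Whitespace,
    Ident,
    Semi,
}

struct Node {
    is_import: bool,
    tokens: Vec<(TokKind, String)>,
}

#[derive(Debug, PartialEq)]
struct Import {
    module: String,
}

impl Import {
    /// Cast a node to an Import by locating its identifier token,
    /// rather than string-trimming the node's text.
    fn cast(node: &Node) -> Option<Import> {
        if !node.is_import {
            return None;
        }
        node.tokens
            .iter()
            .find(|(kind, _)| *kind == TokKind::Ident)
            .map(|(_, text)| Import { module: text.clone() })
    }
}
```

The token walk also degrades gracefully on malformed nodes: if no identifier token is present, the cast simply yields `None` instead of an empty or mangled module name.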
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- `src/language.rs` (1 hunks)
- `src/parser/mod.rs` (6 hunks)
- `tests/parser.rs` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.rs`:

- Document public APIs using Rustdoc comments (`///`) so documentation can be generated with `cargo doc`.
- Every module must begin with a module-level (`//!`) comment explaining the module's purpose and utility.
- Place function attributes after doc comments.
- Do not use `return` in single-line functions.
- Use predicate functions for conditional criteria with more than two branches.
- Lints must not be silenced except as a last resort.
- Lint rule suppressions must be tightly scoped and include a clear reason.
- Prefer `expect` over `allow`.
- Prefer `.expect()` over `.unwrap()`.
- Prefer immutable data and avoid unnecessary `mut` bindings.
- Handle errors with the `Result` type instead of panicking where feasible.
- Avoid `unsafe` code unless absolutely necessary and document any usage clearly.
- Use explicit version ranges in `Cargo.toml` and keep dependencies up-to-date.
- Use `rstest` fixtures for shared setup.
- Replace duplicated tests with `#[rstest(...)]` parameterised cases.
- Prefer `mockall` for mocks/stubs.
- Clippy warnings MUST be disallowed.
- Fix any warnings emitted during tests in the code itself rather than silencing them.
- Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
- Where a function has too many parameters, group related parameters in meaningfully named structs.
- Where a function is returning a large error, consider using `Arc` to reduce the amount of data returned.
- Write unit and behavioural tests for new functionality. Run both before and after making any change.
📄 Source: CodeRabbit Inference Engine (AGENTS.md)
List of files the instruction was applied to:
- `src/language.rs`
- `tests/parser.rs`
- `src/parser/mod.rs`
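The parameterised-test guideline above would normally be met with the `rstest` macros; as a dependency-free approximation of the same shape, a table-driven loop covers each case with one assertion (the parser stand-in here is purely illustrative, not the PR's real parser):

```rust
// Toy stand-in for the real import parser, used only to make the
// table-driven pattern concrete.
fn parse_module(src: &str) -> Option<&str> {
    src.strip_prefix("import ")?.strip_suffix(';')
}

/// Run every (input, expected) case; returns how many cases passed.
fn run_cases(cases: &[(&str, Option<&str>)]) -> usize {
    let mut passed = 0;
    for (src, want) in cases {
        assert_eq!(parse_module(src), *want, "case failed for {src:?}");
        passed += 1;
    }
    passed
}
```

With `rstest`, each row of the table would instead become a `#[case(...)]` attribute, so a failure reports the individual case rather than aborting the whole loop.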
🧬 Code Graph Analysis (1)
tests/parser.rs (1)
src/parser/mod.rs (2)
`parse` (57-70), `items` (46-48)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: build-test
🔇 Additional comments (7)
src/language.rs (1)
165-165: LGTM! Clear and descriptive naming. The rename from `N_IMPORT` to `N_IMPORT_STMT` better reflects that this syntax kind represents a complete import statement node in the AST, aligning with the naming pattern of other statement-level constructs.

tests/parser.rs (1)
77-89: Well-structured test for import parsing.The test follows best practices by using rstest fixtures and provides clear verification of the parser's ability to construct typed import AST items. The pattern matching assertion is precise and readable.
src/parser/mod.rs (5)
22-22: Good addition of typed items to the parser output. The new `items` field and its accessor method follow the established patterns in the `Parsed` struct, with proper documentation and consistent use of `#[must_use]`.

Also applies to: 44-48
59-59: Correct integration of items parsing. The changes properly thread the source string to `parse_tokens` and correctly initialise the new `items` field in the `Parsed` struct.

Also applies to: 68-68
72-85: Clean implementation of the token parsing logic. The updated `parse_tokens` function correctly uses the `decl` parser to extract AST items whilst allowing unrecognised tokens to be skipped. The use of `flatten()` to filter out `None` values is idiomatic.
194-206: Well-designed AST types for import declarations. The `Import` struct and `Item` enum are appropriately minimal and extensible, with clear documentation and suitable trait derivations. The design allows for future expansion to other declaration types.
209-236: Comprehensive unit tests for the import parser.The tests provide excellent coverage of both the happy path and error cases. Testing the parser combinator directly at the unit level is a good practice for ensuring correctness before integration.
```rust
fn parse_import(
    src: &str,
) -> impl Parser<SyntaxKind, ast::Import, Error = Simple<SyntaxKind>> + Clone + '_ {
    just(SyntaxKind::K_IMPORT)
        .then_ignore(select! { SyntaxKind::T_WHITESPACE => () }.repeated())
        .ignore_then(select!(|span| SyntaxKind::T_IDENT => span))
        .then_ignore(just(SyntaxKind::T_SEMI))
        .map(move |span: Span| {
            let text = src.get(span.clone()).unwrap_or("");
            ast::Import {
                module: text.to_string(),
            }
        })
        .boxed()
}
```
🧹 Nitpick (assertive)
Consider improving error handling for invalid spans.
The import parser implementation is well-structured, but the `unwrap_or("")` on line 122 could silently produce empty module names if the span is invalid. Consider logging a warning or handling this case more explicitly.
```diff
 .map(move |span: Span| {
-    let text = src.get(span.clone()).unwrap_or("");
+    let text = src.get(span.clone()).unwrap_or_else(|| {
+        warn!("Invalid span {:?} for import module name", span);
+        ""
+    });
     ast::Import {
         module: text.to_string(),
     }
 })
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/parser/mod.rs around lines 114 to 128, the parse_import function uses
unwrap_or("") on a span that might be invalid, which can silently produce empty
module names. To fix this, replace unwrap_or with explicit error handling by
checking if the span is valid before extracting the text. If invalid, log a
warning or return a parse error instead of defaulting to an empty string,
ensuring invalid spans are handled explicitly and do not produce empty module
names silently.
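One way to make the invalid-span case explicit without pulling in a logging crate is to surface it as a `Result`, so the caller decides whether to warn or fail. A minimal, dependency-free sketch (the error type here is illustrative, not from the PR):

```rust
use std::ops::Range;

/// Error raised when a token span does not map to valid source text.
#[derive(Debug, PartialEq)]
struct BadSpan(Range<usize>);

/// Resolve a span against the source, rejecting out-of-range or
/// non-char-boundary spans instead of silently yielding "".
fn module_text(src: &str, span: Range<usize>) -> Result<&str, BadSpan> {
    src.get(span.clone()).ok_or(BadSpan(span))
}
```

Inside the combinator, the `Err` branch could be mapped onto `Simple::custom` so the invalid span is reported through the parser's normal error channel rather than dropped.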
Summary

- `N_IMPORT_STMT` syntax kind
- `import` statements with whitespace support

Testing

- `make fmt` (fails: markdownlint errors)
- `make lint`
- `make test`

https://chatgpt.com/codex/tasks/task_e_685c87f279b0832280865bab2c84a4cc
Summary by Sourcery
Implement import statement parsing by introducing an Import AST node, updating the parser to collect and expose top-level items, and adjusting syntax kinds and tests accordingly.
New Features:
Enhancements:
Tests: