Conversation

Reviewer's Guide

Encapsulate parser span collections into a ParsedSpans struct and update
parsing and green-tree construction interfaces to use it, simplifying span
management and enforcing sorted invariants.

Sequence diagram for updated parser span handling:

```mermaid
sequenceDiagram
    participant Parser
    participant SpanCollector
    participant CSTBuilder
    participant ASTRoot
    Parser->>SpanCollector: parse_tokens(tokens, src)
    SpanCollector-->>Parser: (ParsedSpans, errors)
    Parser->>CSTBuilder: build_green_tree(tokens, src, ParsedSpans)
    CSTBuilder-->>ASTRoot: Root::from_green(green)
    ASTRoot->>ASTRoot: functions() -> Vec<Function>
```
Class diagram for ParsedSpans and parser refactor:

```mermaid
classDiagram
    class ParsedSpans {
        +Vec<Span> imports
        +Vec<Span> typedefs
        +Vec<Span> relations
        +Vec<Span> indexes
        +Vec<Span> functions
        +Vec<Span> rules
        +assert_sorted()
    }
    class Parsed {
        +GreenNode green
        +Vec<Simple<SyntaxKind>> errors
    }
    class GreenNode
    class Simple
    class SyntaxKind
    class Span
    ParsedSpans --* Span : contains
    Parsed --* GreenNode : has
    Parsed --* Simple : has errors
    Simple --* SyntaxKind : for
```
Summary by CodeRabbit
Walkthrough: The parser module was refactored to introduce a ParsedSpans struct that groups the per-category statement spans.
Sequence Diagram(s):

```mermaid
sequenceDiagram
    participant Parser
    participant CSTBuilder
    Parser->>Parser: parse_tokens(tokens, src)
    Parser->>Parser: returns (ParsedSpans, errors)
    Parser->>CSTBuilder: build_green_tree(tokens, src, spans: ParsedSpans)
    CSTBuilder-->>Parser: GreenNode
```
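Read as code, the flow above might look like the following sketch. `parse_tokens` and `build_green_tree` are internal functions named in this PR; the exact signatures here are assumptions for illustration rather than the crate's API.

```rust
// Hypothetical glue mirroring the sequence diagram; parse_tokens and
// build_green_tree are internal to the parser module, so the signatures
// here are assumed rather than quoted.
fn parse_sketch(src: &str) -> (GreenNode, Vec<Simple<SyntaxKind>>) {
    let tokens = tokenize(src);
    let (spans, errors) = parse_tokens(&tokens, src);
    // The collected spans gate how tokens are wrapped into CST nodes.
    let green = build_green_tree(tokens, src, &spans);
    (green, errors)
}
```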
Hey @leynos - I've reviewed your changes - here's some feedback:

- You've got six repetitive `assert_spans_sorted(&spans.<field>)` calls in `build_green_tree` – consider adding an `assert_sorted(&self)` method on ParsedSpans or using a small macro/iterator to DRY that up.
- Deriving or implementing `Default` (and possibly `Clone`) for ParsedSpans could improve ergonomics when you need an empty or duplicated spans struct in tests or future code paths.
- You might add a method on ParsedSpans to iterate over all Span vectors (e.g. `all_spans(&self)`) for any generic processing, rather than accessing each field manually.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- You’ve got six repetitive `assert_spans_sorted(&spans.<field>)` calls in `build_green_tree` – consider adding an `assert_sorted(&self)` method on ParsedSpans or using a small macro/iterator to DRY that up.
- Deriving or implementing `Default` (and possibly `Clone`) for ParsedSpans could improve ergonomics when you need an empty or duplicated spans struct in tests or future code paths.
- You might add a method on ParsedSpans to iterate over all Span vectors (e.g. `all_spans(&self)`) for any generic processing, rather than accessing each field manually.
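A sketch of how the first and third suggestions could combine. `assert_spans_sorted` is the existing free function named above; `all_spans` is the hypothetical helper the reviewer proposes, not code from the PR.

```rust
impl ParsedSpans {
    /// Hypothetical helper: iterate every span category for generic
    /// processing instead of touching each field by hand.
    fn all_spans(&self) -> impl Iterator<Item = &[Span]> + '_ {
        [
            self.imports.as_slice(),
            self.typedefs.as_slice(),
            self.relations.as_slice(),
            self.indexes.as_slice(),
            self.functions.as_slice(),
            self.rules.as_slice(),
        ]
        .into_iter()
    }

    /// Replaces the six repetitive `assert_spans_sorted(&spans.<field>)`
    /// calls in `build_green_tree` with one loop.
    fn assert_sorted(&self) {
        for spans in self.all_spans() {
            assert_spans_sorted(spans);
        }
    }
}
```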
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- docs/parser-plan.md (1 hunks)
- src/parser/mod.rs (5 hunks)
🧰 Additional context used
📓 Path-based instructions (5)

`docs/**/*.md`: Documentation must use en-GB-oxendict spelling and grammar (with the exception of 'license' which is to be left unchanged for community consistency).

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

- docs/parser-plan.md

`**/*.md`: Markdown paragraphs and bullet points must be wrapped at 80 columns. Code blocks must be wrapped at 120 columns. Tables and headings must not be wrapped.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

- docs/parser-plan.md
`**/*.md`:

- Avoid 2nd person or 1st person pronouns ("I", "you", "we")
- Use en-oxendict spelling and grammar.
- Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
- Code blocks should be wrapped to 120 columns.
- Headings must not be wrapped.

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

- docs/parser-plan.md
`**/*.rs`:

- Comment why, not what. Explain assumptions, edge cases, trade-offs, or complexity. Don't echo the obvious.
- Comments must use en-GB-oxendict spelling and grammar.
- Function documentation must include clear examples.
- Name things precisely. Use clear, descriptive variable and function names. For booleans, prefer names with is, has, or should.
- Each file should encapsulate a coherent module. Group related code (e.g., models + utilities + fixtures) close together.
- Group by feature, not layer. Colocate views, logic, fixtures, and helpers related to a domain concept rather than splitting by type.
- Every module must begin with a module-level (`//!`) comment explaining the module's purpose and utility.
- Document public APIs using Rustdoc comments (`///`) so documentation can be generated with cargo doc.
- Place function attributes after doc comments.
- Do not use return in single-line functions.
- Prefer immutable data and avoid unnecessary mut bindings.
- Handle errors with the Result type instead of panicking where feasible.
- Avoid unsafe code unless absolutely necessary and document any usage clearly.
- Clippy warnings MUST be disallowed.
- Lints must not be silenced except as a last resort.
- Lint rule suppressions must be tightly scoped and include a clear reason.
- Prefer expect over allow.
- Use predicate functions for conditional criteria with more than two branches.
- Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
- Where a function has too many parameters, group related parameters in meaningfully named structs.
- Where a function is returning a large error consider using Arc to reduce the amount of data returned.
- Prefer `.expect()` over `.unwrap()`.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

- src/parser/mod.rs
`**/*.rs`:

- Seek to keep the cyclomatic complexity of functions no more than 12.
- Adhere to single responsibility and CQRS.
- Place function attributes after doc comments.
- Do not use `return` in single-line functions.
- Move conditionals with >2 branches into a predicate function.
- Avoid `unsafe` unless absolutely necessary.
- Every module must begin with a `//!` doc comment that explains the module's purpose and utility.
- Comments must use en-GB-oxendict spelling and grammar.
- Lints must not be silenced except as a last resort.
- `#[allow]` is forbidden.
- Only narrowly scoped `#[expect(lint, reason = "...")]` is allowed.
- No lint groups, no blanket or file-wide suppression.
- Include `FIXME:` with link if a fix is expected.
- Use `rstest` fixtures for shared setup and to avoid repetition between tests.
- Replace duplicated tests with `#[rstest(...)]` parameterised cases.
- Prefer `mockall` for mocks/stubs.
- Prefer `.expect()` over `.unwrap()`.
- Ensure that any API or behavioural changes are reflected in the documentation in `docs/`.
- Ensure that any completed roadmap steps are recorded in the appropriate roadmap in `docs/`.

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

- src/parser/mod.rs
🔇 Additional comments (4)

src/parser/mod.rs (3)

- 242-244: LGTM! The function correctly uses the new `ParsedSpans` struct, maintaining the same functionality whilst improving code organisation.
- 257-258: Comprehensive update to use the new struct. The documentation, function signature, and return value construction are all correctly updated to use `ParsedSpans`. The examples in documentation remain clear and helpful. Also applies to: 263-264, 269-269, 283-290.
- 832-833: Excellent API simplification. The function signature is much cleaner with a single `ParsedSpans` parameter instead of multiple individual slice parameters. All usages are correctly updated to access the struct fields, and the validation logic remains intact. Also applies to: 835-841, 845-850.

docs/parser-plan.md (1)

- 78-78: Documentation correctly updated to reflect code changes. The parameter name change from `function_spans` to `spans` accurately reflects the new `ParsedSpans` struct that contains all span types, not just function spans.
@coderabbitai resume

✅ Actions performed: Reviews resumed.

@sourcery-ai review
Hey @leynos - I've reviewed your changes - here's some feedback:

- The doc comment on `build_green_tree` only mentions `spans.imports` and `spans.typedefs` — please update it to list all of the ParsedSpans fields (imports, typedefs, relations, indexes, functions, rules) for clarity.
- Consider renaming `ParsedSpans::assert_sorted` to something like `debug_assert_sorted` and using `debug_assert!` internally so that ordering checks are only active in debug builds.
- Since you already derive `PartialEq` for `ParsedSpans`, you can also derive `Eq` to fully support equality semantics.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The doc comment on `build_green_tree` only mentions `spans.imports` and `spans.typedefs`—please update it to list all of the ParsedSpans fields (imports, typedefs, relations, indexes, functions, rules) for clarity.
- Consider renaming `ParsedSpans::assert_sorted` to something like `debug_assert_sorted` and using `debug_assert!` internally so that ordering checks are only active in debug builds.
- Since you already derive `PartialEq` for `ParsedSpans`, you can also derive `Eq` to fully support equality semantics.
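Combining the second and third points, one shape the change could take is sketched below. The fields mirror the PR's struct, but the method body is an assumption (and relies on `Span` behaving like a `Range<usize>`), not the crate's code.

```rust
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct ParsedSpans {
    pub imports: Vec<Span>,
    pub typedefs: Vec<Span>,
    pub relations: Vec<Span>,
    pub indexes: Vec<Span>,
    pub functions: Vec<Span>,
    pub rules: Vec<Span>,
}

impl ParsedSpans {
    /// Debug-only ordering check: `debug_assert!` compiles to nothing in
    /// release builds, so large inputs pay no cost in production.
    fn debug_assert_sorted(&self) {
        debug_assert!(
            [
                &self.imports,
                &self.typedefs,
                &self.relations,
                &self.indexes,
                &self.functions,
                &self.rules,
            ]
            .iter()
            .all(|spans| spans.windows(2).all(|w| w[0].end <= w[1].start)),
            "statement spans must be sorted and non-overlapping"
        );
    }
}
```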
## Individual Comments
### Comment 1
<location> `src/parser/mod.rs:850` </location>
<code_context>
+/// that tokens are wrapped into well-formed nodes during tree construction.
+/// Spans are checked with debug assertions.
+fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode {
+ spans.assert_sorted();
let mut builder = GreenNodeBuilder::new();
builder.start_node(DdlogLanguage::kind_to_raw(SyntaxKind::N_DATALOG_PROGRAM));
</code_context>
<issue_to_address>
assert_sorted is always called, which may impact performance in release builds.
Consider replacing with debug_assert! or guarding with a feature flag if this check is only needed during development.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
spans.assert_sorted();
=======
debug_assert!({
spans.assert_sorted();
true
});
>>>>>>> REPLACE
</suggested_fix>
### Comment 2
<location> `src/parser/mod.rs:197` </location>
<code_context>
errors: Vec<Simple<SyntaxKind>>,
}
+/// Spans for each parsed statement category.
+#[derive(Debug, Default, Clone, PartialEq)]
+pub struct ParsedSpans {
</code_context>
<issue_to_address>
Module is missing a `//!` module-level comment at the top.
Please add a `//!` doc comment at the very top of the module to describe its purpose, as required by the review instructions.
</issue_to_address>
```rust
/// that tokens are wrapped into well-formed nodes during tree construction.
/// Spans are checked with debug assertions.
fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode {
    spans.assert_sorted();
```

suggestion (performance): assert_sorted is always called, which may impact performance in release builds.

Consider replacing with debug_assert! or guarding with a feature flag if this check is only needed during development.

Suggested change:

```diff
-    spans.assert_sorted();
+    debug_assert!({
+        spans.assert_sorted();
+        true
+    });
```
```rust
    errors: Vec<Simple<SyntaxKind>>,
}

/// Spans for each parsed statement category.
```

issue (review_instructions): Module is missing a `//!` module-level comment at the top.

Please add a `//!` doc comment at the very top of the module to describe its purpose, as required by the review instructions.

Review instructions:

Path patterns: `**/*.rs`

Instructions: Every module must begin with a `//!` comment.
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/parser/mod.rs (1)
- 214-226: Runtime cost of always calling `assert_sorted`. This check executes in every build, even release, which may become significant for large files. Replacing it with `debug_assert!` (or an opt-in feature flag) avoids the cost in production while still catching bugs in debug/test builds.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- docs/parser-plan.md (1 hunks)
- src/parser/mod.rs (7 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
`**/*.md` and `**/*.rs`: the same CodeRabbit configuration rules quoted in the earlier review apply, to docs/parser-plan.md and src/parser/mod.rs respectively.
🧬 Code Graph Analysis (1)

src/parser/mod.rs (3)

- src/parser/token_stream.rs (3): `tokens` (105-107), `src` (118-120), `new` (41-47)
- src/parser/span_collector.rs (1): `new` (44-50)
- src/language.rs (1): `kind_to_raw` (185-190)
🔇 Additional comments (1)

docs/parser-plan.md (1)

- 76-79: Documentation line LGTM. The sequence diagram now matches the new `ParsedSpans` parameter.
```diff
 /// `spans.imports` and `spans.typedefs` must be sorted and non-overlapping so
 /// that tokens are wrapped into well-formed nodes during tree construction.
 /// Spans are checked with debug assertions.
 fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode {
     spans.assert_sorted();
     let mut builder = GreenNodeBuilder::new();
     builder.start_node(DdlogLanguage::kind_to_raw(SyntaxKind::N_DATALOG_PROGRAM));

-    let mut import_iter = imports.iter().peekable();
-    let mut typedef_iter = typedefs.iter().peekable();
-    let mut relation_iter = relations.iter().peekable();
-    let mut index_iter = indexes.iter().peekable();
-    let mut function_iter = functions.iter().peekable();
-    let mut rule_iter = rules.iter().peekable();
+    let mut import_iter = spans.imports.iter().peekable();
+    let mut typedef_iter = spans.typedefs.iter().peekable();
+    let mut relation_iter = spans.relations.iter().peekable();
+    let mut index_iter = spans.indexes.iter().peekable();
+    let mut function_iter = spans.functions.iter().peekable();
+    let mut rule_iter = spans.rules.iter().peekable();
```
🧹 Nitpick (assertive)

Pass tokens by slice to avoid an unnecessary move/clone.

build_green_tree consumes a whole Vec, but it only iterates. Accepting a slice keeps ownership with the caller and removes one move:

```diff
-fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode {
+fn build_green_tree(tokens: &[(SyntaxKind, Span)], src: &str, spans: &ParsedSpans) -> GreenNode {
```

Call-site in `parse`:

```rust
let green = build_green_tree(&tokens, src, &spans);
```

(Only the borrow changes; the rest of the code compiles unchanged.)
🤖 Prompt for AI Agents
In src/parser/mod.rs around lines 846 to 860, the function build_green_tree
currently takes ownership of a Vec of tokens, causing an unnecessary move.
Change the function signature to accept a slice of tokens instead of a Vec by
replacing Vec<(SyntaxKind, Span)> with &[(SyntaxKind, Span)]. Update all uses
inside the function accordingly to work with the slice. Also, update the call
site in the parse function to pass a reference to the tokens vector rather than
moving it, i.e., call build_green_tree(&tokens, src, &spans). This avoids
cloning or moving the vector and keeps ownership with the caller.
@coderabbitai New GitHub issue for this, please
🧩 Analysis chain
@leynos I'll create a GitHub issue for this optimization suggestion.
🏁 Script executed:

```bash
#!/bin/bash
gh issue create \
--title "Optimise build_green_tree to accept token slice instead of Vec" \
--body "## Summary
The \`build_green_tree\` function currently takes ownership of a \`Vec<(SyntaxKind, Span)>\` but only iterates over the tokens. This causes an unnecessary move operation.
## Proposed Change
Change the function signature from:
\`\`\`rust
fn build_green_tree(tokens: Vec<(SyntaxKind, Span)>, src: &str, spans: &ParsedSpans) -> GreenNode
\`\`\`
To:
\`\`\`rust
fn build_green_tree(tokens: &[(SyntaxKind, Span)], src: &str, spans: &ParsedSpans) -> GreenNode
\`\`\`
And update the call site in \`parse\` function:
\`\`\`rust
let green = build_green_tree(&tokens, src, &spans);
\`\`\`
## Benefits
- Avoids unnecessary move/clone operations
- Keeps ownership with the caller
- More idiomatic Rust (prefer borrowing over moving when possible)
## References
- Original PR: https://github.com/leynos/ddlint/pull/48
- Code review comment: https://github.com/leynos/ddlint/pull/48#discussion_r2186173998" \
--assignee leynos
```

Length of output: 1047
Perfect! I've successfully created GitHub issue #51 for this optimization:
Optimise build_green_tree to accept token slice instead of Vec
The issue captures the technical details of the proposed change, explains the benefits of avoiding unnecessary move operations, and includes backlinks to this PR and comment for context. It's been assigned to you as requested.
🧬 Code Graph Analysis Results
Relevant snippets from src/parser/token_stream.rs (lines 41-47, 59-61, 76-78, 89-93, 105-107, 118-120, 158-171, 214-229)

```rust
pub(crate) fn new(tokens: &'a [(SyntaxKind, Span)], src: &'a str) -> Self {
Self {
tokens,
src,
cursor: 0,
}
}
pub(crate) fn cursor(&self) -> usize {
self.cursor
}
pub(crate) fn peek(&self) -> Option<&(SyntaxKind, Span)> {
self.tokens.get(self.cursor)
}
pub(crate) fn advance(&mut self) {
if self.cursor < self.tokens.len() {
self.cursor += 1;
}
}
pub(crate) fn tokens(&self) -> &[(SyntaxKind, Span)] {
self.tokens
}
pub(crate) fn src(&self) -> &str {
self.src
}
/// Returns the position immediately after the end of the current line starting at token index `start`.
/// If the token at `start` is out of bounds, returns the length of the source string.
/// Iterates tokens from `start` forward until a token containing a newline is found, returning its end offset.
pub(crate) fn line_end(&self, start: usize) -> usize {
let mut end = self.tokens.get(start).map_or(self.src.len(), |t| t.1.end);
for tok in self.tokens.iter().skip(start) {
end = tok.1.end;
if self
.src
.get(tok.1.clone())
.is_some_and(|text| text.contains('\n'))
{
break;
}
}
end
}
/// Peeks the next token after the current cursor, skipping inline whitespace and comment tokens that do not contain newlines.
/// Returns the next non-whitespace/comment token or None if none found.
pub(crate) fn peek_after_ws_inline(&self) -> Option<&(SyntaxKind, Span)> {
let mut idx = self.cursor + 1;
while let Some(tok) = self.tokens.get(idx) {
if matches!(tok.0, SyntaxKind::T_WHITESPACE | SyntaxKind::T_COMMENT)
&& self
.src
.get(tok.1.clone())
.is_some_and(|text| !text.contains('\n'))
{
idx += 1;
} else {
break;
}
}
self.tokens.get(idx)
}
```

Relevant snippet from src/parser/span_collector.rs (lines 44-50)

```rust
/// Creates a new `SpanCollector` with the given tokens, source string, and extra data.
/// Initializes the internal `TokenStream` and an empty vector for collected spans.
pub(crate) fn new(tokens: &'a [(SyntaxKind, Span)], src: &'a str, extra: Extra) -> Self {
Self {
stream: TokenStream::new(tokens, src),
spans: Vec::new(),
extra,
}
}
```

Relevant snippet from src/parser/ast/parse_utils.rs (lines 17-45 and 93-124)

```rust
/// Parses name/type pairs from an iterator over syntax elements.
/// Skips tokens until the first '(' is found, then collects pairs of identifiers and types.
/// Returns a vector of (name, type) string tuples.
pub(super) fn parse_name_type_pairs<I>(mut iter: I) -> Vec<(String, String)>
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
// Skip to the first '(' to handle leading trivia.
for e in &mut iter {
if e.kind() == SyntaxKind::T_LPAREN {
break;
}
}
let mut pairs = Vec::new();
let mut buf = String::new();
let mut name: Option<String> = None;
let mut depth = 0usize;
for e in iter {
match e {
NodeOrToken::Token(t) => {
if handle_token(&t, &mut buf, &mut name, &mut pairs, &mut depth) {
break;
}
}
NodeOrToken::Node(n) => buf.push_str(&n.text().to_string()),
}
}
pairs
}
/// Parses a type annotation following a colon from a peekable iterator over syntax elements.
/// Skips whitespace and comments, expects a colon token, then collects tokens until a terminating token or newline.
/// Returns the type as a trimmed string or None if no type found.
pub(super) fn parse_type_after_colon<I>(iter: &mut std::iter::Peekable<I>) -> Option<String>
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
skip_whitespace_and_comments(iter);
if !matches!(
iter.peek().map(SyntaxElement::kind),
Some(SyntaxKind::T_COLON)
) {
return None;
}
iter.next();
let mut buf = String::new();
for e in iter {
match e {
NodeOrToken::Token(t) => match t.kind() {
SyntaxKind::T_LBRACE | SyntaxKind::T_SEMI => break,
SyntaxKind::T_WHITESPACE if t.text().contains('\n') => break,
_ => buf.push_str(t.text()),
},
NodeOrToken::Node(n) => buf.push_str(&n.text().to_string()),
}
}
let text = buf.trim();
if text.is_empty() {
None
} else {
Some(text.to_string())
}
}
```

Relevant snippet from src/tokenizer.rs (lines 191-247)

```rust
/// Tokenizes the input source string into a vector of (SyntaxKind, Span) tuples.
/// Uses a lexer to identify tokens and map them to SyntaxKind variants.
/// On lexer errors, inserts N_ERROR tokens with the error span.
/// Supports keywords, identifiers, literals, punctuation, and operators.
pub fn tokenize(src: &str) -> Vec<(SyntaxKind, Span)> {
let mut lexer = Token::lexer(src);
let estimated_tokens = src.len() >> 2; // roughly four chars per token
let mut out = Vec::with_capacity(estimated_tokens);
while let Some(result) = lexer.next() {
let span = lexer.span();
let text = src.get(span.clone()).unwrap_or("");
let Ok(token) = result else {
out.push((SyntaxKind::N_ERROR, span));
continue;
};
let kind = match token {
Token::Whitespace => SyntaxKind::T_WHITESPACE,
Token::Comment => SyntaxKind::T_COMMENT,
Token::Ident => keyword_kind(text).unwrap_or(SyntaxKind::T_IDENT),
Token::Number => SyntaxKind::T_NUMBER,
Token::String => SyntaxKind::T_STRING,
Token::LParen => SyntaxKind::T_LPAREN,
Token::RParen => SyntaxKind::T_RPAREN,
Token::LBrace => SyntaxKind::T_LBRACE,
Token::RBrace => SyntaxKind::T_RBRACE,
Token::LBracket => SyntaxKind::T_LBRACKET,
Token::RBracket => SyntaxKind::T_RBRACKET,
Token::Semi => SyntaxKind::T_SEMI,
Token::Comma => SyntaxKind::T_COMMA,
Token::Dot => SyntaxKind::T_DOT,
Token::ColonColon => SyntaxKind::T_COLON_COLON,
Token::Colon => SyntaxKind::T_COLON,
Token::Pipe => SyntaxKind::T_PIPE,
Token::Amp => SyntaxKind::T_AMP,
Token::EqEq => SyntaxKind::T_EQEQ,
Token::Eq => SyntaxKind::T_EQ,
Token::Implies => SyntaxKind::T_IMPLIES,
Token::Percent => SyntaxKind::T_PERCENT,
Token::Star => SyntaxKind::T_STAR,
Token::Slash => SyntaxKind::T_SLASH,
Token::Plus => SyntaxKind::T_PLUS,
Token::Minus => SyntaxKind::T_MINUS,
Token::Arrow => SyntaxKind::T_ARROW,
Token::FatArrow => SyntaxKind::T_FAT_ARROW,
Token::Lte => SyntaxKind::T_LTE,
Token::Spaceship => SyntaxKind::T_SPACESHIP,
Token::Gte => SyntaxKind::T_GTE,
Token::Lt => SyntaxKind::T_LT,
Token::Gt => SyntaxKind::T_GT,
Token::Neq => SyntaxKind::T_NEQ,
Token::Shr => SyntaxKind::T_SHR,
Token::Shl => SyntaxKind::T_SHL,
Token::Tilde => SyntaxKind::T_TILDE,
Token::At => SyntaxKind::T_AT,
Token::Hash => SyntaxKind::T_HASH,
Token::Apostrophe => SyntaxKind::T_APOSTROPHE,
};
out.push((kind, span));
}
out
}
```

Relevant snippet from src/language.rs (lines 185-190)

```rust
/// Converts a SyntaxKind variant to a raw rowan::SyntaxKind wrapper.
/// This is used to interface with the rowan syntax tree builder.
fn kind_to_raw(kind: Self::Kind) -> RowanSyntaxKind {
RowanSyntaxKind(
kind.to_u16()
.unwrap_or_else(|| unreachable!("all SyntaxKind variants map to u16")),
)
}
```

Relevant snippet from src/parser/mod.rs (macro token_dispatch!)

```rust
/// Macro to iterate over the token stream and dispatch handlers by SyntaxKind.
/// Expects a parsing context `ctx` with a `stream` field.
/// Loops until the stream is exhausted, invoking the handler associated with each recognized kind.
/// Handlers must advance the stream to consume tokens they process.
/// Any unhandled kind is skipped.
macro_rules! token_dispatch {
( $ctx:ident, {
$( $kind:path => $handler:ident ),* $(,)?
} ) => {{
while let Some(&(kind, ref span_ref)) = $ctx.stream.peek() {
let span = span_ref.clone();
match kind {
$( $kind => $handler(&mut $ctx, span.clone()), )*
_ => $ctx.stream.advance(),
}
}
}};
}
```

Summary of SpanCollector usage and role

- `SpanCollector` is a helper struct that wraps a `TokenStream` and collects spans of recognized statements.
- It is constructed with tokens, source string, and an extra data container (often a vector for errors).
- It provides methods to parse spans of statements using chumsky parsers and to advance or skip tokens.
- It is used extensively in functions like `collect_import_spans`, `collect_typedef_spans`, `collect_relation_spans`, `collect_index_spans`, `collect_function_spans`, and `collect_rule_spans` to identify statement spans and collect parse errors.
These snippets provide the core token stream handling, span collection, tokenization, and AST utilities necessary to understand and work with the src/parser/mod.rs file and its parsing logic.
```rust
/// Spans for each parsed statement category.
#[derive(Debug, Default, Clone, PartialEq)]
pub struct ParsedSpans {
    /// `import` statement spans.
    pub imports: Vec<Span>,
    /// `typedef` statement spans.
    pub typedefs: Vec<Span>,
    /// `relation` declaration spans.
    pub relations: Vec<Span>,
    /// `index` declaration spans.
    pub indexes: Vec<Span>,
    /// `function` definition spans.
    pub functions: Vec<Span>,
    /// Rule spans.
    pub rules: Vec<Span>,
}
```
🧹 Nitpick (assertive)

Restrict the public surface of ParsedSpans (and future-proof it).

ParsedSpans is exported with all fields pub, allowing external crates to mutate internal invariants and making it painful to extend the struct without a breaking change.

Consider:

```diff
-#[derive(Debug, Default, Clone, PartialEq)]
-pub struct ParsedSpans {
+#[derive(Debug, Default, Clone, PartialEq, Eq)]
+#[non_exhaustive]
+pub struct ParsedSpans {
```

and either (a) keep the fields private and expose a new() / builder API or (b) provide getter methods only. This guards invariants (e.g. sorted + non-overlapping) and lets you add more collections later without semver pain.
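A sketch of the encapsulated shape this suggests; the getter set is illustrative, not the PR's code, and a real constructor would validate the sorted invariant once at the boundary:

```rust
/// Spans for each parsed statement category, with the sorted invariant
/// guarded by construction rather than by trusting field writes.
#[derive(Debug, Default, Clone, PartialEq, Eq)]
#[non_exhaustive]
pub struct ParsedSpans {
    imports: Vec<Span>,
    typedefs: Vec<Span>,
    relations: Vec<Span>,
    indexes: Vec<Span>,
    functions: Vec<Span>,
    rules: Vec<Span>,
}

impl ParsedSpans {
    /// Read-only access keeps callers from breaking the invariant; the
    /// remaining five categories would get identical getters.
    pub fn imports(&self) -> &[Span] {
        &self.imports
    }
}
```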
🤖 Prompt for AI Agents
In src/parser/mod.rs around lines 197 to 212, the ParsedSpans struct currently
exposes all its fields as public, which risks external mutation and breaks
encapsulation. To fix this, make all fields private by removing the pub keyword,
then add a public constructor method new() to initialize the struct and provide
public getter methods for each field to allow read-only access. This approach
protects internal invariants and allows future extension without breaking
changes.
@coderabbitai New GitHub issue for this, please
🧩 Analysis chain
@leynos I'll create a GitHub issue for the ParsedSpans encapsulation improvement.
🏁 Script executed:

```bash
#!/bin/bash
gh issue create \
--title "Improve encapsulation of ParsedSpans struct" \
--body "The \`ParsedSpans\` struct currently exposes all fields as public, which allows external mutation and breaks encapsulation principles. This makes it difficult to maintain invariants (such as sorted and non-overlapping spans) and complicates future extensions without breaking changes.
The struct should be refactored to:
- Make fields private
- Provide a public constructor method
- Expose read-only access through getter methods
- Consider adding \`#[non_exhaustive]\` for future extensibility
This issue was raised during code review and requested to be tracked separately.
**References:**
- Original PR: https://github.com/leynos/ddlint/pull/48
- Comment: https://github.com/leynos/ddlint/pull/48#discussion_r2186173999
- Requested by: @leynos" \
--assignee leynos
```

Length of output: 887
Perfect! I've created GitHub issue #52 for the ParsedSpans encapsulation improvement: #52
The issue tracks the need to improve encapsulation by making the struct fields private, adding proper constructor and getter methods, and considering #[non_exhaustive] for future extensibility. It's assigned to you and includes proper backlinks to this PR and comment thread.
🧬 Code Graph Analysis Results
Relevant snippets from src/parser/token_stream.rs, src/parser/span_collector.rs, src/parser/ast/parse_utils.rs, and src/tokenizer.rs (the same code quoted in the analysis above)

Summary of token_dispatch! macro (lines ~40-60)
A macro that iterates over a token stream in a parsing context ctx with a stream field. For each token, it matches the token kind against provided handlers. If a handler matches, it calls the handler function with the context and token span. Handlers must advance the stream to consume tokens. Unhandled tokens are skipped by advancing the stream.
Summary of collect_*_spans functions
- `collect_import_spans`: Parses `import` statements, collects their spans, and recovers on errors by skipping to line end.
- `collect_typedef_spans`: Collects spans for `typedef` and `extern type` declarations by skipping to line ends.
- `collect_relation_spans`: Collects spans for relation declarations, including optional `input`/`output` keywords and primary key clauses.
- `collect_index_spans`: Parses `index` declarations with syntax validation, collects spans and errors.
- `collect_function_spans`: Parses `function` and `extern function` declarations, collects spans and errors.
- `collect_rule_spans`: Parses rule declarations starting at line beginnings, collects spans and errors.
Summary of CST construction functions
- `build_green_tree`: Constructs a `rowan::GreenNode` CST from tokens and statement spans. Uses iterators over sorted spans to start and finish syntax nodes for each statement type.
- `advance_span_iter`: Advances a peekable iterator over spans past tokens that end before a given position.
- `maybe_start` and `maybe_finish`: Start or finish syntax nodes in the builder when token positions match span boundaries.
- `push_token`: Adds tokens to the CST builder, wrapping error tokens in error nodes.
Summary of Parsed and ParsedSpans structs
- `Parsed`: Holds the CST green node, typed AST root, and parse errors.
- `ParsedSpans`: Holds vectors of spans for each statement category (imports, typedefs, relations, indexes, functions, rules). Has a method to assert spans are sorted and non-overlapping.
Summary of ast module
Defines typed wrappers around rowan syntax nodes for root, import, typedef, relation, index, rule, and function declarations. Provides methods to access syntax nodes, extract names, aliases, parameters, columns, primary keys, and other relevant information from the CST.
These snippets and summaries provide comprehensive context on the parsing process, token stream management, span collection, CST construction, and typed AST wrappers in src/parser/mod.rs. They are relevant for understanding the file and addressing issues related to parsing, token handling, span management, and AST construction.
@coderabbitai new GitHub issue advising that

🧩 Analysis chain

@leynos, I'll create a GitHub issue for this suggestion.

🏁 Script executed: (length of output: 742)

Perfect! I've successfully created a GitHub issue for your suggestion. The issue has been created at: The issue suggests renaming
Summary

- Add `ParsedSpans` struct
- Update `parse` and `build_green_tree` to use new struct

Testing

- `make fmt`
- `make lint`
- `make test`
- `make markdownlint`
- `make nixie`

https://chatgpt.com/codex/tasks/task_e_6867382d39a083228a9dab0ba898535c
Summary by Sourcery

Refactor parser span handling by consolidating individual span lists into a unified `ParsedSpans` struct, updating core functions and documentation to use the new struct, and adding validation for sorted spans.

Enhancements:

- Introduce a `ParsedSpans` struct to encapsulate all parser span collections
- Update `parse_tokens` and `build_green_tree` signatures to accept a single `ParsedSpans` argument
- Add an `assert_sorted` method on `ParsedSpans` to enforce sorted and non-overlapping spans

Documentation:

- Update `parser-plan.md` and inline doc comments to reference `ParsedSpans` instead of individual span lists

Tests:

- Update tests for `build_green_tree` to construct and use `ParsedSpans` in panic scenarios
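For the panic-scenario tests mentioned above, a hedged sketch of the shape such a test might take. The function names come from the PR, but the exact panic message, token layout, and byte offsets are assumptions:

```rust
#[cfg(test)]
mod parsed_spans_tests {
    use super::*;
    use rstest::rstest;

    #[rstest]
    #[should_panic(expected = "sorted")]
    fn build_green_tree_rejects_unsorted_spans() {
        let src = "import a\nimport b\n";
        let tokens = tokenize(src);
        // Reversed import spans deliberately violate the sorted invariant,
        // so assert_sorted should panic before tree construction begins.
        let spans = ParsedSpans {
            imports: vec![9..17, 0..8],
            ..ParsedSpans::default()
        };
        build_green_tree(tokens, src, &spans);
    }
}
```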