Conversation
Reviewer's Guide

The PR refactors the AST parser's type parsing utilities by splitting a large monolithic file into three focused modules (outputs, params, and type_expr), moving relevant parsing functions into them, and updating module exports and tests to match.

Class diagram for refactored type parsing modules

```mermaid
classDiagram
    class params {
        +parse_name_type_pairs()
    }
    class outputs {
        +parse_output_list()
    }
    class type_expr {
        +parse_type_expr()
        +parse_type_after_colon()
    }
    class delimiter {
        +extract_parenthesized()
        +paren_block_span
    }
    class errors {
        +Delim
        +DelimStack
        +ParseError
    }
    class token_utils {
        +TokenParseContext
        +close_delimiter()
        +open_delimiter()
        +push()
        +push_error()
    }
    params --> errors
    params --> token_utils
    outputs --> errors
    outputs --> token_utils
    type_expr --> errors
    type_expr --> token_utils
    delimiter --> errors
    delimiter --> token_utils
```
Hey there - I've reviewed your changes and they look great!
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `src/parser/ast/parse_utils/outputs.rs:13` </location>
<code_context>
+
+use super::super::skip_whitespace_and_comments;
+
+fn skip_to_top_level_colon<I>(iter: &mut std::iter::Peekable<I>)
+where
+ I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
+{
+ let mut depth = 0usize;
+ for e in iter.by_ref() {
+ match e.kind() {
+ SyntaxKind::T_LPAREN => depth += 1,
+ SyntaxKind::T_RPAREN => depth = depth.saturating_sub(1),
+ SyntaxKind::T_COLON if depth == 0 => break,
+ _ => {}
+ }
+ }
+}
</code_context>
<issue_to_address>
skip_to_top_level_colon uses saturating_sub for depth decrement; consider if this is robust for all input cases.
While saturating_sub avoids underflow, it may conceal issues with malformed input, such as excess closing parentheses. Consider adding error handling or logging to detect and address these cases.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
fn skip_to_top_level_colon<I>(iter: &mut std::iter::Peekable<I>)
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
let mut depth = 0usize;
for e in iter.by_ref() {
match e.kind() {
SyntaxKind::T_LPAREN => depth += 1,
SyntaxKind::T_RPAREN => depth = depth.saturating_sub(1),
SyntaxKind::T_COLON if depth == 0 => break,
_ => {}
}
}
}
=======
fn skip_to_top_level_colon<I>(iter: &mut std::iter::Peekable<I>)
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
let mut depth = 0isize;
for e in iter.by_ref() {
match e.kind() {
SyntaxKind::T_LPAREN => depth += 1,
SyntaxKind::T_RPAREN => {
depth -= 1;
if depth < 0 {
eprintln!("Warning: excess closing parenthesis detected in skip_to_top_level_colon");
depth = 0; // Reset to 0 to continue parsing, or consider breaking/returning error
}
},
SyntaxKind::T_COLON if depth == 0 => break,
_ => {}
}
}
}
>>>>>>> REPLACE
</suggested_fix>
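As a standalone illustration of the trade-off this comment raises, here is a minimal sketch (not the project's code; the function names and the string-based input are invented for the example) contrasting how `saturating_sub` silently absorbs an excess closing parenthesis with a signed counter that reports it:

```rust
// Counts unmatched '(' using saturating_sub: an extra ')' is silently
// clamped at zero, so malformed input is indistinguishable from balanced.
fn depth_saturating(input: &str) -> usize {
    let mut depth = 0usize;
    for c in input.chars() {
        match c {
            '(' => depth += 1,
            ')' => depth = depth.saturating_sub(1), // underflow hidden here
            _ => {}
        }
    }
    depth
}

// Same count with a signed depth: an excess ')' drives depth below zero,
// which the caller can surface as an error instead of swallowing.
fn depth_checked(input: &str) -> Result<usize, String> {
    let mut depth = 0isize;
    for c in input.chars() {
        match c {
            '(' => depth += 1,
            ')' => {
                depth -= 1;
                if depth < 0 {
                    return Err("excess closing parenthesis".to_string());
                }
            }
            _ => {}
        }
    }
    Ok(depth as usize)
}

fn main() {
    // "())" has one closing paren too many.
    assert_eq!(depth_saturating("())"), 0); // malformed input looks balanced
    assert!(depth_checked("())").is_err()); // malformed input is reported
    assert_eq!(depth_checked("(())").unwrap(), 0);
}
```

Whether the parser should report, log, or tolerate the excess delimiter is a design decision; the sketch only shows that the signed counter makes the condition observable.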
### Comment 2
<location> `src/parser/ast/parse_utils/type_expr.rs:17` </location>
<code_context>
+ type_expr::parse_type_expr,
};
-macro_rules! delimiter_checker {
- ($(#[$meta:meta])* $name:ident, [$($variant:path),+ $(,)?]) => {
- $(#[$meta])* fn $name(kind: SyntaxKind) -> bool {
</code_context>
<issue_to_address>
delimiter_checker macro duplicated from previous location; consider centralizing for reuse.
If delimiter_checker is needed in several places, moving it to a shared location will help maintain consistency and reduce duplication.
</issue_to_address>
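To sketch what centralizing the macro might look like, here is a hedged, self-contained example. The module layout, the `SyntaxKind` variants, and the `matches!`-based body are assumptions for illustration, not the project's actual definitions; the pattern shown is exporting a `macro_rules!` macro to sibling modules via `pub(crate) use` so each parser module imports one shared definition:

```rust
mod macros {
    // Shared definition: generates a predicate fn over SyntaxKind variants.
    // `pub(crate) use` makes it importable by path from sibling modules.
    macro_rules! delimiter_checker {
        ($(#[$meta:meta])* $name:ident, [$($variant:path),+ $(,)?]) => {
            $(#[$meta])*
            fn $name(kind: SyntaxKind) -> bool {
                matches!(kind, $($variant)|+)
            }
        };
    }
    pub(crate) use delimiter_checker;
}

// Illustrative stand-in for the real token kinds.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum SyntaxKind {
    TLParen,
    TLt,
    TRParen,
    TIdent,
}

use macros::delimiter_checker;

delimiter_checker!(is_opening_delimiter, [SyntaxKind::TLParen, SyntaxKind::TLt]);
delimiter_checker!(is_closing_delimiter, [SyntaxKind::TRParen]);

fn main() {
    assert!(is_opening_delimiter(SyntaxKind::TLParen));
    assert!(!is_opening_delimiter(SyntaxKind::TIdent));
    assert!(is_closing_delimiter(SyntaxKind::TRParen));
}
```

With this layout, both `type_expr` and any other module needing delimiter predicates import the one macro instead of duplicating it.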
```rust
pub(super) fn parse_type_expr<I>(iter: &mut std::iter::Peekable<I>) -> (String, Vec<ParseError>)
where
    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
    use rowan::NodeOrToken;

    let mut buf = String::new();
    let mut errors = Vec::new();
    let mut stack = DelimStack::default();
    let mut ctx = TokenParseContext::new(&mut buf, &mut stack);

    while let Some(e) = iter.peek() {
        match e {
            NodeOrToken::Token(t) => match t.kind() {
                kind if is_opening_delimiter(kind) => {
                    if handle_opening_delimiter(t, &mut ctx) {
                        iter.next();
                    }
                }
                kind if is_closing_delimiter(kind) => {
                    if handle_closing_delimiter(t, &mut ctx, &mut errors) {
                        iter.next();
                    } else {
                        break;
                    }
                }
                kind if should_break_parsing(kind, ctx.stack.is_empty()) => break,
                _ => {
                    push(t, &mut ctx);
                    iter.next();
                }
            },
            NodeOrToken::Node(n) => {
                let text = n.text().to_string();
                let is_ws = text.chars().all(char::is_whitespace);
                let is_comment = n.kind() == SyntaxKind::T_COMMENT;
                if !is_ws && !is_comment {
                    ctx.buf.push_str(&text);
                }
                iter.next();
            }
        }
    }

    for (unclosed, span) in stack.unclosed() {
        let ch = match unclosed {
            Delim::Paren => ')',
            Delim::Angle => '>',
            Delim::Bracket => ']',
            Delim::Brace => '}',
        };
        errors.push(ParseError::UnclosedDelimiter {
            delimiter: ch,
            span,
        });
    }

    (buf.trim().to_string(), errors)
}
```
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix.
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

Deep, Nested Complexity

What led to degradation? parse_type_expr has a nested complexity depth of 5, threshold = 4.

Deep nested logic means that you have control structures like if-statements or loops inside other control structures. Deep nested logic increases the cognitive load on the programmer reading the code. The human working memory has a maximum capacity of 3-4 items; beyond that threshold, we struggle with keeping things in our head. Consequently, deep nested logic has a strong correlation to defects and accounts for roughly 20% of all programming mistakes. CodeScene measures the maximum nesting depth inside each function. The deeper the nesting, the lower the code health. The threshold for the Rust language is 5 levels of nesting. Occasionally, it's possible to get rid of the nested logic by Replacing Conditionals with Guard Clauses. Another viable strategy is to identify smaller building blocks inside the nested chunks of logic and extract those responsibilities into smaller, cohesive, and well-named functions. The EXTRACT FUNCTION refactoring explains the steps.
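The guard-clause refactoring that the CodeScene note recommends can be shown with a small, self-contained sketch. The discount example is invented for illustration; it is not project code:

```rust
// Nested version: three levels of `if` stacked inside one another.
fn discount_nested(age: u32, is_member: bool, total: f64) -> f64 {
    if total > 0.0 {
        if is_member {
            if age >= 65 {
                total * 0.80
            } else {
                total * 0.90
            }
        } else {
            total
        }
    } else {
        0.0
    }
}

// Guard-clause version: each early return peels off one nesting level,
// leaving a flat sequence of conditions that reads top to bottom.
fn discount_guarded(age: u32, is_member: bool, total: f64) -> f64 {
    if total <= 0.0 {
        return 0.0;
    }
    if !is_member {
        return total;
    }
    if age >= 65 {
        return total * 0.80;
    }
    total * 0.90
}

fn main() {
    // Both versions compute the same result for every input.
    for (age, member, total) in [
        (70, true, 100.0),
        (30, true, 100.0),
        (30, false, 100.0),
        (30, false, 0.0),
    ] {
        assert_eq!(
            discount_nested(age, member, total),
            discount_guarded(age, member, total)
        );
    }
}
```

The same mechanical move applies to parse_type_expr: invert each condition, exit (break or return) early, and the remaining happy path loses one indentation level per guard.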
Suggested Fix

Refactoring Strategy: Extract smaller, focused functions to reduce nesting depth.

Updated Analysis

The code is already partially well-structured with helper functions, but the main complexity stems from the nested pattern matching structure.

Revised Fix Suggestion

Primary Strategy: Replace guard clauses with early dispatch and flatten the token handling logic.
Updated AI Coding Agent Prompt

With direct pattern matching:

```rust
SyntaxKind::T_LPAREN | SyntaxKind::T_LT | SyntaxKind::T_SHL |
SyntaxKind::T_LBRACKET | SyntaxKind::T_LBRACE => {
    handle_opening_delimiter(t, &mut ctx);
    iter.next();
}
```

TARGET STRUCTURE:

```rust
pub(super) fn parse_type_expr<I>(iter: &mut std::iter::Peekable<I>) -> (String, Vec<ParseError>) {
    // Setup code (depth 0)
    while let Some(e) = iter.peek() {                          // depth 1
        match e {                                              // depth 2
            NodeOrToken::Token(t) => {
                match handle_token(t, &mut ctx, &mut errors) { // depth 3
                    TokenAction::Continue => { iter.next(); }
                    TokenAction::Break => break,
                    TokenAction::Skip => {}
                }
            }
            NodeOrToken::Node(n) => handle_node(n, &mut ctx),  // depth 3
        }
    }
    let cleanup_errors = handle_unclosed_delimiters(stack);
    // Return statement
}
```

CONSTRAINTS: The goal is to flatten the nested guard clauses and conditional logic while preserving all functionality.
@leynos I've analyzed the function.

Analysis

The function has excessive nesting due to:

Suggested Fix

Extract the following helper functions to reduce nesting:

// Helper function for parsing array types
fn parse_array_type(tokens: &mut TokenIter) -> Result<TypeExpr, ParseError> {
let mut dimensions = Vec::new();
while let Some(Token { kind: TokenKind::LeftBracket, .. }) = tokens.peek() {
tokens.next(); // consume '['
let dimension = if let Some(Token { kind: TokenKind::RightBracket, .. }) = tokens.peek() {
None // empty brackets []
} else {
// Parse dimension expression
let mut bracket_depth = 0;
let mut dimension_tokens = Vec::new();
while let Some(token) = tokens.peek() {
match token.kind {
TokenKind::LeftBracket => bracket_depth += 1,
TokenKind::RightBracket if bracket_depth == 0 => break,
TokenKind::RightBracket => bracket_depth -= 1,
_ => {}
}
dimension_tokens.push(tokens.next().unwrap());
}
if dimension_tokens.is_empty() {
None
} else {
Some(dimension_tokens)
}
};
expect_token!(tokens, TokenKind::RightBracket, "Expected ']' after array dimension")?;
dimensions.push(dimension);
}
Ok(TypeExpr::Array { dimensions })
}
// Helper function for parsing tuple types
fn parse_tuple_type(tokens: &mut TokenIter) -> Result<TypeExpr, ParseError> {
let mut elements = Vec::new();
tokens.next(); // consume '('
if let Some(Token { kind: TokenKind::RightParen, .. }) = tokens.peek() {
tokens.next(); // consume ')'
return Ok(TypeExpr::Tuple { elements });
}
loop {
elements.push(parse_type_expr(tokens)?);
match tokens.peek() {
Some(Token { kind: TokenKind::Comma, .. }) => {
tokens.next(); // consume ','
continue;
}
Some(Token { kind: TokenKind::RightParen, .. }) => {
tokens.next(); // consume ')'
break;
}
_ => return Err(ParseError::new("Expected ',' or ')' in tuple type")),
}
}
Ok(TypeExpr::Tuple { elements })
}
// Helper function for parsing function pointer types
fn parse_function_pointer_type(tokens: &mut TokenIter) -> Result<TypeExpr, ParseError> {
tokens.next(); // consume 'fn'
expect_token!(tokens, TokenKind::LeftParen, "Expected '(' after 'fn'")?;
let mut params = Vec::new();
if !matches!(tokens.peek(), Some(Token { kind: TokenKind::RightParen, .. })) {
loop {
params.push(parse_type_expr(tokens)?);
match tokens.peek() {
Some(Token { kind: TokenKind::Comma, .. }) => {
tokens.next();
continue;
}
Some(Token { kind: TokenKind::RightParen, .. }) => break,
_ => return Err(ParseError::new("Expected ',' or ')' in function parameters")),
}
}
}
expect_token!(tokens, TokenKind::RightParen, "Expected ')' after function parameters")?;
let return_type = if matches!(tokens.peek(), Some(Token { kind: TokenKind::Arrow, .. })) {
tokens.next(); // consume '->'
Some(Box::new(parse_type_expr(tokens)?))
} else {
None
};
Ok(TypeExpr::FunctionPointer {
params,
return_type,
})
}
// Simplified main function
pub fn parse_type_expr(tokens: &mut TokenIter) -> Result<TypeExpr, ParseError> {
let base_type = match tokens.peek() {
Some(Token { kind: TokenKind::Identifier(name), .. }) => {
let name = name.clone();
tokens.next();
TypeExpr::Named { name }
}
Some(Token { kind: TokenKind::LeftParen, .. }) => {
return parse_tuple_type(tokens);
}
Some(Token { kind: TokenKind::Fn, .. }) => {
return parse_function_pointer_type(tokens);
}
Some(Token { kind: TokenKind::Star, .. }) => {
tokens.next();
let pointee = Box::new(parse_type_expr(tokens)?);
TypeExpr::Pointer { pointee }
}
_ => return Err(ParseError::new("Expected type expression")),
};
// Handle array dimensions
if matches!(tokens.peek(), Some(Token { kind: TokenKind::LeftBracket, .. })) {
let array_type = parse_array_type(tokens)?;
if let TypeExpr::Array { dimensions } = array_type {
Ok(TypeExpr::Array {
base_type: Some(Box::new(base_type)),
dimensions
})
} else {
unreachable!()
}
} else {
Ok(base_type)
}
}

AI Coding Agent Prompt

This refactoring reduces the nesting depth by extracting complex parsing logic into dedicated helper functions, each with a single responsibility. The main function becomes much more readable and maintainable.

@leynos Thank you for the clarification! After examining the actual code, I can see the nested complexity issue more clearly. Here's my analysis and suggested fix:

Analysis of Current Nested Complexity
Suggested Fix

Extract token processing logic into focused helper functions:

/// Process a single token during type expression parsing
fn process_token<I>(
token: &rowan::SyntaxToken<DdlogLanguage>,
iter: &mut std::iter::Peekable<I>,
ctx: &mut TokenParseContext<'_>,
errors: &mut Vec<ParseError>,
) -> bool
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
match token.kind() {
kind if is_opening_delimiter(kind) => {
process_opening_delimiter(token, iter, ctx)
}
kind if is_closing_delimiter(kind) => {
process_closing_delimiter(token, iter, ctx, errors)
}
kind if should_break_parsing(kind, ctx.stack.is_empty()) => false,
_ => {
push(token, ctx);
iter.next();
true
}
}
}
/// Handle opening delimiter with simplified logic
fn process_opening_delimiter<I>(
token: &rowan::SyntaxToken<DdlogLanguage>,
iter: &mut std::iter::Peekable<I>,
ctx: &mut TokenParseContext<'_>,
) -> bool
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
if handle_opening_delimiter(token, ctx) {
iter.next();
true
} else {
false
}
}
/// Handle closing delimiter with simplified logic
fn process_closing_delimiter<I>(
token: &rowan::SyntaxToken<DdlogLanguage>,
iter: &mut std::iter::Peekable<I>,
ctx: &mut TokenParseContext<'_>,
errors: &mut Vec<ParseError>,
) -> bool
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
if handle_closing_delimiter(token, ctx, errors) {
iter.next();
true
} else {
false
}
}
/// Process a syntax node during parsing
fn process_node<I>(
node: &rowan::SyntaxNode<DdlogLanguage>,
iter: &mut std::iter::Peekable<I>,
ctx: &mut TokenParseContext<'_>,
)
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
let text = node.text().to_string();
let is_ws = text.chars().all(char::is_whitespace);
let is_comment = node.kind() == SyntaxKind::T_COMMENT;
if !is_ws && !is_comment {
ctx.buf.push_str(&text);
}
iter.next();
}
/// Simplified main parse function
pub(super) fn parse_type_expr<I>(iter: &mut std::iter::Peekable<I>) -> (String, Vec<ParseError>)
where
I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
use rowan::NodeOrToken;
let mut buf = String::new();
let mut errors = Vec::new();
let mut stack = DelimStack::default();
let mut ctx = TokenParseContext::new(&mut buf, &mut stack);
while let Some(e) = iter.peek() {
let should_continue = match e {
NodeOrToken::Token(t) => process_token(t, iter, &mut ctx, &mut errors),
NodeOrToken::Node(n) => {
process_node(n, iter, &mut ctx);
true
}
};
if !should_continue {
break;
}
}
// Handle unclosed delimiters
add_unclosed_delimiter_errors(&stack, &mut errors);
(buf.trim().to_string(), errors)
}
/// Extract unclosed delimiter error handling
fn add_unclosed_delimiter_errors(stack: &DelimStack, errors: &mut Vec<ParseError>) {
for (unclosed, span) in stack.unclosed() {
let ch = match unclosed {
Delim::Paren => ')',
Delim::Angle => '>',
Delim::Bracket => ']',
Delim::Brace => '}',
};
errors.push(ParseError::UnclosedDelimiter {
delimiter: ch,
span,
});
}
}

AI Coding Agent Prompt

This approach reduces nesting by extracting the complex token processing logic into dedicated helper functions, making the main function a simple coordinator that handles the overall parsing flow.
Force-pushed from 79cb179 to 2237225 (Compare)

* Refactor type expr parser
* Fix type parser borrows: clone peeked elements to avoid overlapping mutable borrows and drain delimiter stack for unclosed errors. Expose colon-skipping helper for tests.
* Inline token processing in type parser
Code Health Improved (1 file improved in Code Health)

Gates Failed
* New code is healthy (1 new file with code health below 9.00)
* Enforce critical code health rules (2 files with Bumpy Road Ahead, Deep, Nested Complexity)
* Enforce advisory code health rules (1 file with Complex Method)

Gates Passed
* 3 Quality Gates Passed

See analysis details in CodeScene

Reason for failure
| New code is healthy | Violations | Code Health Impact | |
|---|---|---|---|
| type_expr.rs | 3 rules | 10.00 → 8.91 | Suppress |
| Enforce critical code health rules | Violations | Code Health Impact | |
|---|---|---|---|
| type_expr.rs | 2 critical rules | 10.00 → 8.91 | Suppress |
| outputs.rs | 1 critical rule | 10.00 → 9.84 | Suppress |
| Enforce advisory code health rules | Violations | Code Health Impact | |
|---|---|---|---|
| type_expr.rs | 1 advisory rule | 10.00 → 8.91 | Suppress |
View Improvements
| File | Code Health Impact | Categories Improved |
|---|---|---|
| params.rs | 7.03 → 8.50 | Complex Method, Bumpy Road Ahead, Overall Code Complexity, Deep, Nested Complexity |
Quality Gate Profile: Pay Down Tech Debt
Want more control? Customize Code Health rules or catch issues early with our IDE extension and CLI tool.
```rust
pub(super) fn parse_type_expr<I>(iter: &mut std::iter::Peekable<I>) -> (String, Vec<ParseError>)
where
    I: Iterator<Item = SyntaxElement<DdlogLanguage>>,
{
    use rowan::NodeOrToken;

    let mut buf = String::new();
    let mut errors = Vec::new();
    let mut stack = DelimStack::default();
    let mut ctx = TokenParseContext::new(&mut buf, &mut stack);

    while let Some(e) = iter.peek() {
        let cont = match e {
            NodeOrToken::Token(t) => {
                let kind = t.kind();
                if is_opening_delimiter(kind) {
                    if handle_opening_delimiter(t, &mut ctx) {
                        iter.next();
                    }
                    true
                } else if is_closing_delimiter(kind) {
                    if handle_closing_delimiter(t, &mut ctx, &mut errors) {
                        iter.next();
                        true
                    } else {
                        false
                    }
                } else if should_break_parsing(kind, ctx.stack.is_empty()) {
                    false
                } else {
                    push(t, &mut ctx);
                    iter.next();
                    true
                }
            }
            NodeOrToken::Node(n) => {
                let text = n.text().to_string();
                let is_ws = text.chars().all(char::is_whitespace);
                let is_comment = n.kind() == SyntaxKind::T_COMMENT;
                if !is_ws && !is_comment {
                    ctx.buf.push_str(&text);
                }
                iter.next();
                true
            }
        };
        if !cont {
            break;
        }
    }

    for (delim, span) in stack.unclosed() {
        let delimiter = match delim {
            Delim::Paren => ')',
            Delim::Angle => '>',
            Delim::Bracket => ']',
            Delim::Brace => '}',
        };
        errors.push(ParseError::UnclosedDelimiter { delimiter, span });
    }

    (buf.trim().to_string(), errors)
}
```
❌ New issue: Complex Method
parse_type_expr has a cyclomatic complexity of 13, threshold = 9
❌ New issue: Bumpy Road Ahead
parse_type_expr has 3 blocks with nested conditional logic. Any nesting of 2 or deeper is considered. Threshold is one single, nested block per function
❌ New issue: Deep, Nested Complexity
parse_type_expr has a nested complexity depth of 4, threshold = 4
Restarting due to overly complex merge.
Summary

* Split the type parsing utilities into dedicated outputs, params, and type_expr modules

Testing

* make fmt
* make lint
* make test

https://chatgpt.com/codex/tasks/task_e_689fcc31614083228961bb1fb2d96cbe

Summary by Sourcery

Refactor and reorganize type parsing utilities into dedicated modules to improve code clarity and maintainability.

Enhancements:
* Move type expression parsing into a new type_expr module
* Move output list parsing into a new outputs module
* Move parameter parsing into a new params module
* Update parse_utils exports to reference the new modules instead of the old monolithic file

Tests:
* Add tests for the outputs module
* Add tests for the params module
* Add tests for the type_expr module

Chores:
* Remove the old monolithic type_parsing.rs file