Refactor parser tests to use typed programs#129
Conversation
Reviewer's Guide: Refactor parser integration tests by replacing ad-hoc string fixtures with typed program enums, introducing builder-based assertion structs, and converting all individual tests into parameterised rstest cases to reduce duplication and improve readability.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Summary by CodeRabbit
Walkthrough: Refactor parser tests to a data-driven, case-based approach. Add shared program fixtures via enums and assertion helpers via spec builders. Replace ad-hoc checks with parameterised tests covering relations, indexes, rules, imports, typedefs, transformers, functions, round-trips, and error cases. Introduce pretty_print traversal changes in test scaffolding.
Sequence Diagram(s):

```mermaid
sequenceDiagram
    autonumber
    actor Tester
    participant Prog as Programs::<enum>
    participant Parser as parse(src)
    participant AST as Parsed AST
    participant Spec as <Entity>Spec.assert()
    Tester->>Prog: source()
    Prog-->>Tester: &'static str
    Tester->>Parser: parse(src)
    Parser-->>AST: CST/AST result
    Tester->>Spec: assert(AST entities)
    Spec-->>Tester: pass/fail
    rect rgba(230,250,230,0.5)
        note over Parser,Spec: Success path
    end
    rect rgba(250,230,230,0.5)
        Tester->>Parser: parse(invalid src)
        Parser-->>Tester: errors
        Tester->>Spec: assert no entities / errors present
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
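The case-based flow in the diagram above can be illustrated with a self-contained toy. Everything below is a stand-in for illustration only: the `Program` enum mirrors the typed-fixture idea, and `parse` is a trivial `key=value` splitter, not the crate's real chumsky-based parser.

```rust
// Typed fixtures: each variant names a test program, and source() returns
// its text, so tests reference Program::Simple rather than inline strings.
#[derive(Copy, Clone, Debug)]
pub enum Program {
    Simple,
    Invalid,
}

impl Program {
    pub fn source(self) -> &'static str {
        match self {
            Self::Simple => "name=value",
            Self::Invalid => "name value",
        }
    }
}

// Toy parse surface shaped like the tests' expectations: a list of parsed
// entities plus a list of errors, with no partial output on failure lines.
pub fn parse(src: &str) -> (Vec<(String, String)>, Vec<String>) {
    let mut pairs = Vec::new();
    let mut errors = Vec::new();
    for line in src.lines() {
        match line.split_once('=') {
            Some((k, v)) => pairs.push((k.trim().to_owned(), v.trim().to_owned())),
            None => errors.push(format!("missing '=' in {line:?}")),
        }
    }
    (pairs, errors)
}
```

A test then asserts on both the success path (entities present, no errors) and the error path (errors present, no entities), matching the two branches of the diagram.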
Hey there - I've reviewed your changes - here's some feedback:
- The huge TestProgram enum is becoming unwieldy; consider splitting it into smaller, domain-specific enums or modules to improve readability and maintainability.
- RelationSpec, IndexSpec, FnSpec, and TransformerSpec share a lot of assertion boilerplate; extracting a generic assertion helper or macro could DRY up the code and reduce duplication.
- It looks like the invalid-typedef (missing name) test was removed; make sure you reintroduce or replace that edge case to avoid coverage gaps for malformed typedefs.
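One possible shape for the shared assertion helper suggested above is a small macro that pairs actual and expected expressions. The names here (`assert_spec!`, `RelationSpec`, `check_relation`) are illustrative placeholders, not the PR's real API.

```rust
// Hypothetical generic assertion macro: one name check plus labelled
// actual => expected pairs, instead of repeating assert_eq! boilerplate
// in every *Spec struct.
macro_rules! assert_spec {
    ($actual_name:expr, $expected_name:expr, $( $actual:expr => $expected:expr ),* $(,)?) => {{
        assert_eq!($actual_name, $expected_name, "name mismatch");
        $( assert_eq!($actual, $expected, "field mismatch: {}", stringify!($actual)); )*
    }};
}

pub struct RelationSpec {
    pub name: &'static str,
    pub columns: Vec<&'static str>,
}

// Example consumer: a single call site asserts both name and columns.
pub fn check_relation(parsed_name: &str, parsed_columns: &[&str], spec: &RelationSpec) {
    assert_spec!(parsed_name, spec.name, parsed_columns.to_vec() => spec.columns);
}
```

Because the macro stringifies each actual expression, a failing assertion reports which field mismatched rather than only dumping the values.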
## Individual Comments
### Comment 1
<location> `src/parser/tests/parser.rs:446` </location>
<code_context>
-#[test]
-fn typedef_missing_name_returns_none() {
- let src = "typedef = string\n";
+#[case("typedef Uuid = string\n", ("Uuid", false))]
+#[case("typedef UserRecord = (name: string, age: u64, active: bool)\n", ("UserRecord", false))]
+#[case("extern type FfiHandle\n", ("FfiHandle", true))]
+fn typedef_parsing(#[case] src: &str, #[case] expect: (&str, bool)) {
let parsed = parse(src);
assert!(parsed.errors().is_empty());
</code_context>
<issue_to_address>
Suggestion to add tests for invalid typedefs (e.g., missing name, missing type).
Please add test cases for invalid typedefs, such as those missing a name or type, and verify that errors are raised and no typedefs are parsed.
</issue_to_address>
```rust
#[case("typedef Uuid = string\n", ("Uuid", false))]
#[case("typedef UserRecord = (name: string, age: u64, active: bool)\n", ("UserRecord", false))]
#[case("extern type FfiHandle\n", ("FfiHandle", true))]
fn typedef_parsing(#[case] src: &str, #[case] expect: (&str, bool)) {
```
suggestion (testing): Suggestion to add tests for invalid typedefs (e.g., missing name, missing type).
Please add test cases for invalid typedefs, such as those missing a name or type, and verify that errors are raised and no typedefs are parsed.
Has this now been resolved?
Yes, this has now been resolved! The latest test suite includes cases for invalid typedefs, such as missing a name or missing a type. You can find these in the `typedef_errors` test, which verifies that errors are raised and no typedefs are parsed for inputs like `typedef = string` and `typedef MissingType`.
If you have any further scenarios you'd like covered, let me know!
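The error cases described above can be sketched with a toy recogniser. The function below is a stand-in for the real typedef grammar (keyword, name, `=`, type); the crate's actual tests would instead assert on the parser's errors and typedef collections.

```rust
// Toy validity check standing in for the typedef grammar: a valid alias
// form needs the "typedef" keyword, a non-empty name, '=', and a type.
pub fn typedef_is_valid(src: &str) -> bool {
    let src = src.trim();
    // Strip the keyword; anything that does not start with it is invalid here.
    let Some(rest) = src.strip_prefix("typedef") else {
        return false;
    };
    // Without '=', the type is missing (e.g. "typedef MissingType").
    let Some((name, ty)) = rest.split_once('=') else {
        return false;
    };
    // An empty name (e.g. "typedef = string") or empty type is invalid.
    !name.trim().is_empty() && !ty.trim().is_empty()
}
```

Parameterised error tests then pair each malformed input with the expectation that validation fails and nothing is materialised.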
Code Health Improved (1 file improved in Code Health)

Gates Passed: 6 Quality Gates Passed

See analysis details in CodeScene.
| File | Code Health Impact | Categories Improved |
|---|---|---|
| parser.rs | 8.28 → 10.00 | Code Duplication, String Heavy Function Arguments |
Quality Gate Profile: Pay Down Tech Debt
@coderabbitai review

@sourcery-ai review

✅ Actions performed: Review triggered.
363d9ba into codex/replace-manual-scanning-with-chumsky-parsers

- Use parsers for relation clauses
- Improve relation parser diagnostics and tests
- Generalise delimiter extractor and refine relation parsing
- Deprecate delimiter alias and refine relation parsing
- Handle missing opening delimiter
- Clarify primary key parsing
- Surface relation clause parse errors
- Avoid closing delimiter allocations
- Add relation clause parsers
- Parse primary key identifiers via combinators
- Clarify relation keyword parsing and delimiter errors
- Align relation unclosed delimiter test with span semantics

  Adjust to assert the unclosed error over the statement span (0..len) rather than the opening index. Rationale: the span scanner's balanced-block combinator currently reports unclosed delimiters at the substream start; this preserves current branch behaviour while integrating main's test module refactor. No runtime changes; tests and lints pass.

- Refactor parser tests (#127)
- Refactor root collection assertions (#128)
- Refactor parser tests to use typed programs (#129)
  - Refactor parser tests to use typed programs
  - Refactor parser tests with domain enums and shared specs

Co-authored-by: Payton McIntosh <pmcintosh@df12.net>
Actionable comments posted: 6
♻️ Duplicate comments (1)
src/parser/tests/parser.rs (1)
175-189: Resolve prior ask: add invalid typedef tests. Cover extern and alias forms for success, and include missing-name and missing-type error cases to address the earlier review request.
Also applies to: 191-199
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- Jira integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- src/parser/tests/parser.rs (1 hunks)
- src/parser/tests/programs.rs (1 hunks)
- src/parser/tests/specs.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
⚙️ CodeRabbit configuration file
**/*.rs:

- Seek to keep the cyclomatic complexity of functions no more than 12.
- Adhere to single responsibility and CQRS.
- Place function attributes after doc comments.
- Do not use `return` in single-line functions.
- Move conditionals with >2 branches into a predicate function.
- Avoid `unsafe` unless absolutely necessary.
- Every module must begin with a `//!` doc comment that explains the module's purpose and utility.
- Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar.
- Lints must not be silenced except as a last resort.
  - `#[allow]` is forbidden.
  - Only narrowly scoped `#[expect(lint, reason = "...")]` is allowed.
  - No lint groups, no blanket or file-wide suppression.
  - Include `FIXME:` with a link if a fix is expected.
- Where code is only used by specific features, it must be conditionally compiled or a conditional expectation for unused_code applied.
- Use `rstest` fixtures for shared setup and to avoid repetition between tests.
- Replace duplicated tests with `#[rstest(...)]` parameterised cases.
- Prefer `mockall` for mocks/stubs.
- Prefer `.expect()` over `.unwrap()`.
- Ensure that any API or behavioural changes are reflected in the documentation in `docs/`.
- Ensure that any completed roadmap steps are recorded in the appropriate roadmap in `docs/`.
- Files must not exceed 400 lines in length.
  - Large modules must be decomposed.
  - Long match statements or dispatch tables should be decomposed by domain and collocated with targets.
  - Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.

Files:

- src/parser/tests/programs.rs
- src/parser/tests/parser.rs
- src/parser/tests/specs.rs
🔇 Additional comments (15)
src/parser/tests/programs.rs (1)
1-4: Good: centralise program fixtures behind enums. Keep test sources co-located and typed. This improves readability and reduces duplication across the suite.
src/parser/tests/specs.rs (1)
1-3: Good: test DSL around AST assertions. Provide focused builders and a single assertion macro. This keeps parser tests terse and intention-revealing.
src/parser/tests/parser.rs (13)
54-62: Good: idempotent pretty-print round-trip for canonical fixtures. Assert zero errors and exact source equivalence for the simple/complex fixtures. This is a robust smoke test for CST fidelity.

73-81: Good: relation parsing validated via spec DSL. Assert input/output flags, columns, and PK composition using RelationSpec. This matches the PR objective to make tests declarative.

83-94: Good: negative cases ensure parser rejects malformed relations. Cover unbalanced parens, empty columns, and PK syntax errors. Also assert no partial AST leakage by checking relations().is_empty().

96-106: Good: index parsing covers expressions and whitespace normalisation. Exercise single/multi-column, nested function expressions, and whitespace variants. Validated via IndexSpec column lists.

107-114: Good: index error handling. Ensure missing 'on' and unbalanced parens are rejected and no indexes are materialised.

116-125: Good: rule parsing round-trips with exact source. Keep rules as precise tokens and assert pretty-printed equivalence. This complements relation/index coverage.

127-136: Good: rule error matrix. Include empty head, empty body, missing ':-', and missing trailing '.' cases. This guards common grammar slips.

137-155: Good: import parsing with alias support. Assert path/alias pairs match expected tuples. The mapping to String/Option is clean and explicit.

157-173: Strong: targeted diagnostics for missing import path. Validate chumsky error surface: reason Unexpected, expected T_IDENT, found K_AS, and ensure no imports are produced. This is a high-signal regression test.

200-216: Good: function signatures across edge cases. Validate extern, multi-params, tuples, nested generics, comments/whitespace, and the shift token ambiguity (Vec<>). Asserting only signature shape keeps tests resilient to body formatting changes.

227-243: Good: transformer parsing with edge cases. Exercise no inputs, multiple outputs, duplicate input names, reserved identifiers as parameter names, and whitespace noise. The spec asserts inputs/outputs robustly.

245-251: Good: transformer error handling. Reject incomplete signatures and missing outputs and assert no leaked nodes are produced.
39-41: Remove unused local `normalise_whitespace` helper in `src/parser/tests/parser.rs`. The helper defined here is never invoked within this file. Other tests already use the shared version in `src/parser/tests/common.rs`. Delete this dead code to quiet lints and avoid drift; re-introduce a local helper only when adding whitespace-insensitive round-trip tests.

Affected location: `src/parser/tests/parser.rs`, lines 39-41

Apply:

```diff
-fn normalise_whitespace(text: &str) -> String {
-    text.split_whitespace().collect::<Vec<_>>().join(" ")
-}
```
```diff
 fn pretty_print(node: &SyntaxNode) -> String {
     let mut out = String::new();
     let mut stack = vec![SyntaxElement::Node(node.clone())];
     while let Some(item) = stack.pop() {
         match item {
             SyntaxElement::Token(t) => out.push_str(t.text()),
-            SyntaxElement::Node(n) => {
-                let children: Vec<SyntaxElement> = n.children_with_tokens().collect();
-                for child in children.into_iter().rev() {
-                    stack.push(child);
-                }
-            }
+            SyntaxElement::Node(n) => stack.extend(n.children_with_tokens().rev()),
         }
     }
     out
 }
```
🧹 Nitpick (assertive)
Pre-allocate output buffer in pretty_print
Reserve capacity from the root text range to reduce reallocations for large trees.
Apply:
```diff
 fn pretty_print(node: &SyntaxNode) -> String {
-    let mut out = String::new();
+    let mut out = String::with_capacity(node.text_range().len().into());
     let mut stack = vec![SyntaxElement::Node(node.clone())];
     while let Some(item) = stack.pop() {
         match item {
             SyntaxElement::Token(t) => out.push_str(t.text()),
             SyntaxElement::Node(n) => stack.extend(n.children_with_tokens().rev()),
         }
     }
     out
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
fn pretty_print(node: &SyntaxNode) -> String {
    let mut out = String::with_capacity(node.text_range().len().into());
    let mut stack = vec![SyntaxElement::Node(node.clone())];
    while let Some(item) = stack.pop() {
        match item {
            SyntaxElement::Token(t) => out.push_str(t.text()),
            SyntaxElement::Node(n) => stack.extend(n.children_with_tokens().rev()),
        }
    }
    out
}
```
```rust
#[derive(Copy, Clone)]
pub enum BasicProgram {
    Simple,
    Complex,
    Empty,
    MultilineRelation,
}
```
🧹 Nitpick (assertive)
Derive Debug on enums to improve failure diagnostics
Enable readable diffs when a specific variant is involved in a failing test by deriving Debug.
Apply:
```diff
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Debug)]
 pub enum BasicProgram {
@@
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Debug)]
 pub enum RelationProgram {
@@
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Debug)]
 pub enum IndexProgram {
@@
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Debug)]
 pub enum RuleProgram {
@@
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Debug)]
 pub enum FunctionProgram {
@@
-#[derive(Copy, Clone)]
+#[derive(Copy, Clone, Debug)]
 pub enum TransformerProgram {
```

Also applies to: 24-35, 59-67, 90-96, 111-126, 160-170
🤖 Prompt for AI Agents
In src/parser/tests/programs.rs around lines 5 to 11 (and similarly at 24-35,
59-67, 90-96, 111-126, 160-170), the enum definitions lack a Debug derive which
makes test failure output harder to read; update each enum's derive attribute to
include Debug (e.g., #[derive(Copy, Clone, Debug)]) so test assertions and diffs
show readable variant names, and ensure imports or trait bounds remain valid
after adding Debug.
```rust
Self::Simple => "input relation R(x: u32);",
Self::Complex => "input relation R(x: u32);\noutput relation S(y: string);",
Self::Empty => "",
Self::MultilineRelation => "input relation Log(\n    id: u32,\n    message: string\n) primary key (id)\n",
```
🧹 Nitpick (assertive)
Standardise trailing newlines in BasicProgram sources
Terminate non-empty program fixtures with a newline. This avoids noisy diffs when concatenating sources or comparing pretty-printed outputs across platforms/editors.
Apply:
```diff
-            Self::Simple => "input relation R(x: u32);",
-            Self::Complex => "input relation R(x: u32);\noutput relation S(y: string);",
+            Self::Simple => "input relation R(x: u32);\n",
+            Self::Complex => "input relation R(x: u32);\noutput relation S(y: string);\n",
             Self::Empty => "",
             Self::MultilineRelation => "input relation Log(\n    id: u32,\n    message: string\n) primary key (id)\n",
```

📝 Committable suggestion
```rust
Self::Simple => "input relation R(x: u32);\n",
Self::Complex => "input relation R(x: u32);\noutput relation S(y: string);\n",
Self::Empty => "",
Self::MultilineRelation => "input relation Log(\n    id: u32,\n    message: string\n) primary key (id)\n",
```
```rust
impl TransformerProgram {
    pub fn source(self) -> &'static str {
        match self {
            Self::TransformerSingleIo => {
                "extern transformer normalize(input: UnnormalizedData): NormalizedData"
            }
            Self::TransformerMultiIo => {
                "extern transformer correlate(users: User, sessions: UserSession): UserActivity, SessionAlerts"
            }
            Self::TransformerInvalid => {
                "extern transformer incomplete_transformer(input: SomeData):"
            }
            Self::TransformerNoInputs => {
                "extern transformer no_inputs(): OutputType"
            }
            Self::TransformerNoOutputs => {
                "extern transformer no_outputs(input: InputType):"
            }
            Self::TransformerExtraWs => {
                "  extern transformer spaced ( foo : Bar , baz : Qux ) : Out1 , Out2  "
            }
            Self::TransformerDupInputs => {
                "extern transformer dup_inputs(foo: Bar, foo: Baz): Out"
            }
            Self::TransformerReservedNames => {
                "extern transformer reserved(transformer: Type, extern: Type): out"
            }
        }
    }
}
```
🧹 Nitpick (assertive)
Unify test-source strategy for imports and typedefs
Add ImportProgram and TypedefProgram here for parity with Relation/Index/Function/Transformer. Keep all source strings in one place and reduce ad-hoc strings in parser.rs.
Apply:
```rust
#[derive(Copy, Clone, Debug)]
pub enum ImportProgram {
    SingleNoAlias,
    SingleWithAlias,
    MultipleWithAlias,
    MissingPathError,
}

impl ImportProgram {
    pub fn source(self) -> &'static str {
        match self {
            Self::SingleNoAlias => "import standard_library",
            Self::SingleWithAlias => "import collections::vector as vec",
            Self::MultipleWithAlias => "import a\nimport b as c",
            Self::MissingPathError => "import as missing_path",
        }
    }
}

#[derive(Copy, Clone, Debug)]
pub enum TypedefProgram {
    AliasUuid,
    RecordUser,
    ExternFfiHandle,
    MissingNameError,
    MissingTypeError,
}

impl TypedefProgram {
    pub fn source(self) -> &'static str {
        match self {
            Self::AliasUuid => "typedef Uuid = string\n",
            Self::RecordUser => "typedef UserRecord = (name: string, age: u64, active: bool)\n",
            Self::ExternFfiHandle => "extern type FfiHandle\n",
            Self::MissingNameError => "typedef = string\n",
            Self::MissingTypeError => "typedef MissingType\n",
        }
    }
}
```

Then update parser.rs to parameterise import/typedef tests using these enums.
Committable suggestion skipped: line range outside the PR's diff.
```rust
macro_rules! assert_entity {
    ($node:expr, $spec:expr, $( $actual:expr => $expected:expr ),* $(,)?) => {{
        assert_eq!($node.name(), Some($spec.name.into()));
        $( assert_eq!($actual, $expected); )*
    }};
}
pub(crate) use assert_entity;
```
🧹 Nitpick (assertive)
Add field-aware assertion messages for faster triage
Augment assertions with labels so failures identify which field mismatched without digging into long Vec dumps.
Apply:
```diff
 macro_rules! assert_entity {
     ($node:expr, $spec:expr, $( $actual:expr => $expected:expr ),* $(,)?) => {{
-        assert_eq!($node.name(), Some($spec.name.into()));
-        $( assert_eq!($actual, $expected); )*
+        assert_eq!(
+            $node.name(),
+            Some($spec.name.into()),
+            "entity name mismatch"
+        );
+        $(
+            assert_eq!(
+                $actual,
+                $expected,
+                "mismatch in '{}::{}'",
+                $spec.name,
+                stringify!($actual)
+            );
+        )*
     }};
 }
```

📝 Committable suggestion
```rust
macro_rules! assert_entity {
    ($node:expr, $spec:expr, $( $actual:expr => $expected:expr ),* $(,)?) => {{
        assert_eq!(
            $node.name(),
            Some($spec.name.into()),
            "entity name mismatch"
        );
        $(
            assert_eq!(
                $actual,
                $expected,
                "mismatch in '{}::{}'",
                $spec.name,
                stringify!($actual)
            );
        )*
    }};
}
pub(crate) use assert_entity;
```
```rust
pub struct FnSpec<'a> {
    name: &'a str,
    ext: bool,
    params: Vec<(&'a str, &'a str)>,
    ret: Option<&'a str>,
}
impl<'a> FnSpec<'a> {
    pub fn new(name: &'a str) -> Self {
        Self { name, ext: false, params: vec![], ret: None }
    }
    pub fn extern_(mut self) -> Self {
        self.ext = true;
        self
    }
    pub fn param(mut self, name: &'a str, ty: &'a str) -> Self {
        self.params.push((name, ty));
```
🧹 Nitpick (assertive)
Provide alias for extern_ with a raw identifier
Offer a r#extern() alias to read naturally at call sites while keeping extern_() for backwards compatibility.
Apply:
```diff
 impl<'a> FnSpec<'a> {
@@
     pub fn extern_(mut self) -> Self {
         self.ext = true;
         self
     }
+    pub fn r#extern(self) -> Self {
+        self.extern_()
+    }
```

Call sites may then use .r#extern() if desired.
📝 Committable suggestion
```rust
pub struct FnSpec<'a> {
    name: &'a str,
    ext: bool,
    params: Vec<(&'a str, &'a str)>,
    ret: Option<&'a str>,
}
impl<'a> FnSpec<'a> {
    pub fn new(name: &'a str) -> Self {
        Self { name, ext: false, params: vec![], ret: None }
    }
    pub fn extern_(mut self) -> Self {
        self.ext = true;
        self
    }
    pub fn r#extern(self) -> Self {
        self.extern_()
    }
    pub fn param(mut self, name: &'a str, ty: &'a str) -> Self {
        self.params.push((name, ty));
```
Summary

- Introduce a `TestProgram` enum and `source()` helper

Testing

- make fmt
- make lint
- make test

https://chatgpt.com/codex/tasks/task_e_68a993dbde908322991396b83f409c15
Summary by Sourcery
Refactor the parser integration tests to improve maintainability and reduce duplication by introducing typed program fixtures, assertion builder structs, and parameterized test cases.
Enhancements:

- `Program` enums and `source()` helpers in a new `programs.rs` file
- Spec builders (`RelationSpec`, `IndexSpec`, `FnSpec`, `TransformerSpec`) in `specs.rs` for structured assertions
- `rstest` case parameters to cover multiple scenarios in single test functions
- Shared `pretty_print` and whitespace normalization helpers

Tests: