Conversation
Reviewer's Guide

This PR enhances error context, refactors parallel processing to collect results and print outputs and errors sequentially (preserving file order), adds comprehensive parallel CLI tests, and updates the roadmap documentation.

Sequence diagram for sequential output and error handling in parallel file processing:

```mermaid
sequenceDiagram
    actor User
    participant CLI as CLI Main
    participant FileHandler as handle_file
    participant Stdout
    participant Stderr
    User->>CLI: Run CLI with multiple files (not in-place)
    CLI->>FileHandler: handle_file(file1)
    CLI->>FileHandler: handle_file(file2)
    CLI->>FileHandler: handle_file(file3)
    Note over CLI,FileHandler: Files processed in parallel
    FileHandler-->>CLI: Result (Ok(Some(output)) or Err(error))
    loop For each file in input order
        CLI->>Stdout: Print output (if Ok)
        CLI->>Stderr: Print error (if Err)
    end
    CLI-->>User: Exit (with error if any file errored)
```
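The collect-then-report flow in the diagram can be sketched using only the standard library. This is an illustrative sketch, not the PR's code: the real implementation uses rayon's `par_iter`, and the `process` function and file names here are hypothetical stand-ins for `handle_file`.

```rust
use std::thread;

// Hypothetical stand-in for the real handle_file: fails for names
// containing "missing", otherwise returns the reflowed text.
fn process(name: &'static str) -> Result<String, String> {
    if name.contains("missing") {
        Err(format!("{name}: no such file"))
    } else {
        Ok(format!("reflowed {name}"))
    }
}

// Fan out one worker per file, then join in input order so printed
// output is deterministic regardless of which worker finishes first.
fn process_all(files: &[&'static str]) -> Vec<Result<String, String>> {
    let handles: Vec<_> = files
        .iter()
        .map(|&f| thread::spawn(move || process(f)))
        .collect();
    handles
        .into_iter()
        .map(|h| h.join().expect("worker thread panicked"))
        .collect()
}

fn main() {
    for res in process_all(&["a.md", "missing.md", "b.md"]) {
        match res {
            // Successful output goes to stdout, errors to stderr; the
            // real CLI additionally exits non-zero if any file errored.
            Ok(out) => println!("{out}"),
            Err(e) => eprintln!("{e}"),
        }
    }
}
```

Joining handles in input order is what keeps stdout deterministic even though the workers race.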
Class diagram for enhanced error context in file handling:

```mermaid
classDiagram
    class CLI {
        +Vec<Path> files
        +bool in_place
        +FormatOpts opts
    }
    class FileHandler {
        +handle_file(path: &Path, in_place: bool, opts: FormatOpts) -> Result<Option<String>>
    }
    class ErrorContext {
        +with_context(msg: String)
    }
    CLI --> FileHandler : uses
    FileHandler ..> ErrorContext : adds context to errors
```
File-Level Changes
Summary by CodeRabbit
Walkthrough

Update the documentation checklist to mark parallel test extension as complete. Refactor the main file processing logic to centralise error handling and reporting with contextual information. Add a utility for running CLI commands in tests. Introduce new integration tests to verify parallel CLI behaviour, including error scenarios and in-place file edits.
Sequence diagram:

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant FileSystem
    User->>CLI: Invoke with multiple files (possibly --in-place)
    CLI->>CLI: Process files in parallel
    loop For each file
        CLI->>FileSystem: Read file (with context)
        alt Error reading
            CLI->>CLI: Record error with file context
        else Success
            CLI->>CLI: Reflow table
            alt --in-place
                CLI->>FileSystem: Write file (with context)
                alt Error writing
                    CLI->>CLI: Record error with file context
                end
            else
                CLI->>User: Print reflowed content
            end
        end
    end
    CLI->>User: Report all errors (if any), return first error or success
```
Estimated code review effort: 2 (~15 minutes)
Hey @leynos - I've reviewed your changes and they look great!
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `tests/parallel.rs:12` </location>
<code_context>
+fn test_cli_parallel_multiple_files() {
</code_context>
<issue_to_address>
Consider adding a test for empty file list input.
Adding this test will help verify that the CLI handles empty input without errors or unexpected behavior.
Suggested implementation:
```rust
#[rstest]
fn test_cli_parallel_empty_file_list() {
// Simulate calling the CLI with an empty file list.
// This test ensures the CLI handles empty input gracefully.
let args: Vec<&str> = vec![];
let output = run_cli_with_args(args);
// Adjust the assertion below to match the expected behavior of your CLI.
// For example, if the CLI should print nothing or a specific message, assert that.
assert!(output.is_empty() || output.contains("No files provided") || output.contains("Usage"), "CLI did not handle empty file list as expected");
}
#[rstest]
fn test_cli_parallel_multiple_files() {
```
- Ensure that a helper function like `run_cli_with_args` exists and is used for invoking the CLI with arguments. If it does not exist, you will need to implement it or adapt the test to your CLI invocation method.
- Adjust the assertion in the new test to match the actual expected output or error message of your CLI when no files are provided.
</issue_to_address>
### Comment 2
<location> `tests/parallel.rs:72` </location>
<code_context>
+}
+
+#[rstest]
+fn test_cli_parallel_missing_file_in_place(broken_table: Vec<String>) {
+ let dir = tempdir().expect("failed to create temporary directory");
+ let good = dir.path().join("good.md");
+ let mut f = File::create(&good).expect("failed to create file");
+ for line in &broken_table {
+ writeln!(f, "{line}").expect("failed to write line");
+ }
+ f.flush().expect("failed to flush file");
+ drop(f);
+ let missing = dir.path().join("missing.md");
+
+ Command::cargo_bin("mdtablefix")
+ .expect("failed to create command")
+ .arg("--in-place")
+ .arg(&good)
+ .arg(&missing)
+ .assert()
+ .failure();
+}
</code_context>
<issue_to_address>
Consider asserting on the error message content for in-place failures.
Asserting that the error message contains the missing file's name will help ensure users get clear, actionable feedback.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
Command::cargo_bin("mdtablefix")
.expect("failed to create command")
.arg("--in-place")
.arg(&good)
.arg(&missing)
.assert()
.failure();
=======
let output = Command::cargo_bin("mdtablefix")
.expect("failed to create command")
.arg("--in-place")
.arg(&good)
.arg(&missing)
.output()
.expect("failed to run command");
assert!(!output.status.success());
assert!(String::from_utf8_lossy(&output.stderr).contains("missing.md"));
>>>>>>> REPLACE
</suggested_fix>
### Comment 3
<location> `src/main.rs:135` </location>
<code_context>
+ .map(|p| handle_file(p, true, cli.opts).map(|_| ()))
+ .collect();
+
+ let mut first_err: Option<anyhow::Error> = None;
+ for res in results {
+ if let Err(e) = res {
</code_context>
<issue_to_address>
Consider extracting the repeated error-handling and result-processing logic in main() into a single helper function to avoid code duplication.
Consider extracting the “collect, print all errors, track first” pattern into one helper so you no longer have two almost‐identical loops in `main()`. For example:
```rust
/// Collects all results, prints each error, remembers the first and then returns it (if any).
fn report_results<T, F>(
results: Vec<anyhow::Result<T>>,
mut on_ok: F,
) -> anyhow::Result<()>
where
F: FnMut(T),
{
let mut first_err = None;
for r in results {
match r {
Ok(v) => on_ok(v),
Err(e) => {
eprintln!("{e}");
first_err.get_or_insert(e);
}
}
}
if let Some(e) = first_err {
Err(e)
} else {
Ok(())
}
}
```
Then in your `main()` you shrink both branches to
```rust
if cli.in_place {
let results = cli.files
.par_iter()
.map(|p| handle_file(p, true, cli.opts).map(|_| ()))
.collect();
report_results(results, |_| ())?;
} else {
let results = cli.files
.par_iter()
.map(|p| handle_file(p, false, cli.opts))
.collect();
report_results(results, |maybe_out| {
if let Some(out) = maybe_out {
println!("{out}");
}
})?;
}
```
This preserves printing all errors and returns the first, but collapses your two loops + `first_err` into one small helper.
</issue_to_address>Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Actionable comments posted: 2
♻️ Duplicate comments (4)
src/main.rs (2)
93-114: Perfect implementation of the suggested error handling pattern.

This function implements exactly what was recommended in the past review comment. The logic correctly collects all results, prints each error to stderr, remembers the first error, and returns it while allowing the closure to process successful values.

152-168: Excellent refactoring that eliminates code duplication.

The parallel processing logic now uses the centralised report_results helper, maintaining sequential output order while properly collecting and reporting all errors. The separation between in-place and output modes is clean and correct.
tests/parallel.rs (2)
11-16: Good implementation of the suggested empty file list test.

This test correctly addresses the past review comment by verifying that the CLI handles empty input gracefully, reading from stdin and producing appropriate output.

49-76: Good error handling test that addresses past review feedback.

The test correctly verifies that missing file errors include the filename in stderr, as suggested in past review comments. However, use the run_cli_with_args helper for consistency.
Apply this diff:
```diff
-    let output = Command::cargo_bin("mdtablefix")
-        .expect("failed to create command")
-        .arg(&good)
-        .arg(&missing)
-        .output()
-        .expect("failed to run command");
+    let good_str = good.to_str().unwrap();
+    let missing_str = missing.to_str().unwrap();
+    let output = run_cli_with_args(&[good_str, missing_str]);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (4)
- docs/parallel-processing-roadmap.md (1 hunks)
- src/main.rs (3 hunks)
- tests/common/mod.rs (2 hunks)
- tests/parallel.rs (1 hunks)
📓 Path-based instructions (2)
**/*.md
⚙️ CodeRabbit Configuration File
- Avoid 2nd person or 1st person pronouns ("I", "you", "we")
- Use en-GB-oxendict (-ize / -our) spelling and grammar
- Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
- Code blocks should be wrapped to 120 columns.
- Headings must not be wrapped.
- Documents must start with a level 1 heading
- Headings must correctly increase or decrease by no more than one level at a time
- Use GitHub-flavoured Markdown style for footnotes and endnotes.
- Numbered footnotes must be numbered by order of appearance in the document.
Files:
docs/parallel-processing-roadmap.md
**/*.rs
⚙️ CodeRabbit Configuration File
- Seek to keep the cyclomatic complexity of functions no more than 12.
- Adhere to single responsibility and CQRS.
- Place function attributes after doc comments.
- Do not use `return` in single-line functions.
- Move conditionals with >2 branches into a predicate function.
- Avoid `unsafe` unless absolutely necessary.
- Every module must begin with a `//!` doc comment that explains the module's purpose and utility.
- Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar.
- Lints must not be silenced except as a last resort. `#[allow]` is forbidden. Only narrowly scoped `#[expect(lint, reason = "...")]` is allowed. No lint groups, no blanket or file-wide suppression. Include `FIXME:` with link if a fix is expected.
- Use `rstest` fixtures for shared setup and to avoid repetition between tests.
- Replace duplicated tests with `#[rstest(...)]` parameterised cases.
- Prefer `mockall` for mocks/stubs.
- Prefer `.expect()` over `.unwrap()`.
- Ensure that any API or behavioural changes are reflected in the documentation in `docs/`.
- Ensure that any completed roadmap steps are recorded in the appropriate roadmap in `docs/`.
- Files must not exceed 400 lines in length.
  - Large modules must be decomposed.
  - Long match statements or dispatch tables should be decomposed by domain and collocated with targets.
  - Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.
Files:
- tests/common/mod.rs
- src/main.rs
- tests/parallel.rs
🧬 Code Graph Analysis (1)
tests/parallel.rs (2)
tests/common/mod.rs (2)
- run_cli_with_args (108-114)
- broken_table (97-102)

src/table.rs (1)
- reflow_table (165-184)
🔇 Additional comments (6)
src/main.rs (2)
14-14: LGTM! Import addition supports improved error context.

The anyhow::Context import is correctly placed and necessary for the enhanced error reporting in handle_file.

79-80: Excellent error context enhancement.

The contextual error messages clearly identify which file operation failed and include the file path, making debugging much easier for users.
Also applies to: 85-86
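The effect of attaching the path to an I/O error can be illustrated without the anyhow crate. This is a dependency-free sketch of the same idea: the function name and message format are assumptions for illustration, not the PR's code.

```rust
use std::fs;
use std::path::Path;

// Mirrors what `.with_context(|| format!("failed to read {}", path.display()))`
// achieves with anyhow, using a plain String error instead of anyhow::Error.
fn read_with_context(path: &Path) -> Result<String, String> {
    fs::read_to_string(path)
        .map_err(|e| format!("failed to read {}: {e}", path.display()))
}

fn main() {
    match read_with_context(Path::new("no-such-file.md")) {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(e) => eprintln!("{e}"), // the error now names the offending file
    }
}
```

Without the added context, the bare `io::Error` would say only "No such file or directory", leaving the user to guess which of several input files failed.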
tests/common/mod.rs (2)
4-4: LGTM! Import supports the new CLI testing utility.

The assert_cmd::Command import is correctly placed and necessary for the run_cli_with_args function.

104-114: Well-implemented CLI testing utility.

The function correctly uses Command::cargo_bin to test the binary, includes proper error handling with descriptive messages, and follows the coding guidelines for expect() usage and lint expectations.
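A dependency-free approximation of such a helper may clarify its shape. This sketch substitutes `std::process::Command` and a generic program name for assert_cmd's `Command::cargo_bin` (an assumption; the real helper targets the crate's own binary and asserts success).

```rust
use std::process::Command;

// Runs a program with the given arguments and returns its captured stdout.
// The real test helper resolves the crate binary via Command::cargo_bin.
fn run_with_args(program: &str, args: &[&str]) -> String {
    let output = Command::new(program)
        .args(args)
        .output()
        .expect("failed to run command");
    String::from_utf8_lossy(&output.stdout).into_owned()
}

fn main() {
    // `echo` stands in here for the binary under test.
    println!("{}", run_with_args("echo", &["hello"]).trim());
}
```

Centralising invocation like this keeps each integration test to a single call plus assertions on the returned text.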
tests/parallel.rs (1)
1-10: Well-structured test module with proper documentation.

The module documentation clearly explains its purpose, and the imports are appropriately organised for the test functionality.
docs/parallel-processing-roadmap.md (1)
18-18: Accurate documentation update reflecting completed work.

Correctly marks the parallel test extension as completed, aligning with the comprehensive tests added in tests/parallel.rs.
Summary
Testing
- make fmt
- make lint
- make test
- make markdownlint
- make nixie

https://chatgpt.com/codex/tasks/task_e_687ec4bbb08083229853a0c53fcdf1cd
Summary by Sourcery
Implement sequential result handling for parallel file operations with enriched error context and add tests to verify output order and error messages.
Enhancements:
Documentation:
Tests: