diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index 6a8d3b3..6ed3b51 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -20,6 +20,21 @@ colored output and comprehensive logging.
 - **Preserve existing code style, naming, and patterns**
 - **Add code only when absolutely necessary to fix the specific issue**
 - **When fixing bugs, change only what's broken, not what could be improved**
+- **To understand code coverage, run `cargo tarpaulin --skip-clean`. Be patient while it runs, as it may take some time to complete.**
+
+## Testing Standards
+
+- **Maintain test-inventory.md with test cases for features; update test case status regularly**
+- **Maintain high test coverage (aim for 80%)**
+
+### Unit Tests
+
+- **Write unit tests for all new functionality**
+- **Use descriptive test names and organize tests logically**
+
+### Integration Tests
+
+- **Write integration tests for critical workflows**
 
 ## Documentation Standards
diff --git a/.gitignore b/.gitignore
index 764f293..edc02a7 100644
--- a/.gitignore
+++ b/.gitignore
@@ -27,6 +27,9 @@ Thumbs.db
 # Project specific
 cloned_repos*/
 coverage/
+*.profraw
 logs/
 config.yaml
 tarpaulin-report.html
+test-*/
+tests/test-recipes.yaml
diff --git a/README.md b/README.md
index 11f9cc8..b273423 100644
--- a/README.md
+++ b/README.md
@@ -33,6 +33,10 @@ multiple repositories simultaneously with the `--parallel` flag.
 repositories with a single command.
 - **Comprehensive Logging**: Every command run is logged, with detailed,
 per-repository output files for easy debugging.
+- **Reusable Command Recipes**: Define multi-step scripts in your config and run
+them across repositories with a simple name.
+- **Extensible Plugin System**: Add custom commands by creating simple
+`repos-<name>` executables in your `PATH`.
 - **Built in Rust**: Fast, memory-safe, and reliable.
 
 ## Installation
@@ -98,7 +102,7 @@ overview. Click on a command to see its detailed documentation.
 | Command | Description |
 |---|---|
 | [**`clone`**](./docs/commands/clone.md) | Clones repositories from your config file. |
-| [**`run`**](./docs/commands/run.md) | Runs a shell command in each repository. |
+| [**`run`**](./docs/commands/run.md) | Runs a shell command or a pre-defined recipe in each repository. |
 | [**`pr`**](./docs/commands/pr.md) | Creates pull requests for repositories with changes. |
 | [**`rm`**](./docs/commands/rm.md) | Removes cloned repositories from your local disk. |
 | [**`init`**](./docs/commands/init.md) | Generates a `config.yaml` file from local Git repositories. |
@@ -128,8 +132,28 @@ repositories:
     url: git@github-enterprise:company/project.git
     tags: [enterprise, backend]
 # GitHub Enterprise and custom SSH configurations are supported
+
+recipes:
+  - name: setup
+    steps:
+      - git checkout main
+      - git pull
+      - ./scripts/setup.sh
 ```
+
+## Plugins
+
+`repos` supports an extensible plugin system that allows you to add new
+functionality without modifying the core codebase. Any executable in your `PATH`
+named `repos-<name>` can be invoked as a subcommand.
+
+- **List available plugins**: `repos --list-plugins`
+- **Execute a plugin**: `repos <name> [args...]`
+
+This allows for powerful, custom workflows written in any language. For a
+detailed guide on creating and using plugins, see the
+[Plugin System Documentation](./docs/plugins.md).
+
 ## Docker Image
 
 You can use `repos` within a Docker container, which is great for CI/CD
diff --git a/docs/commands/run.md b/docs/commands/run.md
index b8bebf0..416f5c7 100644
--- a/docs/commands/run.md
+++ b/docs/commands/run.md
@@ -1,28 +1,33 @@
 # repos run
 
-The `run` command executes a shell command in each of the specified
-repositories.
+The `run` command executes a shell command or a named recipe in each of the
+specified repositories.
 
 ## Usage
 
 ```bash
-repos run [OPTIONS] <COMMAND> [REPOS]...
+repos run [OPTIONS] <COMMAND_OR_RECIPE> [REPOS]...
 ```
 
 ## Description
 
 This is one of the most powerful commands in `repos`, allowing you to automate
 tasks across hundreds or thousands of repositories at once. You can run any
-shell command, from simple `ls` to complex `docker build` or `terraform apply`
-scripts.
+shell command, from simple `ls` to complex `docker build` scripts.
 
-By default, the output of each command is logged to a file in the `logs/runs/`
+Additionally, you can define **recipes**—multi-step scripts—in your
+`config.yaml` and execute them by name. This is perfect for standardizing
+complex workflows like dependency updates, code generation, or release
+preparation.
+
+By default, the output of each command is logged to a file in the `output/runs/`
 directory, but this can be disabled.
 
 ## Arguments
 
-- `<COMMAND>`: The shell command to execute in each repository. It should be
-enclosed in quotes if it contains spaces or special characters.
+- `<COMMAND_OR_RECIPE>`: The shell command to execute or the name of the recipe
+to run. If it is a command, it should be enclosed in quotes if it contains
+spaces or special characters.
 - `[REPOS]...`: A space-separated list of specific repository names to run the
 command in. If not provided, filtering will be based on tags.
@@ -34,13 +39,45 @@ command in. If not provided, filtering will be based on tags.
 (OR logic).
 - `-e, --exclude-tag <TAG>`: Exclude repositories with a specific tag. Can be
 specified multiple times.
-- `-p, --parallel`: Execute the command in parallel across all selected
-repositories.
+- `-p, --parallel`: Execute the command or recipe in parallel across all
+selected repositories.
 - `--no-save`: Disables saving the command output to log files.
 - `--output-dir <DIRECTORY>`: Specifies a custom directory for log files
-instead of the default `logs/runs`.
+instead of the default `output/runs`.
 - `-h, --help`: Prints help information.
+
+## Recipes
+
+Recipes are named, multi-step scripts defined in your `config.yaml`. They allow
+you to codify and reuse common workflows.
+
+### Defining a Recipe
+
+Add a `recipes` section to your `config.yaml`:
+
+```yaml
+recipes:
+  - name: update-deps
+    steps:
+      - git checkout main
+      - git pull
+      - cargo update
+      - cargo build --release
+
+  - name: test
+    steps:
+      - cargo test --all-features
+      - cargo clippy
+```
+
+Each recipe has a `name` and a list of `steps`. Each step is a shell command
+executed sequentially.
+
+### Running a Recipe
+
+To run a recipe, use its name as the main argument for the `run` command.
+
 ## Examples
 
 ### Run a command on all repositories
@@ -81,10 +118,18 @@ This is highly recommended for long-running commands to save time.
 repos run -p "docker build ."
 ```
 
-### Run a command without saving logs
+### Run a command without saving output
 
 Useful for quick, simple commands where you don't need a record of the output.
 
 ```bash
 repos run --no-save "ls -la"
 ```
+
+### Run the 'update-deps' recipe on all repositories
+
+```bash
+repos run --recipe update-deps
+```
+
+### Run the 'test' recipe on backend repositories in parallel
+
+```bash
+repos run -t backend -p --recipe test
+```
diff --git a/docs/test-inventory.md b/docs/test-inventory.md
new file mode 100644
index 0000000..f04b1b2
--- /dev/null
+++ b/docs/test-inventory.md
@@ -0,0 +1,752 @@
+# Test Case Inventory
+
+Comprehensive test case catalog for the `repos` CLI. Each item lists purpose,
+coverage dimensions (happy / negative / edge), and expected outcomes. This
+inventory complements existing automated tests and surfaces gaps.
+
+---
+
+## 1. Configuration
+
+### 1.1 Load valid config file with repositories and recipes
+
+- Description: Verify a well-formed YAML with repositories + recipes parses into internal model.
+- Preconditions: `config.yaml` exists, contains at least one repository and one recipe.
+- Steps: Run any command requiring config load (e.g. `repos clone --config config.yaml`).
+- Expected:
+  - Happy: Config loads; repositories & recipes accessible; no warnings.
+  - Negative: N/A (covered in malformed / missing cases below).
+ - Edge: Minimal config (single repo, single recipe) still loads; extra unknown keys ignored gracefully. + +### 1.2 Fail on missing config file + +- Description: Tool should error if specified config path does not exist. +- Steps: `repos clone --config missing.yaml`. +- Expected: + - Happy: Error message references missing file; non-zero exit. + - Negative: Using directory path instead of file also errors clearly. + - Edge: Path with special characters -> error still readable; no panic. + +### 1.3 Fail on malformed YAML + +- Description: Syntax errors in YAML produce clear failure. +- Steps: Provide broken indentation or invalid YAML tokens. +- Expected: + - Happy: Parse error surfaced with line/column if available. + - Negative: Empty file -> error indicating missing root key. + - Edge: File with BOM, trailing spaces still parsed (if syntactically valid). + +### 1.4 Load repositories with different path forms (absolute, relative) + +- Description: Resolve repo target directories correctly. +- Expected: + - Happy: Relative paths resolved against config directory; absolute paths untouched. + - Negative: Invalid path (pointing to file, not dir) flagged when used. + - Edge: Symlinks followed; tilde expansion unsupported (documented) or handled if implemented. + +### 1.5 Handle empty repositories list + +- Description: Zero repositories should lead to graceful no-op for repo-based commands. +- Expected: + - Happy: Command exits success with message like "No repositories". + - Negative: Attempt actions requiring repos (run/clone) yields no panic. + - Edge: Config with only `recipes:` still valid. + +### 1.6 Handle empty recipes list + +- Description: Absence of recipes does not break command mode. +- Expected: + - Happy: `run` with direct command works; recipe invocation fails with clear not-found. + - Negative: `--recipe` specified returns error strictly about missing recipe. + - Edge: Empty `recipes: []` instead of omission still OK. 
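
The lookup behavior in 1.6 and 1.7 can be sketched as a case-sensitive, exact-match search. The names below (`Recipe`, `find_recipe`) are illustrative, not the crate's actual types:

```rust
/// Illustrative recipe model; field names are assumed, not the crate's API.
pub struct Recipe {
    pub name: String,
    pub steps: Vec<String>,
}

/// Case-sensitive, exact-match lookup: "Setup" does not match "setup",
/// and a missing name yields None so the caller can report "not found".
pub fn find_recipe<'a>(recipes: &'a [Recipe], name: &str) -> Option<&'a Recipe> {
    recipes.iter().find(|r| r.name == name)
}
```

An empty `recipes: []` list simply makes every lookup return `None`, which matches the "clear not-found" expectation above.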
+ +### 1.7 Resolve recipe names uniquely + +- Description: Name lookup is case-sensitive (based on existing tests) and returns correct recipe. +- Expected: + - Happy: Exact match succeeds. + - Negative: Wrong case fails with not found. + - Edge: Names with spaces / dashes sanitized only for script file, not lookup. + +--- + +## 2. Repository Management + +### 2.1 Clone single repository + +- Expected: Creates target directory; initializes git remote; success exit. + +### 2.2 Clone multiple repositories sequentially + +- Expected: Each repo processed in order; failures reported individually; process continues. + +### 2.3 Clone multiple repositories in parallel + +- Expected: All clone jobs spawned; race-safe logging; partial failures do not abort others. + +### 2.4 Skip cloning if directory exists + +- Expected: Existing directory detected; skip message emitted; status success overall. + +### 2.5 Handle invalid repo URL + +- Expected: Git clone returns non-zero; error captured and surfaced without panic. + +### 2.6 Respect branch override when provided + +- Expected: After clone, HEAD matches specified branch; missing branch triggers error (negative). + +### 2.7 Tag filtering include-only + +- Expected: Only repos containing tag(s) selected; empty result leads to graceful no-op. + +### 2.8 Tag exclusion logic + +- Expected: Repos with excluded tags removed from candidate set. + +### 2.9 Explicit repos selection overrides tag filters + +- Expected: Provided names used verbatim even if tags would exclude them. + +### 2.10 Mixed tags with include and exclude + +- Expected: Inclusion performed first, then exclusion prunes; documented precedence. + +Edge Cases (Repos): Duplicate repo names prevented at config load; names with special characters still log cleanly; parallel cloning handles network errors. + +--- + +## 3. Run Command (Command Mode) + +### 3.1 Run simple echo command across one repo + +- Expected: Command executes; metadata produced (save mode); exit_code 0. 
+ +### 3.2 Run command across multiple repos (sequential) + +- Expected: Ordered execution; each metadata.json written. + +### 3.3 Run command across multiple repos (parallel) + +- Expected: All metadata files present; no interleaved stdout in individual files. + +### 3.4 Run long command name (sanitization for metadata directory) + +- Expected: Output directory suffix truncated to <=50 chars; remains unique/logical. + +### 3.5 Handle command containing special characters and spaces + +- Expected: Proper quoting; command stored verbatim in metadata.json. + +### 3.6 Fail on empty command string + +- Expected: Immediate validation error (currently runner treats empty as no-op success; test should assert defined expectation—potential gap). + +### 3.7 No-save mode skips metadata/stdout file creation + +- Expected: No run directory created; overall success exit. + +### 3.8 Save mode creates `output/runs/_` structure + +- Expected: Directory exists; per-repo subdirectories created. + +### 3.9 Existing output directory reuse + +- Expected: If base exists, new timestamped run directory created; no collision. + +### 3.10 Proper exit code recording (0,1,2,126,127,130,>128) + +- Expected: `exit_code` matches process status; description aligns mapping. + +### 3.11 Exit code description mapping correctness + +- Expected: Each known code string correct; unknown >128 -> "terminated by signal". + +### 3.12 Metadata.json structure for command (no recipe fields) + +- Expected: Contains: command, exit_code, exit_code_description, repository, timestamp ONLY. + +Edge Cases (Command Mode): Large stdout handled; mixed stdout/stderr captured; failing command still writes logs; nonexistent directory errors early. + +--- + +## 4. Run Command (Recipe Mode) + +### 4.1 Run single-step recipe + +- Expected: Script created in repo root; executes; removed; exit_code 0. + +### 4.2 Run multi-step recipe sequential + +- Expected: All steps run in order; combined output captured. 
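
The sequential multi-step execution in 4.1–4.2 can be modeled as materializing the steps into a single shell script, with the shebang handling from case 4.8. `materialize_script` is a hypothetical helper name; the real implementation may differ:

```rust
/// Join recipe steps into one shell script, prepending a default shebang
/// when the steps do not supply their own (case 4.8).
/// Sketch only; not the crate's actual implementation.
pub fn materialize_script(steps: &[String]) -> String {
    let body = steps.join("\n");
    if body.starts_with("#!") {
        body
    } else {
        format!("#!/bin/sh\n{}", body)
    }
}
```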
+ +### 4.3 Run recipe across multiple repositories parallel + +- Expected: Each repo gets its own script; isolated execution; cleanup. + +### 4.4 Recipe script created in repo root, then removed + +- Expected: After run, `*.script` absent; no `.repos/recipes` directory created. + +### 4.5 Metadata.json contains recipe, recipe_steps, exit info (no command field) + +- Expected: Fields: recipe, recipe_steps, exit_code, exit_code_description, repository, timestamp. + +### 4.6 Fail when recipe name not found + +- Expected: Error "Recipe 'name' not found"; no script created. + +### 4.7 Fail when recipe has zero steps + +- Expected: Early error or empty script logic defined (if allowed) — inventory marks gap if not enforced. + +### 4.8 Shebang-less recipe executes under default shell + +- Expected: Auto prepend `#!/bin/sh`; success. + +### 4.9 Script materialization permissions (executable) + +- Expected: Mode 750 (unix); execution works; negative: permission change failure -> error. + +### 4.10 Cleanup always occurs even on failure + +- Expected: Failing step causes non-zero exit; script file still deleted. + +### 4.11 Exit codes propagate from failing step + +- Expected: Metadata exit_code == failing command code. + +### 4.12 Mixed success/failure steps (failure halts remaining steps) + +- Expected: Steps after failing one NOT executed; output stops there. + +Edge Cases (Recipe): Steps with environment variables preserved; multi-line heredoc processed; scripts with Unicode names sanitized. + +--- + +## 5. Logging & Output + +### 5.1 metadata.json created per repo in run (save mode) + +- Expected: Exists with correct schema. + +### 5.2 stdout.log contains command or aggregated recipe output + +- Expected: Content matches actual stdout lines; order preserved. + +### 5.3 metadata.json absent in no-save mode + +- Expected: Directory not present; test asserts absence. + +### 5.4 Proper timestamp format in metadata.json + +- Expected: `YYYY-MM-DD HH:MM:SS` local time. 
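
Cases 5.5–5.6 and 13.1–13.2 (and 3.4 above) revolve around name sanitization and truncation. A minimal sketch, assuming `_` as the replacement character and the 50-character cap stated above; the real sanitizer's allowed-character set may differ:

```rust
/// Replace characters that are unsafe in file names with `_`, then truncate
/// to 50 characters on a char boundary so Unicode input cannot split a code
/// point (cases 3.4, 5.6, 13.1-13.2).
pub fn sanitize_name(raw: &str) -> String {
    raw.chars()
        .map(|c| if c.is_alphanumeric() || c == '-' || c == '.' { c } else { '_' })
        .take(50)
        .collect()
}
```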
+ +### 5.5 Directory naming pattern + +- Expected: Matches `output/runs/_`. + +### 5.6 Sanitization truncation for extremely long names + +- Expected: Suffix length <=50; no broken UTF-8. + +Edge Cases: Simultaneous runs produce distinct timestamps; invalid characters replaced by `_`. + +--- + +## 6. Parallel vs Sequential Behavior + +### 6.1 Parallel execution does not interleave stdout logs improperly + +- Expected: Each repo's stdout confined to its own file. + +### 6.2 Sequential preserves order of execution + +- Expected: Time ordering in logs roughly matches repo iteration order. + +### 6.3 Parallel mode still generates per-repo metadata.json + +- Expected: Count of metadata files == repo count. + +Edge: Large number of repos (stress) still stable; resource exhaustion handled gracefully (potential future test). + +--- + +## 7. Tag & Repo Selection + +### 7.1 Include tag selects only matching repos + +- Expected: Only repos with tag appear in run/clone output. + +### 7.2 Exclude tag removes only excluded + +- Expected: Repos with exclude tags omitted. + +### 7.3 Combine include and exclude correctly + +- Expected: Intersection minus excludes. + +### 7.4 Explicit repos overrides tag filtering entirely + +- Expected: Provided names used even if exclude tags present. + +### 7.5 No overlap results in zero execution (graceful) + +- Expected: Success exit + message; no errors. + +Edge: Multiple include tags requiring all vs any (verify implemented semantics). + +--- + +## 8. Error Handling + +### 8.1 Missing command and recipe (CLI validation) + +- Expected: Error message; non-zero exit; no run directory. + +### 8.2 Nonexistent binary invocation in plugin tests + +- Expected: Failure logged; test asserts graceful handling. + +### 8.3 Missing recipe metadata absence of command field enforced + +- Expected: For recipe run: "command" key not present; schema mutual exclusivity. 
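
The exit-code descriptions asserted in 8.4–8.6 below (and in 3.11, 15.5) suggest a mapping like the following. The strings for 126, 127, 130, >128, and the unknown fallback come from this inventory; the descriptions for 0, 1, and 2 are assumptions:

```rust
/// Map an exit code to the human-readable description recorded in
/// metadata.json. Sketch only; the actual function may be named or
/// structured differently in the codebase.
pub fn describe_exit_code(code: i32) -> &'static str {
    match code {
        0 => "success",                          // assumed wording
        1 => "general error",                    // assumed wording
        2 => "misuse of shell builtins",         // assumed wording
        126 => "command invoked cannot execute",
        127 => "command not found",
        130 => "script terminated by Control-C",
        c if c > 128 => "terminated by signal",
        _ => "error", // unknown / negative codes fall back to "error"
    }
}
```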
+ +### 8.4 Command not found exit code (127) recorded with description + +- Expected: exit_code 127; description "command not found". + +### 8.5 Script cannot execute (126) recorded + +- Expected: exit_code 126; description "command invoked cannot execute". + +### 8.6 Interrupted execution (130) recorded + +- Expected: Simulated Ctrl-C surfaces description "script terminated by Control-C". + +Edge: >128 signals map to "terminated by signal". + +--- + +## 9. Plugins + +### 9.1 Built-in commands still work when plugins enabled + +- Expected: Core behaviors unaffected. + +### 9.2 Plugin discovery ignores invalid plugin paths + +- Expected: Invalid entries skipped silently or with warning. + +### 9.3 Plugin environment isolation (PATH override) + +- Expected: External plugin execution does not mutate global state. + +### 9.4 Fallback when no plugins present + +- Expected: Listing or invoking plugin features yields empty set, no error. + +### 9.5 Help text still accessible with plugins + +- Expected: `--help` output unchanged. + +### 9.6 Plugin does not interfere with core logging + +- Expected: Standard metadata + log lines unaffected. + +Edge: Multiple plugins simultaneously (future test). + +--- + +## 10. Pull Requests + +### 10.1 Create PR for single repo + +- Expected: Branch creation + PR initiation (mock) success. + +### 10.2 Create PR for multiple repos + +- Expected: Each repo processed; failures isolated. + +### 10.3 Fail on missing remote + +- Expected: Error explaining remote not configured. + +### 10.4 Handle authentication failure (mock/skip) + +- Expected: Clear auth error; does not panic. + +### 10.5 Title and body formatting correctness + +- Expected: Special characters preserved; long body accepted. + +Edge: Extremely long title truncated or handled (as implemented). + +--- + +## 11. Init Command + +### 11.1 Creates initial config file if absent + +- Expected: File created with sample content; success exit. 
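
A scaffold satisfying 11.1, 11.3, and 11.4 might look like the following; the sample values are illustrative, not the actual generated content:

```yaml
# Sample config.yaml scaffold (illustrative)
repositories:
  - name: example-repo
    url: git@github.com:org/example-repo.git
    tags: [backend]

recipes:
  - name: setup
    steps:
      - git checkout main
      - git pull
```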
+ +### 11.2 Does not overwrite existing config + +- Expected: Existing file retained; no destructive write. + +### 11.3 Generates sample repositories section + +- Expected: Minimal scaffold present. + +### 11.4 Generates sample recipes section + +- Expected: At least one example recipe included. + +Edge: Run in non-writable directory returns error. + +--- + +## 12. Git Operations + +### 12.1 Fetch updates without modifying working tree + +- Expected: Current branch unchanged; remote data updated. + +### 12.2 Handle detached head properly + +- Expected: Operations that require branch fail gracefully or adapt. + +### 12.3 Branch checkout when branch provided + +- Expected: HEAD matches requested branch. + +### 12.4 Error on invalid branch + +- Expected: Clear git error surfaced. + +### 12.5 Local changes do not block read-only operations + +- Expected: Status queries succeed with uncommitted changes. + +Edge: Commit with empty message prevented. + +--- + +## 13. Script / Filename Handling + +### 13.1 Sanitization removes unsafe filesystem chars + +- Expected: All problematic chars replaced by `_`. + +### 13.2 Truncation preserves suffix uniqueness + +- Expected: Long names shortened deterministically; collisions avoidable. + +### 13.3 Collision avoidance for same command different repos + +- Expected: Per-run root unique by timestamp; repo subdirs separate. + +Edge: Unicode characters converted safely (non-panic). + +--- + +## 14. Cleanup & Ephemeral Artifacts + +### 14.1 Recipe script removed after success + +- Expected: No lingering `*.script` files. + +### 14.2 Recipe script removed after failure + +- Expected: Same cleanup guarantee. + +### 14.3 Temporary directories not left behind + +- Expected: No stray temp paths after runs. + +### 14.4 No residual `.repos` directory creation after change + +- Expected: Directory absent unless purposefully added in future. + +--- + +## 15. 
Metadata.json Integrity + +### 15.1 Command run schema + +- Expected: Keys exactly: command, exit_code, exit_code_description, repository, timestamp. + +### 15.2 Recipe run schema + +- Expected: Keys exactly: recipe, recipe_steps, exit_code, exit_code_description, repository, timestamp. + +### 15.3 Mutual exclusivity + +- Expected: Never both command and recipe present. + +### 15.4 JSON valid (parsable) + +- Expected: Deserialize succeeds under serde. + +### 15.5 Exit code description matches numeric value + +- Expected: Mapping correct per defined lookup. + +Edge: Unknown negative code -> fallback description "error". + +--- + +## 16. Additional Edge Cases + +### 16.1 Empty tags list behaves like no filter + +- Expected: All repositories included. + +### 16.2 Large number of repositories performance (smoke) + +- Expected: Operation completes within acceptable time; no timeouts. + +### 16.3 Parallel execution does not exceed system limits + +- Expected: No thread / FD exhaustion; graceful degradation if limits. + +### 16.4 Unicode in repository names handled + +- Expected: Logging intact; filesystem safe. + +### 16.5 Unicode in recipe steps handled + +- Expected: Output preserved; no encoding errors. + +--- + +## 17. Suggested Missing Tests / Gaps + +### 17.1 Interrupted (Ctrl-C simulation) propagation + +- Expected: Child process terminates; exit_code 130 captured; cleanup occurs. + +### 17.2 Signal >128 exit description mapping + +- Expected: Forced signal termination mapped to "terminated by signal". + +### 17.3 Concurrent runs creating distinct output directories + +- Expected: Parallel invocations produce separate timestamp directories (different seconds or fallback randomization). + +### 17.4 Invalid permission on repo directory (read-only) handling + +- Expected: Script creation fails with permission error; graceful message. 
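
The schemas in 15.1–15.3 above imply two mutually exclusive metadata.json shapes. The keys and the timestamp format follow cases 5.4, 15.1, and 15.2; the field values (and the description string for exit code 0) are illustrative.

A command run (15.1):

```json
{
  "command": "cargo build --release",
  "exit_code": 0,
  "exit_code_description": "success",
  "repository": "example-repo",
  "timestamp": "2024-01-01 12:00:00"
}
```

A recipe run (15.2) carries `recipe` and `recipe_steps` instead of `command`:

```json
{
  "recipe": "update-deps",
  "recipe_steps": ["git checkout main", "git pull", "cargo update"],
  "exit_code": 0,
  "exit_code_description": "success",
  "repository": "example-repo",
  "timestamp": "2024-01-01 12:00:00"
}
```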
+ +### 17.5 metadata.json absent in failure before creation (early abort) + +- Expected: If repo directory missing: no metadata file written; clear error. + +--- +*End of inventory.* + +--- + +## 18. Test Classification (Unit vs Integration vs E2E) + +Classification criteria: + +- Unit: Pure logic or small function scope; no external processes, network, real git, or filesystem side-effects beyond trivial in-memory or temp file usage. Deterministic, fast (<50ms), isolated. +- Integration: Combines multiple subsystems (filesystem, git repos, command runner, process spawning). Uses real temp directories or invokes `git` / shell. Validates interactions and produced artifacts. +- E2E: Invokes the compiled CLI (`cargo run` / built binary) exercising full argument parsing, configuration loading, execution path, logging/metadata generation and cleanup across multiple repositories or plugins. Can be slower; closest to real user workflow. + +### 18.1 Configuration + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|1.1 Load valid config| Integration | Parses real file from disk and produces model used by other commands| Automated | +|1.2 Missing config file| Integration | Depends on filesystem error handling| Automated | +|1.3 Malformed YAML| Unit | Focus on parse failure surfaced by loader logic| Gap | +|1.4 Path forms (abs/rel)| Unit | Pure path resolution logic; can be isolated| Automated | +|1.5 Empty repositories list| Unit | Behavior is early-return logic| Automated | +|1.6 Empty recipes list| Unit | Lookup logic and conditional absence handling| Gap | +|1.7 Resolve recipe names uniquely| Unit | Name lookup & matching only| Automated | + +### 18.2 Repository Management + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|2.1 Clone single repository| Integration | Invokes `git clone` and filesystem| Automated | +|2.2 Clone multiple sequential| Integration | Iterative multi repo interaction| Automated | +|2.3 Clone 
multiple parallel| Integration | Concurrency + multiple git operations| Automated | +|2.4 Skip cloning if directory exists| Integration | Relies on FS presence checks| Automated | +|2.5 Invalid repo URL| Integration | Captures external command failure| Automated | +|2.6 Branch override| Integration | Uses git branch checkout| Gap | +|2.7 Tag filtering include-only| Unit | Pure filtering logic| Automated | +|2.8 Tag exclusion| Unit | Pure filtering logic| Automated | +|2.9 Explicit repos override| Unit | Selection precedence logic| Automated | +|2.10 Mixed include/exclude| Unit | Logical combination test| Automated | + +### 18.3 Run Command (Command Mode) + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|3.1 Single echo| Integration | Executes shell in repo context| Automated | +|3.2 Multiple sequential| Integration | Iterative multi-repo execution| Automated | +|3.3 Multiple parallel| Integration | Concurrency execution path| Automated | +|3.4 Long command name sanitization| Unit | String transformation only| Automated | +|3.5 Special characters command| Integration | Actual shell invocation & metadata capture| Automated | +|3.6 Empty command string| Unit | Validation logic (candidate for stricter behavior) | Partial | +|3.7 No-save mode behavior| Integration | Affects artifact creation side-effects| Automated | +|3.8 Save mode directory creation| Integration | Filesystem structure| Automated | +|3.9 Existing output directory reuse| Integration | FS existence + new path logic| Automated | +|3.10 Exit code recording| Unit | Mapping + extraction (can isolate via fake status) | Automated | +|3.11 Exit code description mapping| Unit | Pure match function| Partial | +|3.12 Metadata.json structure (command)| Integration | Requires file writing & JSON content| Automated | + +### 18.4 Run Command (Recipe Mode) + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|4.1 Single-step recipe| Integration | Script 
materialization + execution| Automated | +|4.2 Multi-step sequential| Integration | Aggregate execution order| Automated | +|4.3 Multi-repo parallel recipe| Integration | Concurrency + FS per repo| Automated | +|4.4 Script created & removed| Integration | FS artifact lifecycle| Automated | +|4.5 Metadata.json recipe fields| Integration | Generated file content| Automated | +|4.6 Recipe name not found| Unit | Lookup error path| Automated | +|4.7 Zero steps recipe| Unit | Validation / precondition logic| Automated | +|4.8 Implicit shebang| Unit | Script content transformation| Automated | +|4.9 Permissions set| Integration | Actual FS permissions required| Automated | +|4.10 Cleanup on failure| Integration | Execution + post-failure cleanup| Automated | +|4.11 Exit codes propagate| Integration | Real failing script status| Automated | +|4.12 Mixed success/failure halts| Integration | Execution control flow| Partial | + +### 18.5 Logging & Output + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|5.1 metadata.json per repo| Integration | File existence created by runner| Automated | +|5.2 stdout.log correctness| Integration | Captured process output| Automated | +|5.3 Absence in no-save mode| Integration | Side-effect suppression| Automated | +|5.4 Timestamp format| Unit | Formatting function| Gap | +|5.5 Directory naming pattern| Unit | String assembly + sanitization| Partial | +|5.6 Truncation behavior| Unit | String length logic| Automated | + +### 18.6 Parallel vs Sequential Behavior + +All considered Integration (multi-repo orchestration). Stress/performance variants treated as E2E if full CLI invoked under load. 
+ +### 18.7 Tag & Repo Selection + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|7.1 Include tag match| Unit | Filtering logic| Automated | +|7.2 Exclude tag| Unit | Filtering logic| Automated | +|7.3 Combine include/exclude| Unit | Logical composition| Automated | +|7.4 Explicit repos override| Unit | Precedence resolution| Automated | +|7.5 No overlap graceful| Unit | Early-return logic| Automated | + +### 18.8 Error Handling + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|8.1 Missing command & recipe| E2E | Full CLI argument validation path| Automated | +|8.2 Nonexistent binary (plugin)| Integration | External process failure under plugin harness| Partial | +|8.3 Mutual exclusivity metadata| Integration | Generated file schema| Automated | +|8.4 Command not found 127| Integration | Real process exit| Automated | +|8.5 Script cannot execute 126| Integration | Permission/exec failure| Gap | +|8.6 Interrupted 130| Integration | Signal handling from process| Gap | +|Signal >128 mapping (edge)| Unit | Mapping function correctness| Gap | + +### 18.9 Plugins + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|9.1 Built-in commands still work when plugins enabled| Integration | Ensures plugin layer doesn't break core CLI | Automated | +|9.2 Plugin discovery ignores invalid plugin paths| Integration | Scans filesystem / executable bits | Automated | +|9.3 Plugin environment isolation (PATH override)| Integration | Requires process env isolation validation | Gap | +|9.4 Fallback when no plugins present| Integration | Graceful empty state | Automated | +|9.5 Help text still accessible with plugins| E2E | Full CLI parsing with dynamic plugin context | Gap | +|9.6 Plugin does not interfere with core logging| Integration | Compare logs with/without plugins | Partial | + +### 18.10 Pull Requests + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|10.1 
Create PR for single repo| Integration | Git ops + mock PR creation | Automated |
|10.2 Create PR for multiple repos| Integration | Iterates over multiple repos | Automated |
|10.3 Fail on missing remote| Integration | Git remote validation | Gap |
|10.4 Handle authentication failure (mock/skip)| Integration | Simulated auth failure path | Partial |
|10.5 Title and body formatting correctness| Integration | Content handling & escaping | Automated |
|Edge: Extremely long title truncated| Integration | Boundary handling | Automated |

### 18.11 Init Command

| Case | Type | Rationale | Status |
|------|------|-----------|---------|
|11.1 Initial file creation| Integration | Filesystem write | Automated |
|11.2 No overwrite existing| Integration | FS state check | Automated |
|11.3 Sample repositories scaffold| Unit | Content template generation | Automated |
|11.4 Sample recipes scaffold| Unit | Template generation | Partial |
|Edge non-writable dir| Integration | Permission failure on FS | Gap |

### 18.12 Git Operations

All Git operations are Integration (they depend on real git behavior). Detached head handling remains Integration. Pure parsing of branch names could be Unit if factored out (future refactor candidate).

### 18.13 Script / Filename Handling

| Case | Type | Rationale | Status |
|------|------|-----------|---------|
|13.1 Sanitization unsafe chars| Unit | String transformation | Automated |
|13.2 Truncation uniqueness| Unit | Deterministic length rule | Partial |
|13.3 Collision avoidance timestamp| Integration | Relies on FS and time source | Gap |
|Unicode safety| Unit | String handling | Gap |

### 18.14 Cleanup & Ephemeral Artifacts

All cleanup behaviors are Integration (they require an actual run plus the artifact lifecycle). The missing `.repos` directory check is also Integration.
+ +### 18.15 Metadata.json Integrity + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|15.1 Command run schema| Unit / Integration | Logic + persisted artifact | Automated | +|15.2 Recipe run schema| Unit / Integration | Logic + persisted artifact | Automated | +|15.3 Mutual exclusivity| Unit | Construction logic enforced | Automated | +|15.4 JSON valid parsable| Unit | Round-trip serde test | Automated | +|15.5 Exit code description mapping| Unit | Pure function mapping | Partial | +|Negative unknown code fallback| Unit | Edge mapping test | Gap | + +### 18.16 Additional Edge Cases + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|16.1 Empty tags list behavior| Unit | Filtering logic default | Automated | +|16.2 Large number performance| E2E | Full-system scalability smoke | Gap | +|16.3 Parallel resource limits| E2E | System-level concurrency behavior | Gap | +|16.4 Unicode repo names| Integration | FS + logging interplay | Gap | +|16.5 Unicode recipe steps| Integration | Script materialization & output | Gap | + +### 18.17 Suggested Missing Tests / Gaps + +| Case | Type | Rationale | Status | +|------|------|-----------|---------| +|17.1 Ctrl-C propagation| Integration | Signal during process execution | Gap | +|17.2 Signal >128 mapping| Unit | Mapping function only | Gap | +|17.3 Concurrent runs distinct dirs| Integration | Timestamp & FS isolation | Gap | +|17.4 Read-only repo dir| Integration | Permission failure on write | Gap | +|17.5 Early abort no metadata| Integration | FS absence on precondition failure | Gap | + +### 18.18 Summary Counts + +Approximate classification (one test may have both Unit & Integration facets if split): + +- Primarily Unit candidates: sanitization, filtering, mapping, schema exclusivity (~30%) +- Integration: git, command execution, logging, filesystem lifecycle (~55%) +- E2E: holistic CLI argument validation, performance/stress, plugin-enabled flows (~15%) + +### 
18.19 Recommendations + +1. Refactor logic-heavy areas (exit code mapping, sanitization, filtering) into dedicated modules to keep unit tests fast and focused. +2. Separate mixed Unit/Integration tests (e.g., metadata schema) into two layers: construct JSON (unit) then file emission (integration). +3. Introduce tagging in test framework (feature: cargo nextest or custom) to selectively run Unit vs Integration vs E2E in CI stages. +4. Add E2E smoke suite executing representative scenarios (parallel recipe run, PR creation mock, plugin discovery) nightly. +5. Track execution time per test to detect regressions (baseline now while codebase stable). + +--- + +*End of classification section.* diff --git a/justfile b/justfile new file mode 100644 index 0000000..223df66 --- /dev/null +++ b/justfile @@ -0,0 +1,34 @@ +# https://just.systems + +@_: + just --list + +# Build the main binary +[group('lifecycle')] +build: + cargo build --release + +# Build the plugins +[group('lifecycle')] +build-plugins: + cargo build --release -p repos-health + +# Run tests +[group('qa')] +test: + cargo test --quiet + +# Run coverage +[group('qa')] +coverage: + cargo tarpaulin --skip-clean + +# Registered plugins are binaries named `repos-*` in /usr/local/bin +# sudo ln -sf $(pwd)/target/release/repos-health /usr/local/bin/repos-health +# +# List available registered plugins +[group('run')] +list-plugins: + ls -al /usr/local/bin/repos-* || echo "No plugins installed" + +# vim: set filetype=Makefile ts=4 sw=4 et: diff --git a/src/commands/clone.rs b/src/commands/clone.rs index 9c5a4dc..91f56e6 100644 --- a/src/commands/clone.rs +++ b/src/commands/clone.rs @@ -151,6 +151,7 @@ mod tests { Config { repositories: vec![repo1, repo2, repo3], + recipes: vec![], } } @@ -301,6 +302,7 @@ mod tests { let config = Config { repositories: vec![invalid_repo], + recipes: vec![], }; let command = CloneCommand; @@ -344,6 +346,7 @@ mod tests { let config = Config { repositories: vec![invalid_repo1, 
invalid_repo2], + recipes: vec![], }; let command = CloneCommand; @@ -392,6 +395,7 @@ mod tests { // Test with empty configuration let config = Config { repositories: vec![], + recipes: vec![], }; let command = CloneCommand; diff --git a/src/commands/init.rs b/src/commands/init.rs index c36b647..5bf6c12 100644 --- a/src/commands/init.rs +++ b/src/commands/init.rs @@ -181,6 +181,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], @@ -215,6 +216,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], @@ -257,6 +259,7 @@ mod tests { "existing-repo".to_string(), "git@github.com:owner/existing-repo.git".to_string(), )], + recipes: vec![], }; existing_config .save(&output_path.to_string_lossy()) @@ -276,6 +279,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], @@ -314,6 +318,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], diff --git a/src/commands/remove.rs b/src/commands/remove.rs index 2bd06a2..88ce46f 100644 --- a/src/commands/remove.rs +++ b/src/commands/remove.rs @@ -155,6 +155,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![repo], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], @@ -199,7 +200,10 @@ mod tests { let command = RemoveCommand; let context = CommandContext { - config: Config { repositories }, + config: Config { + repositories, + recipes: vec![], + }, tag: vec![], exclude_tag: vec![], repos: None, @@ -248,7 +252,10 @@ mod tests { let command = RemoveCommand; let context = CommandContext { - config: Config { repositories }, + config: Config { + repositories, + recipes: vec![], + }, tag: vec![], exclude_tag: vec![], repos: None, @@ -289,6 +296,7 @@ mod 
tests { let context = CommandContext { config: Config { repositories: vec![repo], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], @@ -336,6 +344,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![matching_repo, non_matching_repo], + recipes: vec![], }, tag: vec!["backend".to_string()], exclude_tag: vec![], @@ -387,6 +396,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![repo1, repo2], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], @@ -428,6 +438,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![repo], + recipes: vec![], }, tag: vec!["frontend".to_string()], // Non-matching tag exclude_tag: vec![], @@ -445,6 +456,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], @@ -482,6 +494,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![repo], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], @@ -529,6 +542,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![matching_repo, wrong_name_repo], + recipes: vec![], }, tag: vec!["backend".to_string()], exclude_tag: vec![], @@ -584,6 +598,7 @@ mod tests { let context = CommandContext { config: Config { repositories: vec![success_repo, nonexistent_repo], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], diff --git a/src/commands/run.rs b/src/commands/run.rs index a0178a9..80ac300 100644 --- a/src/commands/run.rs +++ b/src/commands/run.rs @@ -8,16 +8,74 @@ use async_trait::async_trait; use std::fs::create_dir_all; use std::path::{Path, PathBuf}; -/// Run command for executing commands in repositories +#[derive(Debug)] +pub enum RunType { + Command(String), + Recipe(String), +} + +/// Run command for executing commands or recipes in repositories pub struct RunCommand { - pub command: String, + pub run_type: RunType, pub no_save: bool, pub output_dir: 
Option<PathBuf>,
 }
 
+impl RunCommand {
+    pub fn new_command(command: String, no_save: bool, output_dir: Option<PathBuf>) -> Self {
+        Self {
+            run_type: RunType::Command(command),
+            no_save,
+            output_dir,
+        }
+    }
+
+    pub fn new_recipe(recipe_name: String, no_save: bool, output_dir: Option<PathBuf>) -> Self {
+        Self {
+            run_type: RunType::Recipe(recipe_name),
+            no_save,
+            output_dir,
+        }
+    }
+}
+
 #[async_trait]
 impl Command for RunCommand {
     async fn execute(&self, context: &CommandContext) -> Result<()> {
+        match &self.run_type {
+            RunType::Command(command) => self.execute_command(context, command).await,
+            RunType::Recipe(recipe_name) => self.execute_recipe(context, recipe_name).await,
+        }
+    }
+}
+
+impl RunCommand {
+    /// Create a new RunCommand with default settings for testing
+    pub fn new_for_test(command: String, output_dir: String) -> Self {
+        Self {
+            run_type: RunType::Command(command),
+            no_save: false,
+            output_dir: Some(PathBuf::from(output_dir)),
+        }
+    }
+
+    /// Sanitize command string for use in directory names
+    fn sanitize_command_for_filename(command: &str) -> String {
+        command
+            .chars()
+            .map(|c| match c {
+                ' ' => '_',
+                '/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
+                c if c.is_alphanumeric() || c == '-' || c == '_' || c == '.'
=> c,
+                _ => '_',
+            })
+            .collect::<String>()
+            .chars()
+            .take(50) // Limit length to avoid overly long directory names
+            .collect()
+    }
+
+    async fn execute_command(&self, context: &CommandContext, command: &str) -> Result<()> {
         let repositories = context.config.filter_repositories(
             &context.tag,
             &context.exclude_tag,
@@ -35,7 +93,7 @@ impl Command for RunCommand {
         // Use local time instead of UTC
         let timestamp = chrono::Local::now().format("%Y%m%d-%H%M%S").to_string();
         // Sanitize command for directory name
-        let command_suffix = Self::sanitize_command_for_filename(&self.command);
+        let command_suffix = Self::sanitize_command_for_filename(command);
         // Use provided output directory or default to "output"
         let base_dir = self
             .output_dir
@@ -49,151 +107,233 @@
             None
         };
 
-        let mut errors = Vec::new();
-        let mut successful = 0;
-
         if context.parallel {
+            // Parallel execution
             let tasks: Vec<_> = repositories
                 .into_iter()
                 .map(|repo| {
-                    let runner = &runner;
-                    let command = self.command.clone();
-                    let no_save = self.no_save;
+                    let command = command.to_string();
+                    let run_root = run_root.clone();
                     async move {
-                        let result = if !no_save {
+                        let runner = CommandRunner::new();
+                        if let Some(ref run_root) = run_root {
                             runner
-                                .run_command_with_capture_no_logs(&repo, &command, None)
+                                .run_command_with_capture(
+                                    &repo,
+                                    &command,
+
Some(run_root.to_string_lossy().as_ref()), + ) + .await?; + } else { + runner.run_command(&repo, command, None).await?; + } + } + } + + Ok(()) + } + + async fn execute_recipe(&self, context: &CommandContext, recipe_name: &str) -> Result<()> { + // Find the recipe + let recipe = context + .config + .find_recipe(recipe_name) + .ok_or_else(|| anyhow::anyhow!("Recipe '{}' not found", recipe_name))?; + + let repositories = context.config.filter_repositories( + &context.tag, + &context.exclude_tag, + context.repos.as_deref(), + ); + + if repositories.is_empty() { + return Ok(()); + } + + let runner = CommandRunner::new(); + + // Setup persistent output directory if saving is enabled + let run_root = if !self.no_save { + // Use local time instead of UTC + let timestamp = chrono::Local::now().format("%Y%m%d-%H%M%S").to_string(); + // Sanitize recipe name for directory name + let recipe_suffix = Self::sanitize_command_for_filename(recipe_name); + // Use provided output directory or default to "output" + let base_dir = self + .output_dir + .as_ref() + .unwrap_or(&PathBuf::from("output")) + .join("runs"); + let run_dir = base_dir.join(format!("{}_{}", timestamp, recipe_suffix)); + create_dir_all(&run_dir)?; + Some(run_dir) + } else { + None + }; + + if context.parallel { + // Parallel execution + let tasks: Vec<_> = repositories + .into_iter() + .map(|repo| { + let recipe_steps = recipe.steps.clone(); + let recipe_name = recipe.name.clone(); + let run_root = run_root.clone(); + async move { + let script_path = + Self::materialize_script(&repo, &recipe_name, &recipe_steps).await?; + + // Convert absolute script path to relative path from repository directory + let repo_target_dir = repo.get_target_dir(); + let repo_dir = Path::new(&repo_target_dir); + let relative_script_path = script_path + .strip_prefix(repo_dir) + .unwrap_or(&script_path) + .to_string_lossy(); + + // Ensure script path is executable from current directory + let executable_script_path = if 
relative_script_path.contains('/') { + relative_script_path.to_string() } else { - errors.push(( - repo.name.clone(), - anyhow::anyhow!("Command failed with exit code: {}", exit_code), - )); - } + format!("./{}", relative_script_path) + }; - // Save output to individual files - if let Some(ref run_dir) = run_root { - self.save_repo_output(&repo, &stdout, &stderr, run_dir)?; - } - } - Err(e) => { - errors.push((repo.name.clone(), e)); + let runner = CommandRunner::new(); + let result = if let Some(ref run_root) = run_root { + runner + .run_command_with_recipe_context( + &repo, + &executable_script_path, + Some(run_root.to_string_lossy().as_ref()), + &recipe_name, + &recipe_steps, + ) + .await + } else { + runner + .run_command_with_capture_no_logs( + &repo, + &executable_script_path, + None, + ) + .await + }; + // Optionally remove script file after execution + let _ = std::fs::remove_file(script_path); + result } - } - } + }) + .collect(); + + futures::future::join_all(tasks).await; } else { + // Sequential execution for repo in repositories { - let result = if !self.no_save { + let script_path = + Self::materialize_script(&repo, &recipe.name, &recipe.steps).await?; + + // Convert absolute script path to relative path from repository directory + let repo_target_dir = repo.get_target_dir(); + let repo_dir = Path::new(&repo_target_dir); + let relative_script_path = script_path + .strip_prefix(repo_dir) + .unwrap_or(&script_path) + .to_string_lossy(); + + // Ensure script path is executable from current directory + let executable_script_path = if relative_script_path.contains('/') { + relative_script_path.to_string() + } else { + format!("./{}", relative_script_path) + }; + + let result = if let Some(ref run_root) = run_root { runner - .run_command_with_capture_no_logs(&repo, &self.command, None) + .run_command_with_recipe_context( + &repo, + &executable_script_path, + Some(run_root.to_string_lossy().as_ref()), + &recipe.name, + &recipe.steps, + ) .await } else { 
runner
-                        .run_command(&repo, &self.command, None)
+                        .run_command_with_capture_no_logs(&repo, &executable_script_path, None)
                         .await
-                        .map(|_| (String::new(), String::new(), 0))
                 };
-
-                match result {
-                    Ok((stdout, stderr, exit_code)) => {
-                        if exit_code == 0 {
-                            successful += 1;
-                        } else {
-                            errors.push((
-                                repo.name.clone(),
-                                anyhow::anyhow!("Command failed with exit code: {}", exit_code),
-                            ));
-                        }
-
-                        // Save output to individual files
-                        if let Some(ref run_dir) = run_root {
-                            self.save_repo_output(&repo, &stdout, &stderr, run_dir)?;
-                        }
-                    }
-                    Err(e) => {
-                        errors.push((repo.name.clone(), e));
-                    }
-                }
+                // Optionally remove script file after execution
+                let _ = std::fs::remove_file(script_path);
+                result?;
             }
         }
 
-        // Check if all operations failed
-        if !errors.is_empty() && successful == 0 {
-            return Err(anyhow::anyhow!(
-                "All command executions failed. First error: {}",
-                errors[0].1
-            ));
-        }
-
         Ok(())
     }
-}
 
-impl RunCommand {
-    /// Create a new RunCommand with default settings for testing
-    pub fn new_for_test(command: String, output_dir: String) -> Self {
-        Self {
-            command,
-            no_save: false,
-            output_dir: Some(PathBuf::from(output_dir)),
-        }
-    }
+    async fn materialize_script(
+        repo: &crate::config::Repository,
+        recipe_name: &str,
+        steps: &[String],
+    ) -> Result<PathBuf> {
+        let target_dir = repo.get_target_dir();
+        let repo_path = Path::new(&target_dir);
 
-    /// Sanitize command string for use in directory names
-    fn sanitize_command_for_filename(command: &str) -> String {
-        command
-            .chars()
-            .map(|c| match c {
-                ' ' => '_',
-                '/' | '\\' | ':' | '*' | '?' | '"' | '<' | '>' | '|' => '_',
-                c if c.is_alphanumeric() || c == '-' || c == '_' || c == '.'
=> c,
-                _ => '_',
-            })
-            .collect::<String>()
-            .chars()
-            .take(50) // Limit length to avoid overly long directory names
-            .collect()
-    }
+        // Create script directly in the repository root
+        let script_label = Self::sanitize_script_name(recipe_name);
+        let script_path = repo_path.join(format!("{}.script", script_label));
 
-    /// Save individual repository output to separate files
-    fn save_repo_output(
-        &self,
-        repo: &crate::config::Repository,
-        stdout: &str,
-        stderr: &str,
-        run_dir: &Path,
-    ) -> Result<()> {
-        let safe_name = repo.name.replace(['/', '\\', ':'], "_");
-
-        // Save stdout
-        if !stdout.is_empty() {
-            let stdout_path = run_dir.join(format!("{}.stdout", safe_name));
-            std::fs::write(stdout_path, stdout)?;
-        }
+        // Join all steps with newlines to create the script content
+        let script_content = steps.join("\n");
+        let content = if script_content.starts_with("#!") {
+            script_content
+        } else {
+            format!("#!/bin/sh\n{}", script_content)
+        };
 
-        // Save stderr
-        if !stderr.is_empty() {
-            let stderr_path = run_dir.join(format!("{}.stderr", safe_name));
-            std::fs::write(stderr_path, stderr)?;
+        std::fs::write(&script_path, content)?;
+
+        #[cfg(unix)]
+        {
+            use std::os::unix::fs::PermissionsExt;
+            let mut perm = std::fs::metadata(&script_path)?.permissions();
+            perm.set_mode(0o750);
+            std::fs::set_permissions(&script_path, perm)?;
         }
 
-        Ok(())
+        Ok(script_path)
+    }
+
+    fn sanitize_script_name(name: &str) -> String {
+        let mut out = String::with_capacity(name.len());
+        for c in name.chars() {
+            if c.is_ascii_alphanumeric() || c == '-' || c == '_' {
+                out.push(c.to_ascii_lowercase());
+            } else {
+                out.push('_');
+            }
+        }
+        out
     }
 }
diff --git a/src/config/loader.rs b/src/config/loader.rs
index 5e30c3e..90c0d35 100644
--- a/src/config/loader.rs
+++ b/src/config/loader.rs
@@ -5,9 +5,17 @@
 use anyhow::Result;
 use serde::{Deserialize, Serialize};
 use std::path::Path;
 
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct Recipe {
+    pub name: String,
+    pub steps: Vec<String>,
+}
+
 #[derive(Debug, Clone, Serialize, Deserialize)]
 pub struct Config {
     pub repositories: Vec<Repository>,
+    #[serde(default)]
+    pub recipes: Vec<Recipe>,
 }
 
 impl Config {
@@ -175,9 +183,15 @@ impl Config {
     pub fn new() -> Self {
         Self {
             repositories: Vec::new(),
+            recipes: Vec::new(),
         }
     }
 
+    /// Find a recipe by name
+    pub fn find_recipe(&self, name: &str) -> Option<&Recipe> {
+        self.recipes.iter().find(|r| r.name == name)
+    }
+
     /// Alias for load method for backwards compatibility
     pub fn load_config(path: &str) -> Result<Self> {
         Self::load(path)
     }
@@ -248,6 +262,7 @@ mod tests {
 
         Config {
             repositories: vec![repo1, repo2],
+            recipes: Vec::new(),
         }
     }
diff --git a/src/config/mod.rs b/src/config/mod.rs
index e07d494..0c2b925 100644
--- a/src/config/mod.rs
+++ b/src/config/mod.rs
@@ -6,6 +6,6 @@ pub mod repository;
 pub mod validation;
 
 pub use builder::RepositoryBuilder;
-pub use loader::Config;
+pub use loader::{Config, Recipe};
 pub use repository::Repository;
 pub use validation::ConfigValidator;
diff --git a/src/main.rs b/src/main.rs
index 3f2ea75..9c84c79 100644
--- a/src/main.rs
+++ b/src/main.rs
@@ -44,7 +44,12 @@ enum Commands {
     /// Run a command in each repository
     Run {
         /// Command to execute
-        command: String,
+        #[arg(value_name = "COMMAND", help = "Command to execute")]
+        command: Option<String>,
+
+        /// Name of a recipe defined in config.yaml
+        #[arg(long, help = "Name of a recipe defined in config.yaml")]
+        recipe: Option<String>,
 
         /// Specific repository names to run command in (if not provided, uses tag filter or all repos)
         repos: Vec<String>,
@@ -239,6 +244,7 @@ async fn execute_builtin_command(command: Commands) -> Result<()> {
         }
         Commands::Run {
             command,
+            recipe,
             repos,
             config,
             tag,
@@ -255,13 +261,25 @@ async fn execute_builtin_command(command: Commands) -> Result<()> {
                 parallel,
                 repos: if repos.is_empty() { None } else { Some(repos) },
             };
-            RunCommand {
-                command,
-                no_save,
-                output_dir: output_dir.map(PathBuf::from),
+
+            // Validate that exactly one of command or recipe is provided
+            if command.is_none()
&& recipe.is_none() {
+                anyhow::bail!("Either --recipe or a command must be provided");
+            }
+
+            if command.is_some() && recipe.is_some() {
+                anyhow::bail!("Cannot specify both command and --recipe");
+            }
+
+            if let Some(cmd) = command {
+                RunCommand::new_command(cmd, no_save, output_dir.map(PathBuf::from))
+                    .execute(&context)
+                    .await?;
+            } else if let Some(recipe_name) = recipe {
+                RunCommand::new_recipe(recipe_name, no_save, output_dir.map(PathBuf::from))
+                    .execute(&context)
+                    .await?;
             }
-            .execute(&context)
-            .await?;
         }
         Commands::Pr {
             repos,
diff --git a/src/runner.rs b/src/runner.rs
index 9ebc22a..9bf5a73 100644
--- a/src/runner.rs
+++ b/src/runner.rs
@@ -3,11 +3,18 @@
 use crate::config::Repository;
 use crate::git::Logger;
 use anyhow::Result;
+use serde_json;
 use std::io::{BufRead, BufReader};
 use std::path::Path;
 use std::process::{Command, Stdio};
 
+#[derive(Debug, Clone)]
+struct RecipeContext {
+    name: String,
+    steps: Vec<String>,
+}
+
 #[derive(Default)]
 pub struct CommandRunner {
     logger: Logger,
@@ -18,6 +25,21 @@ impl CommandRunner {
         Self::default()
     }
 
+    /// Get a human-readable description of an exit code
+    fn get_exit_code_description(exit_code: i32) -> &'static str {
+        match exit_code {
+            0 => "success",
+            1 => "general error",
+            2 => "misuse of shell builtins",
+            126 => "command invoked cannot execute",
+            127 => "command not found",
+            128 => "invalid argument to exit",
+            130 => "script terminated by Control-C",
+            _ if exit_code > 128 => "terminated by signal",
+            _ => "error",
+        }
+    }
+
     /// Run command and capture output for the new logging system
     pub async fn run_command_with_capture(
         &self,
@@ -25,7 +47,24 @@ impl CommandRunner {
         command: &str,
         log_dir: Option<&str>,
     ) -> Result<(String, String, i32)> {
-        self.run_command_with_capture_internal(repo, command, log_dir, false)
+        self.run_command_with_capture_internal(repo, command, log_dir, false, None)
             .await
+    }
+
+    /// Run command with recipe context and capture output for the new logging system
+    pub
async fn run_command_with_recipe_context(
+        &self,
+        repo: &Repository,
+        command: &str,
+        log_dir: Option<&str>,
+        recipe_name: &str,
+        recipe_steps: &[String],
+    ) -> Result<(String, String, i32)> {
+        let recipe_context = Some(RecipeContext {
+            name: recipe_name.to_string(),
+            steps: recipe_steps.to_vec(),
+        });
+        self.run_command_with_capture_internal(repo, command, log_dir, false, recipe_context)
             .await
     }
 
@@ -36,7 +75,7 @@ impl CommandRunner {
         command: &str,
         log_dir: Option<&str>,
     ) -> Result<(String, String, i32)> {
-        self.run_command_with_capture_internal(repo, command, log_dir, true)
+        self.run_command_with_capture_internal(repo, command, log_dir, true, None)
             .await
     }
 
@@ -45,8 +84,9 @@ impl CommandRunner {
         &self,
         repo: &Repository,
         command: &str,
-        _log_dir: Option<&str>,
-        _skip_log_file: bool,
+        log_dir: Option<&str>,
+        skip_log_file: bool,
+        recipe_context: Option<RecipeContext>,
     ) -> Result<(String, String, i32)> {
         let repo_dir = repo.get_target_dir();
 
@@ -55,7 +95,6 @@ impl CommandRunner {
             anyhow::bail!("Repository directory does not exist: {}", repo_dir);
         }
 
-        // No longer create log files - all output handled by persist system
         self.logger.info(repo, &format!("Running '{command}'"));
 
         // Execute command
@@ -108,6 +147,69 @@ impl CommandRunner {
         let status = cmd.wait()?;
         let exit_code = status.code().unwrap_or(-1);
 
+        // Save output to files if log directory is provided and not skipping log files
+        if let Some(log_dir) = log_dir
+            && !skip_log_file
+        {
+            // Create repo-specific subdirectory
+            let repo_log_dir = Path::new(log_dir).join(&repo.name);
+            std::fs::create_dir_all(&repo_log_dir)?;
+
+            // Always write metadata file with command and exit code in JSON format
+            let exit_code_description = Self::get_exit_code_description(exit_code);
+            let metadata_content = if let Some(ref recipe_ctx) = recipe_context {
+                serde_json::json!({
+                    "recipe": recipe_ctx.name,
+                    "exit_code": exit_code,
+                    "exit_code_description": exit_code_description,
+                    "repository": repo.name,
+
"timestamp": chrono::Local::now().format("%Y-%m-%d %H:%M:%S").to_string(), + "recipe_steps": recipe_ctx.steps + }) + } else { + serde_json::json!({ + "command": command, + "exit_code": exit_code, + "exit_code_description": exit_code_description, + "repository": repo.name, + "timestamp": chrono::Local::now().format("%Y-%m-%d %H:%M:%S").to_string() + }) + }; + let metadata_file = repo_log_dir.join("metadata.json"); + std::fs::write( + &metadata_file, + serde_json::to_string_pretty(&metadata_content)?, + )?; + + // Write stdout to file (even if empty, to show it was captured) + let stdout_file = repo_log_dir.join("stdout.log"); + std::fs::write(&stdout_file, &stdout_content)?; + + // Write stderr to file (even if empty, to show it was captured) + let stderr_file = repo_log_dir.join("stderr.log"); + std::fs::write(&stderr_file, &stderr_content)?; + } + + // Log completion with exit code and description + let exit_code_description = Self::get_exit_code_description(exit_code); + if let Some(ref recipe_ctx) = recipe_context { + self.logger.info( + repo, + &format!( + "Recipe '{}' ended with exit code {} ({})", + recipe_ctx.name, exit_code, exit_code_description + ), + ); + } else { + self.logger.info( + repo, + &format!( + "Command '{}' ended with exit code {} ({})", + command, exit_code, exit_code_description + ), + ); + } + // Always return the captured output, regardless of exit code // This allows the caller to decide how to handle failures and still log the output Ok((stdout_content, stderr_content, exit_code)) @@ -126,7 +228,6 @@ impl CommandRunner { anyhow::bail!("Repository directory does not exist: {}", repo_dir); } - // No longer create log files - all output is handled by the new persist system self.logger.info(repo, &format!("Running '{command}'")); // Execute command @@ -136,11 +237,19 @@ impl CommandRunner { .current_dir(&repo_dir) .status()?; + let exit_code = status.code().unwrap_or(-1); + let exit_code_description = 
Self::get_exit_code_description(exit_code); + + self.logger.info( + repo, + &format!( + "Command '{}' ended with exit code {} ({})", + command, exit_code, exit_code_description + ), + ); + if !status.success() { - anyhow::bail!( - "Command failed with exit code: {}", - status.code().unwrap_or(-1) - ); + anyhow::bail!("Command failed with exit code: {}", exit_code); } Ok(()) @@ -295,24 +404,28 @@ mod tests { let log_dir_str = log_dir.to_string_lossy().to_string(); let result = runner - .run_command(&repo, "echo 'Logged output'", Some(&log_dir_str)) + .run_command_with_capture(&repo, "echo 'Logged output'", Some(&log_dir_str)) .await; assert!(result.is_ok()); - // No log files are created anymore - the persist system handles output capture - // The log_dir parameter is no longer used for file creation - let log_files: Vec<_> = if log_dir.exists() { - fs::read_dir(&log_dir) - .unwrap() - .filter_map(Result::ok) - .collect() - } else { - Vec::new() - }; - assert!( - log_files.is_empty(), - "No log files should be created - use persist system instead" - ); + // Verify log files are created in repo-specific subdirectory + let repo_log_dir = log_dir.join(&repo.name); + assert!(repo_log_dir.exists(), "Repo log directory should exist"); + + let stdout_file = repo_log_dir.join("stdout.log"); + let metadata_file = repo_log_dir.join("metadata.json"); + + assert!(stdout_file.exists(), "stdout.log should exist"); + assert!(metadata_file.exists(), "metadata.json should exist"); + + let stdout_content = std::fs::read_to_string(&stdout_file).unwrap(); + assert!(stdout_content.contains("Logged output")); + + let metadata_content = std::fs::read_to_string(&metadata_file).unwrap(); + let metadata: serde_json::Value = serde_json::from_str(&metadata_content).unwrap(); + assert_eq!(metadata["command"], "echo 'Logged output'"); + assert_eq!(metadata["exit_code"], 0); + assert_eq!(metadata["exit_code_description"], "success"); } #[tokio::test] @@ -325,7 +438,7 @@ mod tests { let 
log_dir_str = log_dir.to_string_lossy().to_string(); let result = runner - .run_command( + .run_command_with_capture( &repo, "echo 'stdout message'; echo 'stderr message' >&2", Some(&log_dir_str), @@ -333,19 +446,28 @@ mod tests { .await; assert!(result.is_ok()); - // Verify no log files are created (we now use persist system instead) - let log_files: Vec<_> = if log_dir.exists() { - fs::read_dir(&log_dir) - .unwrap() - .filter_map(Result::ok) - .collect() - } else { - Vec::new() - }; - assert!( - log_files.is_empty(), - "No log files should be created with new persist-only system" - ); + // Verify log files are created with proper content + let repo_log_dir = log_dir.join(&repo.name); + assert!(repo_log_dir.exists(), "Repo log directory should exist"); + + let stdout_file = repo_log_dir.join("stdout.log"); + let stderr_file = repo_log_dir.join("stderr.log"); + let metadata_file = repo_log_dir.join("metadata.json"); + + assert!(stdout_file.exists(), "stdout.log should exist"); + assert!(stderr_file.exists(), "stderr.log should exist"); + assert!(metadata_file.exists(), "metadata.json should exist"); + + let stdout_content = std::fs::read_to_string(&stdout_file).unwrap(); + assert!(stdout_content.contains("stdout message")); + + let stderr_content = std::fs::read_to_string(&stderr_file).unwrap(); + assert!(stderr_content.contains("stderr message")); + + let metadata_content = std::fs::read_to_string(&metadata_file).unwrap(); + let metadata: serde_json::Value = serde_json::from_str(&metadata_content).unwrap(); + assert_eq!(metadata["exit_code"], 0); + assert_eq!(metadata["exit_code_description"], "success"); } #[tokio::test] diff --git a/tests/cli_tests.rs b/tests/cli_tests.rs index 381b987..26d68e1 100644 --- a/tests/cli_tests.rs +++ b/tests/cli_tests.rs @@ -39,18 +39,6 @@ fn test_clone_command_missing_config() { assert!(stderr.contains("No such file") || stderr.contains("not found")); } -#[test] -fn test_run_command_missing_command_arg() { - let output = 
Command::new("cargo") - .args(["run", "--", "run"]) - .output() - .expect("Failed to execute cargo run"); - - assert!(!output.status.success()); - let stderr = String::from_utf8_lossy(&output.stderr); - assert!(stderr.contains("required") || stderr.contains("missing")); -} - #[test] fn test_pr_command_missing_required_args() { let output = Command::new("cargo") diff --git a/tests/init_command_tests.rs b/tests/init_command_tests.rs index 907934e..92034b2 100644 --- a/tests/init_command_tests.rs +++ b/tests/init_command_tests.rs @@ -201,6 +201,7 @@ async fn test_init_command_supplement_with_duplicate_repository() { "test-repo".to_string(), "git@github.com:owner/test-repo.git".to_string(), )], + recipes: vec![], }; existing_config .save(&output_path.to_string_lossy()) @@ -250,6 +251,7 @@ async fn test_init_command_supplement_with_new_repository() { "existing-repo".to_string(), "git@github.com:owner/existing-repo.git".to_string(), )], + recipes: vec![], }; existing_config .save(&output_path.to_string_lossy()) diff --git a/tests/plugin_tests.rs b/tests/plugin_tests.rs index dbbc5b2..fa4aad2 100644 --- a/tests/plugin_tests.rs +++ b/tests/plugin_tests.rs @@ -24,21 +24,9 @@ exit 0 perms.set_mode(0o755); fs::set_permissions(&plugin_path, perms).unwrap(); - // Build the project - let output = Command::new("cargo") - .args(["build", "--quiet"]) - .output() - .expect("Failed to build project"); - - assert!( - output.status.success(), - "Failed to build: {}", - String::from_utf8_lossy(&output.stderr) - ); - // Test list-plugins with our mock plugin - let output = Command::new("./target/debug/repos") - .arg("--list-plugins") + let output = Command::new("cargo") + .args(["run", "--", "--list-plugins"]) .env( "PATH", format!( @@ -56,8 +44,8 @@ exit 0 assert!(stdout.contains("health")); // Test calling the external plugin - let output = Command::new("./target/debug/repos") - .args(["health", "--test", "argument"]) + let output = Command::new("cargo") + .args(["run", "--", "health", 
"--test", "argument"]) .env( "PATH", format!( @@ -75,8 +63,8 @@ exit 0 assert!(stdout.contains("Args: --test argument")); // Test non-existent plugin - let output = Command::new("./target/debug/repos") - .arg("nonexistent") + let output = Command::new("cargo") + .args(["run", "--", "nonexistent"]) .output() .expect("Failed to run nonexistent plugin"); @@ -88,16 +76,10 @@ exit 0 #[test] fn test_builtin_commands_still_work() { // Ensure built-in commands are not affected by plugin system - let output = Command::new("cargo") - .args(["build", "--quiet"]) - .output() - .expect("Failed to build project"); - - assert!(output.status.success()); // Test help command - let output = Command::new("./target/debug/repos") - .arg("--help") + let output = Command::new("cargo") + .args(["run", "--", "--help"]) .output() .expect("Failed to run help"); @@ -108,9 +90,17 @@ fn test_builtin_commands_still_work() { assert!(stdout.contains("clone")); // Test list-plugins when no plugins are available - let output = Command::new("./target/debug/repos") - .arg("--list-plugins") - .env("PATH", "/nonexistent") // Empty PATH + let temp_empty_dir = TempDir::new().unwrap(); + let output = Command::new("cargo") + .args(["run", "--", "--list-plugins"]) + .env( + "PATH", + format!( + "{}:{}", + temp_empty_dir.path().display(), + std::env::var("PATH").unwrap_or_default() + ), + ) .output() .expect("Failed to run list-plugins"); diff --git a/tests/pr_command_tests.rs b/tests/pr_command_tests.rs index 40ea29a..9889007 100644 --- a/tests/pr_command_tests.rs +++ b/tests/pr_command_tests.rs @@ -30,6 +30,7 @@ fn create_test_config() -> Config { Config { repositories: vec![repo1, repo2, repo3], + recipes: vec![], } } @@ -169,6 +170,7 @@ async fn test_pr_command_no_matching_repositories() { async fn test_pr_command_empty_repositories() { let config = Config { repositories: vec![], + recipes: vec![], }; let context = create_test_context(config, vec![], vec![], None, false); diff --git 
a/tests/run_command_tests.rs b/tests/run_command_tests.rs index 59f0ec5..c610d80 100644 --- a/tests/run_command_tests.rs +++ b/tests/run_command_tests.rs @@ -1,13 +1,20 @@ use repos::{ - commands::{Command, CommandContext, run::RunCommand}, - config::{Config, Repository}, + commands::{ + Command, CommandContext, + run::{RunCommand, RunType}, + }, + config::{Config, Recipe, Repository}, }; use std::fs; use std::path::PathBuf; use std::process::Command as ProcessCommand; use tempfile::TempDir; -/// Helper function to create a git repository in a directory +// ================================= +// ===== Helper Functions +// ================================= + +/// Creates a git repository in the specified directory with proper git configuration. fn create_git_repo(path: &std::path::Path) -> std::io::Result<()> { // Initialize git repo ProcessCommand::new("git") @@ -42,95 +49,35 @@ fn create_git_repo(path: &std::path::Path) -> std::io::Result<()> { Ok(()) } -#[tokio::test] -async fn test_run_command_basic_creation() { - let command = RunCommand { - command: "echo hello".to_string(), - no_save: false, - output_dir: None, - }; - - assert_eq!(command.command, "echo hello"); - assert!(!command.no_save); - assert!(command.output_dir.is_none()); -} - -#[tokio::test] -async fn test_run_command_with_custom_output_dir() { - let output_dir = PathBuf::from("/tmp/custom"); - let command = RunCommand { - command: "ls".to_string(), - no_save: false, - output_dir: Some(output_dir.clone()), - }; - - assert_eq!(command.command, "ls"); - assert!(!command.no_save); - assert_eq!(command.output_dir, Some(output_dir)); -} - -#[tokio::test] -async fn test_run_command_no_save_mode() { - let command = RunCommand { - command: "pwd".to_string(), - no_save: true, - output_dir: None, - }; - - assert_eq!(command.command, "pwd"); - assert!(command.no_save); - assert!(command.output_dir.is_none()); -} - -#[tokio::test] -async fn test_run_command_empty_repositories() { - let command = RunCommand { - 
command: "echo test".to_string(), - no_save: true, - output_dir: None, - }; - - let context = CommandContext { - config: Config::new(), // Empty config - tag: vec![], - exclude_tag: vec![], - parallel: false, - repos: None, - }; - - let result = command.execute(&context).await; - assert!(result.is_ok()); // Should succeed with empty repos -} - -#[tokio::test] -async fn test_run_command_basic_execution() { +/// Creates a basic single-repo test setup with a recipe and default CommandContext. +fn setup_recipe_test( + repo_name: &str, + recipe_name: &str, + steps: Vec<&str>, +) -> (TempDir, Repository, Recipe, CommandContext) { let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - // Create a test repository directory - let repo_dir = temp_dir.path().join("test-repo"); + let repo_dir = temp_dir.path().join(repo_name); fs::create_dir_all(&repo_dir).unwrap(); create_git_repo(&repo_dir).unwrap(); let repo = Repository { - name: "test-repo".to_string(), - url: "https://github.com/user/test-repo.git".to_string(), + name: repo_name.to_string(), + url: format!("https://github.com/user/{}.git", repo_name), tags: vec!["test".to_string()], path: Some(repo_dir.to_string_lossy().to_string()), branch: None, config_dir: None, }; - let command = RunCommand { - command: "echo hello".to_string(), - no_save: true, - output_dir: None, + let recipe = Recipe { + name: recipe_name.to_string(), + steps: steps.into_iter().map(|s| s.to_string()).collect(), }; let context = CommandContext { config: Config { - repositories: vec![repo], + repositories: vec![repo.clone()], + recipes: vec![recipe.clone()], }, tag: vec![], exclude_tag: vec![], @@ -138,379 +85,310 @@ async fn test_run_command_basic_execution() { parallel: false, }; - let result = command.execute(&context).await; - assert!(result.is_ok()); + (temp_dir, repo, recipe, context) } -#[tokio::test] -async fn test_run_command_no_matching_repos() { +/// Creates a 
basic single-repo test setup with default CommandContext. +fn setup_basic_test(repo_name: &str) -> (TempDir, Repository, CommandContext) { let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - // Create a test repository directory - let repo_dir = temp_dir.path().join("test-repo"); + let repo_dir = temp_dir.path().join(repo_name); fs::create_dir_all(&repo_dir).unwrap(); create_git_repo(&repo_dir).unwrap(); let repo = Repository { - name: "test-repo".to_string(), - url: "https://github.com/user/test-repo.git".to_string(), + name: repo_name.to_string(), + url: format!("https://github.com/user/{}.git", repo_name), tags: vec!["test".to_string()], path: Some(repo_dir.to_string_lossy().to_string()), branch: None, config_dir: None, }; - let command = RunCommand { - command: "echo hello".to_string(), - no_save: true, - output_dir: None, - }; - - // Use a tag that doesn't match any repo let context = CommandContext { config: Config { - repositories: vec![repo], + repositories: vec![repo.clone()], + recipes: vec![], }, - tag: vec!["nonexistent".to_string()], + tag: vec![], exclude_tag: vec![], repos: None, parallel: false, }; - let result = command.execute(&context).await; - assert!(result.is_ok()); + (temp_dir, repo, context) } -#[tokio::test] -async fn test_run_command_with_specific_repos() { +/// Creates a parallel execution test setup with two repositories. 
+fn setup_parallel_test( + repo1_name: &str, + repo2_name: &str, +) -> (TempDir, Vec<Repository>, CommandContext) { let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - let repo_dir1 = temp_dir.path().join("test-repo1"); - fs::create_dir_all(&repo_dir1).unwrap(); - create_git_repo(&repo_dir1).unwrap(); - - let repo_dir2 = temp_dir.path().join("test-repo2"); - fs::create_dir_all(&repo_dir2).unwrap(); - create_git_repo(&repo_dir2).unwrap(); + let repo1_dir = temp_dir.path().join(repo1_name); + fs::create_dir_all(&repo1_dir).unwrap(); + create_git_repo(&repo1_dir).unwrap(); let repo1 = Repository { - name: "test-repo1".to_string(), - url: "https://github.com/user/test-repo1.git".to_string(), + name: repo1_name.to_string(), + url: format!("https://github.com/user/{}.git", repo1_name), tags: vec!["test".to_string()], - path: Some(repo_dir1.to_string_lossy().to_string()), + path: Some(repo1_dir.to_string_lossy().to_string()), branch: None, config_dir: None, }; + let repo2_dir = temp_dir.path().join(repo2_name); + fs::create_dir_all(&repo2_dir).unwrap(); + create_git_repo(&repo2_dir).unwrap(); let repo2 = Repository { - name: "test-repo2".to_string(), - url: "https://github.com/user/test-repo2.git".to_string(), - tags: vec!["other".to_string()], - path: Some(repo_dir2.to_string_lossy().to_string()), + name: repo2_name.to_string(), + url: format!("https://github.com/user/{}.git", repo2_name), + tags: vec!["test".to_string()], + path: Some(repo2_dir.to_string_lossy().to_string()), branch: None, config_dir: None, }; - let command = RunCommand { - command: "echo hello".to_string(), - no_save: true, - output_dir: None, - }; - + let repos = vec![repo1, repo2]; let context = CommandContext { config: Config { - repositories: vec![repo1, repo2], + repositories: repos.clone(), + recipes: vec![], }, tag: vec![], exclude_tag: vec![], - repos: Some(vec!["test-repo1".to_string()]), - parallel: false, + repos: None, +
parallel: true, }; - let result = command.execute(&context).await; - assert!(result.is_ok()); + (temp_dir, repos, context) } -#[tokio::test] -async fn test_run_command_with_tag_filter() { - let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - let repo_dir1 = temp_dir.path().join("backend-repo"); - fs::create_dir_all(&repo_dir1).unwrap(); - create_git_repo(&repo_dir1).unwrap(); - - let repo_dir2 = temp_dir.path().join("frontend-repo"); - fs::create_dir_all(&repo_dir2).unwrap(); - create_git_repo(&repo_dir2).unwrap(); +/// Creates a repository with specific tags for tag filtering tests. +fn create_tagged_repo_setup( + temp_dir: &TempDir, + repo_name: &str, + tags: Vec<&str>, +) -> (std::path::PathBuf, Repository) { + let repo_dir = temp_dir.path().join(repo_name); + fs::create_dir_all(&repo_dir).unwrap(); + create_git_repo(&repo_dir).unwrap(); - let backend_repo = Repository { - name: "backend-repo".to_string(), - url: "https://github.com/user/backend-repo.git".to_string(), - tags: vec!["backend".to_string(), "rust".to_string()], - path: Some(repo_dir1.to_string_lossy().to_string()), + let repo = Repository { + name: repo_name.to_string(), + url: format!("https://github.com/user/{}.git", repo_name), + tags: tags.into_iter().map(|s| s.to_string()).collect(), + path: Some(repo_dir.to_string_lossy().to_string()), branch: None, config_dir: None, }; - let frontend_repo = Repository { - name: "frontend-repo".to_string(), - url: "https://github.com/user/frontend-repo.git".to_string(), - tags: vec!["frontend".to_string(), "javascript".to_string()], - path: Some(repo_dir2.to_string_lossy().to_string()), - branch: None, - config_dir: None, - }; + (repo_dir, repo) +} + +/// Builder pattern for creating CommandContext with sensible defaults. 
+struct CommandContextBuilder { + repositories: Vec<Repository>, + recipes: Vec<Recipe>, + tag: Vec<String>, + exclude_tag: Vec<String>, + repos: Option<Vec<String>>, + parallel: bool, +} + +impl CommandContextBuilder { + fn new() -> Self { + Self { + repositories: vec![], + recipes: vec![], + tag: vec![], + exclude_tag: vec![], + repos: None, + parallel: false, + } + } + + fn with_repositories(mut self, repositories: Vec<Repository>) -> Self { + self.repositories = repositories; + self + } + + fn with_tags(mut self, tags: Vec<&str>) -> Self { + self.tag = tags.into_iter().map(|s| s.to_string()).collect(); + self + } + + fn build(self) -> CommandContext { + CommandContext { + config: Config { + repositories: self.repositories, + recipes: self.recipes, + }, + tag: self.tag, + exclude_tag: self.exclude_tag, + repos: self.repos, + parallel: self.parallel, + } + } +} +// ================================= +// ===== Tests +// ================================= + +/// Test basic RunCommand creation with command +#[tokio::test] +async fn test_run_command_creation() { let command = RunCommand { - command: "echo hello".to_string(), + run_type: RunType::Command("echo hello".to_string()), no_save: true, output_dir: None, }; - let context = CommandContext { - config: Config { - repositories: vec![backend_repo, frontend_repo], - }, - tag: vec!["backend".to_string()], - exclude_tag: vec![], - repos: None, - parallel: false, - }; - - let result = command.execute(&context).await; - assert!(result.is_ok()); + // Test that the run_type contains the right command + match &command.run_type { + RunType::Command(cmd) => assert_eq!(cmd, "echo hello"), + RunType::Recipe(_) => panic!("Expected Command variant"), + } + assert!(command.no_save); + assert!(command.output_dir.is_none()); } +/// Test recipe variant creation #[tokio::test] -async fn test_run_command_parallel_execution() { - let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - let repo_dir1 =
temp_dir.path().join("test-repo1"); - fs::create_dir_all(&repo_dir1).unwrap(); - create_git_repo(&repo_dir1).unwrap(); +async fn test_run_command_recipe_creation() { + let command = RunCommand { + run_type: RunType::Recipe("test-recipe".to_string()), + no_save: false, + output_dir: None, + }; - let repo_dir2 = temp_dir.path().join("test-repo2"); - fs::create_dir_all(&repo_dir2).unwrap(); - create_git_repo(&repo_dir2).unwrap(); + match &command.run_type { + RunType::Recipe(recipe) => assert_eq!(recipe, "test-recipe"), + RunType::Command(_) => panic!("Expected Recipe variant"), + } + assert!(!command.no_save); +} - let repo1 = Repository { - name: "test-repo1".to_string(), - url: "https://github.com/user/test-repo1.git".to_string(), - tags: vec!["test".to_string()], - path: Some(repo_dir1.to_string_lossy().to_string()), - branch: None, - config_dir: None, +#[tokio::test] +async fn test_run_command_with_custom_output_dir() { + let output_dir = PathBuf::from("/tmp/custom"); + let command = RunCommand { + run_type: RunType::Command("ls".to_string()), + no_save: false, + output_dir: Some(output_dir.clone()), }; - let repo2 = Repository { - name: "test-repo2".to_string(), - url: "https://github.com/user/test-repo2.git".to_string(), - tags: vec!["test".to_string()], - path: Some(repo_dir2.to_string_lossy().to_string()), - branch: None, - config_dir: None, - }; + match &command.run_type { + RunType::Command(cmd) => assert_eq!(cmd, "ls"), + RunType::Recipe(_) => panic!("Expected Command variant"), + } + assert!(!command.no_save); + assert_eq!(command.output_dir, Some(output_dir)); +} +#[tokio::test] +async fn test_run_command_empty_repositories() { let command = RunCommand { - command: "echo hello".to_string(), + run_type: RunType::Command("echo test".to_string()), no_save: true, output_dir: None, }; let context = CommandContext { config: Config { - repositories: vec![repo1, repo2], + repositories: vec![], + recipes: vec![], }, tag: vec![], exclude_tag: vec![], + parallel: 
false, repos: None, - parallel: true, }; let result = command.execute(&context).await; - assert!(result.is_ok()); + assert!(result.is_ok()); // Should succeed with empty repos } #[tokio::test] -async fn test_run_command_tag_filter_no_match() { - let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - let repo_dir = temp_dir.path().join("backend-repo"); - fs::create_dir_all(&repo_dir).unwrap(); - create_git_repo(&repo_dir).unwrap(); - - let backend_repo = Repository { - name: "backend-repo".to_string(), - url: "https://github.com/user/backend-repo.git".to_string(), - tags: vec!["backend".to_string(), "rust".to_string()], - path: Some(repo_dir.to_string_lossy().to_string()), - branch: None, - config_dir: None, - }; +async fn test_run_command_basic_execution() { + let (_temp_dir, _repo, context) = setup_basic_test("test-repo"); let command = RunCommand { - command: "echo hello".to_string(), + run_type: RunType::Command("echo hello".to_string()), no_save: true, output_dir: None, }; - let context = CommandContext { - config: Config { - repositories: vec![backend_repo], - }, - tag: vec!["frontend".to_string()], // Non-matching tag - exclude_tag: vec![], - repos: None, - parallel: false, - }; - let result = command.execute(&context).await; assert!(result.is_ok()); } #[tokio::test] -async fn test_run_command_error_handling() { - let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - let repo_dir = temp_dir.path().join("test-repo"); - fs::create_dir_all(&repo_dir).unwrap(); - create_git_repo(&repo_dir).unwrap(); - - let repo = Repository { - name: "test-repo".to_string(), - url: "https://github.com/user/test-repo.git".to_string(), - tags: vec!["test".to_string()], - path: Some(repo_dir.to_string_lossy().to_string()), - branch: None, - config_dir: None, - }; +async fn test_run_command_parallel_execution() { + let (_temp_dir, 
_repos, context) = setup_parallel_test("test-repo1", "test-repo2"); let command = RunCommand { - command: "false".to_string(), // Command that will fail + run_type: RunType::Command("echo hello".to_string()), no_save: true, output_dir: None, }; - let context = CommandContext { - config: Config { - repositories: vec![repo], - }, - tag: vec![], - exclude_tag: vec![], - repos: None, - parallel: false, - }; - let result = command.execute(&context).await; - // The command should fail when all individual commands fail - assert!(result.is_err()); + assert!(result.is_ok()); } #[tokio::test] -async fn test_run_command_file_operations() { +async fn test_run_command_with_tag_filter() { let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - let repo_dir = temp_dir.path().join("test-repo"); - fs::create_dir_all(&repo_dir).unwrap(); - create_git_repo(&repo_dir).unwrap(); - - let repo = Repository { - name: "test-repo".to_string(), - url: "https://github.com/user/test-repo.git".to_string(), - tags: vec!["test".to_string()], - path: Some(repo_dir.to_string_lossy().to_string()), - branch: None, - config_dir: None, - }; + let (_backend_dir, backend_repo) = + create_tagged_repo_setup(&temp_dir, "backend-repo", vec!["backend", "rust"]); + let (_frontend_dir, frontend_repo) = + create_tagged_repo_setup(&temp_dir, "frontend-repo", vec!["frontend", "javascript"]); - // Test command that creates a file let command = RunCommand { - command: "touch test-file.txt".to_string(), + run_type: RunType::Command("echo hello".to_string()), no_save: true, output_dir: None, }; - let context = CommandContext { - config: Config { - repositories: vec![repo], - }, - tag: vec![], - exclude_tag: vec![], - repos: None, - parallel: false, - }; + let context = CommandContextBuilder::new() + .with_repositories(vec![backend_repo, frontend_repo]) + .with_tags(vec!["backend"]) + .build(); let result = command.execute(&context).await; 
assert!(result.is_ok()); - - // Verify the file was created - assert!(repo_dir.join("test-file.txt").exists()); } #[tokio::test] -async fn test_run_command_with_multiple_tags() { - let temp_dir = TempDir::new().unwrap(); - let log_dir = temp_dir.path().join("logs"); - fs::create_dir_all(&log_dir).unwrap(); - - let repo_dir = temp_dir.path().join("backend-repo"); - fs::create_dir_all(&repo_dir).unwrap(); - create_git_repo(&repo_dir).unwrap(); - - let backend_repo = Repository { - name: "backend-repo".to_string(), - url: "https://github.com/user/backend-repo.git".to_string(), - tags: vec!["backend".to_string(), "rust".to_string()], - path: Some(repo_dir.to_string_lossy().to_string()), - branch: None, - config_dir: None, - }; +async fn test_run_command_error_handling() { + let (_temp_dir, _repo, context) = setup_basic_test("test-repo"); let command = RunCommand { - command: "echo hello".to_string(), + run_type: RunType::Command("false".to_string()), // Command that will fail no_save: true, output_dir: None, }; - // Test with multiple matching tags - let context = CommandContext { - config: Config { - repositories: vec![backend_repo], - }, - tag: vec!["backend".to_string()], - exclude_tag: vec![], - repos: None, - parallel: false, - }; - let result = command.execute(&context).await; - assert!(result.is_ok()); + // The command should fail when all individual commands fail + assert!(result.is_err()); } #[tokio::test] async fn test_run_command_with_special_characters() { let command = RunCommand { - command: "echo \"test with spaces and symbols: @#$%\"".to_string(), + run_type: RunType::Command("echo \"test with spaces and symbols: @#$%\"".to_string()), no_save: true, output_dir: None, }; let context = CommandContext { - config: Config::new(), + config: Config { + repositories: vec![], + recipes: vec![], + }, tag: vec![], exclude_tag: vec![], parallel: false, @@ -521,127 +399,1029 @@ async fn test_run_command_with_special_characters() { assert!(result.is_ok()); } +// 
===== Additional Edge Case Tests for Coverage ===== + #[tokio::test] -async fn test_run_command_parallel_mode() { +async fn test_run_command_error_no_command_nor_recipe() { let command = RunCommand { - command: "echo parallel test".to_string(), + run_type: RunType::Command("".to_string()), // Empty command no_save: true, output_dir: None, }; let context = CommandContext { - config: Config::new(), + config: Config { + repositories: vec![], + recipes: vec![], + }, tag: vec![], exclude_tag: vec![], - parallel: true, // Test parallel execution + parallel: false, repos: None, }; let result = command.execute(&context).await; - assert!(result.is_ok()); + assert!(result.is_ok()); // Should succeed with empty repos } #[tokio::test] -async fn test_run_command_comprehensive() { - // Test all options together with real git repositories +async fn test_run_command_existing_output_dir() { let temp_dir = TempDir::new().unwrap(); + let output_dir = temp_dir.path().join("already_exists"); + fs::create_dir_all(&output_dir).unwrap(); // Pre-create the directory - // Create multiple test repos - let repo_dir1 = temp_dir.path().join("comprehensive-repo1"); - fs::create_dir_all(&repo_dir1).unwrap(); - create_git_repo(&repo_dir1).unwrap(); - - let repo_dir2 = temp_dir.path().join("comprehensive-repo2"); - fs::create_dir_all(&repo_dir2).unwrap(); - create_git_repo(&repo_dir2).unwrap(); + let (_temp_dir, _repo, context) = setup_basic_test("test-repo"); let command = RunCommand { - command: "echo comprehensive test".to_string(), + run_type: RunType::Command("echo existing_out_dir".to_string()), no_save: false, - output_dir: Some(temp_dir.path().to_path_buf()), - }; - - let config = Config { - repositories: vec![ - Repository { - name: "comprehensive-repo1".to_string(), - url: "https://github.com/test/comprehensive1.git".to_string(), - tags: vec!["backend".to_string()], - path: Some(repo_dir1.to_string_lossy().to_string()), - branch: None, - config_dir: None, - }, - Repository { - name: 
"comprehensive-repo2".to_string(), - url: "https://github.com/test/comprehensive2.git".to_string(), - tags: vec!["frontend".to_string()], - path: Some(repo_dir2.to_string_lossy().to_string()), - branch: None, - config_dir: None, - }, - ], + output_dir: Some(output_dir.clone()), }; + let result = command.execute(&context).await; + assert!(result.is_ok()); + assert!(output_dir.exists(), "Output dir should remain"); +} + +#[tokio::test] +async fn test_run_recipe_without_shebang_implicit_shell() { + let (_temp_dir, _repo, _recipe, context) = + setup_recipe_test("test-repo", "no-shebang", vec!["echo IMPLICIT_SHELL_OK"]); + + let command = RunCommand { + run_type: RunType::Recipe("no-shebang".to_string()), + no_save: true, + output_dir: None, + }; + + let result = command.execute(&context).await; + assert!(result.is_ok()); +} + +#[tokio::test] +async fn test_run_recipe_parallel_failure_branch() { + let (_temp_dir, _repos, context) = setup_parallel_test("repo1", "repo2"); + + // First step succeeds; second step uses a definitely missing command to force non-zero exit. 
+ let recipe = Recipe { + name: "parallel-failure".to_string(), + steps: vec![ + "echo FIRST".to_string(), + "this-command-should-not-exist-12345".to_string(), + ], + }; + + // Update context to include the recipe let context = CommandContext { - config, - tag: vec!["backend".to_string()], // Should filter to repo1 only + config: Config { + repositories: context.config.repositories, + recipes: vec![recipe], + }, + tag: context.tag, + exclude_tag: context.exclude_tag, + parallel: true, // Enable parallel execution + repos: context.repos, + }; + + let command = RunCommand { + run_type: RunType::Recipe("parallel-failure".to_string()), + no_save: true, + output_dir: None, + }; + + let result = command.execute(&context).await; + assert!( + result.is_ok(), + "Run returns Ok but individual failures should be logged internally" + ); +} + +#[tokio::test] +async fn test_run_command_skip_save_branch() { + let (_temp_dir, _repo, context) = setup_basic_test("test-repo"); + + let command = RunCommand { + run_type: RunType::Command("echo SKIP_SAVE_MODE".to_string()), + no_save: true, // Skip save mode + output_dir: None, + }; + + let result = command.execute(&context).await; + assert!(result.is_ok()); +} + +#[tokio::test] +async fn test_run_long_command_name_sanitization() { + let temp_dir = TempDir::new().unwrap(); + let (_temp_dir, _repo, context) = setup_basic_test("test-repo"); + + let long_cmd = "echo THIS_IS_A_REALLY_LONG_COMMAND_NAME_WITH_SPECIAL_CHARS_%_#_@_!_____END"; + let command = RunCommand { + run_type: RunType::Command(long_cmd.to_string()), + no_save: false, + output_dir: Some(temp_dir.path().join("long_cmd_output")), + }; + + let result = command.execute(&context).await; + assert!(result.is_ok()); + // Verify directory created (sanitized/truncated filename path executed) + assert!(temp_dir.path().join("long_cmd_output").exists()); +} + +#[tokio::test] +async fn test_run_recipe_script_creation_error_handling() { + let (_temp_dir, _repo, _recipe, context) = 
setup_recipe_test( + "test-repo", + "script-creation", + vec!["echo 'Testing script creation'", "echo 'Second step'"], + ); + + let command = RunCommand { + run_type: RunType::Recipe("script-creation".to_string()), + no_save: true, + output_dir: None, + }; + + let result = command.execute(&context).await; + assert!(result.is_ok()); +} + +#[tokio::test] +async fn test_run_recipe_with_readonly_directory() { + let (_temp_dir, _repo, _recipe, context) = setup_recipe_test( + "test-repo", + "readonly-test", + vec!["echo 'Testing readonly scenario'"], + ); + + let command = RunCommand { + run_type: RunType::Recipe("readonly-test".to_string()), + no_save: true, + output_dir: None, + }; + + let result = command.execute(&context).await; + assert!(result.is_ok()); +} + +// ===== Constructor Tests ===== + +#[tokio::test] +async fn test_run_command_new_command() { + let command = RunCommand::new_command("echo test".to_string(), true, None); + + match &command.run_type { + RunType::Command(cmd) => assert_eq!(cmd, "echo test"), + RunType::Recipe(_) => panic!("Expected Command variant"), + } + assert!(command.no_save); + assert!(command.output_dir.is_none()); +} + +#[tokio::test] +async fn test_run_command_new_recipe() { + let output_dir = Some(PathBuf::from("/tmp/recipes")); + let command = RunCommand::new_recipe("my-recipe".to_string(), false, output_dir.clone()); + + match &command.run_type { + RunType::Recipe(recipe) => assert_eq!(recipe, "my-recipe"), + RunType::Command(_) => panic!("Expected Recipe variant"), + } + assert!(!command.no_save); + assert_eq!(command.output_dir, output_dir); +} + +#[tokio::test] +async fn test_run_command_new_for_test() { + let command = RunCommand::new_for_test("test command".to_string(), "/tmp/test".to_string()); + + match &command.run_type { + RunType::Command(cmd) => assert_eq!(cmd, "test command"), + RunType::Recipe(_) => panic!("Expected Command variant"), + } + assert!(!command.no_save); + assert_eq!(command.output_dir, 
Some(PathBuf::from("/tmp/test"))); +} + +// ===== Recipe Execution Tests ===== + +#[tokio::test] +async fn test_run_command_recipe_execution() { + let recipe_steps = vec!["echo 'Hello from recipe'", "pwd"]; + let (_temp_dir, _repo, _recipe, context) = + setup_recipe_test("test-repo", "test-recipe", recipe_steps); + + let command = RunCommand { + run_type: RunType::Recipe("test-recipe".to_string()), + no_save: true, + output_dir: None, + }; + + let result = command.execute(&context).await; + assert!(result.is_ok()); +} + +#[tokio::test] +async fn test_run_command_recipe_not_found() { + let command = RunCommand { + run_type: RunType::Recipe("nonexistent-recipe".to_string()), + no_save: true, + output_dir: None, + }; + + let context = CommandContext { + config: Config { + repositories: vec![], + recipes: vec![], + }, + tag: vec![], exclude_tag: vec![], - parallel: true, repos: None, + parallel: false, + }; + + let result = command.execute(&context).await; + assert!(result.is_err()); + assert!( + result + .unwrap_err() + .to_string() + .contains("Recipe 'nonexistent-recipe' not found") + ); +} + +#[tokio::test] +async fn test_run_command_recipe_parallel_execution() { + let (_temp_dir, _repos, mut context) = setup_parallel_test("test-repo1", "test-repo2"); + + // Add the recipe for parallel execution + let recipe = Recipe { + name: "parallel-recipe".to_string(), + steps: vec!["echo 'Parallel recipe execution'".to_string()], + }; + context.config.recipes.push(recipe); + context.parallel = true; + + let command = RunCommand { + run_type: RunType::Recipe("parallel-recipe".to_string()), + no_save: true, + output_dir: None, }; let result = command.execute(&context).await; assert!(result.is_ok()); } +// ===== Exclude Tag Tests ===== + #[tokio::test] -async fn test_run_command_exclude_tag_filter() { +async fn test_run_command_with_exclude_tag() { + let (_temp_dir, repos, mut context) = setup_parallel_test("backend-repo", "frontend-repo"); + + // Customize the repository tags 
for this test
+    let mut backend_repo = repos[0].clone();
+    let mut frontend_repo = repos[1].clone();
+    backend_repo.tags = vec!["backend".to_string(), "rust".to_string()];
+    frontend_repo.tags = vec!["frontend".to_string(), "javascript".to_string()];
+
+    context.config.repositories = vec![backend_repo, frontend_repo];
+    context.exclude_tag = vec!["frontend".to_string()]; // Exclude frontend repos
+
+    let command = RunCommand {
+        run_type: RunType::Command("echo exclude_test".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Repo Name Filtering Tests =====
+
+#[tokio::test]
+async fn test_run_command_with_specific_repos() {
+    let (_temp_dir, repos, mut context) = setup_parallel_test("backend-repo", "frontend-repo");
+
+    // Customize the repository tags for this test
+    let mut backend_repo = repos[0].clone();
+    let mut frontend_repo = repos[1].clone();
+    backend_repo.tags = vec!["backend".to_string()];
+    frontend_repo.tags = vec!["frontend".to_string()];
+
+    context.config.repositories = vec![backend_repo, frontend_repo];
+    context.repos = Some(vec!["backend-repo".to_string()]); // Only run on backend-repo
+
+    let command = RunCommand {
+        run_type: RunType::Command("echo specific_repo_test".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Output Directory and File Tests =====
+
+#[tokio::test]
+async fn test_run_command_with_output_directory_creation() {
     let temp_dir = TempDir::new().unwrap();
-    let log_dir = temp_dir.path().join("logs");
-    fs::create_dir_all(&log_dir).unwrap();
+    let output_dir = temp_dir.path().join("custom_output");

-    let repo_dir1 = temp_dir.path().join("backend-repo");
+    let (_temp_dir, _repo, context) = setup_basic_test("test-repo");
+
+    let command = RunCommand {
+        run_type: RunType::Command("echo 'Testing output directory'".to_string()),
+        no_save: false, // Enable saving to test directory creation
+        output_dir: Some(output_dir.clone()),
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+
+    // Check that output directory was created
+    let runs_dir = output_dir.join("runs");
+    assert!(runs_dir.exists());
+
+    // Check that a timestamped subdirectory was created
+    let entries: Vec<_> = fs::read_dir(runs_dir).unwrap().collect();
+    assert!(!entries.is_empty());
+}
+
+// ===== Mixed Success/Failure Tests =====
+
+#[tokio::test]
+async fn test_run_command_mixed_success_failure_sequential() {
+    let temp_dir = TempDir::new().unwrap();
+    let repo_dir1 = temp_dir.path().join("good-repo");
     fs::create_dir_all(&repo_dir1).unwrap();
     create_git_repo(&repo_dir1).unwrap();

-    let repo_dir2 = temp_dir.path().join("frontend-repo");
-    fs::create_dir_all(&repo_dir2).unwrap();
-    create_git_repo(&repo_dir2).unwrap();
+    // Create a repo with a path that doesn't exist to cause failure
+    let bad_repo_path = temp_dir.path().join("nonexistent-path");

-    let backend_repo = Repository {
-        name: "backend-repo".to_string(),
-        url: "https://github.com/user/backend-repo.git".to_string(),
-        tags: vec!["backend".to_string(), "rust".to_string()],
+    let good_repo = Repository {
+        name: "good-repo".to_string(),
+        url: "https://github.com/user/good-repo.git".to_string(),
+        tags: vec!["test".to_string()],
         path: Some(repo_dir1.to_string_lossy().to_string()),
         branch: None,
         config_dir: None,
     };

-    let frontend_repo = Repository {
-        name: "frontend-repo".to_string(),
-        url: "https://github.com/user/frontend-repo.git".to_string(),
-        tags: vec!["frontend".to_string(), "javascript".to_string()],
-        path: Some(repo_dir2.to_string_lossy().to_string()),
+    let bad_repo = Repository {
+        name: "bad-repo".to_string(),
+        url: "https://github.com/user/bad-repo.git".to_string(),
+        tags: vec!["test".to_string()],
+        path: Some(bad_repo_path.to_string_lossy().to_string()),
         branch: None,
         config_dir: None,
     };

     let command = RunCommand {
-        command: "echo hello".to_string(),
+        run_type: RunType::Command("echo hello".to_string()),
         no_save: true,
         output_dir: None,
     };

     let context = CommandContext {
         config: Config {
-            repositories: vec![backend_repo, frontend_repo],
+            repositories: vec![good_repo, bad_repo],
+            recipes: vec![],
         },
         tag: vec![],
-        exclude_tag: vec!["frontend".to_string()], // Should exclude frontend repo
+        exclude_tag: vec![],
         repos: None,
         parallel: false,
     };

+    let result = command.execute(&context).await;
+    // Sequential execution should fail on first error
+    assert!(result.is_err());
+}
+
+// ===== Edge Cases and Complex Scenarios =====
+
+#[tokio::test]
+async fn test_run_command_empty_command_string() {
+    let command = RunCommand {
+        run_type: RunType::Command("".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let context = CommandContext {
+        config: Config {
+            repositories: vec![],
+            recipes: vec![],
+        },
+        tag: vec![],
+        exclude_tag: vec![],
+        parallel: false,
+        repos: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok()); // Should succeed with empty repos
+}
+
+// ===== Command Execution with Saving Tests =====
+
+#[tokio::test]
+async fn test_run_command_with_save_enabled() {
+    let temp_dir = TempDir::new().unwrap();
+    let output_dir = temp_dir.path().join("test_output");
+
+    let (_temp_dir, _repo, context) = setup_basic_test("test-repo");
+
+    let command = RunCommand {
+        run_type: RunType::Command("echo 'save test'".to_string()),
+        no_save: false, // Enable saving
+        output_dir: Some(output_dir.clone()),
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+
+    // Verify output directory structure was created
+    let runs_dir = output_dir.join("runs");
+    assert!(runs_dir.exists());
+}
+
+#[tokio::test]
+async fn test_run_command_with_save_default_output_dir() {
+    let (_temp_dir, _repo, context) = setup_basic_test("test-repo");
+
+    let command = RunCommand {
+        run_type: RunType::Command("echo 'default output test'".to_string()),
+        no_save: false, // Enable saving
+        output_dir: None, // Use default "output" directory
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+#[tokio::test]
+async fn test_run_command_parallel_with_save() {
+    let temp_dir = TempDir::new().unwrap();
+    let output_dir = temp_dir.path().join("parallel_output");
+
+    let (_temp_dir, _repos, mut context) = setup_parallel_test("test-repo1", "test-repo2");
+    context.parallel = true; // Enable parallel execution
+
+    let command = RunCommand {
+        run_type: RunType::Command("echo 'parallel save test'".to_string()),
+        no_save: false, // Enable saving
+        output_dir: Some(output_dir.clone()),
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+
+    // Verify output directory structure was created
+    let runs_dir = output_dir.join("runs");
+    assert!(runs_dir.exists());
+}
+
+#[tokio::test]
+async fn test_run_command_parallel_with_no_save() {
+    let (_temp_dir, _repos, mut context) = setup_parallel_test("test-repo1", "test-repo2");
+    context.parallel = true; // Enable parallel execution
+
+    let command = RunCommand {
+        run_type: RunType::Command("echo 'parallel no save test'".to_string()),
+        no_save: true, // Disable saving
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Recipe Execution with Saving Tests =====
+
+#[tokio::test]
+async fn test_run_command_recipe_with_save_enabled() {
+    let temp_dir = TempDir::new().unwrap();
+    let output_dir = temp_dir.path().join("recipe_output");
+
+    let recipe_steps = vec!["echo 'Recipe with save'", "pwd"];
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "save-recipe", recipe_steps);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("save-recipe".to_string()),
+        no_save: false, // Enable saving
+        output_dir: Some(output_dir.clone()),
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+
+    // Verify output directory structure was created
+    let runs_dir = output_dir.join("runs");
+    assert!(runs_dir.exists());
+}
+
+#[tokio::test]
+async fn test_run_command_recipe_parallel_with_save() {
+    let temp_dir = TempDir::new().unwrap();
+    let output_dir = temp_dir.path().join("recipe_parallel_output");
+
+    let (_temp_dir, _repos, mut context) = setup_parallel_test("test-repo1", "test-repo2");
+
+    // Add recipe for parallel execution
+    let recipe = Recipe {
+        name: "parallel-save-recipe".to_string(),
+        steps: vec!["echo 'Parallel recipe with save'".to_string()],
+    };
+    context.config.recipes.push(recipe);
+    context.parallel = true; // Enable parallel execution
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("parallel-save-recipe".to_string()),
+        no_save: false, // Enable saving
+        output_dir: Some(output_dir.clone()),
+    };
+
     let result = command.execute(&context).await;
     assert!(result.is_ok());
+
+    // Verify output directory structure was created
+    let runs_dir = output_dir.join("runs");
+    assert!(runs_dir.exists());
+}
+
+#[tokio::test]
+async fn test_run_command_recipe_parallel_with_no_save() {
+    let (_temp_dir, _repos, mut context) = setup_parallel_test("test-repo1", "test-repo2");
+
+    // Add recipe for parallel execution
+    let recipe = Recipe {
+        name: "parallel-no-save-recipe".to_string(),
+        steps: vec!["echo 'Parallel recipe without save'".to_string()],
+    };
+    context.config.recipes.push(recipe);
+    context.parallel = true; // Enable parallel execution
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("parallel-no-save-recipe".to_string()),
+        no_save: true, // Disable saving
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+#[tokio::test]
+async fn test_run_command_recipe_sequential_with_no_save() {
+    let recipe_steps = vec!["echo 'Sequential recipe without save'"];
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "sequential-no-save-recipe", recipe_steps);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("sequential-no-save-recipe".to_string()),
+        no_save: true, // Disable saving
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Script Materialization Tests =====
+
+#[tokio::test]
+async fn test_script_materialization_with_shebang() {
+    let recipe_steps = vec!["#!/bin/bash", "echo 'Script with shebang'"];
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "shebang-recipe", recipe_steps);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("shebang-recipe".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+#[tokio::test]
+async fn test_script_materialization_without_shebang() {
+    let recipe_steps = vec!["echo 'Script without shebang'", "pwd"];
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "no-shebang-recipe", recipe_steps);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("no-shebang-recipe".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+#[tokio::test]
+async fn test_sanitize_command_for_filename() {
+    let temp_dir = TempDir::new().unwrap();
+    let (_temp_dir, _repo, context) = setup_basic_test("test-repo");
+
+    // Command with special characters that need sanitization
+    let command = RunCommand {
+        run_type: RunType::Command("echo 'test with / \\ : * ? \" < > | characters'".to_string()),
+        no_save: false, // Enable saving to test sanitization
+        output_dir: Some(temp_dir.path().join("sanitize_test")),
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+#[tokio::test]
+async fn test_sanitize_script_name() {
+    let recipe_steps = vec!["echo 'Recipe with special name'"];
+    let (_temp_dir, _repo, mut _recipe, context) = setup_recipe_test(
+        "test-repo",
+        "Recipe-With.Special@Characters#And$Symbols%",
+        recipe_steps,
+    );
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("Recipe-With.Special@Characters#And$Symbols%".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Long Command/Recipe Name Tests =====
+
+#[tokio::test]
+async fn test_long_command_name_truncation() {
+    let temp_dir = TempDir::new().unwrap();
+    let (_temp_dir, _repo, context) = setup_basic_test("test-repo");
+
+    // Very long command that should be truncated for directory name
+    let long_command = "a".repeat(100);
+    let command = RunCommand {
+        run_type: RunType::Command(long_command),
+        no_save: false, // Enable saving to test truncation
+        output_dir: Some(temp_dir.path().join("long_command_test")),
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Recipe Error Handling Tests =====
+
+#[tokio::test]
+async fn test_recipe_sequential_execution_with_script_error() {
+    let recipe_steps = vec!["nonexistent_command_that_will_fail_12345"]; // Non-existent command
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "script-error-recipe", recipe_steps);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("script-error-recipe".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    // The recipe should succeed even if commands within it fail, based on current implementation
+    // This tests the behavior where script execution completes but commands inside may fail
+    assert!(result.is_ok());
+}
+
+// ===== Complex Path and Script Tests =====
+
+#[tokio::test]
+async fn test_recipe_script_path_resolution() {
+    let recipe_steps = vec!["pwd", "echo 'Path resolution test'"];
+    let (_temp_dir, _repo, _recipe, context) = setup_recipe_test(
+        "test-repo-with-complex-name",
+        "path-resolution-recipe",
+        recipe_steps,
+    );
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("path-resolution-recipe".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Empty Recipe Test =====
+
+#[tokio::test]
+async fn test_recipe_with_empty_steps() {
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "empty-recipe", vec![]);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("empty-recipe".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Script Creation and Permissions Tests =====
+
+#[tokio::test]
+async fn test_script_creation_with_various_contents() {
+    let recipe_steps = vec![
+        "echo 'Line 1'",
+        "echo 'Line 2'",
+        "if [ -f 'README.md' ]; then",
+        "    echo 'Found README.md'",
+        "fi",
+    ];
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "complex-script", recipe_steps);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("complex-script".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Error Path for Sequential Recipe Execution =====
+
+#[tokio::test]
+async fn test_recipe_sequential_execution_with_default_output() {
+    let (_temp_dir, _repo, _recipe, context) = setup_recipe_test(
+        "test-repo",
+        "default-output-recipe",
+        vec!["echo 'Testing default output directory'"],
+    );
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("default-output-recipe".to_string()),
+        no_save: false, // Enable saving with default output directory
+        output_dir: None, // Use default
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Multi-Step Recipe Tests =====
+
+#[tokio::test]
+async fn test_multi_step_recipe_sequential() {
+    let recipe_steps = vec![
+        "echo 'Step 1: Starting recipe'",
+        "echo 'Step 2: Middle of recipe'",
+        "echo 'Step 3: Ending recipe'",
+        "ls -la",
+    ];
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "multi-step-recipe", recipe_steps);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("multi-step-recipe".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Test Multi-Repository Recipe with Complex Names =====
+
+#[tokio::test]
+async fn test_recipe_multi_repo_complex_names() {
+    let (_temp_dir, _repos, mut context) =
+        setup_parallel_test("repo-with-dashes-1", "repo_with_underscores_2");
+
+    let recipe = Recipe {
+        name: "Complex-Recipe_Name.With@Special#Characters".to_string(),
+        steps: vec!["echo 'Complex recipe with multiple repos'".to_string()],
+    };
+    context.config.recipes.push(recipe);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("Complex-Recipe_Name.With@Special#Characters".to_string()),
+        no_save: true,
+        output_dir: None,
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+}
+
+// ===== Log File Creation and Content Verification Tests =====
+
+#[tokio::test]
+async fn test_run_command_creates_logs_with_content() {
+    let temp_dir = TempDir::new().unwrap();
+    let output_dir = temp_dir.path().join("log_test_output");
+
+    let (_temp_dir, _repo, context) = setup_basic_test("test-repo");
+
+    let test_output = "Hello from command test";
+    let command = RunCommand {
+        run_type: RunType::Command(format!("echo '{}'", test_output)),
+        no_save: false, // Enable saving to create log files
+        output_dir: Some(output_dir.clone()),
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+
+    // Check that output directory structure was created
+    let runs_dir = output_dir.join("runs");
+    assert!(runs_dir.exists(), "Runs directory should be created");
+
+    // Find the timestamped subdirectory
+    let entries: Vec<_> = fs::read_dir(&runs_dir).unwrap().collect();
+    assert!(
+        !entries.is_empty(),
+        "Should have at least one timestamped directory"
+    );
+
+    let timestamped_dir = entries
+        .into_iter()
+        .find(|entry| entry.as_ref().unwrap().file_type().unwrap().is_dir())
+        .unwrap()
+        .unwrap()
+        .path();
+
+    // Log files are in a subdirectory named after the repository
+    let repo_log_dir = timestamped_dir.join("test-repo");
+    assert!(
+        repo_log_dir.exists(),
+        "Repository log directory should exist"
+    );
+
+    // Check for log files in the repository subdirectory
+    let stdout_log = repo_log_dir.join("stdout.log");
+    let metadata_log = repo_log_dir.join("metadata.json");
+
+    assert!(stdout_log.exists(), "stdout.log should be created");
+    assert!(metadata_log.exists(), "metadata.json should be created");
+
+    // Verify log file contents
+    let stdout_content = fs::read_to_string(&stdout_log).unwrap();
+    let metadata_content = fs::read_to_string(&metadata_log).unwrap();
+
+    // stdout.log should contain the echo output
+    assert!(
+        stdout_content.contains(test_output),
+        "stdout.log should contain command output: '{}', but was: '{}'",
+        test_output,
+        stdout_content
+    );
+
+    // metadata.json should contain execution information
+    let metadata: serde_json::Value = serde_json::from_str(&metadata_content).unwrap();
+    assert_eq!(metadata["repository"], "test-repo");
+    assert_eq!(metadata["exit_code"], 0);
+    assert_eq!(metadata["exit_code_description"], "success");
+
+    // Validate that recipe and command fields are mutually exclusive
+    assert!(
+        metadata.get("command").is_some(),
+        "metadata.json should contain 'command' field when running a command, but was: '{}'",
+        metadata_content
+    );
+    assert!(
+        metadata.get("recipe").is_none(),
+        "metadata.json should NOT contain 'recipe' field when running a command, but was: '{}'",
+        metadata_content
+    );
+    assert!(
+        metadata.get("recipe_steps").is_none(),
+        "metadata.json should NOT contain 'recipe_steps' field when running a command, but was: '{}'",
+        metadata_content
+    );
+
+    assert!(
+        metadata["repository"]
+            .as_str()
+            .unwrap()
+            .contains("test-repo"),
+        "metadata.json should contain repo name: 'test-repo', but was: '{}'",
+        metadata_content
+    );
+}
+
+#[tokio::test]
+async fn test_run_recipe_creates_logs_with_content() {
+    let temp_dir = TempDir::new().unwrap();
+    let output_dir = temp_dir.path().join("recipe_log_test_output");
+
+    let test_output = "Hello from recipe test";
+    let echo_cmd = format!("echo '{}'", test_output);
+    let recipe_steps = vec![echo_cmd.as_str(), "pwd"];
+    let (_temp_dir, _repo, _recipe, context) =
+        setup_recipe_test("test-repo", "log-test-recipe", recipe_steps);
+
+    let command = RunCommand {
+        run_type: RunType::Recipe("log-test-recipe".to_string()),
+        no_save: false, // Enable saving to create log files
+        output_dir: Some(output_dir.clone()),
+    };
+
+    let result = command.execute(&context).await;
+    assert!(result.is_ok());
+
+    // Check that output directory structure was created
+    let runs_dir = output_dir.join("runs");
+    assert!(runs_dir.exists(), "Runs directory should be created");
+
+    // Find the timestamped subdirectory
+    let entries: Vec<_> = fs::read_dir(&runs_dir).unwrap().collect();
+    assert!(
+        !entries.is_empty(),
+        "Should have at least one timestamped directory"
+    );
+
+    let timestamped_dir = entries
+        .into_iter()
+        .find(|entry| entry.as_ref().unwrap().file_type().unwrap().is_dir())
+        .unwrap()
+        .unwrap()
+        .path();
+
+    // Log files are in a subdirectory named after the repository
+    let repo_log_dir = timestamped_dir.join("test-repo");
+    assert!(
+        repo_log_dir.exists(),
+        "Repository log directory should exist"
+    );
+
+    // Check for log files in the repository subdirectory
+    let stdout_log = repo_log_dir.join("stdout.log");
+    let metadata_log = repo_log_dir.join("metadata.json");
+
+    assert!(stdout_log.exists(), "stdout.log should be created");
+    assert!(metadata_log.exists(), "metadata.json should be created");
+
+    // Verify log file contents
+    let stdout_content = fs::read_to_string(&stdout_log).unwrap();
+    let metadata_content = fs::read_to_string(&metadata_log).unwrap();
+
+    // stdout.log should contain the recipe output
+    assert!(
+        stdout_content.contains(test_output),
+        "stdout.log should contain recipe output: '{}', but was: '{}'",
+        test_output,
+        stdout_content
+    );
+
+    // metadata.json should contain execution information
+    let metadata: serde_json::Value = serde_json::from_str(&metadata_content).unwrap();
+    assert_eq!(metadata["repository"], "test-repo");
+    assert_eq!(metadata["recipe"], "log-test-recipe");
+    assert_eq!(metadata["exit_code"], 0);
+    assert_eq!(metadata["exit_code_description"], "success");
+
+    // Validate that recipe and command fields are mutually exclusive
+    assert!(
+        metadata.get("command").is_none(),
+        "metadata.json should NOT contain 'command' field when running a recipe, but was: '{}'",
+        metadata_content
+    );
+    assert!(
+        metadata.get("recipe").is_some(),
+        "metadata.json should contain 'recipe' field when running a recipe, but was: '{}'",
+        metadata_content
+    );
+    assert!(
+        metadata.get("recipe_steps").is_some(),
+        "metadata.json should contain 'recipe_steps' field when running a recipe, but was: '{}'",
+        metadata_content
+    );
+
+    assert!(
+        metadata["repository"]
+            .as_str()
+            .unwrap()
+            .contains("test-repo")
+            && metadata["recipe"]
+                .as_str()
+                .unwrap()
+                .contains("log-test-recipe"),
+        "metadata.json should contain repo name and recipe name, but was: '{}'",
+        metadata_content
+    );
 }