[jsweep] Clean apply_safe_outputs_replay.cjs#25982
Conversation
- Reformat long Object.fromEntries chain in buildHandlerConfigFromOutput for readability - Add 3 new main() tests covering error paths: missing env var, invalid URL, failed download Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
This comment has been minimized.
This comment has been minimized.
|
Hey Here's a quick contribution checklist summary:
Verdict: 🟢 Aligned — looks ready for maintainer review. The three new test cases cover the previously-untested error paths of
|
There was a problem hiding this comment.
Pull request overview
Adds unit test coverage for main() in apply_safe_outputs_replay.cjs, specifically targeting previously untested failure paths when required inputs are missing or external commands fail.
Changes:
- Added tests to verify
main()callscore.setFailedwhenGH_AW_RUN_URLis missing. - Added tests to verify
main()forwardsparseRunUrlvalidation failures tocore.setFailed. - Added tests to verify
main()fails when artifact download (gh run download) returns a non-zero exit code.
Show a summary per file
| File | Description |
|---|---|
| actions/setup/js/apply_safe_outputs_replay.test.cjs | Adds 3 new main() tests covering key error paths (missing env var, invalid run URL, artifact download failure). |
Copilot's findings
Tip
Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
- Files reviewed: 1/1 changed files
- Comments generated: 0
🧪 Test Quality Sentinel ReportTest Quality Score: 80/100✅ Excellent — All new tests enforce behavioral contracts with full error-path coverage.
Test Classification Details
NotesTest inflation flag (informational, not blocking): 27 lines were added to the test file and 0 lines changed in the production file. By strict ratio this exceeds the 2:1 threshold, but this is a backfill scenario — Mocking is appropriate: Assertion quality: All 6 Language SupportTests analyzed:
Verdict
📖 Understanding Test ClassificationsDesign Tests (High Value) verify what the system does:
Implementation Tests (Low Value) verify how the system does it:
Goal: Shift toward tests that describe the system's behavioral contract — the promises it makes to its users and collaborators.
|
There was a problem hiding this comment.
✅ Test Quality Sentinel: 80/100. Test quality is excellent — 0% of new tests are implementation tests (threshold: 30%). All 3 new tests cover main() error paths (missing env var, invalid URL, failed download) and verify observable behavioral contracts with well-described assertions.
Summary
Cleaned
apply_safe_outputs_replay.cjs— a github-script context file that downloads a previous run's agent artifact and replays safe outputs.The file was already clean with
@ts-checkenabled. This PR focuses on improving test coverage for themain()function, which had zero tests.Changes
apply_safe_outputs_replay.test.cjsAdded 3 new
main()tests covering previously untested error paths:calls setFailed when GH_AW_RUN_URL is not setcalls setFailed for an invalid GH_AW_RUN_URLparseRunUrlthrows → caught and forwarded tocore.setFailedcalls setFailed when exec fails to download the artifactgh run downloadnon-zero exit →ERR_SYSTEMerrorTest count: 17 → 20 ✅
Validation ✅
npm run format:cjs✓npm run lint:cjs✓npm run typecheck✓npm run test:js— 20 passed ✓Warning
The following domain was blocked by the firewall during workflow execution:
invalid.example.invalidTo allow these domains, add them to the
network.allowedlist in your workflow frontmatter:See Network Configuration for more information.