
[jsweep] Clean apply_safe_outputs_replay.cjs #25982

Merged
pelikhan merged 1 commit into main from
jsweep/clean-apply-safe-outputs-replay-c7433e392b535f0f
Apr 13, 2026

Conversation

@github-actions
Contributor

Summary

Cleaned apply_safe_outputs_replay.cjs — a github-script context file that downloads a previous run's agent artifact and replays safe outputs.

The file was already clean with @ts-check enabled. This PR focuses on improving test coverage for the main() function, which had zero tests.

Changes

apply_safe_outputs_replay.test.cjs

Added 3 new main() tests covering previously untested error paths:

| Test | Path covered |
| --- | --- |
| calls setFailed when GH_AW_RUN_URL is not set | Missing required env var → early exit |
| calls setFailed for an invalid GH_AW_RUN_URL | parseRunUrl throws → caught and forwarded to core.setFailed |
| calls setFailed when exec fails to download the artifact | gh run download non-zero exit → ERR_SYSTEM error |

Test count: 17 → 20
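As a rough illustration of the first error path above, the guard under test boils down to something like the following. This is a hypothetical sketch: the real main() in apply_safe_outputs_replay.cjs also wires up the GitHub Actions runtime, and the function signature and exact error message here are assumptions.

```javascript
// Hypothetical sketch of the missing-env-var guard exercised by the new test.
// The real main() and its exact error message may differ.
function main(env, core) {
  const runUrl = env.GH_AW_RUN_URL;
  if (!runUrl) {
    // Missing required env var → report via setFailed and exit early
    core.setFailed("GH_AW_RUN_URL environment variable is required");
    return;
  }
  // ...parse the run URL, download the agent artifact, replay safe outputs...
}

// Minimal stand-in for the Actions core object used in the tests
const failures = [];
const fakeCore = { setFailed: msg => failures.push(msg) };

main({}, fakeCore); // GH_AW_RUN_URL deliberately unset
console.log(failures); // one failure message mentioning GH_AW_RUN_URL
```

Driving main() with an injected fake core is what lets the test assert on the failure message without touching a real workflow run.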

Validation ✅

  • Formatting: npm run format:cjs
  • Linting: npm run lint:cjs
  • Type checking: npm run typecheck
  • Tests: npm run test:js — 20 passed ✓

Warning

⚠️ Firewall blocked 1 domain

The following domain was blocked by the firewall during workflow execution:

  • invalid.example.invalid

To allow these domains, add them to the network.allowed list in your workflow frontmatter:

```yaml
network:
  allowed:
    - defaults
    - "invalid.example.invalid"
```

See Network Configuration for more information.

Generated by jsweep - JavaScript Unbloater · 2.1M

  • expires on Apr 15, 2026, 5:02 AM UTC

- Reformat long Object.fromEntries chain in buildHandlerConfigFromOutput for readability
- Add 3 new main() tests covering error paths: missing env var, invalid URL, failed download

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

@github-actions
Contributor Author

Hey @github-actions 👋 — great work on this one! The jsweep cleanup of apply_safe_outputs_replay.cjs is well-scoped and the new main() tests are exactly the kind of error-path coverage this file needed.

Here's a quick contribution checklist summary:

| Check | Result |
| --- | --- |
| On-topic | ✅ yes |
| Follows process | ✅ yes (core-team agentic workflow) |
| Focused | ✅ yes |
| New dependencies | ✅ no |
| Tests included | ✅ yes (17 → 20 tests) |
| Has description | ✅ yes |
| Diff size | 27 lines |

Verdict: 🟢 Aligned — looks ready for maintainer review.

The three new test cases cover the previously-untested error paths of main() clearly and concisely, the PR description is thorough, and all CI validations (format, lint, typecheck, tests) passed. Nothing to add here — this is a clean contribution.

Generated by Contribution Check · 3.2M

@pelikhan pelikhan marked this pull request as ready for review April 13, 2026 13:47
Copilot AI review requested due to automatic review settings April 13, 2026 13:47
Contributor

Copilot AI left a comment


Pull request overview

Adds unit test coverage for main() in apply_safe_outputs_replay.cjs, specifically targeting previously untested failure paths when required inputs are missing or external commands fail.

Changes:

  • Added tests to verify main() calls core.setFailed when GH_AW_RUN_URL is missing.
  • Added tests to verify main() forwards parseRunUrl validation failures to core.setFailed.
  • Added tests to verify main() fails when artifact download (gh run download) returns a non-zero exit code.
| File | Description |
| --- | --- |
| actions/setup/js/apply_safe_outputs_replay.test.cjs | Adds 3 new main() tests covering key error paths (missing env var, invalid run URL, artifact download failure). |
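For context, a run-URL parser of the kind the invalid-URL test targets could look roughly like this. It is a hypothetical reconstruction: the actual parseRunUrl signature, URL handling, and error text in apply_safe_outputs_replay.cjs are not shown in this thread.

```javascript
// Hypothetical reconstruction of a parseRunUrl helper; the real one may differ.
// Expected shape: https://github.com/<owner>/<repo>/actions/runs/<runId>
function parseRunUrl(url) {
  const m = /^https:\/\/github\.com\/([^/]+)\/([^/]+)\/actions\/runs\/(\d+)/.exec(url);
  if (!m) {
    // This is the kind of error the test expects to be forwarded to core.setFailed
    throw new Error(`Cannot parse run ID from URL: ${url}`);
  }
  return { owner: m[1], repo: m[2], runId: Number(m[3]) };
}

console.log(parseRunUrl("https://github.com/octo/repo/actions/runs/42"));
// { owner: 'octo', repo: 'repo', runId: 42 }
```

A throwing parser like this gives the test a clean failure path to assert on: catch in main(), forward the message to core.setFailed.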

Copilot's findings


  • Files reviewed: 1/1 changed files
  • Comments generated: 0

@github-actions
Contributor Author

🧪 Test Quality Sentinel Report

Test Quality Score: 80/100

Excellent — All new tests enforce behavioral contracts with full error-path coverage.

| Metric | Value |
| --- | --- |
| New/modified tests analyzed | 3 |
| ✅ Design tests (behavioral contracts) | 3 (100%) |
| ⚠️ Implementation tests (low value) | 0 (0%) |
| Tests with error/edge cases | 3 (100%) |
| Duplicate test clusters | 0 |
| Test inflation detected | ⚠️ Yes (27 test lines added, 0 production lines changed — backfill scenario) |
| 🚨 Coding-guideline violations | None |

Test Classification Details

| Test | File | Classification | Notes |
| --- | --- | --- | --- |
| calls setFailed when GH_AW_RUN_URL is not set | apply_safe_outputs_replay.test.cjs:260 | ✅ Design | Verifies required-env-var guard; asserts error message content |
| calls setFailed for an invalid GH_AW_RUN_URL | apply_safe_outputs_replay.test.cjs:268 | ✅ Design | Verifies URL-parse error path; asserts message matches /Cannot parse run ID/ |
| calls setFailed when exec fails to download the artifact | apply_safe_outputs_replay.test.cjs:276 | ✅ Design | Verifies non-zero exit-code path; asserts message matches /Failed to download agent artifact/ |

Notes

Test inflation flag (informational, not blocking): 27 lines were added to the test file and 0 lines changed in the production file. By strict ratio this exceeds the 2:1 threshold, but this is a backfill scenario — main() error paths were previously untested, and no production code change was required to make them testable. This is expected behaviour for a coverage improvement PR.

Mocking is appropriate: global.core (GitHub Actions runtime) and global.exec.exec (external process runner) are external I/O — exactly the kind of dependency that should be mocked in unit tests. No internal business-logic is mocked without a behavioral assertion.
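That mocking pattern can be sketched as follows. This is illustrative only — the helper name, the fake exec shape, and the error text are assumptions, not the project's actual code; the fake simply resolves with the command's exit code so the failure path can be exercised in isolation.

```javascript
// Illustrative sketch: a download step that treats a non-zero exit code from
// the external `gh run download` command as a failure.
async function downloadArtifact(exec, runId) {
  const code = await exec.exec("gh", ["run", "download", String(runId)]);
  if (code !== 0) {
    throw new Error(`Failed to download agent artifact (exit code ${code})`);
  }
}

// Fake external process runner: simulate `gh run download` exiting non-zero.
// No real process is spawned, so the test stays fast and hermetic.
const fakeExec = { exec: async () => 1 };

downloadArtifact(fakeExec, 12345).catch(err => console.log(err.message));
// Failed to download agent artifact (exit code 1)
```

Because only the external I/O boundary is faked, the assertion still checks observable behavior (the error surfaced to the caller), not internal call order.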

Assertion quality: All 6 expect() calls carry descriptive string messages as second arguments — consistent with project guidelines. ✅


Language Support

Tests analyzed:

  • 🟨 JavaScript (*.test.cjs): 3 tests (vitest)

Verdict

Check passed. 0% of new tests are implementation tests (threshold: 30%). All three new tests cover genuine error paths in main() and enforce observable behavioral contracts.


📖 Understanding Test Classifications

Design Tests (High Value) verify what the system does:

  • Assert on observable outputs, return values, or state changes
  • Cover error paths and boundary conditions
  • Would catch a behavioral regression if deleted
  • Remain valid even after internal refactoring

Implementation Tests (Low Value) verify how the system does it:

  • Assert on internal function calls (mocking internals)
  • Only test the happy path with typical inputs
  • Break during legitimate refactoring even when behavior is correct
  • Give false assurance: they pass even when the system is wrong

Goal: Shift toward tests that describe the system's behavioral contract — the promises it makes to its users and collaborators.

🧪 Test quality analysis by Test Quality Sentinel · 430.9K

Contributor Author

@github-actions bot left a comment


✅ Test Quality Sentinel: 80/100. Test quality is excellent — 0% of new tests are implementation tests (threshold: 30%). All 3 new tests cover main() error paths (missing env var, invalid URL, failed download) and verify observable behavioral contracts with well-described assertions.

@pelikhan pelikhan merged commit a22ff15 into main Apr 13, 2026
78 of 79 checks passed
@pelikhan pelikhan deleted the jsweep/clean-apply-safe-outputs-replay-c7433e392b535f0f branch April 13, 2026 14:00
