
Fix cross install warning expectation#139

Merged
leynos merged 6 commits into python-lib-release-action from
codex/fix-stream-assertion-in-test_cross_install.py
Sep 24, 2025

Conversation

@leynos
Owner

@leynos leynos commented Sep 23, 2025

Summary

  • update the Windows failure test to read the warning from stderr
  • assert the exact fallback message emitted when cross installation fails

Testing

  • make test

https://chatgpt.com/codex/tasks/task_e_68d2e85bd0008322bd696c0ccc7a8381

Summary by Sourcery

Sanitize and enforce RUNTIME_PROBE_TIMEOUT values with fallback and warnings, emit stderr warnings on runtime probe timeouts, fix cross-install warning assertions to read from stderr, guard Windows-only setup action, and refine tests and build formatting target.

New Features:

  • Introduce sanitized PROBE_TIMEOUT logic with warnings for invalid, non-positive, or excessive RUNTIME_PROBE_TIMEOUT values

Bug Fixes:

  • Fix cross-install tests to read and assert exact fallback warning from stderr

Enhancements:

  • Add _reload_runtime_module test helper and expand tests for probe timeout scenarios and stderr warnings
  • Improve test_utils assertions with descriptive messages
  • Update Makefile fmt target to include Ruff auto-fixes and refine conftest return type annotations

Deployment:

  • Guard setup-windows-gnu action to only run on Windows and include pkg-config in MinGW install

@sourcery-ai
Contributor

sourcery-ai Bot commented Sep 23, 2025

Reviewer's Guide

This PR refactors the probe timeout mechanism to validate and cap values with stderr warnings, enhances runtime availability checks to emit warnings on timeouts, and updates tests to capture and assert stderr output for cross install and timeout scenarios, along with minor CI and utility test improvements.

Sequence diagram for runtime probe timeout validation and warning emission

sequenceDiagram
    participant Env as Environment
    participant Runtime as "runtime.py (_get_probe_timeout)"
    participant Typer as "typer.echo()"
    Env->>Runtime: Get RUNTIME_PROBE_TIMEOUT
    alt Value is not set
        Runtime->>Runtime: Use default timeout (10s)
    else Value is not integer
        Runtime->>Typer: Emit warning to stderr (invalid value)
        Runtime->>Runtime: Use default timeout (10s)
    else Value <= 0
        Runtime->>Typer: Emit warning to stderr (must be positive)
        Runtime->>Runtime: Use default timeout (10s)
    else Value > 300
        Runtime->>Typer: Emit warning to stderr (exceeds max)
        Runtime->>Runtime: Use capped timeout (300s)
    else Valid value
        Runtime->>Runtime: Use provided timeout
    end
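The validation flow in the diagram can be sketched in Python. This is an approximation of the PR's `_get_probe_timeout`, not the actual implementation: the real code emits warnings via `typer.echo(..., err=True)`, and the exact message wording is assumed here.

```python
import os
import sys

_DEFAULT_PROBE_TIMEOUT = 10  # seconds, used when the variable is unset or invalid
_MAX_PROBE_TIMEOUT = 300  # seconds, hard cap on user-supplied values


def get_probe_timeout(env=None) -> int:
    """Return a sanitized probe timeout, warning on stderr for bad values."""
    env = os.environ if env is None else env
    raw = env.get("RUNTIME_PROBE_TIMEOUT")
    if raw is None:
        # Variable not set: silently use the default.
        return _DEFAULT_PROBE_TIMEOUT
    try:
        value = int(raw)
    except ValueError:
        print(
            f"::warning:: invalid RUNTIME_PROBE_TIMEOUT={raw!r}; "
            f"using {_DEFAULT_PROBE_TIMEOUT}s",
            file=sys.stderr,
        )
        return _DEFAULT_PROBE_TIMEOUT
    if value <= 0:
        print(
            f"::warning:: RUNTIME_PROBE_TIMEOUT must be positive; "
            f"using {_DEFAULT_PROBE_TIMEOUT}s",
            file=sys.stderr,
        )
        return _DEFAULT_PROBE_TIMEOUT
    if value > _MAX_PROBE_TIMEOUT:
        print(
            f"::warning:: RUNTIME_PROBE_TIMEOUT={value}s capped to "
            f"{_MAX_PROBE_TIMEOUT}s",
            file=sys.stderr,
        )
        return _MAX_PROBE_TIMEOUT
    return value
```

Passing the environment mapping as a parameter keeps the sketch testable without mutating `os.environ`; the real function reads the process environment directly.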

File-Level Changes

Change Details Files
Sanitize and warn on probe timeout configuration
  • Extract timeout logic into a new _get_probe_timeout function
  • Warn and fallback on invalid, non-positive, or too large values
  • Replace static PROBE_TIMEOUT assignment with a call to _get_probe_timeout()
.github/actions/rust-build-release/src/runtime.py
.github/actions/rust-build-release/tests/test_runtime.py
Emit stderr warnings on runtime info/security probe timeouts
  • Add typer.echo in TimeoutExpired handler for info probe
  • Add typer.echo in TimeoutExpired handler for security probe
.github/actions/rust-build-release/src/runtime.py
.github/actions/rust-build-release/tests/test_runtime.py
Capture and assert stderr warnings in test_runtime
  • Introduce _reload_runtime_module helper for isolated imports
  • Monkeypatch typer.echo to collect (message, err) pairs
  • Assert that warning messages are emitted on stderr in existing timeout tests
.github/actions/rust-build-release/tests/test_runtime.py
Harden cross-install failure tests and expectations
  • Read warning from capsys.err and assert exact fallback message
  • Ensure no --git or --tag flags are used in failure paths
  • Replace dict-based release_called with list-based call args assertion
.github/actions/rust-build-release/tests/test_cross_install.py
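The stderr assertion described above can be sketched as follows. The fallback function and its message are hypothetical stand-ins for the action's actual cross-install path; the repository's test reads the same stream through pytest's `capsys` fixture.

```python
import contextlib
import io
import sys


def install_with_fallback(cross_available: bool) -> str:
    """Hypothetical stand-in for the cross-install path under test."""
    if not cross_available:
        # Warnings go to stderr so stdout stays clean for machine-readable output.
        print(
            "::warning:: cross installation failed; falling back to cargo",
            file=sys.stderr,
        )
        return "cargo"
    return "cross"


# Capture stderr and assert the exact fallback message, mirroring the
# capsys.readouterr().err assertion in the updated test.
stderr = io.StringIO()
with contextlib.redirect_stderr(stderr):
    tool = install_with_fallback(cross_available=False)

assert tool == "cargo"
assert (
    stderr.getvalue().strip()
    == "::warning:: cross installation failed; falling back to cargo"
)
```

Asserting against the exact message (rather than a substring) is what makes the test catch stream mix-ups: a warning accidentally printed to stdout leaves stderr empty and fails loudly.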
Add Windows runner guard in setup action
  • Fail early with error if RUNNER_OS is not Windows
.github/actions/setup-windows-gnu/action.yml
Improve utility tests and linting targets
  • Add descriptive messages to assertions in test_utils
  • Adjust conftest return types to typ.Self
  • Update Makefile fmt target to include Ruff lint auto-fixes
.github/actions/rust-build-release/tests/test_utils.py
conftest.py
Makefile

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@coderabbitai
Contributor

coderabbitai Bot commented Sep 23, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey there - I've reviewed your changes and they look great!


Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@leynos
Owner Author

leynos commented Sep 23, 2025

@sourcery-ai review

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey there - I've reviewed your changes and found some issues that need to be addressed.

  • Consider extracting the repeated fake_echo and message capturing logic into a reusable pytest fixture to reduce duplication.
  • Reloading modules with _reload_runtime_module injects new entries into sys.modules; consider cleaning them up after each test to avoid side effects across runs.
  • The invalid/zero/capped probe timeout tests share the same structure and could be combined into a single parametrized test to DRY up the suite.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Consider extracting the repeated fake_echo and message capturing logic into a reusable pytest fixture to reduce duplication.
- Reloading modules with _reload_runtime_module injects new entries into sys.modules; consider cleaning them up after each test to avoid side effects across runs.
- The invalid/zero/capped probe timeout tests share the same structure and could be combined into a single parametrized test to DRY up the suite.

## Individual Comments

### Comment 1
<location> `.github/actions/rust-build-release/tests/test_runtime.py:97` </location>
<code_context>
     _patch_run_validated_timeout(runtime_module, harness)

     assert runtime_module.runtime_available("docker") is False
+    assert any(err for _, err in messages), "expected stderr warning to be emitted"
+    assert any(
+        "docker info probe exceeded" in msg and str(runtime_module.PROBE_TIMEOUT) in msg
+        for msg, err in messages
+        if err
+    ), "docker info probe timeout warning missing"


</code_context>

<issue_to_address>
**suggestion (testing):** Consider adding a test for OSError handling in runtime_available.

A test simulating OSError would ensure this error path is covered and confirm that no inappropriate warning is emitted.

```suggestion
    assert runtime_module.runtime_available("docker") is False

def test_runtime_available_oserror(monkeypatch, harness, runtime_module):
    """Test that OSError in runtime_available does not emit a warning."""
    harness.patch_attr("ensure_allowed_executable", lambda path, allowed: path)
    messages: list[tuple[str, bool]] = []

    def fake_echo(message: str, *, err: bool = False) -> None:
        messages.append((message, err))

    monkeypatch.setattr(runtime_module.typer, "echo", fake_echo)

    def fake_run_validated(*args, **kwargs):
        raise OSError("simulated OSError")

    monkeypatch.setattr(runtime_module, "run_validated", fake_run_validated)

    assert runtime_module.runtime_available("docker") is False
    # Ensure no error message was emitted
    assert not any(err for _, err in messages), "OSError should not emit a warning"
```
</issue_to_address>

### Comment 2
<location> `.github/actions/rust-build-release/src/runtime.py:54` </location>
<code_context>
+_MAX_PROBE_TIMEOUT = 300
+
+
+def _get_probe_timeout() -> int:
+    """Return the sanitized probe timeout for runtime detection."""
+    raw = os.environ.get("RUNTIME_PROBE_TIMEOUT")
</code_context>

<issue_to_address>
**issue (complexity):** Consider refactoring the timeout handling and probe execution logic into shared helper functions to reduce code duplication.

```markdown
You can cut down the duplication in both the timeout‐sanitizer and the two `run_validated` blocks by

 1. collapsing your min/max clamping into one expression with a single warning on overflow,  
 2. factoring out the “run & catch” logic into a shared `_run_probe` helper.

Example changes:

```python
_DEFAULT_PROBE_TIMEOUT = 10
_MAX_PROBE_TIMEOUT     = 300

def _get_probe_timeout() -> int:
    raw = os.environ.get("RUNTIME_PROBE_TIMEOUT")
    try:
        v = int(raw) if raw is not None else _DEFAULT_PROBE_TIMEOUT
    except ValueError:
        typer.echo(f"::warning:: invalid RUNTIME_PROBE_TIMEOUT={raw!r}, using {_DEFAULT_PROBE_TIMEOUT}s",
                   err=True)
        return _DEFAULT_PROBE_TIMEOUT

    # clamp into [_DEFAULT_PROBE_TIMEOUT, _MAX_PROBE_TIMEOUT]
    clamped = max(_DEFAULT_PROBE_TIMEOUT, min(v, _MAX_PROBE_TIMEOUT))
    if clamped != v:
        typer.echo(
            f"::warning:: RUNTIME_PROBE_TIMEOUT={v}s {'capped' if v>clamped else 'raised'} to {clamped}s",
            err=True,
        )
    return clamped
```

```python
def _run_probe(exec_path: str, name: str, args: list[str], *,
               cwd: Path | str | None = None, **kw) -> CompletedProcess | None:
    try:
        return run_validated(
            exec_path, args,
            allowed_names=(name, f"{name}.exe"),
            timeout=PROBE_TIMEOUT,
            cwd=cwd,
            **kw,
        )
    except subprocess.TimeoutExpired:
        typer.echo(
            f"::warning:: {name} {' '.join(args)} probe "
            f"exceeded {PROBE_TIMEOUT}s; treating as unavailable",
            err=True,
        )
    except (OSError, subprocess.CalledProcessError):
        pass
    return None
```

Then in `runtime_available`:

```python
info = _run_probe(exec_path, name, ["info"], stdout=DEVNULL, stderr=DEVNULL, cwd=cwd)
if not info or info.returncode != 0:
    return False

if name == "podman":
    sec = _run_probe(
        exec_path,
        name,
        ["info", "--format", "{{json .Host.Security}}"],
        capture_output=True, text=True, check=True, cwd=cwd
    )
    if not sec:
        return False

    # ... follow with JSON-decode logic unchanged ...
```

This keeps the same warnings/semantics but centralizes your exception handling and timeout logic into one small helper.
</issue_to_address>

### Comment 3
<location> `.github/actions/rust-build-release/src/runtime.py:54` </location>
<code_context>
+_MAX_PROBE_TIMEOUT = 300
+
+
+def _get_probe_timeout() -> int:
+    """Return the sanitized probe timeout for runtime detection."""
+    raw = os.environ.get("RUNTIME_PROBE_TIMEOUT")
</code_context>

<issue_to_address>
**issue (review_instructions):** Add behavioural and unit tests for the new _get_probe_timeout function.

You introduced a new function, _get_probe_timeout, which contains logic for sanitizing and capping the probe timeout value. Add both behavioural and unit tests to verify its correct operation under various environment variable values, including invalid, negative, and excessive values.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 4
<location> `.github/actions/rust-build-release/src/runtime.py:108` </location>
<code_context>
         )
-    except (OSError, subprocess.TimeoutExpired):
+    except (OSError, subprocess.TimeoutExpired) as exc:
+        if isinstance(exc, subprocess.TimeoutExpired):
+            typer.echo(
+                "::warning:: "
</code_context>

<issue_to_address>
**issue (review_instructions):** Add tests to verify warning output when runtime probe times out.

You added logic to emit a warning when a runtime probe times out. Add behavioural tests to ensure this warning is emitted as expected when subprocess.TimeoutExpired is raised.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>

### Comment 5
<location> `.github/actions/rust-build-release/src/runtime.py:108` </location>
<code_context>
         )
-    except (OSError, subprocess.TimeoutExpired):
+    except (OSError, subprocess.TimeoutExpired) as exc:
+        if isinstance(exc, subprocess.TimeoutExpired):
+            typer.echo(
+                "::warning:: "
</code_context>

<issue_to_address>
**issue (review_instructions):** Add tests to verify warning output when security probe times out.

You added logic to emit a warning when a security probe times out. Add behavioural tests to ensure this warning is emitted as expected when subprocess.TimeoutExpired is raised during the security probe.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*`

**Instructions:**
For any new feature or change to an existing feature, both behavioural *and* unit tests are required.

</details>
</issue_to_address>


@leynos
Owner Author

leynos commented Sep 23, 2025

@coderabbitai Have the following now been resolved?

  • Consider extracting the repeated fake_echo and message capturing logic into a reusable pytest fixture to reduce duplication.
  • Reloading modules with _reload_runtime_module injects new entries into sys.modules; consider cleaning them up after each test to avoid side effects across runs.
  • The invalid/zero/capped probe timeout tests share the same structure and could be combined into a single parametrized test to DRY up the suite.

@coderabbitai
Contributor

coderabbitai Bot commented Sep 23, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find test files to examine the current implementation
fd -e py test

# Look for fake_echo usage patterns
rg -n "fake_echo" --type py

# Look for _reload_runtime_module usage
rg -n "_reload_runtime_module" --type py

# Look for probe timeout related tests
rg -n -C3 "probe.*timeout|timeout.*probe" --type py

# Look for sys.modules cleanup patterns
rg -n "sys\.modules" --type py

# Look for parametrized tests
rg -n "@pytest\.mark\.parametrize" --type py

Length of output: 99


🏁 Script executed:

#!/bin/bash
# Get a broader view of all Python files
fd -e py

# Check directory structure
find . -type d -name "*test*" -o -name "*Test*" 2>/dev/null

# Look for files with "test" in the name
fd test

# Examine conftest.py content
cat conftest.py

# Search more broadly for echo patterns
rg -n "echo" --type py

# Search for module reloading patterns
rg -n "reload" --type py -i

Length of output: 5368


🏁 Script executed:

#!/bin/bash
# Look at the test files in different directories
for test_dir in ./.github/actions/*/tests; do
    echo "=== Files in $test_dir ==="
    ls -la "$test_dir"
    echo
done

# Look for any Python test files
fd -e py . ./.github/actions/*/tests

# Search for fake_echo pattern more broadly
rg -n -C2 "fake.*echo|echo.*fake" --type py

# Search for _reload patterns
rg -n -C2 "_reload" --type py

# Search for timeout related tests
rg -n -C2 "timeout" --type py

# Look for pytest parametrize usage
rg -n -C2 "parametrize" --type py

Length of output: 10423


🏁 Script executed:

#!/bin/bash
# Examine the test files that are most likely to contain the patterns mentioned
cat ./.github/actions/rust-build-release/tests/test_runtime.py

echo "========================================="
echo "test_cross_install.py content:"
echo "========================================="

cat ./.github/actions/rust-build-release/tests/test_cross_install.py

# Look for specific patterns in these files
echo "========================================="
echo "Looking for fake_echo patterns:"
echo "========================================="
rg -n -A3 -B3 "fake.*echo|echo.*fake" ./.github/actions/rust-build-release/tests/

echo "========================================="
echo "Looking for _reload patterns:"
echo "========================================="
rg -n -A3 -B3 "_reload" ./.github/actions/rust-build-release/tests/

echo "========================================="
echo "Looking for parametrize patterns:"
echo "========================================="
rg -n -A3 -B3 "parametrize" ./.github/actions/rust-build-release/tests/

Length of output: 43913


@leynos Looking at the current codebase, here's the status of those suggestions:


🏁 Script executed:

#!/bin/bash
# Let's get a clearer view of the echo_recorder fixture usage
rg -n -A5 -B2 "echo_recorder" ./.github/actions/rust-build-release/tests/

# Check for sys.modules cleanup in _reload_runtime_module
rg -n -A10 -B2 "sys\.modules\.pop" ./.github/actions/rust-build-release/tests/

# Look at the parametrized test structure more clearly
rg -n -A15 "test_probe_timeout.*parametrize" ./.github/actions/rust-build-release/tests/

Length of output: 11253


Based on my examination of the codebase, here's the status of the three suggestions:

RESOLVED: sys.modules cleanup

The _reload_runtime_module function properly cleans up sys.modules entries in a finally block:

def _reload_runtime_module(runtime_module: ModuleType, module_name: str) -> ModuleType:
    # ... module loading code ...
    try:
        module_spec.loader.exec_module(module)
    finally:
        sys.modules.pop(module_name, None)  # ✅ Cleanup implemented
    return module

RESOLVED: Parametrised probe timeout tests

The invalid/zero/capped probe timeout tests have been combined into parametrised tests:

@pytest.mark.parametrize(
    ("env_value", "expected_kind", "message_fragment"),
    [
        pytest.param("not-a-number", "default", "Invalid RUNTIME_PROBE_TIMEOUT value", id="invalid"),
        pytest.param("0", "default", "0s raised to", id="zero"),
        pytest.param("-5", "default", "-5s raised to", id="negative"),
        pytest.param("999", "max", "999s capped to", id="capped"),
    ],
)
def test_probe_timeout_sanitization_warnings(...)

NOT FULLY RESOLVED: fake_echo fixture extraction

While there's an echo_recorder fixture in test_runtime.py that's used consistently within that file, the test_cross_install.py file still contains inline fake_echo implementations:

# In test_cross_install.py - still duplicated
def fake_echo(message: str, *, err: bool = False) -> None:
    messages.append((message, err))

harness.monkeypatch.setattr(module.typer, "echo", fake_echo)

The echo_recorder fixture from test_runtime.py could be moved to a shared location (like conftest.py) and reused across all test files to eliminate this duplication.

Summary: Two out of three suggestions have been fully resolved. The fake_echo duplication still exists in test_cross_install.py and could benefit from using the centralised echo_recorder fixture.

@leynos
Owner Author

leynos commented Sep 24, 2025

@coderabbitai Has the following now been resolved?

NOT FULLY RESOLVED: fake_echo fixture extraction

While there's an echo_recorder fixture in test_runtime.py that's used consistently within that file, the test_cross_install.py file still contains inline fake_echo implementations:

# In test_cross_install.py - still duplicated
def fake_echo(message: str, *, err: bool = False) -> None:
    messages.append((message, err))

harness.monkeypatch.setattr(module.typer, "echo", fake_echo)

The echo_recorder fixture from test_runtime.py could be moved to a shared location (like conftest.py) and reused across all test files to eliminate this duplication.

@coderabbitai
Contributor

coderabbitai Bot commented Sep 24, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@leynos Looking at the current codebase:

STILL NOT RESOLVED: fake_echo fixture extraction

The duplication issue persists. Here's what I found:

  1. test_cross_install.py still contains inline fake_echo implementations:
def fake_echo(message: str, *, err: bool = False) -> None:
    messages.append((message, err))
  2. test_runtime.py still has its own echo_recorder fixture:
@pytest.fixture
def echo_recorder(harness: Harness) -> EchoRecorder:
    # ... implementation
  3. conftest.py does not contain any echo_recorder fixture - it only has the basic harness fixture.

The code duplication remains unaddressed. The echo_recorder fixture from test_runtime.py has not been moved to a shared location, and test_cross_install.py continues to use its inline fake_echo implementation.

Recommendation: Move the echo_recorder fixture from test_runtime.py to conftest.py and update test_cross_install.py to use it, eliminating the duplication.


🏁 Script executed:

#!/bin/bash
# Check current content of test_cross_install.py for fake_echo implementations
echo "=== Current test_cross_install.py content ==="
cat ./.github/actions/rust-build-release/tests/test_cross_install.py

echo -e "\n=== Checking for fake_echo patterns in test_cross_install.py ==="
rg -n -C3 "fake_echo" ./.github/actions/rust-build-release/tests/test_cross_install.py

echo -e "\n=== Checking if echo_recorder fixture exists in conftest.py ==="
rg -n -C5 "echo_recorder" ./.github/actions/rust-build-release/tests/conftest.py

echo -e "\n=== Checking echo_recorder in test_runtime.py ==="
rg -n -C5 "echo_recorder" ./.github/actions/rust-build-release/tests/test_runtime.py

Length of output: 22111


@leynos Looking at the current codebase based on the script results:

RESOLVED: fake_echo fixture extraction

The duplication issue has been fully resolved! Here's the evidence:

  1. ✅ echo_recorder fixture moved to conftest.py:
# In conftest.py (lines 151-156)
@pytest.fixture
def echo_recorder(
    monkeypatch: pytest.MonkeyPatch,
) -> cabc.Callable[[ModuleType], list[tuple[str, bool]]]:
    """Return a helper that patches ``typer.echo`` and records messages."""
  2. ✅ No more inline fake_echo implementations in test_cross_install.py:

    • The rg -n -C3 "fake_echo" search returned empty results, confirming no duplication exists
  3. ✅ test_cross_install.py now uses the shared fixture:

def test_install_cross_release_validates_binary(
    cross_module: ModuleType,
    module_harness: HarnessFactory,
    echo_recorder: typ.Callable[[ModuleType], list[tuple[str, bool]]],  # ✅ Using shared fixture
    tmp_path: Path,
) -> None:
  4. ✅ Other tests use appropriate alternatives:
    • The Windows failure test uses capsys fixture instead, which is appropriate for stderr capture

The code duplication has been eliminated by moving the echo_recorder fixture to the shared conftest.py location, making it available to all test files in the directory.
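The core of such a shared recorder can be sketched without the pytest machinery. In the repository the helper is wrapped in a fixture that patches via `monkeypatch.setattr` so the replacement is undone after each test; the names below follow the discussion but are otherwise assumptions.

```python
import types


def make_echo_recorder(module) -> list:
    """Replace module.typer.echo and collect (message, err) pairs."""
    messages: list = []

    def fake_echo(message: str, *, err: bool = False) -> None:
        messages.append((message, err))

    # The real fixture uses monkeypatch.setattr so the patch is reverted
    # automatically at test teardown; plain assignment suffices for a sketch.
    module.typer.echo = fake_echo
    return messages


# Demonstrate against a stand-in module object rather than a real import.
module = types.SimpleNamespace(typer=types.SimpleNamespace(echo=print))
messages = make_echo_recorder(module)
module.typer.echo("::warning:: docker info probe exceeded 10s", err=True)
assert messages == [("::warning:: docker info probe exceeded 10s", True)]
```

Returning the list (rather than exposing the fake) lets each test filter on the `err` flag to assert that warnings went to stderr specifically.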

@leynos leynos merged commit 5eb716d into python-lib-release-action Sep 24, 2025
7 of 9 checks passed
@leynos leynos deleted the codex/fix-stream-assertion-in-test_cross_install.py branch September 24, 2025 00:13
leynos added a commit that referenced this pull request Sep 24, 2025
* Fix cross install warning assertion

* Harden cross install harness checks

* Sanitize runtime probe timeout and guard actions

* Refine runtime probe helpers and add timeout tests

* Fix runtime probe lint findings

* Share echo recorder fixture across runtime tests
leynos added a commit that referenced this pull request Sep 24, 2025
* Add release-to-pypi-uv composite action

* Document trusted publishing permissions

* Clarify GH_TOKEN handling in README

* Fix local usage example

* Improve GitHub release error handling

* Fix cmd_utils discovery in publish script

* Skip more build caches

* Fail fast on TOML parse errors

* Format release summary output

* Add no-tag error test

* Test TOML parse failures

* Make uv python version configurable

* Document concurrency guard in usage

* Address review feedback with retries and tests

* Improve TOML version validation and test strategy

Extends boolean parsing to support more truthy values when validating dynamic version flags in TOML files. Refactors tests to run in-process with better coverage of various flag values and scenarios, replacing subprocess-based tests for efficiency and detail. Updates action to simplify Python version handling with uv and improves path detection for testing.

These changes make validation logic more robust and enhance test reliability.

* Add cmd-mox users guide

* Use cmd-mox to stub rust build command tests (#92)

* Use cmd-mox for rust build command tests

* Guard cmd-mox integration on Windows

* Refactor cmd-mox helpers into shared conftest

* Apply cmd-mox fake which simplification

* Mark Windows smoke tests as xfail (#94)

* Mark Windows smoke tests as xfail

* Tighten Windows smoke test expectations

* Address reviewer feedback for release-to-pypi-uv action (#98)

* Address review comments for release to PyPI action

* Add tests and docs for python-version handling

* Fix cmd_mox fixture usage and expose dev extra (#101)

* Fix cmd_mox fixture usage and expose dev extras

* Remove pyproject optional dev extra

* Remove Pyright configuration

* Add missing docstrings to release-to-pypi-uv modules (#103)

* Silence type-check import lints for release action (#104)

* Add toolchain-specific cargo cache key and clean shellstub imports (#88)

* Address review feedback for shellstub imports

* Fix MSYS2 install comments breaking pacman (#89)

* Fix MSYS2 install comments breaking pacman

* Fix Windows llvm-mingw metadata export

* Handle runtime detection timeouts

* Fix rust host detection timeout and Windows bin path (#91)

* Fix rust host detection timeout and Windows bin path

* Add timeout coverage for rust host detection

* Unify runtime probe timeout configuration

* test: capture rustc probe kwargs in timeout test

* Remove inline comments from Windows GNU MSYS2 package list (#99)

* Remove inline MSYS2 install comments

* Skip nfpm steps on Windows builds

* Handle nfpm packaging only for Linux

* Fix Windows invocation of rust-build-release (#102)

* Silence type-check import lints for release action

* Adds ignore rule for Crush agent artifacts

Ignores files generated by the Crush AI agent to prevent them from being tracked in version control.

Adds a symlink to AGENTS.md for discovery by Crush

* Format code after rebase validation to satisfy repo style checks.

💘 Generated with Crush
Co-Authored-By: Crush <crush@charm.land>

* fmt: extend fmt target to run ruff check --select D202,I001 --fix; apply resulting fixes

💘 Generated with Crush
Co-Authored-By: Crush <crush@charm.land>

---------

Co-authored-by: Payton McIntosh <pmcintosh@df12.net>
Co-authored-by: Crush <crush@charm.land>

* Narrow module fixtures to ModuleType (#105)

* Add docstrings for release-to-pypi-uv tests and helpers (#106)

* Add missing docstrings for release-to-pypi-uv tests

* Simplify cmd-mox typing and streamline docstrings

* Apply formatting

* chore: remove unused imports across scripts and tests

* style(_helpers): alphabetise __all__ tuple

* tests: drop superfluous parentheses from @pytest.fixture usage

---------

Co-authored-by: Payton McIntosh <pmcintosh@df12.net>

* Scope type-checking imports (#111)

* Scope type-checking imports

* Fix lint warnings across release scripts

* Clarify cargo stream capture error

* ci: add lint and format checks

* Install action-validator in CI lint workflow

* Install bun validator packages in CI

* Test full success message for matching versions (#113)

* Address review comments (#116)

* Address review feedback for release-to-pypi-uv actions (#112)

* Address review feedback

* Add CLI coverage for publish and validate scripts

* Add deterministic validation tests and bunx validator checks

* Update ci.yml remove unneeded step

* Fix formatting

* Fix Windows xfail marker removal for pytest 8 (#120)

* Fix Windows xfail marker removal for pytest 8

* Tighten Windows xfail marker filtering

* Handle runtime probe timeouts and expand release tests (#119)

* Handle runtime probe timeouts and expand release tests

* Add _probe_runtime coverage and assert timeout warnings

* Expand TOML skip directories for release validation (#122)

* Expand pyproject skip list handling

* Strengthen TOML and runtime timeout tests

* Assert cargo fallback logs podman failure

* Parameterize skip-directory regression

* Reinforce regression coverage for release tooling (#130)

* Refine skip directory regression parameterization

* Document skip directories and DRY timeout assertions

* Parametrize publish index test

* Refactor runtime timeout tests

* Rebase onto origin/python-lib-release-action and resolve conflicts

- Resolve merge in validate_toml_versions tests by standardizing module loading
- Register dynamically loaded scripts in sys.modules to support reload semantics
- Make toolchain triple test robust to host arch by selecting matching target
- Run formatting, lint, typecheck, and tests to validate integration

💘 Generated with Crush
Co-Authored-By: Crush <crush@charm.land>

---------

Co-authored-by: Crush <crush@charm.land>

* Harden release-to-pypi-uv workflow and regression coverage (#134)

* Improve release validation jitter handling

* Adjust runtime fallback host triples per platform

* Make release validation deterministic and configurable

* Rebase python-lib-release-action onto origin/main; resolve conflicts preserving branch intent; drop uv.lock in favor of main; fix tests and typing/lint issues; all tests pass and linters clean

* Close cargo pipes when coverage stream missing (#137)

* Close cargo pipes when missing

* Ensure cargo pipes close on all paths

* Ensure guard closes cargo pipes before exiting

* Handle release script auth failure and multiline outputs (#136)

* Handle release script auth failure and multiline outputs

* Harden windows toolchain setup and extend runtime tests

* Handle release auth errors and tweak toolchain retries

* Fix cross install warning expectation (#139)

* Fix cross install warning assertion

* Harden cross install harness checks

* Sanitize runtime probe timeout and guard actions

* Refine runtime probe helpers and add timeout tests

* Fix runtime probe lint findings

* Share echo recorder fixture across runtime tests

* Update fmt target description (#140)

* Resolve rebase conflicts and align runtime tests with platform/timeouts from main while preserving branch improvements; ensure formatting passes and all tests/linters are green.

💘 Generated with Crush
Co-Authored-By: Crush <crush@charm.land>

---------

Co-authored-by: Payton McIntosh <pmcintosh@df12.net>
Co-authored-by: Crush <crush@charm.land>
