
Conversation

@dsarno
Collaborator

@dsarno dsarno commented Jan 2, 2026

Optimizes the run_tests tool: it now returns a summary by default instead of the entire test result dump, with options for failed tests only and for full detailed results. 98% less context used.
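For orientation, a rough sketch of the two response shapes follows; the field names and sample values are illustrative (based on the summary/results structure described in the reviews below), not copied from the actual serializer.

```python
# Illustrative only: approximate run_tests payloads under the new behavior.
# Key names, casing, and values are assumptions for the sake of the example.

summary_only = {                 # default: ~150 tokens instead of ~13k
    "mode": "EditMode",
    "summary": {"total": 120, "passed": 118, "failed": 1, "skipped": 1},
    "results": None,             # per-test results omitted unless requested
}

failed_only = {                  # include_failed_tests=True
    "mode": "EditMode",
    "summary": {"total": 120, "passed": 118, "failed": 1, "skipped": 1},
    "results": [
        {"name": "MyTests.Foo_Fails", "state": "Failed",
         "duration": 0.42, "message": "Expected 1 but was 2"},
    ],
}
```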

Summary by Sourcery

Optimize Unity MCP run_tests tool responses to be more concise by default while adding optional verbosity controls and improving test stability and coverage.

New Features:

  • Add include_failed_tests and include_details options to the run_tests tool to control whether failed-only or all test details are returned.
  • Add a warning message when no tests match the specified filters in test run summaries.

Enhancements:

  • Change test run serialization to omit per-test results by default, reducing response payload size.
  • Improve Unity EditMode test setup to wait for the editor to be ready using coroutine-based waiting instead of blocking sleeps.
  • Annotate domain reload resilience tests with category and explicit attributes to better control when they are executed.
  • Adjust test namespaces to be consistent across editor test suites.

Tests:

  • Add tests for RunTests message formatting, including no-tests and non-empty test scenarios.

Summary by CodeRabbit

  • New Features

    • Test runner now supports optional parameters to filter test results—include all details, show only failed/skipped tests, or omit individual results.
  • Tests

    • Added test coverage for the new test result formatting functionality.
    • Improved test setup patterns to use coroutine-based initialization.
  • Chores

    • Reorganized test namespaces for better project structure.


dsarno added 5 commits January 1, 2026 18:20
… by 98%

- Add includeFailedTests parameter: returns only failed/skipped test details
- Add includeDetails parameter: returns all test details (original behavior)
- Default behavior now returns summary only (~150 tokens vs ~13k tokens)
- Make results field optional in Python schema for backward compatibility

Token savings:
- Default: ~13k tokens saved (98.9% reduction)
- With failures: minimal tokens (only non-passing tests)
- Full details: same as before when explicitly requested

This prevents context bloat for typical test runs where you only need
pass/fail counts, while still allowing detailed debugging when needed.
TDD Feature:
- Add warning message when filter criteria match zero tests
- New RunTestsTests.cs validates message formatting logic
- Modified RunTests.cs to append "(No tests matched the specified filters)" when total=0

Test Organization Fixes:
- Move MCPToolParameterTests.cs from EditMode/ to EditMode/Tools/ (matches folder hierarchy)
- Fix inconsistent namespaces to MCPForUnityTests.Editor.{Subfolder}:
  - MCPToolParameterTests: Tests.EditMode → MCPForUnityTests.Editor.Tools
  - DomainReloadResilienceTests: Tests.EditMode.Tools → MCPForUnityTests.Editor.Tools
  - Matrix4x4ConverterTests: MCPForUnityTests.EditMode.Helpers → MCPForUnityTests.Editor.Helpers
- Make ManageScriptableObjectTests setup yield-based with longer Unity-ready timeout

- Mark DomainReloadResilienceTests explicit to avoid triggering domain reload during Run All
@sourcery-ai
Contributor

sourcery-ai bot commented Jan 2, 2026

Reviewer's Guide

Adds configurable verbosity to the run_tests tool so it returns a compact summary by default, with optional inclusion of failed-only or full per-test details, and updates supporting Unity tests and server-side schema accordingly.

Sequence diagram for configurable run_tests verbosity handling

sequenceDiagram
    actor Client
    participant RunTestsTool as RunTests_HandleCommand
    participant TestRunnerService
    participant TestRunResult

    Client->>RunTestsTool: HandleCommand(params: JObject)
    RunTestsTool->>RunTestsTool: Parse mode, timeout, filters
    RunTestsTool->>RunTestsTool: Read includeDetails, includeFailedTests

    RunTestsTool->>TestRunnerService: RunTests(mode, filters, timeout)
    TestRunnerService-->>RunTestsTool: TestRunResult

    RunTestsTool->>RunTestsTool: message = FormatTestResultMessage(mode, result)

    RunTestsTool->>TestRunResult: ToSerializable(mode, includeDetails, includeFailedTests)
    TestRunResult-->>RunTestsTool: anonymous_object { mode, summary, results }

    alt includeDetails == true
        RunTestsTool->>TestRunResult: include all Results
    else includeFailedTests == true
        RunTestsTool->>TestRunResult: include only failed and skipped Results
    else
        RunTestsTool->>TestRunResult: omit individual Results (results = null)
    end

    RunTestsTool-->>Client: SuccessResponse(message, data)

Class diagram for updated test run result serialization and RunTests tool

classDiagram
    class RunTests {
        +Task~object~ HandleCommand(JObject @params)
        +string FormatTestResultMessage(string mode, TestRunResult result)
        -TestFilterOptions ParseFilterOptions(JObject @params)
        -string[] ParseStringArray(JObject @params, string key)
    }

    class TestRunSummary {
        +int Total
        +int Passed
        +int Failed
        +int Skipped
        +object ToSerializable()
    }

    class TestRunTestResult {
        +string Name
        +string State
        +double Duration
        +string Message
        +object ToSerializable()
    }

    class TestRunResult {
        -TestRunSummary Summary
        -IReadOnlyList~TestRunTestResult~ Results
        +int Total
        +int Passed
        +int Failed
        +int Skipped
        +object ToSerializable(string mode, bool includeDetails, bool includeFailedTests)
    }

    RunTests ..> TestRunResult : uses
    RunTests ..> TestRunSummary : uses
    RunTests ..> TestRunTestResult : uses

    TestRunResult *-- TestRunSummary : summary
    TestRunResult *-- TestRunTestResult : results

Flow diagram for TestRunResult.ToSerializable verbosity selection

flowchart TD
    A["Start ToSerializable(mode, includeDetails, includeFailedTests)"] --> B["includeDetails == true?"]
    B -- Yes --> C["resultsToSerialize = all Results (r.ToSerializable())"]
    B -- No --> D["includeFailedTests == true?"]
    D -- Yes --> E["resultsToSerialize = Results where State != Passed"]
    D -- No --> F["resultsToSerialize = null"]

    C --> G["Return { mode, summary = Summary.ToSerializable(), results = resultsToSerialize.ToList() }"]
    E --> G
    F --> G
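To make the branch logic above easier to scan, here is a Python transliteration of the same precedence (the real implementation is the C# TestRunResult.ToSerializable in TestRunnerService.cs; the names and types below are illustrative). Note that includeDetails wins when both flags are set.

```python
from dataclasses import dataclass, asdict

@dataclass
class TestResultEntry:
    name: str
    state: str          # e.g. "Passed", "Failed", "Skipped"
    duration: float
    message: str | None = None

def to_serializable(mode, summary, results, include_details=False, include_failed_tests=False):
    """Mirror of the flowchart: include_details takes precedence over include_failed_tests."""
    if include_details:
        selected = [asdict(r) for r in results]                           # all per-test results
    elif include_failed_tests:
        selected = [asdict(r) for r in results if r.state != "Passed"]    # failed/skipped only
    else:
        selected = None                                                   # summary only (default)
    return {"mode": mode, "summary": summary, "results": selected}
```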

File-Level Changes

Change: Make the Unity run_tests tool return a concise summary message and opt-in detailed results, including a warning when no tests match filters.
Details:
  • Parse includeDetails and includeFailedTests flags from tool parameters with safe defaulting on parse errors.
  • Replace inline result message formatting with a reusable FormatTestResultMessage helper that appends a "no tests matched" warning when the total test count is zero.
  • Pass verbosity flags into TestRunResult.ToSerializable and return only summary or filtered results based on those flags.
Files:
  • MCPForUnity/Editor/Tools/RunTests.cs

Change: Support selective serialization of test results in the Unity TestRunnerService to reduce payload size and allow failed-only or full-detail modes.
Details:
  • Extend TestRunResult.ToSerializable to accept includeDetails and includeFailedTests options.
  • When includeDetails is true, serialize all test results as before; when includeFailedTests is true, serialize only non-passed tests; otherwise omit per-test results.
  • Allow results to be null when no per-test data is requested.
Files:
  • MCPForUnity/Editor/Services/TestRunnerService.cs

Change: Align the Python run_tests tool API with the new verbosity behavior and optional results field.
Details:
  • Update the RunTestsResult model to allow results to be None when Unity omits per-test data.
  • Add include_failed_tests and include_details parameters to the run_tests tool with descriptive annotations.
  • Forward verbosity options to Unity by setting includeFailedTests/includeDetails in the command params when requested (see the sketch after this list).
Files:
  • Server/src/services/tools/run_tests.py

Change: Improve reliability and classification of Unity EditMode tests, and add coverage for run_tests message formatting.
Details:
  • Convert ManageScriptableObjectTests setup/wait helpers to Unity coroutines using UnitySetUp and IEnumerator-based WaitForUnityReady to avoid thread sleeps and improve stability.
  • Tag DomainReloadResilienceTests with the domain_reload category and mark them Explicit to avoid slowing or flaking normal runs.
  • Fix test namespaces to MCPForUnityTests.Editor.* so they align with project conventions.
  • Add RunTestsTests to verify FormatTestResultMessage behavior for zero-test and non-zero-test scenarios.
Files:
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/ManageScriptableObjectTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/DomainReloadResilienceTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Helpers/Matrix4x4ConverterTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/MCPToolParameterTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/RunTestsTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/RunTestsTests.cs.meta
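A minimal sketch of the flag forwarding described in the Python server change above, under the assumption that the server builds a params dict and only sets includeFailedTests/includeDetails when the caller opts in; the function name and structure here are illustrative, not the actual run_tests.py code.

```python
def build_run_tests_params(mode="EditMode", include_failed_tests=False, include_details=False, **filters):
    """Attach the verbosity keys only when requested so the default payload stays minimal."""
    params = {"mode": mode, **{k: v for k, v in filters.items() if v is not None}}
    if include_failed_tests:
        params["includeFailedTests"] = True
    if include_details:
        params["includeDetails"] = True
    return params

# Default call carries neither flag; opting in adds exactly the matching key.
assert "includeFailedTests" not in build_run_tests_params()
assert build_run_tests_params(include_details=True)["includeDetails"] is True
```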

@coderabbitai
Contributor

coderabbitai bot commented Jan 2, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

This PR introduces optional parameters to the test result serialization pipeline, enabling selective inclusion of test details based on caller preferences. The ToSerializable method now accepts includeDetails and includeFailedTests flags to control whether all test results, only failed/skipped tests, or no individual results are serialized. The Python server and Unity command handlers are updated to propagate these parameters. Test files are reorganized with namespace standardizations and a new test helper method is added with comprehensive unit tests.

Changes

Cohort: Test Result Serialization Control
  • Files: MCPForUnity/Editor/Services/TestRunnerService.cs, MCPForUnity/Editor/Tools/RunTests.cs, Server/src/services/tools/run_tests.py
  • Summary: Modified TestRunResult.ToSerializable() to accept optional includeDetails and includeFailedTests parameters controlling which test results are included in serialization. Added FormatTestResultMessage() helper method in RunTests.cs. Updated Python server to define and propagate these flags through the run_tests command.

Cohort: Test Namespace Standardization
  • Files: TestProjects/UnityMCPTests/Assets/Tests/EditMode/Helpers/Matrix4x4ConverterTests.cs, TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/DomainReloadResilienceTests.cs, TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/MCPToolParameterTests.cs
  • Summary: Updated namespaces from Tests.EditMode.* to MCPForUnityTests.Editor.* for consistency. DomainReloadResilienceTests adds [Category("domain_reload")] and [Explicit(...)] attributes.

Cohort: Test Infrastructure Updates
  • Files: TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/ManageScriptableObjectTests.cs
  • Summary: Converted synchronous SetUp to a UnitySetUp coroutine pattern with a configurable timeout constant (180 seconds). Replaced blocking Thread.Sleep with yield-based waiting.

Cohort: New Test Coverage
  • Files: TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/RunTestsTests.cs, RunTestsTests.cs.meta
  • Summary: Introduced new unit tests validating FormatTestResultMessage() behavior: tests for the "no tests matched" warning and pass ratio formatting (a Python sketch of this logic follows after this table). Includes Unity meta file for asset import.
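To make the new tests' expectations concrete, here is a short Python paraphrase of the behavior they check; the real helper is the C# FormatTestResultMessage in RunTests.cs, the base message wording below is assumed, and only the zero-match suffix is quoted from the commit notes.

```python
def format_test_result_message(mode: str, total: int, passed: int) -> str:
    # Base summary line; the exact wording in the C# helper may differ.
    message = f"{mode} test run complete: {passed}/{total} passed"
    if total == 0:
        # Suffix quoted from the PR: warns that the filters matched no tests.
        message += " (No tests matched the specified filters)"
    return message

print(format_test_result_message("EditMode", 0, 0))    # includes the "no tests matched" warning
print(format_test_result_message("EditMode", 42, 41))  # plain pass-ratio summary
```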

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • msanatan

Poem

🐰 Through tests we hop, with flags so fine,
Details now optional, by design,
Failed or all, the choice is yours,
Serialization opens new doors! ✨
Namespaces tidy, coroutines flow,
The pipeline's stronger, as we go.


📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c2a6b0a and 4b94751.

📒 Files selected for processing (10)
  • MCPForUnity/Editor/Services/TestRunnerService.cs
  • MCPForUnity/Editor/Tools/RunTests.cs
  • Server/src/services/tools/run_tests.py
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Helpers/Matrix4x4ConverterTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/DomainReloadResilienceTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/MCPToolParameterTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/MCPToolParameterTests.cs.meta
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/ManageScriptableObjectTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/RunTestsTests.cs
  • TestProjects/UnityMCPTests/Assets/Tests/EditMode/Tools/RunTestsTests.cs.meta


@dsarno dsarno merged commit 35a5c75 into CoplayDev:main Jan 2, 2026
0 of 2 checks passed
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • In RunTests.HandleCommand, the includeDetails/includeFailedTests parsing can be simplified and made safer by using @params?['includeDetails']?.Value<bool?>() / Value<bool?>() instead of ToString() + bool.TryParse wrapped in a broad try/catch.
  • In TestRunResult.ToSerializable, consider explicitly documenting or enforcing the precedence when both includeDetails and includeFailedTests are true (currently includeDetails wins), so that callers and future maintainers have a clear expectation of which verbosity mode is applied.


Comment on lines +55 to 59
include_failed_tests: Annotated[bool, "Include details for failed/skipped tests only (default: false)"] = False,
include_details: Annotated[bool, "Include details for all tests (default: false)"] = False,
) -> RunTestsResponse:
unity_instance = get_unity_instance_from_context(ctx)


suggestion (testing): Add server-side tests to verify wiring of include_failed_tests / include_details into the Unity command params.

The Python run_tests entrypoint now exposes these flags and forwards them as includeFailedTests / includeDetails params. Please add tests (e.g., using a mocked send_with_unity_instance) that:

  • Call run_tests with default args and assert neither key appears in params.
  • Call run_tests(include_failed_tests=True) and assert only includeFailedTests is present and True.
  • Call run_tests(include_details=True) and assert only includeDetails is present and True.
  • Optionally cover both flags True together to document combined behavior.

This will lock in the client/server contract for test-result verbosity as the feature evolves.

Suggested implementation:

import pytest

from Server.src.services.tools.run_tests import run_tests


@pytest.mark.asyncio
async def test_run_tests_does_not_include_verbosity_flags_by_default(monkeypatch):
    captured_params = {}

    async def fake_send_with_unity_instance(unity_instance, command, params):
        nonlocal captured_params
        captured_params = params
        # Simulate a minimal successful response structure
        return {
            "mode": "edit",
            "summary": {
                "total": 0,
                "passed": 0,
                "failed": 0,
                "inconclusive": 0,
                "skipped": 0,
                "duration": 0.0,
            },
            "results": [],
        }

    monkeypatch.setattr(
        "Server.src.services.tools.run_tests.send_with_unity_instance",
        fake_send_with_unity_instance,
    )
    # Also stub out Unity instance resolution to avoid hitting real Unity
    monkeypatch.setattr(
        "Server.src.services.tools.run_tests.get_unity_instance_from_context",
        lambda ctx: object(),
    )

    # Act
    await run_tests(ctx=None)

    # Assert
    assert "includeFailedTests" not in captured_params
    assert "includeDetails" not in captured_params


@pytest.mark.asyncio
async def test_run_tests_includes_failed_tests_flag(monkeypatch):
    captured_params = {}

    async def fake_send_with_unity_instance(unity_instance, command, params):
        nonlocal captured_params
        captured_params = params
        return {
            "mode": "edit",
            "summary": {
                "total": 0,
                "passed": 0,
                "failed": 0,
                "inconclusive": 0,
                "skipped": 0,
                "duration": 0.0,
            },
            "results": [],
        }

    monkeypatch.setattr(
        "Server.src.services.tools.run_tests.send_with_unity_instance",
        fake_send_with_unity_instance,
    )
    monkeypatch.setattr(
        "Server.src.services.tools.run_tests.get_unity_instance_from_context",
        lambda ctx: object(),
    )

    # Act
    await run_tests(ctx=None, include_failed_tests=True)

    # Assert
    assert captured_params.get("includeFailedTests") is True
    # Only includeFailedTests should be present
    assert "includeDetails" not in captured_params


@pytest.mark.asyncio
async def test_run_tests_includes_details_flag(monkeypatch):
    captured_params = {}

    async def fake_send_with_unity_instance(unity_instance, command, params):
        nonlocal captured_params
        captured_params = params
        return {
            "mode": "edit",
            "summary": {
                "total": 0,
                "passed": 0,
                "failed": 0,
                "inconclusive": 0,
                "skipped": 0,
                "duration": 0.0,
            },
            "results": [],
        }

    monkeypatch.setattr(
        "Server.src.services.tools.run_tests.send_with_unity_instance",
        fake_send_with_unity_instance,
    )
    monkeypatch.setattr(
        "Server.src.services.tools.run_tests.get_unity_instance_from_context",
        lambda ctx: object(),
    )

    # Act
    await run_tests(ctx=None, include_details=True)

    # Assert
    assert captured_params.get("includeDetails") is True
    # Only includeDetails should be present
    assert "includeFailedTests" not in captured_params


@pytest.mark.asyncio
async def test_run_tests_includes_both_flags_when_true(monkeypatch):
    captured_params = {}

    async def fake_send_with_unity_instance(unity_instance, command, params):
        nonlocal captured_params
        captured_params = params
        return {
            "mode": "edit",
            "summary": {
                "total": 0,
                "passed": 0,
                "failed": 0,
                "inconclusive": 0,
                "skipped": 0,
                "duration": 0.0,
            },
            "results": [],
        }

    monkeypatch.setattr(
        "Server.src.services.tools.run_tests.send_with_unity_instance",
        fake_send_with_unity_instance,
    )
    monkeypatch.setattr(
        "Server.src.services.tools.run_tests.get_unity_instance_from_context",
        lambda ctx: object(),
    )

    # Act
    await run_tests(ctx=None, include_failed_tests=True, include_details=True)

    # Assert
    assert captured_params.get("includeFailedTests") is True
    assert captured_params.get("includeDetails") is True

I assumed a pytest-based test suite and a module path Server/tests/services/tools/test_run_tests.py. If your tests live elsewhere (e.g., tests/server/services/tools/test_run_tests.py or similar), adjust the file_path and the import/monkeypatch targets accordingly so that:

  • from Server.src.services.tools.run_tests import run_tests matches your actual import path.
  • The monkeypatch.setattr targets for send_with_unity_instance and get_unity_instance_from_context use the correct fully-qualified module name.

If your async test runner differs (e.g., pytest.mark.anyio or a custom helper), swap @pytest.mark.asyncio for the appropriate decorator.

@dsarno dsarno deleted the feature/run-tests-summary-clean branch January 4, 2026 22:57
