
[aw][test audit] Test gaps: session detail rendering — failed tools, unknown events, hours-range durations #18

@github-actions

Description


Several branches inside report.py's session-detail rendering pipeline are not exercised by any test today. Together they produce meaningful user-visible output that could silently regress.

Root cause

_build_event_details, _event_type_label, _format_relative_time, and _format_detail_duration each contain branches that the existing unit tests do not reach.

Untested paths

_build_event_details (report.py ~L231)

| Scenario | Expected output |
| --- | --- |
| TOOL_EXECUTION_COMPLETE with success=False | row details contain "✗" (not "✓") |
| TOOL_EXECUTION_COMPLETE with toolTelemetry=None | tool name is empty; row still renders without error |
| TOOL_EXECUTION_COMPLETE with no tool_name in telemetry properties | empty string for tool name, rest of row is correct |
| SESSION_SHUTDOWN event in recent events | details contain "type=(shutdownType)" |
| ASSISTANT_MESSAGE with outputTokens=0 but non-empty content | only content is shown (no tokens=0 prefix) |

_event_type_label (report.py ~L208)

The match statement has four labelled arms, plus the fallback, that no test reaches:

  • TOOL_EXECUTION_START → should render "tool start"
  • ASSISTANT_TURN_END → should render "turn end"
  • SESSION_START → should render "session start"
  • SESSION_SHUTDOWN → should render "session end"
  • fallback _ arm for a completely unknown event type → should return Text(event_type, style="dim")

_format_relative_time (report.py ~L173)

Only the < 1 hour path is hit. The hours branch (return f"+{hours}:{minutes:02d}:{seconds:02d}") is unreachable with current test data because no fixture or synthetic event spans more than an hour from session_start.

_format_detail_duration (report.py ~L190)

The existing test (test_completed_session_shows_duration) covers the Xm Ys branch. The Xh Ym branch (sessions longer than 60 minutes) has no test.

Tests to add

All new tests belong in tests/copilot_usage/test_report.py.

class TestBuildEventDetails:
    """Direct tests for _build_event_details (imported from report)."""

    def test_tool_failure_shows_cross(self):
        ev = _make_event(EventType.TOOL_EXECUTION_COMPLETE,
                         data={"toolCallId": "t1", "success": False,
                               "toolTelemetry": {"properties": {"tool_name": "bash"}}})
        details = _build_event_details(ev)
        assert "✗" in details
        assert "✓" not in details

    def test_tool_no_telemetry(self):
        ev = _make_event(EventType.TOOL_EXECUTION_COMPLETE,
                         data={"toolCallId": "t1", "success": True})
        details = _build_event_details(ev)  # must not raise
        assert "✓" in details

    def test_tool_no_tool_name_in_properties(self):
        ev = _make_event(EventType.TOOL_EXECUTION_COMPLETE,
                         data={"toolCallId": "t1", "success": True,
                               "toolTelemetry": {"properties": {}}})
        details = _build_event_details(ev)
        assert "✓" in details   # no crash, no spurious tool name

    def test_session_shutdown_details(self):
        ev = _make_event(EventType.SESSION_SHUTDOWN,
                         data={"shutdownType": "routine",
                               "totalPremiumRequests": 5,
                               "totalApiDurationMs": 1000,
                               "modelMetrics": {}})
        details = _build_event_details(ev)
        assert "routine" in details

    def test_assistant_message_zero_tokens_shows_content(self):
        ev = _make_event(EventType.ASSISTANT_MESSAGE,
                         data={"messageId": "m1", "content": "hello",
                               "outputTokens": 0})
        details = _build_event_details(ev)
        assert "hello" in details
        assert "tokens=0" not in details


class TestEventTypeLabel:
    """Tests for _event_type_label covering all match arms."""

    @pytest.mark.parametrize("event_type,expected_text", [
        (EventType.TOOL_EXECUTION_START, "tool start"),
        (EventType.ASSISTANT_TURN_END,   "turn end"),
        (EventType.SESSION_START,        "session start"),
        (EventType.SESSION_SHUTDOWN,     "session end"),
        ("some.future.event",            "some.future.event"),  # fallback
    ])
    def test_label_text(self, event_type, expected_text):
        label = _event_type_label(event_type)
        assert label.plain == expected_text


class TestFormatRelativeTime:
    def test_hours_branch(self):
        from copilot_usage.report import _format_relative_time
        from datetime import timedelta
        delta = timedelta(hours=2, minutes=5, seconds=30)
        assert _format_relative_time(delta) == "+2:05:30"

    def test_minutes_only(self):
        from copilot_usage.report import _format_relative_time
        from datetime import timedelta
        delta = timedelta(minutes=3, seconds=15)
        assert _format_relative_time(delta) == "+3:15"


class TestFormatDetailDuration:
    def test_hours_branch(self):
        from copilot_usage.report import _format_detail_duration
        from datetime import UTC, datetime, timedelta
        start = datetime(2025, 1, 1, 0, 0, 0, tzinfo=UTC)
        end   = start + timedelta(hours=2, minutes=30)
        assert _format_detail_duration(start, end) == "2h 30m"

    def test_seconds_branch(self):
        from copilot_usage.report import _format_detail_duration
        from datetime import UTC, datetime, timedelta
        start = datetime(2025, 1, 1, tzinfo=UTC)
        end   = start + timedelta(seconds=45)
        assert _format_detail_duration(start, end) == "45s"

Additionally, add an integration test that calls render_session_detail with a TOOL_EXECUTION_START and ASSISTANT_TURN_END event so that those labels appear in the rendered "Recent Events" table.

Regression scenarios

  • A future refactor of the match statement in _event_type_label could silently drop a label; parameterised tests will catch every arm.
  • If _format_detail_duration's hours branch contains a bug it would only surface for sessions > 1 h — the exact case users would notice.

Generated by Test Suite Analysis

Labels: aw (Created by agentic workflow), enhancement (New feature or request), test-audit (Test coverage gaps)