[aw][test audit] render_cost_view untested for pure active session with no model_metrics #78

Description

Root Cause Analysis

`render_cost_view` in `report.py` evaluates two independent conditionals in sequence for each session:

```python
if s.model_metrics:
    ...  # renders per-model rows
else:
    table.add_row(name, s.model or "—", "—", "—", str(s.model_calls), "—")  # ← branch A

grand_model_calls += s.model_calls

if s.is_active:
    table.add_row("  ↳ Since last shutdown", ...)  # ← branch B
    grand_output += s.active_output_tokens
```

The untested combination is branch A + branch B: a session with `model_metrics={}` (no shutdown data, no tool calls yet) and `is_active=True` (just started). This occurs in practice whenever a user runs `copilot-usage cost` against a session that has only a `session.start` event — no model activity recorded yet.
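A minimal, self-contained sketch of this branch interaction (`FakeSession` and the row strings are simplified stand-ins for the real `SessionSummary` and table rows, not the actual `report.py` types):

```python
from dataclasses import dataclass, field


@dataclass
class FakeSession:
    """Toy stand-in for SessionSummary, reduced to the fields that drive branching."""
    name: str
    is_active: bool = False
    model_metrics: dict = field(default_factory=dict)


def rows_for(s: FakeSession) -> list[str]:
    rows = []
    if s.model_metrics:
        rows.extend(f"{s.name} / {m}" for m in s.model_metrics)  # per-model rows
    else:
        rows.append(f"{s.name} — placeholder")                   # branch A
    if s.is_active:  # evaluated independently, after the metrics branch
        rows.append("  ↳ Since last shutdown")                   # branch B
    return rows


# A "pure active" session (empty metrics, is_active=True) hits branch A, then branch B:
print(rows_for(FakeSession(name="Just Started", is_active=True)))
# → ['Just Started — placeholder', '  ↳ Since last shutdown']
```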

Why existing tests miss this

| Test | is_active | model_metrics | Branches hit |
| --- | --- | --- | --- |
| test_session_without_metrics | False | {} | A only |
| test_active_session_shows_shutdown_row | True | {"claude-opus-4.6": ...} | B only (via if-branch) |
| (missing) | True | {} | A then B |

There is no test that exercises both branches together.

Expected Behaviour to Assert

For a session with `is_active=True` and `model_metrics={}`:

  1. The session row appears with "—" for all metric columns (branch A fires)
  2. A "↳ Since last shutdown" row appears immediately after (branch B fires)
  3. The "N/A" placeholder is shown for the premium-related columns
  4. active_model_calls and format_tokens(active_output_tokens) are rendered in that row
  5. The grand total grand_output includes active_output_tokens (not double-counted)
  6. No exception is raised

Tests to Add in tests/copilot_usage/test_report.py

Add to TestRenderCostView:

```python
def test_pure_active_session_no_metrics_shows_both_rows(self) -> None:
    """Active session with no model_metrics shows placeholder row AND Since-last-shutdown row."""
    session = SessionSummary(
        session_id="pure-active-1234",
        name="Just Started",
        model="claude-sonnet-4",
        start_time=datetime(2025, 1, 15, 10, 0, tzinfo=UTC),
        is_active=True,
        model_calls=2,
        user_messages=1,
        active_model_calls=2,
        active_output_tokens=300,
        # model_metrics intentionally empty
    )
    output = _capture_cost_view([session])
    assert "Just Started" in output
    assert "—" in output                    # placeholder row (no metrics)
    assert "Since last shutdown" in output  # active row
    assert "N/A" in output


def test_pure_active_no_metrics_grand_total_includes_active_tokens(self) -> None:
    """Grand total output tokens includes active_output_tokens for no-metrics active session."""
    session = SessionSummary(
        session_id="pure-active-5678",
        name="Token Check",
        model="claude-sonnet-4",
        start_time=datetime(2025, 1, 15, 10, 0, tzinfo=UTC),
        is_active=True,
        model_calls=1,
        user_messages=1,
        active_model_calls=1,
        active_output_tokens=1500,
    )
    output = _capture_cost_view([session])
    assert "Grand Total" in output
    # 1500 output tokens → formatted as "1.5K"
    assert "1.5K" in output
```
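The `"1.5K"` assertion assumes `format_tokens` abbreviates thousands to one decimal place. A sketch consistent with that assumption (the real helper in `report.py` may round or threshold differently):

```python
def format_tokens(n: int) -> str:
    """Render a token count compactly: 300 -> '300', 1500 -> '1.5K', 2_500_000 -> '2.5M'."""
    if n >= 1_000_000:
        return f"{n / 1_000_000:.1f}M"
    if n >= 1_000:
        return f"{n / 1_000:.1f}K"
    return str(n)


print(format_tokens(1500))  # → 1.5K
```

If the real formatter behaves differently (e.g. `"1,500"`), the test's expected string should be adjusted to match it rather than this sketch.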

Regression Scenarios

  • If the `if s.is_active:` block were accidentally nested inside the `if s.model_metrics:` branch, pure-active sessions would silently lose the "↳ Since last shutdown" row
  • If `grand_output += s.active_output_tokens` were removed from the `if s.is_active:` block (the only place active output is counted when `model_metrics` is empty), the grand total would under-count output for pure-active sessions
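The first regression can be demonstrated with two toy row builders (names and row strings are illustrative, not from `report.py`): a flat structure matching the current code, and a buggy variant with the active check nested inside the metrics branch.

```python
def rows_flat(metrics: dict, active: bool) -> list[str]:
    """Current structure: the active check runs for every session."""
    rows = ["per-model" if metrics else "placeholder"]
    if active:
        rows.append("since-last-shutdown")
    return rows


def rows_nested(metrics: dict, active: bool) -> list[str]:
    """Buggy structure: the active check is unreachable when metrics == {}."""
    rows = []
    if metrics:
        rows.append("per-model")
        if active:  # bug: never runs for a pure-active session
            rows.append("since-last-shutdown")
    else:
        rows.append("placeholder")
    return rows


print(rows_flat({}, True))    # → ['placeholder', 'since-last-shutdown']
print(rows_nested({}, True))  # → ['placeholder']  (shutdown row silently lost)
```

The proposed tests would catch this: only the flat variant emits both rows for a pure-active session.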

Generated by Test Suite Analysis
