fix(claude-code): correct token counts, add per-call cost, and show inline activity #1015
Merged
zbigniewsobiecki merged 1 commit into dev on Mar 23, 2026
Conversation
Token counts in the LLM Calls tab were near-zero for Claude Code runs because
only usage.input_tokens (the uncached portion) was recorded. With Anthropic
prompt caching, the vast majority of tokens land in cache_read_input_tokens and
cache_creation_input_tokens instead.
Fix logClaudeCodeLlmCall:
- inputTokens = input_tokens + cache_read_input_tokens + cache_creation_input_tokens
- cachedTokens = cache_read_input_tokens (tokens served from cache)
- costUsd computed via calculateCost() using toPricingKey() which converts
'claude-sonnet-4-5-20250929' → 'anthropic:claude-sonnet-4-5' for pricing lookup
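A minimal sketch of the corrected accounting. The usage field names come from Anthropic's Messages API; the interface and the exact `toPricingKey()` implementation here are illustrative assumptions, not the PR's actual code:

```typescript
// Shape of Anthropic's usage object (illustrative subset).
interface AnthropicUsage {
  input_tokens: number; // uncached portion only
  output_tokens: number;
  cache_read_input_tokens?: number; // tokens served from cache
  cache_creation_input_tokens?: number; // tokens written to cache
}

// With prompt caching, the real input size is the sum of all three fields.
function totalInputTokens(usage: AnthropicUsage): number {
  return (
    usage.input_tokens +
    (usage.cache_read_input_tokens ?? 0) +
    (usage.cache_creation_input_tokens ?? 0)
  );
}

// Hypothetical toPricingKey: strip the 8-digit date suffix from the raw
// model ID and prefix the provider, matching the format described above,
// e.g. 'claude-sonnet-4-5-20250929' -> 'anthropic:claude-sonnet-4-5'.
function toPricingKey(modelId: string): string {
  return `anthropic:${modelId.replace(/-\d{8}$/, "")}`;
}
```

With a typical cached call (e.g. `input_tokens: 3`, `cache_read_input_tokens: 5000`), the old code would have recorded 3 tokens where the corrected sum records 5203.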
Also refactor processAssistantMessage to extract processContentBlock helper,
reducing Biome cognitive complexity from 17 to under 15.
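The refactor might look something like the following sketch. The block types, accumulator fields, and signatures are assumptions for illustration; only the split itself (per-block logic moved out of the message loop) is what the commit describes:

```typescript
// Simplified content-block union (assumed shape, loosely following
// Anthropic's message content blocks).
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string }
  | { type: "tool_use"; name: string; input: unknown };

interface Accumulator {
  text: string;
  thinkingChars: number;
  toolNames: string[];
}

// Extracted helper: handles exactly one block, so the per-type branching
// no longer counts against processAssistantMessage's cognitive complexity.
function processContentBlock(acc: Accumulator, block: ContentBlock): void {
  switch (block.type) {
    case "text":
      acc.text += block.text;
      break;
    case "thinking":
      acc.thinkingChars += block.thinking.length;
      break;
    case "tool_use":
      acc.toolNames.push(block.name);
      break;
  }
}

function processAssistantMessage(blocks: ContentBlock[]): Accumulator {
  const acc: Accumulator = { text: "", thinkingChars: 0, toolNames: [] };
  for (const block of blocks) processContentBlock(acc, block);
  return acc;
}
```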
LLM Calls tab activity column improvements:
- listLlmCalls API now returns toolCalls [{name, inputSummary}] instead of flat
toolNames [], surfacing the parsed input summary (file path, command, etc.) for
each tool call without requiring row expansion
- thinkingChars added: total chars across all thinking blocks shown inline as
"thinking (N chars)" in muted italic
- Each tool call rendered as [Badge] monospace-param inline in the list row
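The new payload and inline rendering could be sketched as follows. The interface names and the `activityLine` helper are hypothetical; only the `toolCalls [{name, inputSummary}]` shape and the "thinking (N chars)" text come from the change itself:

```typescript
// Assumed shape of one row returned by listLlmCalls after this change.
interface ToolCall {
  name: string; // e.g. "Read", "Bash"
  inputSummary: string; // parsed file path, command, etc.
}

interface LlmCallRow {
  toolCalls: ToolCall[];
  thinkingChars: number;
}

// Hypothetical helper: build the inline activity text for one list row,
// e.g. '[Read] /src/app.ts  [Bash] git status  thinking (1024 chars)'.
function activityLine(row: LlmCallRow): string {
  const parts = row.toolCalls.map((t) => `[${t.name}] ${t.inputSummary}`);
  if (row.thinkingChars > 0) {
    parts.push(`thinking (${row.thinkingChars} chars)`);
  }
  return parts.join("  ");
}
```

In the actual UI the badge and the muted-italic thinking note are styled components rather than a plain string, but the data flow is the same: everything needed for the row is in the list response, with no expansion round-trip.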
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
logClaudeCodeLlmCallwas recording onlyusage.input_tokens, which with Anthropic prompt caching is the uncached portion (typically 0–8 tokens). Now sums all three usage fields:input_tokens + cache_read_input_tokens + cache_creation_input_tokenscachedTokens: now set tocache_read_input_tokensso the Cached stat card in the LLM Calls tab reflects realitytoPricingKey()helper converts raw Anthropic model IDs (claude-sonnet-4-5-20250929) to theanthropic:claude-sonnet-4-5format expected bycalculateCost()listLlmCallsAPI now returnstoolCalls [{name, inputSummary}]instead of a flattoolNames [], so each row shows[Read] /path/to/file.tsor[Bash] git commit -m "..."without requiring expansion; thinking turns showthinking (N chars)in muted italicprocessAssistantMessage: extractedprocessContentBlockhelper to bring Biome cognitive complexity from 17 → under 15Test plan
npm run typecheck— cleancd web && npx tsc --noEmit— cleannpm run lint— no errorsnpm test— 6625 tests pass including newclaude-code-messageProcessing.test.ts[Tool] paraminline per row; thinking turns showthinking (N chars)🤖 Generated with Claude Code