fix(llmobs): fix missing estimated cost on Bedrock LLM spans #7952
Pass the full `modelId` and `"amazon_bedrock"` as `model_provider` to LLMObs instead of the split values from `parseModelId()`. The split values prevented the backend cost estimator from matching Bedrock pricing entries. Mirrors the Python tracer fix in DataDog/dd-trace-py#17293.
What does this PR do?
Fixes `model_name` and `model_provider` on Bedrock LLMObs span events so the backend cost estimator can match them to pricing entries.

Before:

- `model_provider`: split vendor name (e.g., `"anthropic"`, `"amazon"`, `"meta"`)
- `model_name`: split model name (e.g., `"claude-3-sonnet-20240229-v1:0"`, `"nova-lite-v1:0"`)

After:

- `model_provider`: `"amazon_bedrock"` (matches the backend cost estimator and the existing frontend icon mapping)
- `model_name`: full `modelId` (e.g., `"anthropic.claude-3-sonnet-20240229-v1:0"`, `"amazon.nova-lite-v1:0"`)

Motivation
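The before/after tag values can be sketched as follows. This is illustrative only: the helper names and the returned tag shape are assumptions for the example, not the actual dd-trace-js internals.

```javascript
// Illustrative sketch of the tagging change; function names and the tag
// shape are assumptions, not the real dd-trace-js internals.
function llmobsTagsBefore (modelId) {
  // Old behavior: forward the split values derived from the modelId
  const [modelProvider, ...rest] = modelId.split('.')
  return { model_provider: modelProvider, model_name: rest.join('.') }
}

function llmobsTagsAfter (modelId) {
  // New behavior: fixed provider string plus the full modelId
  return { model_provider: 'amazon_bedrock', model_name: modelId }
}

console.log(llmobsTagsBefore('amazon.nova-lite-v1:0'))
// { model_provider: 'amazon', model_name: 'nova-lite-v1:0' }
console.log(llmobsTagsAfter('amazon.nova-lite-v1:0'))
// { model_provider: 'amazon_bedrock', model_name: 'amazon.nova-lite-v1:0' }
```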
Estimated cost is missing or incorrect on Bedrock LLM spans.
`parseModelId()` splits model IDs like `"amazon.nova-lite-v1:0"` into `modelProvider = "amazon"` and `modelName = "nova-lite-v1:0"` for APM span tags. The LLMObs plugin passed these split values through to `registerLLMObsSpan()`, which sends them to the backend cost estimator. The estimator couldn't match either value:

- `"amazon"` doesn't match the AWS Bedrock provider (`provider_match: contains "bedrock"`)
- `"nova-lite-v1:0"` doesn't match pricing entries (`match: contains "amazon.nova-lite"`)

For vendors with their own direct-API provider (Anthropic, Mistral), the split provider name matched the wrong provider, resulting in direct-API pricing instead of Bedrock pricing.
This is the same issue fixed in the Python tracer: DataDog/dd-trace-py#17293.
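The `contains`-style matching described above can be shown with a toy matcher. The pricing-entry shape below is an assumption based on the description in this PR, not the backend's actual schema:

```javascript
// Toy cost-estimator matcher; the entry shape is an assumption based on the
// PR description (provider_match / match use "contains" semantics).
const pricingEntry = {
  provider_match: 'bedrock',   // provider must contain "bedrock"
  match: 'amazon.nova-lite'    // model name must contain "amazon.nova-lite"
}

function matches (entry, provider, modelName) {
  return provider.includes(entry.provider_match) &&
    modelName.includes(entry.match)
}

// Split values (old behavior): neither clause matches.
console.log(matches(pricingEntry, 'amazon', 'nova-lite-v1:0')) // false

// Full values (new behavior): both clauses match.
console.log(matches(pricingEntry, 'amazon_bedrock', 'amazon.nova-lite-v1:0')) // true
```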
Additional Notes
- Why `parseModelId()` is still called: the split `modelProvider`/`modelName` values are still needed by `extractRequestParams()` and `extractTextAndResponseReason()`, which use the provider name to select the correct request/response body parser for each Bedrock provider.
- `"amazon_bedrock"` was chosen over `"bedrock"` because the frontend already maps `"amazon_bedrock"` to the `LlmType.AMAZON_BEDROCK` icon [ref].
- APM span tags (`aws.bedrock.request.model_provider` and `aws.bedrock.request.model`) are unchanged.

Testing: `bedrockruntime.spec.js` (32 passed, 4 skipped; pre-existing skips)

Before

After

Risks:

- Changes `meta.model_provider` and `meta.model_name` on LLMObs span events. Customers filtering by `@meta.model_provider:anthropic` for Bedrock models would need to update to `@meta.model_provider:amazon_bedrock`. However, those customers were getting incorrect pricing, so the filter was already misleading.
- `model_name`: cost matching still works for system inference profiles (the model name is embedded in the ARN and matched via `contains` clauses).
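The inference-profile point can be checked the same way: because the model identifier is embedded in the profile ARN, a `contains` clause on the model name still fires. The ARN below is made up for illustration:

```javascript
// Hypothetical system inference profile ARN embedding the model name.
const arn =
  'arn:aws:bedrock:us-east-1:123456789012:inference-profile/us.amazon.nova-lite-v1:0'

// With the full ARN passed as model_name, a contains-style pricing match
// such as `match: contains "amazon.nova-lite"` still succeeds.
console.log(arn.includes('amazon.nova-lite')) // true
```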