📡 OTel Instrumentation Improvement: add gen_ai.request.model to conclusion span attributes
Analysis Date: 2026-04-26
Priority: Medium
Effort: Small (< 2h)
Problem
`sendJobConclusionSpan` captures `awInfo.model` at line 707 of `send_otlp_span.cjs` but never adds it to the conclusion span's `attributes` array. The model name is pushed only onto `agentAttributes` (the `gh-aw.agent.agent` sub-span) at line 884, and the sub-span is conditionally emitted only when `jobName === "agent"` AND valid timing data is available.
This means:
- Engineers querying `gh-aw.*.conclusion` spans in Grafana/Honeycomb/Datadog cannot group or filter by model — the most natural query point carries no model information.
- On non-agent jobs, timed-out runs, or any case where `agent_output.json` has no `mtimeMs`, the model name is not present in any span for that run.
- Cost attribution by model requires joining the conclusion span with a child sub-span, which is brittle and requires backend-specific computed fields.
Why This Matters (DevOps Perspective)
When the model is rotated (e.g., claude-sonnet-4-5 → claude-sonnet-4-6) or an experiment runs two models in parallel, engineers need to answer:
- "Is the failure rate higher for one model than another?"
- "Which model is consuming the most tokens per run?"
- "When did we switch from model A to model B, and did error rates change?"
All three questions require grouping conclusion spans by model name. Without gen_ai.request.model on the conclusion span, every such query requires a sub-span join that may not be available in all backends, and silently returns no results for runs where the agent sub-span was not emitted.
Adding the model name to the conclusion span unblocks out-of-the-box model-level dashboards and makes failure-rate-by-model alerts trivial to configure.
Current Behavior
In `actions/setup/js/send_otlp_span.cjs`:

```javascript
// Line 707 — model is captured but never added to the conclusion span
const model = awInfo.model || "";

// Lines 748–800 — conclusion span attributes (model is absent)
const attributes = [buildAttr("gh-aw.workflow.name", workflowName), ...];
if (engineId) attributes.push(buildAttr("gh-aw.engine.id", engineId));
// ... no model push here ...

// Lines 880–884 — model only reaches the agent sub-span
const agentAttributes = [...attributes];
agentAttributes.push(buildAttr("gen_ai.operation.name", "chat"));
if (model) agentAttributes.push(buildAttr("gen_ai.request.model", model));

// Lines 873–923 — agent sub-span is conditionally emitted
if (jobName === "agent" && typeof agentStartMs === "number" && agentStartMs > 0
    && typeof agentEndMs === "number" && agentEndMs > agentStartMs) {
  // Only here does gen_ai.request.model appear in the trace
}

// Lines 926–940 — conclusion span uses `attributes`, not `agentAttributes`
const payload = buildOTLPPayload({
  spanName,
  attributes, // ← no model name
  ...
});
```
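The exact shape `buildAttr` produces is not shown in the excerpt above; assuming it emits the standard OTLP/JSON `KeyValue` encoding, a minimal sketch of the attribute the conclusion span is missing looks like this (`buildAttr` here is a hypothetical stand-in for the real helper):

```javascript
// Hypothetical sketch of buildAttr, assuming the standard OTLP/JSON
// KeyValue encoding; the real helper in send_otlp_span.cjs may differ.
function buildAttr(key, value) {
  return { key, value: { stringValue: String(value) } };
}

// The attribute the conclusion span currently lacks:
const attr = buildAttr("gen_ai.request.model", "claude-sonnet-4-5");
console.log(JSON.stringify(attr));
// → {"key":"gen_ai.request.model","value":{"stringValue":"claude-sonnet-4-5"}}
```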
Proposed Change
Add a single push to the conclusion span's `attributes` array, immediately after the existing `engineId` push. Place it near the other GenAI-adjacent attributes for readability:

```javascript
// actions/setup/js/send_otlp_span.cjs — inside sendJobConclusionSpan
// After line 751: if (engineId) attributes.push(buildAttr("gh-aw.engine.id", engineId));
if (model) attributes.push(buildAttr("gen_ai.request.model", model));
```
This mirrors the pattern already used in `agentAttributes` and follows the OTel GenAI semantic convention (`gen_ai.request.model` is the standardized key).
No other files need to change for this fix. If a `gh-aw.model` alias is also desired for non-GenAI querying, a second line can be added in the same location:

```javascript
if (model) attributes.push(buildAttr("gh-aw.model", model));
```
Expected Outcome
After this change:
- In Grafana / Honeycomb / Datadog: `gen_ai.request.model` becomes available as a dimension on every `gh-aw.*.conclusion` span. Engineers can create a panel grouped by model, filter failure rates per model, and set alerts on model-specific error rates — all without sub-span joins.
- In the JSONL mirror: `/tmp/gh-aw/otel.jsonl` conclusion span lines will include `gen_ai.request.model`, making post-hoc debugging of model-related failures possible from the artifact alone.
- For on-call engineers: "Which model was running when this job failed?" becomes a single-attribute lookup on the conclusion span rather than a multi-span join.
- For timed-out and cancelled runs: The model name will appear even when the agent sub-span is not emitted, closing a silent data gap.
Implementation Steps
1. In `actions/setup/js/send_otlp_span.cjs` (`sendJobConclusionSpan`), after `if (engineId) attributes.push(buildAttr("gh-aw.engine.id", engineId));` (~line 751), add `if (model) attributes.push(buildAttr("gen_ai.request.model", model));`
2. Add a test in `send_otlp_span.test.cjs` asserting that `gen_ai.request.model` is present on the conclusion span when `awInfo.model` is set
3. Run `cd actions/setup/js && npx vitest run` to confirm tests pass
4. Run `make fmt` to ensure formatting
Evidence from Live Sentry Data
Live Sentry telemetry was not available during this analysis run — the Sentry MCP server returned no tools (`[]`). The gap was identified through static analysis of `send_otlp_span.cjs`.
The absence of `gen_ai.request.model` on conclusion spans can be confirmed by querying your OTel backend for spans where `span.name` matches `gh-aw.*.conclusion` and checking whether `gen_ai.request.model` is populated. It will be absent on all such spans with the current code.
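The same check can be run offline against the JSONL mirror. Assuming each line of the mirror holds one span object with a `name` and an OTLP-style `attributes` array (a guess at the format, not confirmed from the source), a scan can be scripted; `sampleJsonl` below stands in for reading the real file:

```javascript
// Scan JSONL span lines for conclusion spans missing gen_ai.request.model.
// The span shape ({ name, attributes: [{ key, ... }] }) is an assumed
// mirror format; sampleJsonl stands in for reading /tmp/gh-aw/otel.jsonl.
const sampleJsonl = [
  '{"name":"gh-aw.ci.conclusion","attributes":[{"key":"gh-aw.engine.id","value":{"stringValue":"claude"}}]}',
  '{"name":"gh-aw.agent.agent","attributes":[{"key":"gen_ai.request.model","value":{"stringValue":"claude-sonnet-4-5"}}]}',
].join("\n");

const missing = sampleJsonl
  .split("\n")
  .map(line => JSON.parse(line))
  .filter(span => /^gh-aw\..+\.conclusion$/.test(span.name))
  .filter(span => !span.attributes.some(a => a.key === "gen_ai.request.model"));

console.log(missing.map(s => s.name)); // the conclusion span lacks the model key
```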
Related Files
- `actions/setup/js/send_otlp_span.cjs` — primary file to modify (`sendJobConclusionSpan`, ~line 751)
- `actions/setup/js/action_conclusion_otlp.cjs` — no change needed; calls `sendJobConclusionSpan` unchanged
- `actions/setup/js/send_otlp_span.test.cjs` — add test assertion for `gen_ai.request.model` on conclusion spans
Generated by the Daily OTel Instrumentation Advisor workflow