
Expose elapsed_ms in step.http_call output for latency measurement#383

Merged
intel352 merged 4 commits into main from copilot/add-elapsed-ms-to-http-call-output
Mar 26, 2026

Conversation

Contributor

Copilot AI commented Mar 26, 2026

step.http_call had no way to surface request duration, forcing workarounds (e.g. -1 sentinel values) in latency metrics for shadow mode / proxy patterns.

Changes

  • module/pipeline_step_http_call.go — Wraps httpClient.Do + io.ReadAll in a time.Now()/time.Since measurement; adds elapsed_ms (int64, ms) to the output map on both the normal and OAuth2 token-refresh retry paths.
  • schema/step_schema_builtins.go — Declares elapsed_ms as a static output field on step.http_call so template validation and UI tooling recognize it.
  • Tests — TestHTTPCallStep_ElapsedMS asserts the field is present and non-negative; TestInferStepOutputs_Fallback updated to include elapsed_ms.

Usage

- name: call_primary
  type: step.http_call
  config:
    url: "https://primary-service/api/resource"
    method: GET

- name: log_latency
  type: step.log
  config:
    level: info
    message: "latency={{ .steps.call_primary.elapsed_ms }}ms"
Original prompt

This section details the original issue you should resolve

<issue_title>Expose elapsed_ms in step.http_call output for latency measurement</issue_title>
<issue_description>## Summary

step.http_call does not currently expose the request duration in its output. We need an elapsed_ms field in the step output so pipelines can measure and log upstream latency.

Use Case

In a facade/proxy pattern with shadow mode, a pipeline calls two upstream services in parallel and logs divergence metrics including latency:

- name: call_primary
  type: step.http_call
  config:
    url: "https://primary-service/api/resource"
    method: GET

- name: call_shadow
  type: step.http_call
  config:
    url: "https://shadow-service/api/resource"
    method: GET

- name: log_divergence
  type: step.log
  config:
    level: info
    message: "primary_latency={{.steps.call_primary.elapsed_ms}}ms shadow_latency={{.steps.call_shadow.elapsed_ms}}ms"

Currently there is no way to access the request duration from step.http_call output, forcing workarounds like -1 sentinel values in latency metrics.

Proposed Behavior

  • Add elapsed_ms (int64, milliseconds) to step.http_call output alongside existing fields (status_code, body, headers)
  • Measured as wall-clock time from request send to response fully read
  • Available in pipeline context as steps.<step_name>.elapsed_ms

Context

This is a blocker for enabling shadow mode divergence logging in a multi-service migration. Without real latency values, divergence metrics are meaningless for capacity planning and performance comparison between services.</issue_description>

Comments on the Issue (you are @copilot in this section)



Copilot AI changed the title from "[WIP] Add elapsed_ms field to step.http_call output for latency measurement" to "Expose elapsed_ms in step.http_call output for latency measurement" Mar 26, 2026
Copilot AI requested a review from intel352 March 26, 2026 08:13
@intel352 intel352 marked this pull request as ready for review March 26, 2026 08:17
Copilot AI review requested due to automatic review settings March 26, 2026 08:17
Contributor

Copilot AI left a comment


Pull request overview

Adds request-duration visibility to the step.http_call pipeline step so workflows can record upstream latency without external timers/workarounds.

Changes:

  • Measure request wall-clock duration around httpClient.Do + io.ReadAll, and expose it as elapsed_ms in the step output (including the OAuth2 401 retry path).
  • Update the builtin step schema to declare elapsed_ms as an output field for validation/tooling.
  • Add/adjust unit tests to assert elapsed_ms is present and included in inferred outputs.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

File Description
module/pipeline_step_http_call.go Adds timing measurements and returns elapsed_ms in outputs for normal and retry execution paths.
module/pipeline_step_http_call_test.go Adds a new test ensuring elapsed_ms exists and is non-negative.
schema/step_schema_builtins.go Declares elapsed_ms as a static output on step.http_call for schema-based tooling.
schema/step_schema_test.go Updates fallback output inference test to include elapsed_ms.

Comment on lines 84 to 89
Outputs: []StepOutputDef{
{Key: "status", Type: "number", Description: "HTTP response status code"},
{Key: "body", Type: "any", Description: "Response body (parsed as JSON if Content-Type is application/json)"},
{Key: "headers", Type: "map", Description: "Response headers"},
{Key: "elapsed_ms", Type: "number", Description: "Request duration in milliseconds (wall-clock time from send to response fully read)"},
},

Copilot AI Mar 26, 2026


The declared outputs for step.http_call don’t match the actual runtime output from parseHTTPResponse: runtime includes status_code (int) and status (string like "200 OK"), but the schema currently declares status as a numeric status code and does not declare status_code at all. This will cause template validation/UI tooling to mis-type status and potentially warn/error on valid status_code references; consider updating the schema outputs to include status_code (number) and status (string), alongside body, headers, and the new elapsed_ms.

Comment on lines 523 to 531
respBody, err = io.ReadAll(retryResp.Body)
retryElapsedMS := time.Since(retryStart).Milliseconds()
if err != nil {
return nil, fmt.Errorf("http_call step %q: failed to read retry response: %w", s.name, err)
}

output := parseHTTPResponse(retryResp, respBody)
output["elapsed_ms"] = retryElapsedMS
if instanceURL := s.oauthEntry.getInstanceURL(); instanceURL != "" {

Copilot AI Mar 26, 2026


The retry-on-401 path now sets elapsed_ms, but there’s no assertion covering this behavior. Since there’s already a 401 refresh test covering the retry flow, consider extending it (or adding a dedicated test) to assert elapsed_ms is present and non-negative on the retry output as well, so this key doesn’t regress on the refresh path.


github-actions bot commented Mar 26, 2026

⏱ Benchmark Results

No significant performance regressions detected.

benchstat comparison (baseline → PR)
## benchstat: baseline → PR
baseline-bench.txt:245: parsing iteration count: invalid syntax
baseline-bench.txt:295278: parsing iteration count: invalid syntax
baseline-bench.txt:605527: parsing iteration count: invalid syntax
baseline-bench.txt:923438: parsing iteration count: invalid syntax
baseline-bench.txt:1209279: parsing iteration count: invalid syntax
baseline-bench.txt:1507855: parsing iteration count: invalid syntax
benchmark-results.txt:245: parsing iteration count: invalid syntax
benchmark-results.txt:308325: parsing iteration count: invalid syntax
benchmark-results.txt:602065: parsing iteration count: invalid syntax
benchmark-results.txt:861143: parsing iteration count: invalid syntax
benchmark-results.txt:1153107: parsing iteration count: invalid syntax
benchmark-results.txt:1443586: parsing iteration count: invalid syntax
goos: linux
goarch: amd64
pkg: github.com/GoCodeAlone/workflow/dynamic
cpu: AMD EPYC 7763 64-Core Processor                
                            │ baseline-bench.txt │        benchmark-results.txt        │
                            │       sec/op       │    sec/op      vs base              │
InterpreterCreation-4              3.170m ± 207%   3.238m ± 217%       ~ (p=0.394 n=6)
ComponentLoad-4                    3.630m ±   6%   3.804m ±  12%  +4.80% (p=0.041 n=6)
ComponentExecute-4                 1.932µ ±   2%   1.957µ ±   1%  +1.32% (p=0.024 n=6)
PoolContention/workers-1-4         1.085µ ±   1%   1.099µ ±   2%  +1.29% (p=0.037 n=6)
PoolContention/workers-2-4         1.085µ ±   1%   1.097µ ±   4%  +1.06% (p=0.011 n=6)
PoolContention/workers-4-4         1.083µ ±   1%   1.113µ ±   2%  +2.77% (p=0.002 n=6)
PoolContention/workers-8-4         1.086µ ±   1%   1.106µ ±   1%  +1.84% (p=0.002 n=6)
PoolContention/workers-16-4        1.088µ ±   1%   1.110µ ±   2%  +1.98% (p=0.002 n=6)
ComponentLifecycle-4               3.595m ±   1%   3.761m ±   2%  +4.62% (p=0.002 n=6)
SourceValidation-4                 2.231µ ±   2%   2.270µ ±   1%  +1.75% (p=0.009 n=6)
RegistryConcurrent-4               789.5n ±   3%   800.0n ±   2%       ~ (p=0.589 n=6)
LoaderLoadFromString-4             3.667m ±   2%   3.784m ±   4%  +3.18% (p=0.002 n=6)
geomean                            17.41µ          17.82µ         +2.33%

                            │ baseline-bench.txt │        benchmark-results.txt         │
                            │        B/op        │     B/op      vs base                │
InterpreterCreation-4               2.027Mi ± 0%   2.027Mi ± 0%       ~ (p=0.608 n=6)
ComponentLoad-4                     2.180Mi ± 0%   2.180Mi ± 0%       ~ (p=0.461 n=6)
ComponentExecute-4                  1.203Ki ± 0%   1.203Ki ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-1-4          1.203Ki ± 0%   1.203Ki ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-2-4          1.203Ki ± 0%   1.203Ki ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-4-4          1.203Ki ± 0%   1.203Ki ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-8-4          1.203Ki ± 0%   1.203Ki ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-16-4         1.203Ki ± 0%   1.203Ki ± 0%       ~ (p=1.000 n=6) ¹
ComponentLifecycle-4                2.183Mi ± 0%   2.183Mi ± 0%       ~ (p=0.619 n=6)
SourceValidation-4                  1.984Ki ± 0%   1.984Ki ± 0%       ~ (p=1.000 n=6) ¹
RegistryConcurrent-4                1.133Ki ± 0%   1.133Ki ± 0%       ~ (p=1.000 n=6) ¹
LoaderLoadFromString-4              2.182Mi ± 0%   2.182Mi ± 0%       ~ (p=0.329 n=6)
geomean                             15.25Ki        15.25Ki       -0.00%
¹ all samples are equal

                            │ baseline-bench.txt │        benchmark-results.txt        │
                            │     allocs/op      │  allocs/op   vs base                │
InterpreterCreation-4                15.68k ± 0%   15.68k ± 0%       ~ (p=1.000 n=6)
ComponentLoad-4                      18.02k ± 0%   18.02k ± 0%       ~ (p=1.000 n=6)
ComponentExecute-4                    25.00 ± 0%    25.00 ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-1-4            25.00 ± 0%    25.00 ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-2-4            25.00 ± 0%    25.00 ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-4-4            25.00 ± 0%    25.00 ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-8-4            25.00 ± 0%    25.00 ± 0%       ~ (p=1.000 n=6) ¹
PoolContention/workers-16-4           25.00 ± 0%    25.00 ± 0%       ~ (p=1.000 n=6) ¹
ComponentLifecycle-4                 18.07k ± 0%   18.07k ± 0%       ~ (p=1.000 n=6) ¹
SourceValidation-4                    32.00 ± 0%    32.00 ± 0%       ~ (p=1.000 n=6) ¹
RegistryConcurrent-4                  2.000 ± 0%    2.000 ± 0%       ~ (p=1.000 n=6) ¹
LoaderLoadFromString-4               18.06k ± 0%   18.06k ± 0%       ~ (p=1.000 n=6) ¹
geomean                               183.3         183.3       +0.00%
¹ all samples are equal

pkg: github.com/GoCodeAlone/workflow/middleware
                                  │ baseline-bench.txt │       benchmark-results.txt       │
                                  │       sec/op       │   sec/op     vs base              │
CircuitBreakerDetection-4                  287.4n ± 4%   286.8n ± 7%       ~ (p=0.329 n=6)
CircuitBreakerExecution_Success-4          22.49n ± 0%   22.51n ± 0%       ~ (p=0.249 n=6)
CircuitBreakerExecution_Failure-4          64.45n ± 0%   64.47n ± 1%       ~ (p=0.675 n=6)
geomean                                    74.69n        74.67n       -0.03%

                                  │ baseline-bench.txt │       benchmark-results.txt        │
                                  │        B/op        │    B/op     vs base                │
CircuitBreakerDetection-4                 144.0 ± 0%     144.0 ± 0%       ~ (p=1.000 n=6) ¹
CircuitBreakerExecution_Success-4         0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
CircuitBreakerExecution_Failure-4         0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
geomean                                              ²               +0.00%               ²
¹ all samples are equal
² summaries must be >0 to compute geomean

                                  │ baseline-bench.txt │       benchmark-results.txt        │
                                  │     allocs/op      │ allocs/op   vs base                │
CircuitBreakerDetection-4                 1.000 ± 0%     1.000 ± 0%       ~ (p=1.000 n=6) ¹
CircuitBreakerExecution_Success-4         0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
CircuitBreakerExecution_Failure-4         0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
geomean                                              ²               +0.00%               ²
¹ all samples are equal
² summaries must be >0 to compute geomean

pkg: github.com/GoCodeAlone/workflow/module
                                 │ baseline-bench.txt │       benchmark-results.txt        │
                                 │       sec/op       │    sec/op     vs base              │
JQTransform_Simple-4                     854.6n ± 27%   872.3n ± 31%       ~ (p=0.589 n=6)
JQTransform_ObjectConstruction-4         1.414µ ±  1%   1.466µ ± 31%  +3.68% (p=0.002 n=6)
JQTransform_ArraySelect-4                3.297µ ±  1%   3.421µ ±  3%  +3.78% (p=0.002 n=6)
JQTransform_Complex-4                    38.02µ ±  1%   38.90µ ±  1%  +2.32% (p=0.002 n=6)
JQTransform_Throughput-4                 1.746µ ±  1%   1.777µ ±  1%  +1.75% (p=0.002 n=6)
SSEPublishDelivery-4                     69.95n ±  1%   70.05n ±  0%       ~ (p=0.589 n=6)
geomean                                  1.626µ         1.663µ        +2.28%

                                 │ baseline-bench.txt │        benchmark-results.txt         │
                                 │        B/op        │     B/op      vs base                │
JQTransform_Simple-4                   1.273Ki ± 0%     1.273Ki ± 0%       ~ (p=1.000 n=6) ¹
JQTransform_ObjectConstruction-4       1.773Ki ± 0%     1.773Ki ± 0%       ~ (p=1.000 n=6) ¹
JQTransform_ArraySelect-4              2.625Ki ± 0%     2.625Ki ± 0%       ~ (p=1.000 n=6) ¹
JQTransform_Complex-4                  16.22Ki ± 0%     16.22Ki ± 0%       ~ (p=1.000 n=6) ¹
JQTransform_Throughput-4               1.984Ki ± 0%     1.984Ki ± 0%       ~ (p=1.000 n=6) ¹
SSEPublishDelivery-4                     0.000 ± 0%       0.000 ± 0%       ~ (p=1.000 n=6) ¹
geomean                                             ²                 +0.00%               ²
¹ all samples are equal
² summaries must be >0 to compute geomean

                                 │ baseline-bench.txt │       benchmark-results.txt        │
                                 │     allocs/op      │ allocs/op   vs base                │
JQTransform_Simple-4                     10.00 ± 0%     10.00 ± 0%       ~ (p=1.000 n=6) ¹
JQTransform_ObjectConstruction-4         15.00 ± 0%     15.00 ± 0%       ~ (p=1.000 n=6) ¹
JQTransform_ArraySelect-4                30.00 ± 0%     30.00 ± 0%       ~ (p=1.000 n=6) ¹
JQTransform_Complex-4                    324.0 ± 0%     324.0 ± 0%       ~ (p=1.000 n=6) ¹
JQTransform_Throughput-4                 17.00 ± 0%     17.00 ± 0%       ~ (p=1.000 n=6) ¹
SSEPublishDelivery-4                     0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
geomean                                             ²               +0.00%               ²
¹ all samples are equal
² summaries must be >0 to compute geomean

pkg: github.com/GoCodeAlone/workflow/schema
                                    │ baseline-bench.txt │       benchmark-results.txt       │
                                    │       sec/op       │   sec/op     vs base              │
SchemaValidation_Simple-4                    1.086µ ± 1%   1.081µ ± 1%       ~ (p=0.258 n=6)
SchemaValidation_AllFields-4                 1.660µ ± 8%   1.661µ ± 6%       ~ (p=0.905 n=6)
SchemaValidation_FormatValidation-4          1.573µ ± 2%   1.583µ ± 2%       ~ (p=0.619 n=6)
SchemaValidation_ManySchemas-4               1.819µ ± 2%   1.801µ ± 2%       ~ (p=0.394 n=6)
geomean                                      1.507µ        1.504µ       -0.21%

                                    │ baseline-bench.txt │       benchmark-results.txt        │
                                    │        B/op        │    B/op     vs base                │
SchemaValidation_Simple-4                   0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
SchemaValidation_AllFields-4                0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
SchemaValidation_FormatValidation-4         0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
SchemaValidation_ManySchemas-4              0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
geomean                                                ²               +0.00%               ²
¹ all samples are equal
² summaries must be >0 to compute geomean

                                    │ baseline-bench.txt │       benchmark-results.txt        │
                                    │     allocs/op      │ allocs/op   vs base                │
SchemaValidation_Simple-4                   0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
SchemaValidation_AllFields-4                0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
SchemaValidation_FormatValidation-4         0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
SchemaValidation_ManySchemas-4              0.000 ± 0%     0.000 ± 0%       ~ (p=1.000 n=6) ¹
geomean                                                ²               +0.00%               ²
¹ all samples are equal
² summaries must be >0 to compute geomean

pkg: github.com/GoCodeAlone/workflow/store
                                   │ baseline-bench.txt │       benchmark-results.txt        │
                                   │       sec/op       │    sec/op     vs base              │
EventStoreAppend_InMemory-4                1.212µ ± 12%   1.319µ ± 10%       ~ (p=0.093 n=6)
EventStoreAppend_SQLite-4                  1.420m ±  4%   1.357m ±  5%       ~ (p=0.065 n=6)
GetTimeline_InMemory/events-10-4           13.99µ ±  4%   14.19µ ±  3%       ~ (p=0.699 n=6)
GetTimeline_InMemory/events-50-4           60.68µ ± 27%   61.97µ ± 15%       ~ (p=0.065 n=6)
GetTimeline_InMemory/events-100-4          121.6µ ±  1%   124.0µ ±  2%  +1.96% (p=0.004 n=6)
GetTimeline_InMemory/events-500-4          625.0µ ±  0%   634.5µ ±  1%  +1.52% (p=0.002 n=6)
GetTimeline_InMemory/events-1000-4         1.282m ±  1%   1.298m ±  1%  +1.27% (p=0.009 n=6)
GetTimeline_SQLite/events-10-4             110.5µ ±  1%   110.8µ ±  1%       ~ (p=0.818 n=6)
GetTimeline_SQLite/events-50-4             252.0µ ±  0%   253.2µ ±  2%  +0.50% (p=0.026 n=6)
GetTimeline_SQLite/events-100-4            423.6µ ±  2%   428.1µ ±  2%       ~ (p=0.310 n=6)
GetTimeline_SQLite/events-500-4            1.805m ±  1%   1.797m ±  1%  -0.42% (p=0.015 n=6)
GetTimeline_SQLite/events-1000-4           3.492m ±  1%   3.511m ±  2%       ~ (p=0.065 n=6)
geomean                                    216.5µ         219.0µ        +1.18%

                                   │ baseline-bench.txt │        benchmark-results.txt         │
                                   │        B/op        │     B/op      vs base                │
EventStoreAppend_InMemory-4                 824.0 ± 10%     831.0 ± 4%       ~ (p=0.377 n=6)
EventStoreAppend_SQLite-4                 1.985Ki ±  1%   1.988Ki ± 1%       ~ (p=0.455 n=6)
GetTimeline_InMemory/events-10-4          7.953Ki ±  0%   7.953Ki ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_InMemory/events-50-4          46.62Ki ±  0%   46.62Ki ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_InMemory/events-100-4         94.48Ki ±  0%   94.48Ki ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_InMemory/events-500-4         472.8Ki ±  0%   472.8Ki ± 0%       ~ (p=1.000 n=6)
GetTimeline_InMemory/events-1000-4        944.3Ki ±  0%   944.3Ki ± 0%       ~ (p=0.346 n=6)
GetTimeline_SQLite/events-10-4            16.74Ki ±  0%   16.74Ki ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_SQLite/events-50-4            87.14Ki ±  0%   87.14Ki ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_SQLite/events-100-4           175.4Ki ±  0%   175.4Ki ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_SQLite/events-500-4           846.1Ki ±  0%   846.1Ki ± 0%       ~ (p=0.273 n=6)
GetTimeline_SQLite/events-1000-4          1.639Mi ±  0%   1.639Mi ± 0%       ~ (p=1.000 n=6)
geomean                                   67.58Ki         67.64Ki       +0.08%
¹ all samples are equal

                                   │ baseline-bench.txt │        benchmark-results.txt        │
                                   │     allocs/op      │  allocs/op   vs base                │
EventStoreAppend_InMemory-4                  7.000 ± 0%    7.000 ± 0%       ~ (p=1.000 n=6) ¹
EventStoreAppend_SQLite-4                    53.00 ± 0%    53.00 ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_InMemory/events-10-4             125.0 ± 0%    125.0 ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_InMemory/events-50-4             653.0 ± 0%    653.0 ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_InMemory/events-100-4           1.306k ± 0%   1.306k ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_InMemory/events-500-4           6.514k ± 0%   6.514k ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_InMemory/events-1000-4          13.02k ± 0%   13.02k ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_SQLite/events-10-4               382.0 ± 0%    382.0 ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_SQLite/events-50-4              1.852k ± 0%   1.852k ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_SQLite/events-100-4             3.681k ± 0%   3.681k ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_SQLite/events-500-4             18.54k ± 0%   18.54k ± 0%       ~ (p=1.000 n=6) ¹
GetTimeline_SQLite/events-1000-4            37.29k ± 0%   37.29k ± 0%       ~ (p=1.000 n=6) ¹
geomean                                     1.162k        1.162k       +0.00%
¹ all samples are equal

Benchmarks run with go test -bench=. -benchmem -count=6.
Regressions ≥ 20% are flagged. Results compared via benchstat.

@intel352
Contributor

@copilot apply changes based on the comments in this thread

Contributor Author

Copilot AI commented Mar 26, 2026

@copilot apply changes based on the comments in this thread

Done in 7ff8266:

  • Fixed step.http_call schema outputs to declare status_code (number) and status (string) separately, matching the actual runtime output from parseHTTPResponse
  • Updated TestInferStepOutputs_Fallback to assert status_code instead of status
  • Extended TestHTTPCallStep_OAuth2_Retry401 to assert elapsed_ms is present and non-negative on the 401 retry path

@intel352 intel352 merged commit 19d512b into main Mar 26, 2026
18 checks passed
@intel352 intel352 deleted the copilot/add-elapsed-ms-to-http-call-output branch March 26, 2026 09:13


Development

Successfully merging this pull request may close these issues.

Expose elapsed_ms in step.http_call output for latency measurement

3 participants