test(azure): concurrency timing test for parallelised dispatcher (DI seam + benchmark) #262

@cristim

Description

Summary

PR #259 parallelised the 5 sequential Azure service calls in providers/azure/recommendations.go::GetRecommendations with errgroup. CodeRabbit's pass-1 review on that PR asked for a concurrency timing test that uses a fake client per service (each sleeping a configurable duration) and asserts that total elapsed time ≈ the maximum single-client sleep, proving the calls actually run in parallel rather than serially.

That test is not yet possible: the existing service-client constructors (compute.NewClient, database.NewClient, cache.NewClient, cosmosdb.NewClient) build concrete Azure SDK types directly, leaving no injection seam to swap in fakes. Adding the timing test therefore requires a small DI refactor first.

Proposal

Introduce package-level function variables that default to the real constructors in production and that tests can override:

```go
// providers/azure/recommendations.go
var (
    newComputeClient  = compute.NewClient
    newDatabaseClient = database.NewClient
    newCacheClient    = cache.NewClient
    newCosmosDBClient = cosmosdb.NewClient
)
```

GetRecommendations then calls newComputeClient(...) etc. instead of compute.NewClient(...) directly, and the test overrides the variables with fakes that sleep and return a fixed slice.

Test to add

  • Timing assertion: each fake client sleeps 100ms and returns one distinct rec; assert GetRecommendations total elapsed < 200ms (versus ~500ms sequentially, proving parallelism).
  • Order preservation: assert merged slice order is compute → database → cache → cosmosdb → advisor regardless of which fake's sleep wakes first.
  • Error isolation: one fake returns error; assert other 4 results are still in the merged slice (no cancellation of siblings).

Why deferred from #259

The CR-pass-1 fix focused on the correctness gap (post-Wait ctx propagation, which has its own pinning test). Introducing the DI seam touches all 5 service-client construction sites, a larger diff than a CR-pass-1 follow-up should carry. Tracking separately so the timing benchmark actually gets done.

Acceptance

  • One new Go test + benchmark in providers/azure/recommendations_test.go covering all three assertions above
  • Production behaviour unchanged (the DI seam is variable indirection only)
  • Existing tests remain green
