Add benchmark for metrics recording#2052
Conversation
    UnregisterResourceView(v...)

nit: defer this after registering the views?
    func BenchmarkMetricsRecording(b *testing.B) {
        ctx := context.Background()

Can we add some tags etc. to this? AFAIK that causes quite a few allocations as well.
Codecov Report
@@           Coverage Diff           @@
##             main    #2052   +/-   ##
=======================================
  Coverage   67.39%   67.39%
=======================================
  Files         215      215
  Lines        9095     9095
=======================================
  Hits         6130     6130
  Misses       2690     2690
  Partials      275      275

Continue to review the full report at Codecov.
I added a few tags, and getting the ctx is also part of the benchmark evaluation, since this is what we do in eventing: a) get a new ctx with tags, b) record. A mem-optimized version would skip tag validation and mutator creation: I am wondering if there is a way to bypass all the tag-creation machinery, but probably not.
Seems unrelated.
/retest |
Btw, otel-go seems really promising, at least for the labels part: almost no heap allocations. The API also seems straightforward, as it should be.
@markusthoemmes gentle ping for approval. |
markusthoemmes
left a comment
/lgtm
/approve
Thanks
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: markusthoemmes

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
This is helpful especially because metrics reporting happens in many places and often in response to a user request, e.g. in eventing.
/cc @evankanderson
I used this benchmark to compare with branch release-0.20. Results are below.
Possibly due to our change in "Fix potential deadlock when k8s client is used" #2031, which moved to a channel implementation, we add some overhead (~1ms), but for overloaded pods with many goroutines things get better, probably due to this (I will compare with using atomic values in the future).
It would be nice for these numbers to be validated independently (or as percentage diffs).
main
release-0.20