
update vendored prom client and use module vendoring #162

Closed
jtlisi wants to merge 1 commit into weaveworks:master from jtlisi:20190712_update_vendored_prometheus

Conversation

jtlisi (Contributor) commented Jul 13, 2019

This PR updates the vendored Prometheus client_golang library and switches the repo to Go module vendoring.
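A migration like the one described above typically boils down to a few Go toolchain commands. This is a sketch, not taken from the PR's diff; the module path and version selector are assumptions:

```shell
# Initialize a module (module path assumed, not confirmed by the diff).
go mod init github.com/weaveworks/common

# Pull in the updated Prometheus client; @latest is illustrative only.
go get github.com/prometheus/client_golang@latest

# Prune/complete go.mod, then copy dependencies into ./vendor,
# replacing whatever the previous vendoring tool had produced.
go mod tidy
go mod vendor
```

With `go mod vendor` in place, builds run against the checked-in `vendor/` tree instead of the module cache.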


Signed-off-by: Jacob Lisi <jacob.t.lisi@gmail.com>
jtlisi (Contributor, Author) commented Jul 15, 2019

#161 makes this unnecessary.

@jtlisi jtlisi closed this Jul 15, 2019
@jtlisi jtlisi deleted the 20190712_update_vendored_prometheus branch July 15, 2019 19:45
yeya24 pushed a commit to yeya24/common that referenced this pull request Jun 12, 2024
This is for consistency with OpenMetrics, so we can fully distinguish
ints and floats.

This has no impact on allocs, and is a 5% hit on CPU.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
yeya24 pushed a commit to yeya24/common that referenced this pull request Jun 12, 2024
This reverts commit 67670fe.

The format change wreaks havoc with histograms if you have a mixed set
of monitored targets. `histogram_quantile` spits out weird values,
which are difficult to diagnose.

A developer team here at SoundCloud pulled in the current version of
prometheus/common via a different indirect dependency than
prometheus/client_golang. (client_golang alone would not pull it in
yet if you are using Go modules.) Canary instances then had the newer
prometheus/common, while the normal production instances did not. The
calculated quantiles for the complete service jumped up dramatically,
so the team rolled back and started to look for a performance
regression. (Just looking at the canary alone still worked, but since
nobody suspected this kind of monitoring failure case, the
investigation went totally the wrong way.)

Given the wide distribution of prometheus/client_golang and the way Go
modules work, we will see many of those innocuous upgrades that
suddenly change the `le` values on new deployments. Since this change
is buried behind dependencies, users will run into this problem
without suspecting it, even if we announce it very loud and clearly in
prometheus/common or even prometheus/client_golang.

For now, we have to revert the change and then think about a way to
mitigate it. I'm thinking of sanitizing `le` values in Prometheus
2.x. But let's think of it without haste.

Signed-off-by: beorn7 <beorn@soundcloud.com>
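The failure mode described above comes down to label values being compared as strings. A minimal sketch (not actual Prometheus code) of why a mixed fleet doubles up histogram buckets:

```go
package main

import "fmt"

func main() {
	// Label values are plain strings, so le="1" and le="1.0" are two
	// different buckets even though they denote the same numeric bound.
	// With a mixed fleet, histogram_quantile sees both and produces
	// misleading quantiles.
	bucketCounts := map[string]float64{}
	bucketCounts["1"] += 5   // sample from a not-yet-upgraded target
	bucketCounts["1.0"] += 7 // sample from an upgraded target

	fmt.Println(len(bucketCounts)) // 2 distinct buckets instead of 1
}
```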
yeya24 pushed a commit to yeya24/common that referenced this pull request Jun 12, 2024
Revert "Ensure there's always a . or e in floats. (weaveworks#162)"

1 participant