Increase overhead budget for test_sampler_transition_overhead #10214 (Closed)
Seeing the comment on line #127 now: the upper bound is already set to 20x the expected value. If the test still fails after 3 attempts, perhaps we should investigate what triggers the failure? Otherwise I am not sure we should keep this test. What overhead is bad enough to warrant investigation? Also, if we ever parallelize test execution, that may increase the flakiness of this test. Perhaps it needs to be a [micro]benchmark.
LGTM to address the flakiness issue regardless of this discussion.
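For reference, a minimal sketch of the retry-and-budget pattern under discussion. This is not the actual test code: `measure_transition_overhead`, the baseline, and all values are illustrative stand-ins.

```python
import time
import unittest


def measure_transition_overhead():
  # Illustrative stand-in for the real measurement: time a tight loop
  # and report the per-iteration cost in microseconds.
  n = 100000
  start = time.perf_counter()
  for _ in range(n):
    pass
  return (time.perf_counter() - start) / n * 1e6


class StateSamplerOverheadTest(unittest.TestCase):

  def test_sampler_transition_overhead(self):
    expected = 1.0  # assumed baseline cost in microseconds
    budget = 20     # the 20x upper bound discussed in this thread
    attempts = 3    # retries absorb slow-environment noise
    overhead = float('inf')
    for _ in range(attempts):
      overhead = measure_transition_overhead()
      if overhead <= expected * budget:
        return
    self.fail('overhead %.2fus exceeded %dx budget after %d attempts' %
              (overhead, budget, attempts))


if __name__ == '__main__':
  unittest.main()
```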
You have a good point. Perhaps 3 retries is enough. I regularly see overhead on the order of 5.0-10.0 in slow environments; > 10.0 is rare, so 3 retries will likely address the flakiness.
Maybe I can revert this part. What do you think?
I think this test needs to be converted to a benchmark and run in an isolated environment. I suspect we are not getting much value from having it in the test suite: if the overhead increases 5x, the test will become more flaky and we will likely raise the upper bound again instead of investigating. In the current state, if this test exercises some codepath that other tests don't, it may be better to keep it in the suite as long as it does not flake, but long term we should move it out of the unit tests. That said, I am OK with keeping the value at 20 and even removing the assertion until we make it a benchmark.
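A rough sketch of that benchmark direction, reusing the same hypothetical helper from the test sketch above: instead of asserting against a budget, the run only reports numbers for external tracking.

```python
import statistics
import time


def measure_transition_overhead():
  # Same illustrative stand-in as in the test sketch above.
  n = 100000
  start = time.perf_counter()
  for _ in range(n):
    pass
  return (time.perf_counter() - start) / n * 1e6


def run_benchmark(repeats=10):
  # No pass/fail budget: the benchmark reports a robust summary, and
  # alerting/dashboard infrastructure would consume these numbers.
  samples = [measure_transition_overhead() for _ in range(repeats)]
  print('transition overhead: median=%.3fus min=%.3fus max=%.3fus' %
        (statistics.median(samples), min(samples), max(samples)))


if __name__ == '__main__':
  run_benchmark()
```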
It makes sense to convert this to a microbenchmark. The main value of this being a test rather than a benchmark is that the unit test run can catch changes that add large increases here, whereas most authors will not run benchmarks for their changes unless they know of or suspect potential perf implications.
/cc @pabloem
I had in mind a benchmark with alerts/dashboards. I realize we probably don't have infrastructure for this now. I am not sure how likely it is that someone will make a change that exceeds the threshold by 20x; I'd say unlikely but not impossible. A more likely scenario would be a 3x regression, but the test might not catch it in time.
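A sketch of the kind of baseline comparison that could catch a gradual (e.g. 3x) regression once benchmark results are tracked; the baseline file, field name, and factor are assumptions for illustration.

```python
import json
import os


def exceeds_baseline(current_us, baseline_path='overhead_baseline.json',
                     factor=3.0):
  # Record a baseline on the first run; afterwards, flag any result that
  # regresses beyond `factor` times the baseline. A real deployment would
  # push results to a metrics store and alert from there, not a local file.
  if not os.path.exists(baseline_path):
    with open(baseline_path, 'w') as f:
      json.dump({'overhead_us': current_us}, f)
    return False
  with open(baseline_path) as f:
    baseline = json.load(f)['overhead_us']
  return current_us > baseline * factor
```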
I agree that this is better as a microbenchmark, but yes, we need alerting and updates. We have a good amount of the infrastructure needed to set up our microbenchmarks this way in the next few months (we just need alerts).
Once we have all the infrastructure pieces, I will be happy to look into creating the microbenchmark suite with metrics exported.