Increase overhead budget for test_sampler_transition_overhead #10214
Conversation
Run Portable_Python PreCommit
```diff
  # take 0.17us when compiled in opt mode or 0.48 us when compiled with in
  # debug mode).
- self.assertLess(overhead_us, 10.0)
+ self.assertLess(overhead_us, 20.0)
```
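For context, a minimal sketch of the kind of measurement this assertion guards. The function name and the loop body are illustrative assumptions; the real test times the state sampler's scoped-state transitions, not a no-op loop.

```python
import time

def measure_transition_overhead_us(num_transitions=1000000):
    # Illustrative stand-in: time a tight loop and report the average cost
    # per iteration in microseconds. The real test enters and exits sampled
    # states inside the loop instead of doing nothing.
    start = time.time()
    for _ in range(num_transitions):
        pass
    elapsed_s = time.time() - start
    return elapsed_s * 1e6 / num_transitions

overhead_us = measure_transition_overhead_us()
# The budget is well above the expected per-transition cost (0.17 us in opt
# mode, 0.48 us in debug mode) to leave headroom for slow or loaded machines.
assert overhead_us < 20.0
```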
Seeing the comment on line #127 now - the upper bound is already set to 20x the expected value. If the test still fails after 3 attempts, perhaps we should be investigating what triggers the failure? Otherwise I am not sure we should keep this test. What overhead is bad enough to investigate? Also, if we happen to parallelize test execution, this may increase the flakiness of this test. Perhaps it needs to be a [micro]benchmark.
LGTM to address the flakiness issue regardless of this discussion.
You have a good point. Perhaps 3 retries is just enough. I regularly see overhead on the order of 5.0 - 10.0 in slow environments; > 10.0 is rare, and 3 retries will likely address the flakiness.
Maybe I can revert this part. What do you think?
I think this test needs to be converted to a benchmark and run in an isolated environment. I suspect we are not getting much value from having it in the test suite: if the overhead increases 5x, the test will become more flaky and we will likely bump the upper bound again instead of investigating. Given the current state, if this test exercises some code path that other tests don't, it may be better to keep it in the suite as long as it does not flake, but long term we should move it out of the unit tests. That said, I am OK with keeping the value at 20 and even removing the assertion until we make it a benchmark.
It makes sense to convert this to a microbenchmark. I think the main value of this being a test rather than a benchmark is that it can catch issues that add large regressions here through the unit tests, whereas most authors will not run benchmarks for their changes unless they know of or suspect potential perf implications.
/cc @pabloem
I had in mind a benchmark with alerts / dashboards. I realize that we probably don't have infrastructure for this now. I am not sure how likely it is that someone will make a change that exceeds the threshold by 20x; I'd say unlikely but not impossible. A more likely scenario would be a 3x regression, but the test might not catch this in time.
I agree that this is better as a microbenchmark, but yes, we need alerting and updates. We have a good amount of infrastructure in place to set up our microbenchmarks with this in the next few months (we just need alerts).
Once we have all the infrastructure pieces, I will be happy to look into creating the microbenchmark suite with metrics exported.
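A minimal sketch of what such a standalone microbenchmark might look like, assuming a simple timeit-based harness that reports a metric instead of asserting. The harness name, the no-op workload, and the idea of exporting the value to a dashboard are assumptions, not existing Beam infrastructure.

```python
import timeit

def transition_overhead_benchmark(num_transitions=1000000, repeat=5):
    # Illustrative harness: report the best-of-N per-transition cost in
    # microseconds instead of failing a unit test on a noisy machine.
    def workload():
        for _ in range(num_transitions):
            pass  # stand-in for a sampled state transition

    best_s = min(timeit.repeat(workload, number=1, repeat=repeat))
    overhead_us = best_s * 1e6 / num_transitions
    # In a real setup this number would be exported to a metrics backend
    # with dashboards and alerts rather than printed.
    print('transition_overhead_us: %.3f' % overhead_us)

if __name__ == '__main__':
    transition_overhead_benchmark()
```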
Run Python PreCommit
Could not find a good way to pull from master in the apache/beam branch to get the test fixes. Moved the changes to a new PR: #10264
Fixes the mistake from #10012. #10012 doubled the measured time instead of the allowed overhead.
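To illustrate the distinction, a hypothetical sketch; the names and numbers below are made up for illustration and are not the actual test code.

```python
# Hypothetical names and values for illustration; not the actual test code.
measured_overhead_us = 0.48  # e.g. expected per-transition cost in debug mode

# What #10012 effectively did: double the measured time, which tightens the
# effective budget rather than relaxing it.
assert 2 * measured_overhead_us < 10.0

# What was intended, and what this PR does: leave the measurement alone and
# double the allowed overhead instead.
assert measured_overhead_us < 20.0
```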