sdks/python/apache_beam/runners/worker/statesampler_test.py (1 addition, 5 deletions)

```diff
@@ -123,15 +123,11 @@ def test_sampler_transition_overhead(self):
     state_transition_count = sampler.get_info().transition_count
     overhead_us = 1000000.0 * elapsed_time / state_transition_count
 
-    # TODO: This test is flaky when it is run under load. A better solution
-    # would be to change the test structure to not depend on specific timings.
-    overhead_us = 2 * overhead_us
-
     _LOGGER.info('Overhead per transition: %fus', overhead_us)
     # Conservative upper bound on overhead in microseconds (we expect this to
     # take 0.17us when compiled in opt mode or 0.48 us when compiled with in
     # debug mode).
-    self.assertLess(overhead_us, 10.0)
+    self.assertLess(overhead_us, 20.0)
 
 
 if __name__ == '__main__':
```
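For context on what the assertion measures, here is a minimal, self-contained sketch (not the Beam test itself) of the pattern under discussion: time a burst of work, convert the elapsed time into per-transition overhead in microseconds, and assert it stays under a conservative bound. The class name and loop body below are placeholders for the real StateSampler transitions.

```python
import time
import unittest


class TransitionOverheadSketch(unittest.TestCase):
  def test_transition_overhead(self):
    transitions = 1_000_000
    start = time.time()
    # Placeholder work standing in for entering and exiting sampled states;
    # the real test reads the transition count back from sampler.get_info().
    total = 0
    for i in range(transitions):
      total += i
    elapsed_time = time.time() - start

    overhead_us = 1_000_000.0 * elapsed_time / transitions
    # Conservative upper bound in microseconds, mirroring the 20.0 threshold
    # adopted in this change.
    self.assertLess(overhead_us, 20.0)


if __name__ == '__main__':
  unittest.main()
```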
**Contributor:**
Seeing the comment on line #127 now: the upper bound is already set to 20x the expected value. If the test still fails after 3 attempts, perhaps we should investigate what triggers the failure? Otherwise I am not sure we should keep this test. What overhead is bad enough to warrant investigation? Also, if we ever parallelize test execution, that may make this test even flakier. Perhaps it needs to be a [micro]benchmark.
LGTM to address the flakiness issue regardless of this discussion.

**Member Author:**

You have a good point. Perhaps 3 retries is enough. I do regularly see overhead on the order of 5.0-10.0 us in slow environments; values above 10.0 us are rare, so 3 retries will likely address the flakiness.

Maybe I can revert this part. What do you think?
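(For illustration only: the "3 retries" discussed here come from the test infrastructure rerunning failed tests, but the same idea can be sketched directly in Python as "give a timing-sensitive check a few attempts before failing". Everything below, including `measure_overhead_us`, is hypothetical and not Beam code.)

```python
import time


def measure_overhead_us(transitions=1_000_000):
  # Hypothetical stand-in for the real measurement done by the test.
  start = time.time()
  total = 0
  for i in range(transitions):
    total += i
  elapsed = time.time() - start
  return 1_000_000.0 * elapsed / transitions


def check_overhead_with_retries(max_us=20.0, attempts=3):
  # Give the timing-sensitive check a few attempts so that a single noisy
  # run on a loaded machine does not fail the whole suite.
  last = None
  for _ in range(attempts):
    last = measure_overhead_us()
    if last < max_us:
      return last
  raise AssertionError(
      'Overhead %.3fus exceeded %.1fus after %d attempts' %
      (last, max_us, attempts))
```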

**Contributor:**

I think this test needs to be converted to a benchmark and run in an isolated environment. I suspect we are not getting much value out of having it in the test suite: if the overhead increases 5x, the test will just become flakier and we will likely raise the upper bound again instead of investigating. In the current state, if this test exercises some codepath that other tests don't, it may be better to keep it in the suite as long as it does not flake, but long term we should move it out of the unit tests. That said, I am OK with keeping the value at 20 and even with removing the assertion until we make it a benchmark.
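(As an illustration only, not an existing Beam benchmark: the same measurement could be reported rather than asserted on, leaving the pass/fail judgment to an external dashboard or alerting layer. The loop body below is a placeholder for the real StateSampler transitions.)

```python
import timeit

TRANSITIONS = 1_000_000


def sample_transitions():
  # Placeholder loop standing in for driving StateSampler state transitions
  # exactly as the unit test does today.
  total = 0
  for i in range(TRANSITIONS):
    total += i
  return total


if __name__ == '__main__':
  elapsed = timeit.timeit(sample_transitions, number=1)
  overhead_us = 1_000_000.0 * elapsed / TRANSITIONS
  # Report the measurement instead of asserting a threshold; deciding whether
  # a number is a regression would happen in an external dashboard/alerting
  # layer.
  print('Overhead per transition: %.3fus' % overhead_us)
```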

**Member Author:**

It makes sense to convert this to a microbenchmark. I think the main value of this being a test rather than a benchmark is that it can catch changes that add large overhead increases as part of the unit tests, whereas most authors will not run benchmarks for their changes unless they already know of or suspect potential perf implications.

/cc @pabloem

**Contributor:**

I had in mind a benchmark with alerts / dashboards. I realize we probably don't have the infrastructure for this now. I am not sure how likely it is that someone will make a change that exceeds the threshold by 20x; I'd say unlikely but not impossible. A more likely scenario is a 3x regression, which the test might not catch in time.

**Member:**

I agree that this is better as a microbenchmark, but yes, we need alerting and updates. We have a good amount of infrastructure in place to get our microbenchmarks there in the next few months (we just need alerts).

Once we have all the infrastructure pieces, I will be happy to look into creating the microbenchmark suite with metrics exported.


