
Conversation

@mrocklin (Member)

Previously this was timing out in some test runs.
One possible issue is that we started a coroutine to ask the
scheduler to release us, but the scheduler doesn't exist. This
removes that request.

I'm not confident that this will fix the issue. I give around a 30%
chance. However, I am fairly confident that this won't have negative
consequences.

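For illustration only, here is a minimal sketch of the pattern the description refers to. The names (`MiniWorker`, `scheduler_rpc`, `release_worker`) are invented for this sketch and are not distributed's actual API or the exact change in this PR; the point is simply "skip the release request when there is no scheduler to send it to".

```python
import asyncio


class MiniWorker:
    """Toy stand-in for the pattern described above, not distributed's Worker."""

    def __init__(self, scheduler_rpc=None):
        # None when this worker was never attached to a scheduler
        self.scheduler_rpc = scheduler_rpc

    async def close(self):
        # Previously the release request was issued unconditionally, so closing
        # a scheduler-less worker could hang until a network timeout fired.
        if self.scheduler_rpc is not None:
            await self.scheduler_rpc.release_worker(self)
        # ... local cleanup would continue here either way ...


# Closing a worker that has no scheduler now returns immediately.
asyncio.run(MiniWorker(scheduler_rpc=None).close())
```
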
@github-actions (Contributor)

Unit Test Results

12 files ±0, 12 suites ±0, 6h 9m 18s ⏱️ (−18m 54s)
2,669 tests ±0: 2,587 ✔️ ±0, 81 💤 ±0, 1 ❌ +1
15,926 runs ±0: 15,066 ✔️ +4, 859 💤 −3, 1 ❌ ±0

For more details on these failures, see this check.

Results for commit af7d9cb. Comparison against base commit 5c7d555.

@fjetter (Member) left a comment

The report is definitely unnecessary.

pass


@pytest.mark.slow
@fjetter (Member)

I would expect this test to run very fast, assuming the timeout in wait_for is reduced. At least after the report is turned off.

@mrocklin (Member Author)

It tests that the worker waits for at least three seconds. I thought about removing it, but I think that it genuinely adds signal.
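For context, a rough sketch of that kind of timing assertion. Everything here (`fake_close`, `check_close_waits`, the three-second figure taken from the comment above) is a stand-in, not the actual test in the PR.

```python
import asyncio
import time


# Stand-in for a close() that waits (up to some timeout) for a scheduler
# that never answers before giving up; the real test closes a Worker.
async def fake_close(wait=3.0):
    await asyncio.sleep(wait)


async def check_close_waits(minimum=3.0):
    start = time.monotonic()
    await fake_close()
    elapsed = time.monotonic() - start
    # The signal the slow test provides: the worker really did wait,
    # instead of bailing out immediately.
    assert elapsed >= minimum, f"close returned after {elapsed:.1f}s"


asyncio.run(check_close_waits())
```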

@fjetter (Member) commented Mar 25, 2022

This should accelerate the test overall, but I doubt it will resolve the flakiness. The report should time out after 30s (possibly after 5s, depending on whether a gen_cluster with the dirty set_config was around).

I traced workers timing out on close back to #5883 and #5910.
The first one had to be reverted, but I'll have another look once #5910 is in.
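As a hedged illustration of the timeout behaviour mentioned above: dask's config exposes a comm connect timeout (30s by default, per the comment above) that test harnesses commonly shrink to a few seconds. Whether this exact key is what governs the report step here is an assumption; the snippet only shows how such an override is applied.

```python
import dask

# Shrink the connect timeout the way a test harness might; inside the block,
# connection attempts governed by this setting give up after roughly 5 seconds.
with dask.config.set({"distributed.comm.timeouts.connect": "5s"}):
    print(dask.config.get("distributed.comm.timeouts.connect"))  # -> "5s"
```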

@mrocklin (Member Author)

Cool. Thank you for the pointer. Looking.

@fjetter (Member) commented May 17, 2022

We closed #5883 and #5910.

I haven't seen this test being flaky recently (https://dask.org/distributed/test_report.html).

@fjetter closed this on May 17, 2022.