Retry initializing TTY size a bit more #3573
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #3573      +/-   ##
==========================================
+ Coverage   59.36%   59.47%   +0.10%
==========================================
  Files         285      287       +2
  Lines       24097    24174      +77
==========================================
+ Hits        14306    14377      +71
- Misses       8930     8931       +1
- Partials      861      866       +5
Let me comment here, as we were discussing this.
Thinking about this a bit more;
cli/command/container/tty.go
(Outdated)
-	for retry := 0; retry < 5; retry++ {
-		time.Sleep(10 * time.Millisecond)
+	for retry := 0; retry < 10; retry++ {
+		time.Sleep(100 * time.Millisecond)
Wondering if we should consider making the delay incremental, e.g.;

    time.Sleep(time.Duration(retry+1) * 10 * time.Millisecond)

- retry after 10 milliseconds ("happy path" for when the daemon wasn't able to handle it immediately)
- if failed; add 10 milliseconds (so, retry after 20 milliseconds)
- if failed; add 10 milliseconds (so, retry after 30)
- and so on, until we reach 10 attempts (which would be using 100 milliseconds)
If incrementing by 10 is not "aggressive" enough, perhaps we could make it exponential (but with a "cap", which could be 100ms);
10 -> 20 -> 40 -> 80 -> 100 -> 100 -> 100
Doing so would allow the resize to be handled "faster" (in the happy case), while still allowing it to take longer (if the daemon happens to be under load)
WDYT?
Done!
c63bc19 to b6ed967 (compare)
thaJeztah left a comment
LGTM, thanks!
Hmm.... CI fails on the related test, so wondering if incrementing with
In some cases, for example if there is a heavy load, the initialization of the TTY size would fail. This change makes the cli retry 10 times instead of 5, and wait incrementally from 10ms to 100ms between attempts.

Relates to docker#3554

Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
It's the other way around; the test is about whether we return that error if we can't resize, so the trick is to sleep long enough in the test for the loop to finish and the function to return an error. It should be good now.
Ah, whoop! Let's get this one merged 👍
- What I did

In some cases, for example if there is a heavy load, the initialization of the TTY size would fail. This change makes the cli retry more times, 10 instead of 5, and wait incrementally from 10ms to 100ms between two calls to resize the TTY.

Running 150 containers (see script below) takes ~2 minutes to complete without this change; with this change the time to complete was between ~1m55s and 2m12s, so I don't think this change will impact users.

Running one container with `docker run --rm -t hello-world` takes the same amount of time (1.5s on my machine) with or without this change.

Relates to #3554
- How I did it
Changed the retry logic to retry 10 times instead of 5, and increased the wait between retries incrementally up to 100ms.
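A minimal sketch of that retry loop, with `resizeTtyOnce` as a hypothetical stand-in for the daemon resize call (it is not the actual cli function):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// resizeTtyOnce simulates a resize request that fails while the
// container is still starting up and succeeds on the 4th attempt.
func resizeTtyOnce(attempt int) error {
	if attempt < 3 {
		return errors.New("cannot resize: container not running")
	}
	return nil
}

func main() {
	var err error
	for retry := 0; retry < 10; retry++ {
		// back off incrementally: 10ms, 20ms, ... up to 100ms
		time.Sleep(time.Duration(retry+1) * 10 * time.Millisecond)
		if err = resizeTtyOnce(retry); err == nil {
			break
		}
	}
	if err != nil {
		fmt.Println("failed to resize tty, using default size")
	}
}
```

With the stub above the loop succeeds before exhausting its 10 attempts, so the fallback message is never printed; only when all attempts fail does the cli fall back to the default size.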
- How to verify it
Running this script shouldn't show any "failed to resize tty, using default size" errors.
- Description for the changelog
- A picture of a cute animal (not mandatory but encouraged)