SlidingWindowInfererAdapt #6251
Conversation
I'm not sure why it fails the premerge/flake test; it passes for me. Can someone help, please? @mingxin-zheng @wyli
Hi @myron, as I replied in the Slack channel, this could be caused by using a different version of … I ran …
Force-pushed from 31a67b5 to ad83119
Signed-off-by: myron <amyronenko@nvidia.com>
@wyli I've updated this PR to include buffered case handling; please check and merge. Also, mypy complained on the super().__call__() lines with an error, even though the code looks correct.
wyli
left a comment
thanks, I'll test and merge it soon
Signed-off-by: Wenqi Li <wenqil@nvidia.com>
/build
SlidingWindowInfererAdapt extends SlidingWindowInferer to automatically switch to buffered stitching, and then to CPU stitching, on GPU OOM. It also records the size of such large images, so CPU stitching is tried directly for the next large image of a similar size. If the stitching 'device' input parameter is provided explicitly, automatic adaptation won't be attempted; please keep the default device=None for adaptive behavior.
Note: the output might be on CPU (even if the input was on GPU) if GPU memory was insufficient.
Also fixes #6340 by adding one line to the resampling.
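To illustrate the adaptive behavior described above, here is a minimal, framework-free sketch of the fallback cascade (plain GPU stitching, then buffered, then CPU), including the size cache that routes subsequent large images straight to CPU. This is not the actual MONAI implementation; the class, the simulated OOM exception, and the size/capacity model are all hypothetical stand-ins for illustration only.

```python
class FakeOOM(RuntimeError):
    """Stand-in for a GPU out-of-memory error in this sketch."""


def adaptive_stitch(image_size, gpu_limit, cpu_size_cache):
    """Return the stitching mode chosen for an image of `image_size` voxels.

    `gpu_limit` models hypothetical GPU capacity; `cpu_size_cache` is a
    one-element list recording the smallest size known to require CPU.
    """
    # If a previous image of similar (or larger) size already fell back
    # to CPU, skip the doomed GPU attempts entirely.
    if cpu_size_cache[0] is not None and image_size >= cpu_size_cache[0]:
        return "cpu"
    # Buffered stitching holds less on the GPU at once, modeled here as
    # a smaller effective memory cost.
    for mode, cost_factor in (("gpu", 1.0), ("gpu_buffered", 0.5)):
        try:
            if image_size * cost_factor > gpu_limit:
                raise FakeOOM("simulated GPU OOM")
            return mode
        except FakeOOM:
            continue  # fall through to the next, cheaper mode
    # Both GPU attempts failed: remember this size so the next image of
    # similar size goes straight to CPU stitching.
    if cpu_size_cache[0] is None or image_size < cpu_size_cache[0]:
        cpu_size_cache[0] = image_size
    return "cpu"
```

A small usage trace: with `gpu_limit=100`, a size-80 image stitches on GPU, a size-150 image falls back to buffered GPU stitching, a size-300 image drops to CPU (and is cached), and any later image of size 300 or more picks CPU immediately without retrying the GPU paths.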