
Conversation

@madil90 (Contributor) commented Feb 6, 2020

Fixed flake8 error and changed name to generic

@madil90 madil90 requested a review from ericspod February 6, 2020 00:13
@madil90 madil90 force-pushed the additional-42-multi-gpu branch from 7db95db to 72e1a44 Compare February 6, 2020 01:52
@Nic-Ma (Contributor) commented Feb 6, 2020

Thanks for your PR.

Hi @ericspod and @wyli,

Could you please help review it? It looks good to me.
Of course, I'm glad to discuss if you have any comments.
Thanks.



@monai.utils.export("monai.application.engine")
def create_multigpu_supervised_trainer(net, optimizer, loss_fn, devices=None, non_blocking=False,
Member commented:

As I said elsewhere, I wouldn't change this name; it clashes with the one from Ignite. I can see people using one or the other in their code, and we shouldn't introduce confusion.

Contributor commented:

Hi @ericspod and @madil90 ,

OK, let's close this PR for now and postpone it until we organize all the trainers.
Do you agree?
Thanks.

@madil90 (Contributor Author) commented Feb 7, 2020

Hi @Nic-Ma @ericspod, I removed the original changes and added a small commit to resolve the CI issues. Basically, I added a decorator that "expects" an exception to be raised when no GPUs are available; this way we also get better coverage. @ericspod, can you please review and merge if it looks good?
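The CI fix described above can be sketched roughly as follows. This is a generic illustration of an "expected failure" test decorator, not MONAI's actual helper: the name `expect_failure_if` and its signature are assumptions, and the no-GPU condition is parameterized as a plain boolean so the sketch runs without CUDA or torch installed.

```python
import functools


def expect_failure_if(condition, exc_type=Exception):
    """If `condition` is true (e.g. no GPUs available), the wrapped test
    is expected to raise `exc_type` and the raise counts as a pass.
    If the expected exception is NOT raised, the test fails; when the
    condition is false, the test runs normally."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            if condition:
                try:
                    test_fn(*args, **kwargs)
                except exc_type:
                    return None  # expected failure: treated as a pass
                raise AssertionError(
                    "expected %s was not raised" % exc_type.__name__)
            return test_fn(*args, **kwargs)
        return wrapper
    return decorator


# Illustrative usage: on a CI box with no GPUs the multi-GPU test is
# expected to raise, so the test suite still goes green while the code
# path is exercised (hence the better coverage).
@expect_failure_if(condition=True, exc_type=RuntimeError)
def test_multigpu_trainer():
    raise RuntimeError("no CUDA devices available")
```

A real version would compute the condition from the environment (e.g. a CUDA device count check) instead of a hard-coded boolean.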

@madil90 madil90 force-pushed the additional-42-multi-gpu branch 3 times, most recently from f7641d3 to 0c063b1 Compare February 7, 2020 03:07
@madil90 madil90 force-pushed the additional-42-multi-gpu branch from 0c063b1 to 1443257 Compare February 7, 2020 05:10
@wyli wyli merged commit 463affe into 42-multi-gpu Feb 7, 2020
wyli pushed a commit that referenced this pull request Feb 7, 2020
fixes #49 
* Adding adaptation to use the default Ignite supervised training function with multiple GPUs using data parallelism

* Update test_parallel_execution.py

* Add decorator to expect failure if no GPUs. (#51)

Co-authored-by: Mohammad Adil <madil@nvidia.com>
@wyli wyli deleted the additional-42-multi-gpu branch May 21, 2020 13:29

5 participants