skip get_gpus subprocess when TF is cpu only #2135
Merged
wanghan-iapcm merged 4 commits into deepmodeling:devel on Nov 27, 2022
Conversation
When I benchmarked deepmd-kit on my machine, I found that `get_gpus` takes about 2 s, which is quite slow. In deepmodeling#905, a subprocess was added to detect available GPUs, and as benchmarked in deepmodeling#2121, importing TensorFlow inside that subprocess is slow. I don't have a better idea than calling a subprocess, but we can skip it entirely when TensorFlow is not built against GPUs, since a CPU-only build can never see a GPU. The tests on GitHub Actions will also benefit. Attached is selected profiling output:

> ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
> 1       0.000    0.000    2.141    2.141    local.py:15(get_gpus)
> 1       0.000    0.000    2.133    2.133    subprocess.py:1090(communicate)
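The skip can be sketched like this. This is a minimal illustration, not the PR's actual diff: `probe` is a hypothetical stand-in for the real subprocess that imports TensorFlow and counts GPUs, and the `built_with_cuda` flag would in practice come from `tf.test.is_built_with_cuda()`.

```python
from typing import Callable, List, Optional


def get_gpus(built_with_cuda: bool,
             probe: Callable[[], int]) -> Optional[List[int]]:
    """Return visible GPU indices, or None when no GPU is available.

    built_with_cuda: whether this TensorFlow build supports CUDA
                     (e.g. the result of tf.test.is_built_with_cuda()).
    probe: spawns the expensive subprocess that imports TensorFlow
           and counts GPUs; only called when it could find any.
    """
    if not built_with_cuda:
        # A CPU-only TensorFlow wheel can never see a GPU, so the
        # ~2 s subprocess that imports TF is pointless; skip it.
        return None
    num_gpus = probe()
    return list(range(num_gpus)) if num_gpus > 0 else None


# On a CPU-only build the expensive probe is never invoked:
print(get_gpus(False, probe=lambda: 2))  # -> None (probe skipped)
print(get_gpus(True, probe=lambda: 2))   # -> [0, 1]
```

The key design point is that the cheap compile-time check (`is_built_with_cuda`) gates the costly runtime check, so CPU-only installs and CI runners pay nothing.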
Codecov Report
Base: 74.22% // Head: 74.22% // Decreases project coverage by -0.01%

Additional details and impacted files:

@@ Coverage Diff @@
##            devel   #2135      +/-   ##
==========================================
- Coverage   74.22%  74.22%   -0.01%
==========================================
  Files         201     201
  Lines       19790   19792       +2
  Branches     1416    1416
==========================================
+ Hits        14689   14690       +1
  Misses       4162    4162
- Partials      939     940       +1
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
wanghan-iapcm approved these changes on Nov 27, 2022
mingzhong15 pushed a commit to mingzhong15/deepmd-kit that referenced this pull request on Jan 15, 2023