Avoid idle gpu#1939

Merged
scarlehoff merged 5 commits into master from avoid-idle-gpu
Jul 17, 2024
Conversation

@APJansen
Collaborator

@APJansen APJansen commented Feb 13, 2024

The idea

We observed large gaps between training steps in the tensorboard profile when running on the GPU. After a lot of fiddling, these were found to be (at least partially) due to a per-epoch overhead from tensorflow. This is reduced by redefining one actual training step as a single batch of size 1, rather than as a whole epoch as it used to be.
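The reorganisation can be sketched as follows. This is a hypothetical, minimal tf.keras example, not the actual n3fit code: the model, shapes, and step counts are made up for illustration.

```python
# Hypothetical sketch (not the n3fit code): amortize Keras' per-epoch
# overhead by packing many identical training steps into one epoch,
# instead of running one batch per epoch.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")

# Training happens on a single fixed input, i.e. one batch of size 1.
x = np.random.rand(1, 4).astype("float32")
y = np.random.rand(1, 1).astype("float32")

total_steps, steps_per_epoch = 1000, 100

# Old scheme: one batch == one epoch, paying the epoch overhead every step:
#   model.fit(x, y, epochs=total_steps, batch_size=1, verbose=0)

# New scheme: replicate the input so each Keras epoch runs 100 identical
# steps, paying the per-epoch overhead only once per 100 training steps.
x_rep = np.repeat(x, steps_per_epoch, axis=0)
y_rep = np.repeat(y, steps_per_epoch, axis=0)
model.fit(x_rep, y_rep, epochs=total_steps // steps_per_epoch,
          batch_size=1, shuffle=False, verbose=0)
```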

Implementation-wise, this is done by copying the input up to 100 times, creating a set of 100 identical training inputs. Existing callbacks simply implement on_step_end instead of on_epoch_end and inherit from CallbackStep, which takes care of the conversion between steps, batches and epochs.
One extra issue is that Keras computes metrics cumulatively over an epoch; they are converted back to per-step values in CallbackStep.correct_logs. This is the only source of slight numerical differences, which however only appear in the logs and do not propagate at all: training results remain identical.
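The cumulative-to-per-step correction is just an inversion of a running mean. A minimal sketch of the arithmetic (the function below is hypothetical and only mirrors the idea behind CallbackStep.correct_logs, not the n3fit implementation):

```python
# Hypothetical sketch: Keras logs the cumulative mean of a metric over the
# batches of an epoch, but each batch here is really one training step,
# so we invert the running mean to recover the per-step values.

def per_step_from_cumulative(cumulative_means):
    """Recover per-step metric values from running means.

    If m_k is the mean over the first k batches, the k-th batch value is
    v_k = k * m_k - (k - 1) * m_{k-1}.
    """
    per_step = []
    prev_total = 0.0
    for k, mean in enumerate(cumulative_means, start=1):
        total = k * mean          # sum of the first k per-step values
        per_step.append(total - prev_total)
        prev_total = total
    return per_step

# Example: per-step losses [4, 2, 3] produce running means [4, 3, 3].
print(per_step_from_cumulative([4.0, 3.0, 3.0]))  # → [4.0, 2.0, 3.0]
```

The multiply-then-subtract round trip in floating point is exactly the kind of operation that produces the slight, log-only numerical differences mentioned above.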

Performance

Timings, in seconds, for 1000 epochs of the main runcard (NNPDF40_nnlo_as_01180_1000), on Snellius, with 100 replicas on the GPU or 1 replica on the CPU. In brackets the GPU memory used.

| branch         | commit hash | 1 replica | 100 replicas |
|----------------|-------------|-----------|--------------|
| master         | 0a5fc61     | 145       | 91           |
| avoid-idle-gpu | bb366aa     | 145       | 67           |

This saves 24 seconds (or 25%) per 1k epochs.

Profile

Note: slightly outdated profiles

This branch:
[tensorboard profile screenshot]
and before this single commit for comparison:
[tensorboard profile screenshot]

@APJansen APJansen added n3fit Issues and PRs related to n3fit performance escience labels Feb 13, 2024
@APJansen APJansen self-assigned this Feb 13, 2024
@APJansen APJansen force-pushed the avoid-idle-gpu branch 2 times, most recently from ae92568 to bb366aa Compare March 6, 2024 07:58
@APJansen
Collaborator Author

APJansen commented Mar 6, 2024

The issue with the regression test was fixed by simply rebasing. (And that broke another one, but I think it's just a fluke that will be fixed by rerunning.)

I've updated the timings as well.
(Don't worry about the higher time on the CPU; that's just because the CPU nodes I was using before aren't available today. This PR only affects runs with more than 1 replica.)

Side comment: I added a slight fix to the logging, where for multiple replicas it would print "Validation chi2s:" and then nothing. Incidentally, I don't think the comment # The partial chi2 makes no sense for more than one replica at once is true anymore.

@scarlehoff scarlehoff changed the base branch from master to small_fixes_tf June 6, 2024 13:46
@scarlehoff scarlehoff assigned scarlehoff and unassigned APJansen Jun 6, 2024
@scarlehoff scarlehoff self-requested a review June 6, 2024 15:26
@scarlehoff scarlehoff marked this pull request as ready for review June 6, 2024 15:27
@scarlehoff
Member

I'll try to review this tomorrow and leave it ready for you to have a second look and merge, in case you need to touch n3fit as discussed @RoyStegeman. I need to check a few corner cases, but afaics this seems ok.

@scarlehoff scarlehoff added the run-fit-bot Starts fit bot from a PR. label Jun 7, 2024
@github-actions

github-actions Bot commented Jun 7, 2024

Greetings from your nice fit 🤖 !
I have good news for you, I just finished my tasks:

Check the report carefully, and please buy me a ☕ , or better, a GPU 😉!

Base automatically changed from small_fixes_tf to master June 18, 2024 15:07
@scarlehoff scarlehoff added redo-regressions Recompute the regression data and removed run-fit-bot Starts fit bot from a PR. labels Jul 15, 2024
Comment thread n3fit/src/n3fit/backends/keras_backend/MetaModel.py Outdated
Member

@scarlehoff scarlehoff left a comment


I think this can be merged. #2188 can be dealt with in a separate PR.

scarlehoff and others added 2 commits July 17, 2024 14:13
epoch.

Avoids memory overhead by only combining up to 100 steps into one epoch,
and not changing anything when using only 1 replica (i.e. on CPU).
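The capping logic described in this commit message could be sketched as below. The helper, its name, and its signature are hypothetical, made up for illustration, and not the actual n3fit code.

```python
# Hypothetical sketch of the batching choice from the commit message:
# only group steps into epochs on the multi-replica (GPU) path, and cap
# the grouping at 100 to bound the memory cost of replicating the input.

def steps_per_epoch(num_replicas: int, total_epochs: int, cap: int = 100) -> int:
    if num_replicas <= 1:
        # Single replica (i.e. CPU): behaviour is unchanged,
        # one step per epoch as before.
        return 1
    # Multi-replica: combine up to `cap` steps into one epoch.
    return min(cap, total_epochs)

print(steps_per_epoch(1, 1000))    # → 1
print(steps_per_epoch(100, 1000))  # → 100
print(steps_per_epoch(100, 50))    # → 50
```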
@scarlehoff scarlehoff added run-fit-bot Starts fit bot from a PR. and removed redo-regressions Recompute the regression data labels Jul 17, 2024
@github-actions

Greetings from your nice fit 🤖 !
I have good news for you, I just finished my tasks:

Check the report carefully, and please buy me a ☕ , or better, a GPU 😉!

@scarlehoff
Member

I'm going to merge this since the tests are passing and it is rebased on top of master (which means it is probably fixing something that has changed since the last merge, probably the TF / np version).

Worst case scenario, it can be reverted. The report in #2127 (https://vp.nnpdf.science/WbBCvsjfQV-6ncIQ3GhCVw==) was made with a PR which is on top of this one, so we have a reproduction of 4.0 with these changes that seems ok.

@scarlehoff scarlehoff merged commit 68722a9 into master Jul 17, 2024
@scarlehoff scarlehoff deleted the avoid-idle-gpu branch July 17, 2024 13:34