
[PyTorch] Avoid parameters function in op backward pass #1403

Merged: timmoon10 merged 4 commits into NVIDIA:main from timmoon10:te-sequential-param-count-bugfix on Jan 22, 2025
Conversation

@timmoon10 (Collaborator)

Description

We have recently experienced some esoteric errors in the LayerNorm backward pass:

```
[rank0]:   File "/usr/local/lib/python3.12/dist-packages/torch/_tensor.py", line 626, in backward
[rank0]:     torch.autograd.backward(
[rank0]:   File "/usr/local/lib/python3.12/dist-packages/torch/autograd/__init__.py", line 347, in backward
[rank0]:     _engine_run_backward(
[rank0]:   File "/usr/local/lib/python3.12/dist-packages/torch/autograd/graph.py", line 823, in _engine_run_backward
[rank0]:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/usr/local/lib/python3.12/dist-packages/torch/autograd/function.py", line 307, in apply
[rank0]:     return user_fn(self, *args)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/usr/local/lib/python3.12/dist-packages/torch/autograd/function.py", line 600, in wrapper
[rank0]:     outputs = fn(ctx, *args)
[rank0]:               ^^^^^^^^^^^^^^
[rank0]:   File "/usr/local/lib/python3.12/dist-packages/transformer_engine/pytorch/ops/fuser.py", line 267, in backward
[rank0]:     raise RuntimeError(
[rank0]: RuntimeError: Expected op 0 to generate 0 param grads, but got 2
```

I haven't fully investigated, but I suspect that FSDP is manipulating module parameters so that they are only available during the forward pass. This interferes with a check the operation fuser performs in the backward pass to ensure that the number of params matches the number of param grads (see the sketch below).
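As a rough illustration (a minimal sketch with hypothetical names, not the actual fuser code), the fragile pattern looks like this: the backward-pass check re-queries parameters() and trusts the result.

```python
# Illustrative sketch of the fragile check, assuming `op` is an nn.Module.
# Re-querying parameters() in backward assumes the parameter list is
# unchanged since forward; if FSDP has resharded or hidden the parameters
# by then, `expected` no longer matches the grads actually produced.
def check_param_grads(op_idx, op, param_grads):
    expected = sum(1 for _ in op.parameters())  # may be stale under FSDP
    if len(param_grads) != expected:
        raise RuntimeError(
            f"Expected op {op_idx} to generate {expected} param grads, "
            f"but got {len(param_grads)}"
        )
```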

This PR tweaks the operation fuser to avoid calling parameters() in the backward pass. In particular, it counts params for each op in the forward pass and caches the counts for use in the backward pass.
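A minimal sketch of the caching approach (hypothetical class and method names, not the actual fuser code): count each op's parameters in the forward pass, while parameters() is still reliable, and have the backward-side check consume the cached counts.

```python
import torch.nn as nn

class ToyFuser(nn.Module):
    """Toy sketch of the caching scheme; names are hypothetical."""

    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self._num_params_per_op = None  # filled in during forward

    def forward(self, x):
        # Count parameters now, while FSDP still exposes them normally.
        self._num_params_per_op = [
            sum(1 for _ in op.parameters()) for op in self.ops
        ]
        for op in self.ops:
            x = op(x)
        return x

    def check_param_grads(self, param_grads_per_op):
        # Backward-side check consumes the cached counts and never
        # calls parameters() again.
        for idx, grads in enumerate(param_grads_per_op):
            expected = self._num_params_per_op[idx]
            if len(grads) != expected:
                raise RuntimeError(
                    f"Expected op {idx} to generate {expected} param grads, "
                    f"but got {len(grads)}"
                )
```

Counting in the forward pass is safe because forward always runs before backward, so the cached counts reflect the parameter state that autograd actually captured.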

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactor

Changes

  • Operation fuser avoids calling each op's parameters() function in the backward pass, using parameter counts cached during the forward pass instead

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Tim Moon <tmoon@nvidia.com>
@timmoon10 added the bug label on Jan 11, 2025
@timmoon10 (Collaborator, Author)

/te-ci pytorch

@timmoon10 (Collaborator, Author)

/te-ci pytorch

@timmoon10 merged commit 3d7ff1c into NVIDIA:main on Jan 22, 2025
25 of 26 checks passed
@timmoon10 deleted the te-sequential-param-count-bugfix branch on February 5, 2025
