
[PyTorch] Handle non-constant FP8 scales in ONNX export #861

Closed
timmoon10 wants to merge 1 commit into NVIDIA:main from timmoon10:onnx-export-debug

Conversation

@timmoon10
Collaborator

Description

ONNX export currently assumes that FP8 scales can be represented with Constant operations, which requires that the scales be initialized during the export process. In practice, however, we expect the scales to be initialized and updated during training. This PR instead uses Slice operations to access the correct FP8 scales.

These changes are also included in #820.
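
For illustration, here is a minimal, hypothetical sketch of the difference between the two access patterns. The module and its names are invented for this example (they are not Transformer Engine's actual API), and the export check assumes the `onnx` package is available:

```python
import io

import onnx
import torch


class ScaledOp(torch.nn.Module):
    """Toy stand-in for an FP8 op with a trained scale buffer."""

    def __init__(self, num_scales: int, scale_index: int):
        super().__init__()
        # FP8 scales live in a buffer that is updated during training
        self.register_buffer("scale", torch.ones(num_scales))
        self.scale_index = scale_index

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reading the scale as a Python number, e.g.
        #     s = float(self.scale[self.scale_index])
        # bakes the value at export time into the graph as a Constant.
        # A slice is instead traced as an op on the scale buffer:
        s = self.scale[self.scale_index : self.scale_index + 1]
        return x * s


# Export and list the op types that ended up in the graph. Constant
# folding is disabled so the scale access stays visible as an op.
buf = io.BytesIO()
torch.onnx.export(
    ScaledOp(num_scales=4, scale_index=2),
    (torch.randn(3),),
    buf,
    do_constant_folding=False,
)
graph = onnx.load_model_from_string(buf.getvalue()).graph
print(sorted({node.op_type for node in graph.node}))  # expect a Slice
```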

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)

Changes

  • Access FP8 scales in ONNX export with slice operations instead of constant operations

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Tim Moon <tmoon@nvidia.com>
@timmoon10 added the bug (Something isn't working) label on May 21, 2024
@timmoon10
Collaborator Author

/te-ci pytorch

@timmoon10
Collaborator Author

timmoon10 commented May 22, 2024

Looking more closely, I was a little too pessimistic in my investigation of ONNX exporting. ONNX export does appear to work correctly if the FP8 scales are initialized outside of the export process, and it can correctly convert FP8 scale buffers to Constant operations. The issue I saw in #820 arose because we copy an FP8 scale with Tensor.copy_, which is translated into an Expand operation (I think to deal with array broadcasting). This Expand op is trivial, but ONNX isn't smart enough to remove it. In any case, the simplest fix is to replace the Tensor.copy_ with a Tensor.fill_ during ONNX export (see 4fdd63c).
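
A minimal sketch of that workaround, assuming the new scale arrives as a 0-d tensor (the helper name `update_scale` is hypothetical; see 4fdd63c for the actual change):

```python
import torch


def update_scale(scale_buf: torch.Tensor, new_scale: torch.Tensor) -> None:
    """Hypothetical helper; assumes `new_scale` is a 0-d tensor."""
    if torch.onnx.is_in_onnx_export():
        # Tensor.fill_ traces without the broadcast-handling Expand op
        # that Tensor.copy_ leaves in the exported graph.
        scale_buf.fill_(new_scale)
    else:
        scale_buf.copy_(new_scale)
```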
