Fix CUDA EP: add opset 24 kernel registrations for Reshape/Cast + CUTLASS alignment#28366
Closed
justinchuby wants to merge 1 commit into main from
Conversation
…lignment

Two fixes for CUDA EP models using ONNX opset 24:

1. Add opset 24 CUDA kernel registrations for Reshape and Cast. ONNX opset 24 bumped these ops (added float8e8m0 type support). Without opset 24 registrations, these ops fall back to CPUExecutionProvider, producing ~280 MemcpyFromHost/MemcpyToHost nodes that cascade through the entire model. Version the existing opset 23 registrations to (23, 23) and add new non-versioned opset 24 registrations; the kernel code is unchanged. Result: 282 memcpy -> 4 memcpy for opset 24 models.

2. Fix CUTLASS FMHA BiasLoader vectorized load alignment. BiasLoader hardcoded 128-bit (8 fp16 element) vectorized loads via `ElementsPerAccess = 128 / sizeof_bits<scalar_t>` regardless of the isAligned template parameter. When the attention bias stride (total_sequence_length) was not a multiple of 8 elements, the unaligned kernel was selected but still issued 128-bit loads on the bias, causing cudaErrorMisalignedAddress. Fix: use kAlignmentA (which is kMinimumAlignment = 4 for the unaligned path, kAlignmentA = 8 for the aligned path) as BiasLoader's ElementsPerAccess. This allows the unaligned kernel to use 64-bit loads for the bias while the aligned kernel continues with 128-bit loads.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Signed-off-by: Justin Chu <justinchu@microsoft.com>
Summary
Two fixes for CUDA EP models using ONNX opset 24:
1. Missing opset 24 CUDA kernel registrations (Reshape, Cast)
ONNX opset 24 bumped Reshape and Cast (added the float8e8m0 type). The ORT CUDA EP only had opset 23 registrations, so these ops fall back to CPUExecutionProvider, producing ~280 MemcpyFromHost/MemcpyToHost nodes that cascade through the entire model pipeline.

Fix: version the opset 23 registrations to (23, 23) and add non-versioned opset 24 registrations. Same kernel code; only the registration metadata changes.

Result: 282 memcpy → 4 memcpy for the Gemma4 opset 24 CUDA EP model.
2. CUTLASS FMHA BiasLoader alignment
BiasLoader hardcoded 128-bit vectorized loads (`ElementsPerAccess = 128 / sizeof_bits` = 8 for fp16) regardless of the isAligned template flag. When the bias stride was not a multiple of 8, the unaligned kernel was correctly selected, but BiasLoader still used 128-bit loads → cudaErrorMisalignedAddress.

Fix: use kAlignmentA (4 for unaligned, 8 for aligned) instead of the hardcoded 8.

Testing