
Fix CUDA EP: add opset 24 kernel registrations for Reshape/Cast + CUTLASS alignment #28366

Closed
justinchuby wants to merge 1 commit into main from fix-cuda-opset24-registrations

Conversation

@justinchuby
Contributor

Summary

Two fixes for CUDA EP models using ONNX opset 24:

1. Missing opset 24 CUDA kernel registrations (Reshape, Cast)

ONNX opset 24 bumped Reshape and Cast (adding the float8e8m0 type), but the ORT CUDA EP only had opset 23 registrations. When these ops fall back to the CPUExecutionProvider, they produce ~280 MemcpyFromHost/MemcpyToHost nodes that cascade through the entire model pipeline.
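As a rough illustration of why a handful of CPU-fallback ops multiply into hundreds of memcpy nodes: every edge where the assigned execution provider changes costs one device copy, so a CPU-assigned Reshape/Cast sandwiched between CUDA neighbors costs two. A minimal sketch (not ORT code; the counting and the 141-node arithmetic are illustrative, chosen to be consistent with the 282 figure reported below):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative only: count device-boundary crossings in a linear execution
// plan. ORT inserts a MemcpyToHost before a CPU-assigned node with CUDA
// inputs and a MemcpyFromHost after it, i.e. one copy per EP transition.
int CountMemcpyNodes(const std::vector<std::string>& ep_per_node) {
  int memcpys = 0;
  for (size_t i = 1; i < ep_per_node.size(); ++i) {
    if (ep_per_node[i] != ep_per_node[i - 1]) ++memcpys;  // one copy per boundary
  }
  return memcpys;
}
```

Under this model, 141 isolated CPU-fallback nodes would account for 2 × 141 = 282 copies, matching the memcpy count reported for the Gemma4 model.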

Fix: Version the opset 23 registrations as (23, 23) and add non-versioned opset 24 registrations. The kernel code is unchanged; only the registration metadata differs.

Result: 282 memcpy → 4 memcpy for the Gemma4 opset 24 CUDA EP model.
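The registration change can be pictured as extending the set of opset version ranges the kernel registry will match. A minimal sketch, assuming a simplified range lookup (the struct and function names are made up; ORT's real matching also consults the op schema and type constraints):

```cpp
#include <cassert>
#include <climits>
#include <vector>

// Illustrative model of a versioned kernel registration: an entry covers
// opset versions [start_version, end_version]; a non-versioned entry is
// open-ended (end_version = INT_MAX).
struct KernelRegistration {
  int start_version;
  int end_version;
};

// A node resolved at a given opset stays on CUDA only if some registration
// range covers that opset; otherwise it falls back to the CPU EP.
bool HasCudaKernel(const std::vector<KernelRegistration>& regs, int opset) {
  for (const auto& r : regs) {
    if (opset >= r.start_version && opset <= r.end_version) return true;
  }
  return false;
}
```

After the fix, Reshape/Cast effectively carry two entries, {23, 23} and {24, open-ended}, so both opset 23 and opset 24 models resolve to the CUDA kernel.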

2. CUTLASS FMHA BiasLoader alignment

BiasLoader hardcoded 128-bit vectorized loads (ElementsPerAccess = 128 / sizeof_bits = 8 for fp16) regardless of the isAligned template flag. When the bias stride was not a multiple of 8, the unaligned kernel was correctly selected, but BiasLoader still issued 128-bit loads, causing cudaErrorMisalignedAddress.

Fix: Use kAlignmentA (4 on the unaligned path, 8 on the aligned path) instead of the hardcoded 8.
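The alignment reasoning reduces to a divisibility check: a vectorized load of N fp16 elements stays aligned only if every row start (row × stride, in elements) is a multiple of N, i.e. the stride itself must be divisible by N. A minimal sketch (not the CUTLASS code; the stride values below are illustrative, not from the PR):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative check: with a bias laid out row-major at `stride_elems`
// elements per row, a vectorized load of `elements_per_access` elements
// keeps every row start aligned iff the stride divides evenly.
bool RowLoadsAligned(int64_t stride_elems, int elements_per_access) {
  return stride_elems % elements_per_access == 0;
}
```

For example, a stride of 36 elements is a multiple of 4 but not of 8: the pre-fix 128-bit (8-element) loads misalign on it, while the post-fix 64-bit (4-element) loads on the unaligned path do not.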

Testing

  • Gemma4 E2B-it (2B, opset 24): 282 memcpy → 4 memcpy
  • All sequence lengths 1–32 pass with Attention + mask + GQA config
  • GenAI text generation: 12.5 tok/s on H200

…lignment

Two fixes for CUDA EP models using ONNX opset 24:

1. Add opset 24 CUDA kernel registrations for Reshape and Cast.
   ONNX opset 24 bumped these ops (added float8e8m0 type support).
   Without opset 24 registrations, these ops fall to CPUExecutionProvider,
   producing ~280 MemcpyFromHost/MemcpyToHost nodes that cascade through
   the entire model. Version existing opset 23 registrations to (23, 23)
   and add new non-versioned opset 24 registrations. Same kernel code.

   Result: 282 memcpy -> 4 memcpy for opset 24 models.

2. Fix CUTLASS FMHA BiasLoader vectorized load alignment.
   BiasLoader hardcoded 128-bit (8 fp16 element) vectorized loads via
   `ElementsPerAccess = 128 / sizeof_bits<scalar_t>` regardless of the
   isAligned template parameter. When the attention bias stride
   (total_sequence_length) was not a multiple of 8 elements, the
   unaligned kernel was selected but still used 128-bit loads on the
   bias, causing cudaErrorMisalignedAddress.

   Fix: Use kAlignmentA (kMinimumAlignment=4 on the unaligned path,
   8 on the aligned path) as BiasLoader's ElementsPerAccess. This lets
   the unaligned kernel use 64-bit loads for the bias while the aligned
   kernel continues with 128-bit loads.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Signed-off-by: Justin Chu <justinchu@microsoft.com>
@justinchuby
Contributor Author

Split into separate PRs for easier review: #28368 (opset 24 registrations) and #28369 (CUTLASS BiasLoader alignment).

@justinchuby justinchuby closed this May 5, 2026
