[CK_TILE] FMHA BWD Optimizations for D48 for GFX950 #2866
Merged
Conversation
Force-pushed from bbc044d to cdf54e4
asleepzzz approved these changes on Oct 13, 2025
AviralGoelAMD pushed a commit that referenced this pull request on Oct 16, 2025
Co-authored-by: asleepzzz <hanwen.chang@amd.com>
bartekxk added a commit that referenced this pull request on Dec 5, 2025
* [CK_TILE] FMHA Test Ignore Known Errors (#2872)
* Fix UB caused by reinterpret_cast (#2849)
* Use bit_cast instead of reinterpret_cast to avoid UB
* Apply same fix in ck_tile
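A minimal sketch of the pattern behind this fix, for reference; ck_tile ships its own bit_cast helper, and __builtin_bit_cast stands in for it here:
```
// reinterpret_cast between unrelated object types breaks strict aliasing
// rules (UB); bit_cast copies the value representation instead.
#include <cstdint>

float as_float(std::uint32_t bits)
{
    // UB-prone pattern that this change removes:
    //   return *reinterpret_cast<float*>(&bits);
    return __builtin_bit_cast(float, bits); // well-defined type punning
}
```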
* [CK_TILE] Multiple-ABD GEMM example (#2788)
* Multi ABD - initial commit
* Clang-format fix
* block gemm, unify the name of CDataType
* Apply changes to mem-pipeline
* Rollback prefix for DType and Layout
* Gemm Kernel Basic, rename
* WMMA config
* Grouped GEMM
* Clang-format
* Dropout, name
* Review v2
* Move element_wise fn to unary, remove old fns
* clang-format
* Fix issue review
* WP operator adjust to universal gemm
* v2 prepare
* Remove unused comment
* Remove vectorsize
* Rollback
* Adjust pipeline for abd
* Shuffle argument
* CI-fail fix quant
* Fix ag_br pipeline
* Failing tests
* Typo
* Single argument support
* Add gemm weight preshuffle pk_int_t support (#2858)
* Factor out the three separate copies of load_interleaved_pk_type into a common utility class
* Add preprocessing with optional cache flushing and clearing of output for k_batch > 1 to the weight preshuffle GEMM example
* Remove a duplicate function
* Add support for B tensor type pk_int4_t for the weight preshuffle GEMM, with tests included
* I4 support introduced more failing test cases that mirror the existing ones for F8
* Simplify the check for which tests to skip (they all have F8 as A tensor type)
* Add a changelog entry
* add the test for v2 wp pipeline, polish the code, add the support of int4 for v2 wp pipeline
* have a workable version for atomic add
* Revert "have a workable version for atomic add"
This reverts commit 792377a590c26cfff9c8f545d9a9e8484a7422eb.
---------
Co-authored-by: ThomasNing <thomas.ning@amd.com>
* poc convert fnuz fp8 to non-native dtype similar to ocp (#2871)
* Add input fp8 and output bf16 attention (#2726)
* change host using fp16 to check
* fp8 to fp8 compare
* rewrite input parameters
* add not squant
* remove some output code
* for scale = 1
* format
* saturates only for fp8
* add fp8bf16 data type
* add fp8bf16 data type
* fix test fp8 code
* add run_fp8bf16_tests
* change fmha fwd example parameter(adding fp8bf16)
* Support fp8bf16 for Aiter
* Support aiter fp8bf16 in c++
* fix comment about fp8 in readme.md
* add fp8fp32
* add fp8fp32 test
* remove range_q etc.
* format
* fix test parameters about squant and fmha example input fp8bf16 fp8fp32 data type
* add fp8bf16 to data_type function
* change colmajor to rowmajor in test_ck_tile_fmha_fwd_fp8
* format
* reset atol for fp8
* fix bug for atol
---------
Co-authored-by: rocking <ChunYu.Lai@amd.com>
Co-authored-by: asleepzzz <hanwen.chang@amd.com>
* [CK_TILE] FMHA Fix synchronization issues in BWD pipelines (#2876)
* Run ctest with --output-on-failure
* Fix synchronization issues in bwd pipelines
The bwd kernel reuses the same area of LDS for ds (SGrad), bias and
dbias (BiasGrad). This means that there must be block_sync_lds between
loading one tensor and storing another to the same area.
Heavy instructions like MFMA/WMMA and global loads are executed between
reuses of the same memory so in MOST cases loading is finished by all
warps before storing is started. However, sometimes warps progress at
different speeds.
Running the tests multiple times and, preferably, with multiple
processes on the same GPU helps to trigger this issue:
bin/test_ck_tile_fmha_bwd_bf16 --gtest_repeat=-1 --gtest_shuffle --gtest_throw_on_failure
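A minimal sketch of the hazard described above, with illustrative sizes and block_sync_lds() mapped to __syncthreads():
```
__device__ void reuse_lds_area(float* ds_out, const float* bias_in, int tid)
{
    __shared__ float lds_tile[256]; // one LDS area reused for several tensors

    ds_out[tid] = lds_tile[tid];    // all warps load the previous tensor (ds)
    __syncthreads();                // block_sync_lds(): without it, a fast warp
                                    // can overwrite data a slow warp still reads
    lds_tile[tid] = bias_in[tid];   // store the next tensor (bias) to the same area
}
```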
* [CK_TILE] Add sequence padding and variable length support in fmha (a… (#2851)
* [CK_TILE] Add sequence padding and variable length support in fmha (and v3)
- Group Mode Padding: Introduces the `-s_qpad` argument to support
physically padded layouts. Kernels now use padded start pointers
(`seqstart_padded_*_ptr`) for memory addressing.
- Batch Mode Variable Length: Adds `-q_eff_lens` and `-kv_eff_lens`
arguments for efficient processing of variable-length sequences by
passing cumulative effective lengths (`cu_seqlen_*_ptr`) to the kernel.
- FMHA examples: Support padding and variable length both in
group and batch mode. Dispatcher is updated as well (dispatch to
kPadSeqLenK enabled pipeline).
- New padding test cases: Add padding test cases to `smoke_test_fwd.sh`,
and add benchmarks to `benchmark_fwd.sh` and `benchmark_fwd_v3.sh` as well.
These test cases and benchmarks specifically validate/benchmark the
new padding and variable-length functionality in both group and batch modes.
* [CK_TILE] Fix build error in fmha unit tests
---------
Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>
Co-authored-by: Yi DING <yi.ding@amd.com>
* [CK_TILE] FMHA BWD Fix Decode Accuracy (#2881)
* [CK_TILE] FMHA BWD Fix Decode Accuracy
* use s_waitcnt utils
* Disable bwd weight split-k autodeduce for single stage kernels (#2856)
* Disable bwd weight split-k autodeduce for single stage kernels
* update interface tests
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* Revert "[CK_TILE] Add sequence padding and variable length support in fmha (a…" (#2883)
This reverts commit 86dd59cd01e41a4190bf2405a0fb0e89d9498b4c.
* [CK_TILE] Tensor-wise scaled quant gemm kernel (#2846)
* rename gemm_group_quant to gemm_quant
* Add TensorWise quant mode
* Cshuffle epilogue tests with tensor scaling
* Add tensor quant to example
* Don't use readfirstlane for reading scales - doesn't work for some reason
* Add to changelog
* revert include - from a merge problem?
* revert common.hpp include
* revert host.hpp include
* remove unused utility function
* rename quant pipeline problem
* refactor quant tests
* remove aquant utils
* use TEST_F
* fix all tests by changing gemm config
* Use typed tests
* fix copyright
* [CK_TILE] Add conv bwd weight two stage support (#2855)
* resolved conflicts
* add conv bwd weight twostage
* fix one file
* fixes after review
* fixes
* fixes
* Fix
---------
Co-authored-by: Bartlomiej Kocot <barkocot@amd.com>
* fixup build for #2871 when multiple device targets are used (#2885)
* Wmma support for multiple ABD GEMM (#2803)
* multi_abd wmma support:
- Add multiple A and B support to multiple D implementation (gridwise level)
- Add multi_abd GEMM (device level)
- Add instances (xdl parity)
- Add tests (both xdl and wmma)
- Add examples
- Add ckProfiler support (both xdl and wmma)
* Fix bug in device print function
* Fix unused template parameter
* Fix batched gemm for multiABD gridwise implementation
* Fix gemm_universal_reduce with multiABDs gridwise implementation
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* FMHA BWD Avoid SetZero (#2799)
* [CK_TILE] FMHA FWD bug fix (#2888)
* tempsave debug
* fix the bug in fmha fwd_kernel
* Remove unnecessary changes
* Fix the buggy part
* remove fmha fwd known failure cases
* [CK-Tile] Add the API to load SGPR (#2878)
* Have a workable version for SGPR
* have a workable version for atomic add
* Revert "have a workable version for atomic add"
This reverts commit 792377a590c26cfff9c8f545d9a9e8484a7422eb.
* substitute with the new sgpr read api
* update the CHANGELOG
* have a workable version for atomic add
* Revert "have a workable version for atomic add"
This reverts commit 792377a590c26cfff9c8f545d9a9e8484a7422eb.
* change to static for logic
* have a workable version for atomic add
* Revert "have a workable version for atomic add"
This reverts commit 792377a590c26cfff9c8f545d9a9e8484a7422eb.
* [CK_TILE] Fix fmha bwd (#2865)
* Fix fmha bwd filter
* remove unnecessary change
* enable test cases
---------
Co-authored-by: Yi DING <yi.ding@amd.com>
* [FMHA FWD] gfx950 Accuracy enhancement & bug fix (#2900)
* disable cast_tile_pk_fp16_fp32 on gfx950
* fix wrong encoding when hdim is not a power of 2
---------
Co-authored-by: asleepzzz <hanwen.chang@amd.com>
* Revert "[CK-Tile] Add the API to load SGPR (#2878)" (#2904)
This reverts commit 2cbbf5dcb3bf315b9486a2c677ffcd6aa72b5298.
* Fix the gfx950 numerical errors (#2911)
* Update grouped_gemm example and pipeline
* find the root cause: the transpose was not enabled correctly on gfx950
* Fix v3 pipeline, row and col major
* Disable f8 datatype tests, it fails on gfx950
* fix the abd test by clearing the unsupported runtime argument
---------
Co-authored-by: AviralGoelAMD <aviral.goel@amd.com>
Co-authored-by: Mateusz Ozga <mateusz.ozga@amd.com>
* [CK_TILE] Fix cshuffle epilogue issue with IsLoadableTile (#2903)
* Fix issue with constexpr checks in scaling/cshuffle
* Remove IsLoadableTile
* Move amd_wave_read_first_lane before first usage
* add fmha dtype fp32 (#2914)
* [CK Tile] Implement Invoker pattern for remaining grouped convolution examples (#2894)
* Invoker for grouped_conv_fwd
* Invoker for grouped_conv_bwd_data
* Fix incorrect out layout identifier
* [CK_TILE] FMHA BWD Add D96 Instances (#2916)
* Upgrade to ROCm7.0.1 compiler. (#2909)
* upgrade default docker to rocm7.0.1
* turn on build and test on gfx950 by default
* use rocm-dev instead of rocm
* link libhiprtc for codegen targets
* resolving codegen compilation errors: removed calls to other std functions; fixed int32_t issues by including the correct header; put use of e8m0 behind header guards
---------
Co-authored-by: Astha Rai <astha.rai713@gmail.com>
* [CK] Fix misc issues in CK examples (#2890)
* [CK] Fix misc CK issues
* revert fp8 change, it causes CI fail.
* resubmit fp8 change
* Conv:TF32: add more instances - 1 (#2867)
* conv:tf32:add more instances
* add instances of device_grouped_conv_fwd_xdl_f32_comp_instances
* add instances of device_grouped_conv_fwd_xdl_f32_tf32_mem_instances
* add instances of device_grouped_conv_fwd_xdl_large_tensor_f32_tf32_instances
* remove gnhwc/ngchw/ngcdhw instances
* fix fmha fwd kernel name (#2880)
* fix fmha fwd kernel name
* if the input and output types are the same, keep the original code
* [Jenkins] Remove 'Jenkins - ' prefix (#2920)
The prefix causes the status updates from gitStatusWrapper
to differ from the status updates created by the
Jenkins server, which creates duplicates
* fix clang format (#2926)
* Add AITER test_mha_varlen (#2927)
* add aiter test_mha_varlen
* don't fail until all aiter test run
* use the original way to run tests, just add new test
* Fix for Add the API to load SGPR (#2913)
* Revert "Revert "[CK-Tile] Add the API to load SGPR (#2878)" (#2904)"
This reverts commit f161b5b738781c71bd5f2c191561b81f679ba9ed.
* Fix: sgpr minor issue
* cyclic dependency resolved
* clang formatted
* removing unused variable
* clang formatted
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* Congma/ck tile/remove cpp 20 code (#2873)
* Remove C++20 code
C++20 features should not be used in CK. Remove all C++20 code.
* fix c++17 build
* format
* fix merge issue
---------
Co-authored-by: Thomas Ning <Thomas.Ning@amd.com>
Co-authored-by: Max Podkorytov <4273004+tenpercent@users.noreply.github.com>
* use default docker for build/test on gfx950 (#2928)
* Enable CI on gfx1100 (#2930)
* run CI on different versions of gfx11
* do not use gfx1151 systems
* Verify `HostTensorDescriptor` when it is created (#2829)
* add proper GEMM layout verification
* Handle "auto" strides.
CalculateStrides is only called when the tensor's strides are empty or all of them are <= 0 (auto strides).
CalculateStrides now supports GEMM::ColumnsMajor order. The assumption is still that it applies only to the inner two dims.
ValidateStrides throws if any of the tensor's strides is <= 0.
profile_gemm_multiply_add is updated to support "auto" strides for tensors.
Manual tests for profile_gemm_multiply_add (matrix B in Row and Col modes)
auto-strides
bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 0 0 0 0 0
bin/ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 0 0 0 0 0
bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 -1 -1 -1 -1 -1
Note, -1 should be deprecated (use 0 instead)
explicit strides (same as auto)
bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 128 128 128 128 128
bin/ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 128 128 128 128 128
explicit strides (not the same as auto)
bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 130 132 134 136 138
bin/ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 130 132 134 136 138
mix of explicit and auto strides
bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 128 128 128 128 0
invalid stride
bin/ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 0 0 0 0 64
terminate called after throwing an instance of 'std::runtime_error'
what(): Invalid strides for RowMajor: mLens: 128 128 , mStrides: 64 1
Aborted (core dumped)
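A hypothetical 2-D distillation of the new checks (names are illustrative, not the repo's exact code):
```
#include <stdexcept>

// Strides count as "auto" when empty or all <= 0; explicit strides must be
// consistent with the layout, otherwise validation throws.
void validate_row_major_2d(long n_cols, long stride_row, long stride_col)
{
    // RowMajor: innermost stride is 1 and the row stride must span a full row.
    if(stride_col != 1 || stride_row < n_cols)
        throw std::runtime_error("Invalid strides for RowMajor");
}
// e.g. lens {128, 128} with strides {64, 1} throws, matching the output above.
```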
* - add more names to ck::tensor_layout for easier namespace hierarchy checking
- updated convolutional layouts to use explicit ones or BaseConvolutionalLayout where it is not clear which layout to use (TBD) - see include/ck/library/utility/convolution_host_tensor_descriptor_helper.hpp
* added handling of partially initialized strides for GEMM. fixed more tests.
* clang-format and more fixes
* replace long dash with a simple hyphen - the long dash causes a build failure in CK codegen.
* increase the input size, otherwise the output size becomes zero or negative with a large filter size
* select stride based on layout
* specify layout explicitly to avoid errors in HostTensorDescriptor creation
* add validation for higher GEMM tensor dimensions; add docstring to `HostTensorDescriptor`
* Not clear why the permute test in test/permute_scale/test_permute_scale.cpp uses so many invalid strides. Setting layout to BypassLayoutVerification to avoid a flood of errors
* fix test (incl removing invalid config)
* fix moe examples:
- (in .cpp) add layout argument to non-2D tensors
- (in .hpp) fix asserts/failures that show up in Debug mode, specifically addressing 2D tensor by a single index (and 3D tensor by 2d index)
* fix moe_gemm2 example.
* fix profile and wmma examples
* clean-up early mods for ckprofile. verified with:
```
ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 0 0 0 0 0
ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 0 0 0 0 0
ckProfiler gemm_multiply_add 0 0 1 1 0 1 128 128 128 130 132 134 136 138
ckProfiler gemm_multiply_add 0 1 1 1 0 1 128 128 128 130 132 134 136 138
#
ckProfiler gemm_fastgelu 1 0 1 2 0 1 128 128 128 0 0 0
ckProfiler gemm_fastgelu 1 1 1 2 0 1 128 128 128 0 0 0
ckProfiler gemm_fastgelu 1 2 1 2 0 1 128 128 128 0 0 0
ckProfiler gemm_fastgelu 1 3 1 2 0 1 128 128 128 0 0 0
ckProfiler gemm_fastgelu 1 0 1 2 0 1 128 128 128 128 128 128
#
ckProfiler gemm_add_relu 0 0 1 1 0 1 128 128 128 0 0 0 0
# ckProfiler gemm_add_relu 0 1 1 1 0 1 128 128 128 0 0 0 0 # not implemented
# ckProfiler gemm_add_relu 0 2 1 1 0 1 128 128 128 0 0 0 0 # not implemented
# ckProfiler gemm_add_relu 0 3 1 1 0 1 128 128 128 0 0 0 0 # not implemented
ckProfiler gemm_add_relu 0 0 1 1 0 1 128 128 128 128 128 128 128
#
ckProfiler gemm_add_relu_add_layernorm 1 0 1 1 0 0 128 128 128 0 0 0 0 0
ckProfiler gemm_add_relu_add_layernorm 1 1 1 1 0 0 128 128 128 0 0 0 0 0
ckProfiler gemm_add_relu_add_layernorm 1 2 1 1 0 0 128 128 128 0 0 0 0 0
ckProfiler gemm_add_relu_add_layernorm 1 3 1 1 0 0 128 128 128 0 0 0 0 0
ckProfiler gemm_add_relu_add_layernorm 1 0 1 1 0 0 128 128 128 130 132 134 136 138
#
example_gemm_add_multiply_dl_fp16
example_gemm_add_multiply_xdl_fp16
#
ckProfiler gemm_blockscale_wp 7 1 1 1 1 0 1 128 128 128 0 0 0
ckProfiler gemm_blockscale_wp 7 1 1 1 1 0 1 128 128 128 128 128 128
```
* temporarily skip the first 8 test configs - they throw errors
* temporarily skip the first 8 test configs in wmma too - they throw errors
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* use inline function in hpp (#2922)
* Add sequence padding and variable length support in fmha (#2932)
* * [CK_TILE] Add sequence padding and variable length support in fmha (and v3)
- Group Mode Padding: Introduces the `-s_qpad` argument to support
physically padded layouts. Kernels now use padded start pointers
(`seqstart_padded_*_ptr`) for memory addressing.
- Batch Mode Variable Length: Adds `-q_eff_lens` and `-kv_eff_lens`
arguments for efficient processing of variable-length sequences by
passing cumulative effective lengths (`cu_seqlen_*_ptr`) to the kernel.
- FMHA examples: Support padding and variable length both in
group and batch mode. Dispatcher is updated as well (dispatch to
kPadSeqLenK enabled pipeline).
- New padding test cases: Add padding test cases to `smoke_test_fwd.sh` and
`test_fmha_fwd.inc`, and add benchmarks to `benchmark_fwd.sh` and
`benchmark_fwd_v3.sh` as well. These test cases and benchmarks
specifically validate/benchmark the new padding and variable-length
functionality in both group and batch modes.
* [CK_TILE] Fix build error in fmha unit tests
* [CK_TILE] add mqa, gqa to sequence padding unit tests
* [CK_TILE] Reduce the number of padding seqlen unit tests in FMHA to avoid timeouts in CI
* [CK_TILE] remove unnecessary MakeKargs overload in FmhaFwdV3Kernel and FmhaFwdKernel
* [CK_TILE] FMHA BWD Pad HDim to a Multiple of 8 (#2918)
* Update CODEOWNERS
* Disable Rapid Json to be used by Default (#2936)
To enable the json dump we can now build with -DCK_ENABLE_JSON_DUMP=1
* [TheRock CI] Adding MIOpen at HEAD (#2929)
* Adding MIOpen at HEAD
* Adding container and also adding CI run for .github paths
* Adding correct flags
* Adding patches
* Adding exception for ck
* rocm-libraries at new path
* adding global safe dir
* reorder
* Fixing paths
* Adding sharding
* Integrate Multi D GEMMs into Grouped GEMMs along with unit tests (#2923)
* feat(grouped_gemm_multi_d): add new example that integrates grouped_gemm and multi_d_gemm feature
* feat: generalized grouped_gemm_kernel.hpp
* feat: generalized grouped_gemm_kernel.hpp even further by removing hardcoded 0
* refactor: grouped_gemm_multi_d relies on grouped_gemm_kernel
* tests(grouped_gemm): grouped_gemm test suite passes with minor adjustments
* fix: segfault fix by passing correct parameters for d tensors
* docs: add multi d info and trim down outdated content
* tests: add unit tests for grouped_gemm_multi_d and minor changes in grouped_gemm related test for compatibility
* style: clang format
* fix: incorrect validation method and Dtensor layout in test suite
* fix wp gemm bug when permuteN is false (#2935)
* fix wp gemm bug when permuteN is false
* code clean
---------
Co-authored-by: valarLip <340077269@qq.com>
* fix copy-paste bug in get_matrix_b; re-enable all tests in multi_abd (#2939)
* [CK_TILE] FMHA Fix synchronization issue in FWD splitkv combine pipeline (#2934)
* Fix validation of rotary embedding with time_kernel_
When rotary embedding is used, the appendkv kernel modifies the q tensor
(multiple times when time_kernel_ is set). We need to reset the q buffer
and rerun all kernels.
* Fix synchronization issue in splitkv combine pipeline
Different warps can read and then rewrite the same values of lse_acc_lds.
Sometimes warps progress at different speeds, one warp can rewrite
values that are still being read by another warp.
Running the tests multiple times and, preferably, with multiple
processes on the same GPU helps to trigger this issue:
bin/test_ck_tile_fmha_fwd_fp16 --gtest_repeat=-1 --gtest_shuffle --gtest_throw_on_failure --gtest_filter="TestCkTileFmhaFwd/*KV*"
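A minimal sketch of the combine-pipeline hazard, with block_sync_lds() mapped to __syncthreads() and buffer size/indices purely illustrative:
```
__device__ float combine_lse(int read_idx, int write_idx, float new_val)
{
    __shared__ float lse_acc_lds[64];

    float lse = lse_acc_lds[read_idx]; // all warps read partial LSE values
    __syncthreads();                   // without this, a fast warp can rewrite
                                       // values another warp is still reading
    lse_acc_lds[write_idx] = new_val;
    return lse;
}
```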
* [CK_TILE] Support f32 in FMHA (fwd and bwd) (#2836)
* Support 16x16 (MFMA, WMMA) and 32x32 (MFMA) tiles in fwd and bwd BlockDropout
Add comments with dropout implementation details
Fix performance regression of fwd+dropout
* Remove some usage of type punning (reinterpret_cast with ref or ptr) in Philox;
* "scalarize" seed and offset, they may come either from kernel args or from device memory
(presumably loaded with vector loads).
These changes help the compiler to produce more optimal code and reduce register spilling.
Use WarpGemmDispatcher instead of explicit WarpGemmMfma... to get CWarpDstrEncoding
Use code based on BlockDropout in BlockDropoutBwd
Refactor BlockDropout (fwd)
Implement BlockDropout (fwd) for WMMA
Originally BlockDropout only supported 32x32 tiles (IsWG32 = true),
this version supports 16x16 tiles.
If MPerBlock > MWarp * 16, it can generate numbers for two 16x16 tiles, similarly
to BlockDropoutBwd.
Implement BlockDropoutBwd for WMMA
Remove MakeRandValLds* functions unused in BlockDropoutBwd
Remove unused Run overload from BlockDropoutBwd
* Fix regression with philox seed and offset when they exceed 32-bit int
__builtin_amdgcn_readfirstlane works with 32-bit values, seed and offset
are 64-bit so they get truncated.
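A sketch of the assumed shape of the fix (not the exact repo code): split the 64-bit value into 32-bit halves before broadcasting:
```
#include <cstdint>

__device__ std::uint64_t readfirstlane_u64(std::uint64_t v)
{
    // __builtin_amdgcn_readfirstlane handles 32-bit values only, so a 64-bit
    // philox seed/offset must be split and reassembled to avoid truncation.
    std::uint32_t lo = __builtin_amdgcn_readfirstlane(static_cast<std::uint32_t>(v));
    std::uint32_t hi = __builtin_amdgcn_readfirstlane(static_cast<std::uint32_t>(v >> 32));
    return (static_cast<std::uint64_t>(hi) << 32) | lo;
}
```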
* Add F32 MFMA warp gemms
* Support f32 in fwd FMHA
* Implement transpose_vectors for 4-byte types (float)
* Fix unexpected implicit f32->uint32 cast in buffer_store<4>
__builtin_amdgcn_raw_buffer_store_b32 expects unsigned int but float was passed (implicitly cast to uint).
mbuf_t types in other buffer_store<> are changed for consistency.
* Support F32 in bwd FMHA
hdim = 256 is disabled for now because it uses too much memory on gfx90a
* Support Headdim = 48 (divisible by 16) in fwd
* Add fp32-specific receipts (800 and 801)
* Tune fwd tiles
* Tune bwd tiles
* Use small tiles only for small seqlen_q
* Fix after rebasing
* Fix selection of a fallback tile based on bm0
The assumption that the largest bm0 == 128 is not always true for
current fp32 tiles.
* Remove constraints and adjust filtering for fp32
Custom constraints are no longer needed because now the smallest tile
is selected automatically based on seqlen_q.
Filters related to qr_async_trload disabled valid fp32 tiles.
* Add fp32 tests
* Make splitkv and appendkv compile for fp32 only
There are no instances yet, but the API must still compile when only fp32 is
requested.
* Remove unimportant f32 instances
* Add test_ck_tile_fmha_*_fp32 to REGRESSION_TESTS
* Replace magic numbers with a constant, improve comments for dropout
* Update changelog
* Fix condition that dq_acc must be set to zero when mask is used
The change was introduced in #2799
* Replace warp_uniform with recently added amd_wave_read_first_lane
* Add hdim = 96 and 192 to fwd
* Use git ls-files to select candidate files for clang format
This change ensures that the files being selected for clang format validation are exactly the ones tracked by the git repo we are testing. This protects against a known issue where the repo being tested contained "stray files" from a previous test.
* [CK_TILE] Fixing Type Conversions in PassThroughPack8 (#2769)
* Change the return type of run_gemm_combinations in the basic tests
* Change the return type of run_gemm_combinations in the universal tests
* Add universal GEMM tests for bf16 x pk_i4 and fp16 x pk_i4
* Add universal GEMM test for fp8 x pk_i4
* Add basic GEMM tests for bf16 x pk_i4, fp16 x pk_i4 and fp8 x pk_i4.
* Add missing GemmTypeConfig<ck_tile::fp8_t, ck_tile::pk_int4_t, ck_tile::half_t>
* Add missing GemmTypeConfig<ck_tile::bf16_t, ck_tile::pk_int4_t, ck_tile::bf16_t>
* No need for utility in test_ck_tile_elementwise_1d
* Fix conversion from pk_int4x4_t to bf16x8_t in PassThroughPack8
* Avoid union-based type punning in float_to_bf16_truc_raw to make it constexpr compliant
* For consistency also make float_to_bf16_truc_nan_raw constexpr compliant by removing the union
* Use a static_cast to bfloat16_t only when CK_TILE_USE_LLVM_BUILTIN_BF16 is enforced
* Convert from float to bf16 during compilation rather than using magic values
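An illustrative constexpr-safe truncation (helper name hypothetical): union-based punning cannot run in constant expressions, while __builtin_bit_cast (recent clang/gcc) can, so the lookup table can be built at compile time:
```
#include <cstdint>

constexpr std::uint16_t float_to_bf16_trunc_bits(float x)
{
    // Keep the upper 16 bits of the f32 representation (truncation to bf16).
    return static_cast<std::uint16_t>(__builtin_bit_cast(std::uint32_t, x) >> 16);
}
static_assert(float_to_bf16_trunc_bits(1.0f) == 0x3F80, "bf16 of 1.0f");
```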
* Fix conversion from pk_int4x4_t to fp8x8_t in PassThroughPack8
* Comment out the basic test for fp16 x pk_i4 as it does not pass
* Add missing GemmTypeConfig<ck_tile::bf8_t, ck_tile::pk_int4_t, ck_tile::half_t>
* Fix conversion from pk_int4x4_t to bf8x8_t in PassThroughPack8
* Add basic and universal GEMM tests for bf8 x pk_i4
* Switch back to amd_assembly_i4_to_fp8x8 in PassThroughPack8 as it works now
* Switch back to amd_assembly_i4_to_bf8x8 in PassThroughPack8 as it works now
* Remove the inefficient fallbacks for fp8 and bf8 in elementwise/unary_element_wise_operation.hpp
* Use explicit macros for enabling and disabling the constexpr lookup-based converters
* Fix two failing tests
* Avoid union-based type punning in float_to_bf16_rtn_raw to make it constexpr compliant
* Use float_to_bf16_rtn_raw instead of float_to_bf16 to create the bf16 lookup table for use in conversions from pk_int4 to bf16
* On ROCm 7.0.1 we need an explicit cast from uint16_t to bf16_t
* Grouped Conv Bwd Data out index calculation optimizations (#2917)
* Grouped Conv Bwd Data index calculation optimizations
* fixes
* refactor instances
* gfx12 fixes
* temporary disable splitK for gfx12
* [CK] Fix example_grouped_conv_bwd_data_xdl_fp16 with ksplit = 2 (#2943)
root cause: AK1 and BK1 may differ in the class template, so we need to calculate k0 per block separately when ksplit is not 1.
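A hypothetical sketch of that idea (names and the exact split arithmetic are assumptions, not the repo's code):
```
struct K0PerBlock { int a_k0; int b_k0; };

// AK1/BK1 are the K-packing factors of the A- and B-side descriptors; since
// they can differ, each side gets its own k0 instead of sharing one value.
K0PerBlock calc_k0_per_block(int k_per_block, int k_split, int ak1, int bk1)
{
    const int k_per_split = k_per_block / k_split; // K handled by one split
    return {k_per_split / ak1, k_per_split / bk1}; // computed separately
}
```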
* fix:tf32:fix build fail for all supported targets (#2942)
* fix:tf32:fix build fail for all supported targets
* new fix code
* [CK][Examples] Extending support for rdna3/4 in following examples: (#2884)
* [CK][Examples] Extending support for rdna3/4 in following examples:
-example_gemm_xdl_splitk_reduce_multi_d_fp16
-example_gemm_xdl_splitk_reduce_multi_d_bf16
-example_gemm_xdl_splitk_reduce_bf16A_i8B
-example_gemm_xdl_splitk_reduce_bfp16
-example_splitk_gemm_bias_e_permute_xdl_fp32
-example_gemm_add_multiply_xdl_fp16
-example_complex_contraction_bilinear_xdl_fp32
-example_grouped_gemm_lower_triangle_scale_softmax_gemm_permute_xdl_fp16
-example_batched_gemm_bias_e_permute_xdl_fp16
-example_gemm_xdl_fp16
-example_gemm_xdl_fp16_av2
-example_gemm_xdl_wavelet_fp16
-example_gemm_add_add_fastgelu_xdl_bf16
-example_gemm_add_add_fastgelu_xdl_fp16
-example_gemm_add_add_fastgelu_xdl_fp32
-example_grouped_gemm_xdl_fp32
-example_grouped_gemm_xdl_fp16
-example_grouped_gemm_xdl_bf16
-example_cgemm_xdl_bf16
-example_cgemm_xdl_fp16
Signed-off-by: Michal Kulikowski <Michal.Kulikowski@amd.com>
* [CK][Examples] Extending support for rdna3/4 in following examples:
-example_gemm_xdl_splitk_reduce_multi_d_fp16
-example_gemm_xdl_splitk_reduce_multi_d_bf16
-example_gemm_xdl_splitk_reduce_bf16A_i8B
-example_gemm_xdl_splitk_reduce_bfp16
-example_splitk_gemm_bias_e_permute_xdl_fp32
-example_gemm_add_multiply_xdl_fp16
-example_complex_contraction_bilinear_xdl_fp32
-example_grouped_gemm_lower_triangle_scale_softmax_gemm_permute_xdl_fp16
-example_batched_gemm_bias_e_permute_xdl_fp16
-example_gemm_xdl_fp16
-example_gemm_xdl_fp16_av2
-example_gemm_xdl_wavelet_fp16
-example_gemm_add_add_fastgelu_xdl_bf16
-example_gemm_add_add_fastgelu_xdl_fp16
-example_gemm_add_add_fastgelu_xdl_fp32
-example_grouped_gemm_xdl_fp32
-example_grouped_gemm_xdl_fp16
-example_grouped_gemm_xdl_bf16
-example_cgemm_xdl_bf16
-example_cgemm_xdl_fp16
Signed-off-by: Michal Kulikowski <Michal.Kulikowski@amd.com>
---------
Signed-off-by: Michal Kulikowski <Michal.Kulikowski@amd.com>
* hot fix check eid range (#2924)
* hot fix check eid range
* fix clang format
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
Co-authored-by: illsilin_amdeng <Illia.Silin@amd.com>
* Weight Preshuffle Block Scale gemm support (#2877)
* initial commit
* remove extra files
* fixing errors
* updated ReadMe file for mapping of diff quants with diff configs
* addressing review comments
* addressing review comments
* Resolved merge conflicts
* [CK TILE GEMM] Replace get_preshuffle_or with is_quantpreshuffle_enabled
The get_preshuffle_or was not working as expected, which led to incorrect behavior
in the quantization preshuffle process. This change replaces it with the more reliable
is_quantpreshuffle_enabled function to properly determine when preshuffle should be applied.
* initial commit
* debugging
* working fp8 for init constant
* fp8 working with all inits
* updated block level code with comments
* changing the loop iter
* debugging
* debugging
* debugging
* code fix
* code clean up
* clang formatted
* Add comment
* code cleanup
* clang formatted
* merge conflicts fixes
* applying the latest int4 changes to the pipeline
* fixing test code for updated traits
* Adding gtest
* review comments addressed
* addressing review comments
* remove c++20 code
* added flush cache changes
---------
Co-authored-by: Cong Ma <congma13@amd.com>
Co-authored-by: root <root@banff-cyxtera-s73-2.ctr.dcgpu>
* increase time limit for AITER tests (#2948)
* Add CK Tile Stream-K bf16 and fp16 examples
Addition of initial CK Tile Stream-K example for bf16 and fp16. These
examples are minimal. As more functionality and gtests are added for
Stream-K (coming in future PRs), these examples will be expanded.
* Code style clean-up and documentation
The following changes were made:
- Clean-up of variable namings
- Addition of README
- Removal of num_cu and occupancy args; such options are meant for
testing purposes and should not be exposed to the user
- Removal of CK_TILE_PIPELINE_MEMORY macro and PipelineTypeTraits class
since we only support one pipeline at the moment.
* Extend Grouped GEMM with MultiD (Single & Double Shared Memory) feature to use persistent kernel option (#2933)
* feat(grouped_gemm_multi_d): add new example that integrates grouped_gemm and multi_d_gemm feature
* refactor: grouped_gemm_multi_d relies on grouped_gemm_kernel
* tests(grouped_gemm): grouped_gemm test suite passes with minor adjustments
* fix: segfault fix by passing correct parameters for d tensors
* style: clang format
* WIP: host code for grouped_gemm_multi_d persistent kernel compiles but segfaults
* feat(grouped_gemm_multi_d): add functionality to run persistent kernel
* feat(grouped_gemm_multi_d): add new example that integrates grouped_gemm and multi_d_gemm feature
* refactor: grouped_gemm_multi_d relies on grouped_gemm_kernel
* tests(grouped_gemm): grouped_gemm test suite passes with minor adjustments
* fix: segfault fix by passing correct parameters for d tensors
* style: clang format
* fix: incorrect validation method and Dtensor layout in test suite
* docs: improved README text based on review comments
* fix: parameterize NumDTensor in GroupedGemmHostArgs and remove lint
* Fix timing issue in CK_TILE GEMM example (#2940)
* [CK] Add command option instance_index and param_mask to run partial ck test (#2889)
* [CK] Add command option instance_index and param_mask to run partial ck test
Many CK tests are instance tests that loop over all instances in the instance library, which often causes timeouts when running on a simulator/emulator.
This PR adds the options instance_index and param_mask to reduce the workload of instance tests:
instance_index: only run the one available instance with the specified index.
param_mask: filter the embedded parameters with the specified mask.
* fix CI error
* fix clang format
---------
Co-authored-by: illsilin_amdeng <Illia.Silin@amd.com>
* [CK_TILE]enhance elementwise test (#2683)
* enhance elementwise
* fix ci issues
* Use __builtin_amdgcn_readfirstlane for buffer resource in fused_moe (#2893)
* Use __builtin_amdgcn_readfirstlane for buffer resource in fused_moe
* also do the same for amd_buffer_addressing_builtins.hpp
* merge with develop
* fix clang format
---------
Co-authored-by: Adam Osewski <19374865+aosewski@users.noreply.github.com>
Co-authored-by: ThomasNing <thomas.ning@amd.com>
Co-authored-by: illsilin_amdeng <Illia.Silin@amd.com>
* Remove default constructor to fix c++17 build issue (#2953)
* Remove default constructor to fix build issue
* Restore default CTOR, remove constexpr, add init
---------
Co-authored-by: Bartłomiej Kocot <barkocot@amd.com>
* updated mxfp4 moe gemm2 config (#2330)
Co-authored-by: mtgu0705 <mtgu@amd.com>
* tweak version (#2954)
* tests: add unit tests for grouped_gemm_multi_d persistent kernels (#2941)
* feat(grouped_gemm_multi_d): add new example that integrates grouped_gemm and multi_d_gemm feature
* refactor: grouped_gemm_multi_d relies on grouped_gemm_kernel
* tests(grouped_gemm): grouped_gemm test suite passes with minor adjustments
* fix: segfault fix by passing correct parameters for d tensors
* style: clang format
* WIP: host code for grouped_gemm_multi_d persistent kernel compiles but segfaults
* feat(grouped_gemm_multi_d): add functionality to run persistent kernel
* feat(grouped_gemm_multi_d): add new example that integrates grouped_gemm and multi_d_gemm feature
* refactor: grouped_gemm_multi_d relies on grouped_gemm_kernel
* tests(grouped_gemm): grouped_gemm test suite passes with minor adjustments
* fix: segfault fix by passing correct parameters for d tensors
* style: clang format
* fix: incorrect validation method and Dtensor layout in test suite
* tests: add unit tests for grouped_gemm_multi_d persistent kernels
* docs: updated changelog with new feature info
* refactor: removed unused file
* Add a new gemm pipeline based on ComputeV4 which utilizes async copy API (#2949)
* check in pipeline and policy
for async load on MI350, TileAccessPattern must be warp_raked or block_raked
solve merge conflicts
* fix cmakelists
* make it build
* fix? buffer async fence
* relax fences; it appears they are only needed between pairs of ping-pongs
* remove fences
* remove fences
* cleanup and reformat
* add steps annotations
* comment all pipeline steps / remove unexplainable syncs
* clang-format
* add comment
* cleanup kernel types for test
* fix comment
* fix hardcoded warp size
* faithfully copy block gemm from compute v4 policy to async policy
* make async test gfx950 only
* fix cmake logic
* set separate compile options for async
* refine comment in policy
* try update hotloop scheduler
* cleanup comments
* test more K block sizes
* unhardcode Ks, sort of
* add large odd test case
* fix build for quant
* add comment to hot loop scheduler and rename enum
* reformat
* reword the pipeline description
* reformat
* address review / add static asserts / typo fix
* update changelog
* Fix building test_fmha_bwd_fp32 on SLES15 (#2962)
* [CK TILE GEMM] Support Aquant GEMM with transposeC and preshuffle (#2897)
* [CK TILE GEMM] Support Aquant GEMM with transposeC and preshuffle
When TransposeC and QuantPreshuffle are both true, Aquant generates
correct result.
* [CK TILE GEMM] Support Aquant GEMM with transposeC and preshuffle
- Add unit tests
* Fix bug in is_quantpreshuffle_enabled
* clang format
---------
Co-authored-by: ThomasNing <thomas.ning@amd.com>
* add the check of granularity for atomic add (#2959)
* fix build on legacy systems without cpp20 compiler (#2958)
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* fix compilation errors on RHEL8 and SLES15 (#2967)
* [CK Tile] CShuffle Tile Permute N all warp compatible (#2966)
* solve the hard_code issue of kM2
* clang format
* [TheRock CI] Bumping hash for TheRock (#2972)
* Adding new hash for TheRock
* Removing package
* ck tile engine integrate with gemm unit tests (#2601)
* first try to understand how tile engine works
* 1st implemented unit tests
* manage different types for unit tests
* manage using different config files to have different unit tests
* manage different layouts
* making instances and running them by unit test
* Add reference calculation
* manage different input dimension combination
* add splitk to unit tests. clean code.
* remove unused files
* clean and test with a simple json file
* Fixing hash (#2973)
* fix: nil performance results for gemm examples (#2950)
* add the sync barrier for persistent kernel (#2977)
* [CK TILE GEMM] Refactor the code of transposeC and quantpreshuffle of AQuant Gemm (#2965)
Refactor the code of transposeC and quantpreshuffle of AQuant Gemm to make it easier to maintain.
Co-authored-by: Thomas Ning <Thomas.Ning@amd.com>
* [CK][Examples] Extending support for rdna3/4 part 2:
-example_batched_gemm_xdl_int8
-example_batched_gemm_xdl_fp8_rowwise_v3
-example_batched_gemm_xdl_fp32
-example_batched_gemm_xdl_bf16
-example_batched_gemm_xdl_bf16_v3
-example_batched_gemm_xdl_fp16
-example_splitk_gemm_bias_e_permute_xdl_fp32
* Fixing return values to return 0 on success in the above examples.
Fixing cmdline parameters in:
-example_sparse_embedding3_forward_layernorm
-example_elementwise_binary_4D_fp16
-elementwise_scale_permute_amax_2D_fp16_fp8
Signed-off-by: Michal Kulikowski <Michal.Kulikowski@amd.com>
* [CK][Examples] Extending support for rdna3/4 part 3:
-example_gemm_xdl_int8
-example_gemm_xdl_fp8
-example_gemm_xdl_fp8_bf8
-example_gemm_xdl_fp16_fp8
-example_gemm_add_add_fastgelu_xdl_int8
-example_grouped_gemm_xdl_int8
-example_grouped_conv_bwd_weight_xdl_bf16
-example_cgemm_xdl_fp32
-example_cgemm_xdl_int8
fixing cmdlines for:
-example_22_cgemm
-example_24_batched_gemm
-example_batched_gemm_xdl_fp16int4_b_scale_v3
Signed-off-by: Michal Kulikowski <Michal.Kulikowski@amd.com>
* CI Skip and Status Checks Fix (#2952)
* Update Jenkinsfile
Adding logic to skip CI checks when a commit contains changes to non-relevant files like docs, .md, licenses, and .github workflow files.
* Update Jenkinsfile
* Update Jenkinsfile
* Update Jenkinsfile
Testing skip env var
* Update Jenkinsfile
Fixing syntax
* Update Jenkinsfile
Simplifying CI check logic
* Update Jenkinsfile
Testing skipping logic on stages.
* Update Jenkinsfile
Removing post block. The status for skipped stages are already reported.
* Testing Docs
Testing that modifications to files in the docs folder do not trigger the build and test stages.
* Testing Multifile Trigger
Removed Jenkinsfile from the skip patterns. Reversed change to docs file. This test should not skip CI checks.
* Clean code
Renamed setup stage to be more descriptive.
Added pipeline env variable for consistency.
Moved the performance test results stage conditional up a level so the parent stage appropriately reports the status if it is skipped.
* Fixing syntax error
* Updated CRON Flags
Added the FORCE_CI flag to the CRON instructions. This will ensure CI does not skip the job.
* Updating logging
Making logs more explicit.
* Comment update
Cleaning comments.
* Update Jenkinsfile
Reverting performance reports when condition.
* Parallel Test
Testing stage status with parallel stages
* Update Jenkinsfile
* Update Jenkinsfile
Removing stages for quick testing
* Update Jenkinsfile
* Testing skipped parallel stages
Testing the addition of a coordination stage to always pass and give an update to skipped parent stages with parallel sub-stages.
* Testing parallel stages
Adding coordination stage to test if parent check status is correctly updated.
* Simplified performance results stage
Removed parent stage as there are no other parallel stages to execute (yet).
* Testing final clean up stage
* Testing check status update
Testing - forcing status to update after a stage skip.
* Testing results stage skip
* Removing test stage
* Testing pipeline
* Testing post status updates
* Process Test Results Post Event Update
The stage will report success when it skips or is successful.
* Testing non-relevant file change
This should skip build and test in CI
* Reverting test
updating regex file patterns to use strings instead of regex literal syntax.
* Fixing file matching regex
* Testing docs modification
* Fixing default env var value
* Correcting env var assignment
* Pipeline test
Updating docs file. Should skip ci.
* Testing Pipeline
Setting default run ci state.
* Adding debugging
* Removing debugging
* Pipeline test
Should skip pipeline
* Pipeline Test
Mixed files to trigger a CI run
* Adding additional status updates
The parent stage sometimes remains in pending even if the child stage completes when skipped. Added an additional status update for the parent stage.
* Fixing variable name
* Moving stage names
Moved the performance stage names to a single location because they are referenced multiple times. This reduces errors with typos in the future.
* Revert "Moving stage names"
This reverts commit 7cf6743e548c6ebdbf7f26b1647c308ce556e06c.
* Update Jenkinsfile
Handle both truly empty arrays and arrays containing only empty strings.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* [CI] Enable ccache w/ namespace for external use (#2988)
* Enable ccache w/ namespace for external use
* Add TheRock parent directory to log path
* Fix typo for TheRock
* Add Memory pipeline for AQuant Block Scale GEMM (#2987)
* WIP: add memory pipeline boiler plate code that compiles and works for one block
* WIP: tail handling works for memory pipeline
* WIP: numerical errors appears to have gone by adding block_sync_lds()
* fix: numerical error with memory pipeline by adding block_sync_lds() and new tail handler
* refactor: remove debug print statements and lints
* fix: remove redundant sync barriers
* chore: remove lint
* fix: remove unused code from tile handler and remove redundant block_sync_lds()
* fix: correct parent struct name for memory pipeline
* fix: remove static assert check from parent struct and add it to child struct because not all child structs need the static assert
* fix: defer block sync lds to just before prefill
* Add KBatch support for gemm_ab_scale (#2740)
* Add KBatch support for gemm_ab_scale
* Revert kernel parameters change
* Remove printing
* fix formatting
* fix check
* Use {} in if
---------
Co-authored-by: Adam Osewski <19374865+aosewski@users.noreply.github.com>
* [CK_TILE] Pooling FWD (Lwpck 3683) (#2956)
* Pooling 2D/3D with reference
* Tests & cleanup
- added test for pooling
- cleanup
- removed 2d example
* Comment resolution
- README added
- example target name rectified
- appropriate arg description and comments added
* clang-format
* appropriate blocksize calc
* modifications for future indexing addition
- instead of transforming views we now transform the descriptors, so
that the same descriptor can be re-used for index tensor in the future
* some basic fixes
* comment resolutions
* comment resolutions
---------
Co-authored-by: Illia Silin <98187287+illsilin@users.noreply.github.com>
* [CK_TILE] fix pk_fp4 compilation for non-gfx950 GPUs (#2983)
See build error log from
https://github.com/ROCm/composable_kernel/issues/2271#issuecomment-3150218542
This PR makes vector element access constexpr-safe by avoiding operator[] on
ext_vector_type(2), replacing those sites in the pk_fp4 conversions so they
can be used in constant expressions. The operator[] on ext_vector_type(2)
isn't allowed in constant expressions, which caused "constexpr function never
produces a constant expression" with a note at x[0]. Using `bit_cast` to a
trivial array representation keeps it constexpr-compatible.
Signed-off-by: Hollow Man <hollowman@opensuse.org>
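An illustrative reduction of the fix (types and names hypothetical):
```
#include <cstdint>

using u8x2 = std::uint8_t __attribute__((ext_vector_type(2)));

constexpr std::uint8_t element0(u8x2 v)
{
    // constexpr-unfriendly: return v[0]; (rejected in constant expressions)
    struct Arr { std::uint8_t data[2]; };
    return __builtin_bit_cast(Arr, v).data[0]; // constexpr-compatible access
}
```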
* [CK_TILE] FMHA Tests Enhancement (#2945)
* fmha-gtest-wip
* Thanks Copilot!
* Fix splitK for grouped conv bwd data (#2991)
* Conv:TF32: add more instances - 2 (#2879)
* add instances of device_grouped_conv_fwd_xdl_f32_comp_instances
* add instances of device_grouped_conv_fwd_xdl_f32_tf32_mem_instances
* add instances of device_grouped_conv_fwd_xdl_large_tensor_f32_tf32_instances
* tf32:conv:add instances for base class DeviceConvFwd
* tf32:conv:add instances for base class DeviceGroupedConvBwdDataMultipleD
* tf32:conv:add instances for base class DeviceGroupedConvBwdWeight
* add tf32 in profiler
* remove gnhwc/ngchw/ngcdhw instances
* remove non-ndhwgc/nhwgc/nhwc instances
* add check in IsSupportedArgument()
* [CK-Tile] functional support for transposed inputs in compute-bound double-lds-buffer pipeline with async loads from global memory to LDS (#2984)
* reuse local prefetch logic from compute v4 pipeline
add single-tile test
explicit lambda capture
reuse lds block descriptors from base policy for the transposed case
match the test case kernel configuration with compute v4
* add comments
* supporting prefill shapes for preshuffle block scale gemm (#2975)
* debugging
* debugging for prefill shapes
* comment unused code
* fix for prefill shapes
* clearing up the code
* add int4 to universal gemm example
* clang formatted
* adding test for prefill shapes in block scale gemm
* lil improv on the block pipeline
* Address Review Comment
---------
Co-authored-by: ThomasNing <thomas.ning@amd.com>
* Fix GCC 7 CTAD compilation error in test_fmha_bwd.cpp (#3001)
Fixes compilation error on SLES15 with GCC 7 for gfx942 builds:
error: 'vector' may not intend to support class template argument deduction [-Werror,-Wctad-maybe-unsupported]
Changes:
- Explicitly specify the template argument for `std::vector<mode_enum>` instead of relying on C++17 CTAD (see the sketch after this entry)
- Maintains compatibility with both older (GCC 7) and newer compilers
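A minimal before/after sketch of that kind of fix (the enum here is a stand-in for the real one):
```cpp
#include <vector>

enum class mode_enum { batch, group }; // stand-in for the real enum

// Before: relies on C++17 CTAD, which GCC 7 rejects and
// -Wctad-maybe-unsupported flags:
//   std::vector modes{mode_enum::batch, mode_enum::group};

// After: the template argument is spelled out explicitly.
std::vector<mode_enum> modes{mode_enum::batch, mode_enum::group};
```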
* Streamk functional tests (#2974)
* Add initial fp16_mem_128x128x32_2x2x1_32x32x16_NonPersistent test suite
* Account for stride when computing K offsets for A and B tensor
This change ensures that the correct stride is used when computing the K
offsets into the A and B tensors in the Stream-K kernel's operator()
function, so the kernel executes correctly regardless of whether A and B
are row or column major (a stride-aware sketch follows this entry).
* Move helper code to test_gemm_streamk_util.hpp
* Separate tests into smoke/regression/extended. Add bf16 datatype
* Run clang-format
* Refactor combinatorial macro expansion and naming
* Adjust the initialization values to account for better tolerance on bf16
* Correct BF16 datatypes in comments
* Move the extended tests under the REGRESSION_TESTS label
* Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Emily Martins <emily.martins@amd.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
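A conceptual sketch of the stride-aware offset computation described above; the names and layout convention are illustrative, not the Stream-K kernel's actual code:
```cpp
// For a row-major M x K matrix A with leading dimension stride_a, stepping
// along K advances by one element; for column-major A it advances by
// stride_a. Always assuming contiguous K would read the wrong elements.
template <bool kIsARowMajor>
constexpr long a_k_offset(int k_iter, int k_per_iter, long stride_a)
{
    return kIsARowMajor ? static_cast<long>(k_iter) * k_per_iter
                        : static_cast<long>(k_iter) * k_per_iter * stride_a;
}
```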
* [CK_TILE] FMHA BWD Add Instance for D48 on GFX950 (#2866)
Co-authored-by: asleepzzz <hanwen.chang@amd.com>
* Build only grouped conv profilers.
* Skeleton for the ckTileProfiler.
* WIP: CK Tile conv bwd profiler.
* Added a placeholder conv bwd instance factory for CK Tile profiler.
* Create invoker for the kernel and a factory for creating invokers.
* Rename CK Tile grouped conv factory.
* Add empty instance factory.
* Fully functional CK Tile profiler.
* Fix transferring data back to host for validation.
* Improve the grouped conv kernel name generation in CK Tile.
* Remove unnecessary compilations.
* Add min blocks per CU to invoker name.
* Add more instances.
* Better split-K handling in the template instantiation.
* Add more instances.
* Enable vector loads in grouped conv bwd weight kernels.
* Add more instances.
* [CK Tile] contraction multi d - kernel & example (#2901)
* Initial commit. create batched_contraction_kernel file
* initial problem definition
* implement initial example to launch kernel
* add universal gemm to contraction. initial phase
* complete implementation for special case all Dims are 1 and no Ds
* clean code
* initial changes to support multi dimensional G
* more progress in implementing multiple G
* tmp commit
* manage dynamic NumDimG in kernel
* improve example for multi M,N,K,G handling and start generalizing the kernel (temporary commit)
* implement the example for general Multi dimension G M N K and test different reference calculation algorithms
* add two reference functions using multi-dimensional and flat indexing
* clean the code for multi-dimensional G, M, N, K contraction and add some logs
* Add Make descriptor function in kernel for merging Ms, Ns, Ks for A, B, E
* some cleaning on kernel
* clean the code for calculating the offsets from the flattened batch number (an offset-unraveling sketch follows this entry)
* Start adding MultiD support to kernel and example
* more changes to manage multi D in kernel and example
* manage passing multi d to kernel and testing.
* complete multi D support in kernel. modify example code to support it
* Fix the algorithm to compute the correct offset values for D tensor batches and do some code cleaning
* Minor fix
* Generalize example code for variable NumD tensors and apply cleanup based on review feedback
* Refactored code and addressed review feedback
* refactoring, cleaning, add documents, in kernel side and example codes
* Optimize batch offset calculation in kernel
* Inline CalculateBatchOffset in batched contraction kernel, update CHANGELOG.md
---------
Co-authored-by: Adam Osewski <19374865+aosewski@users.noreply.github.com>
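An illustrative sketch of recovering a batch offset from a flattened batch number over NumDimG dimensions, as the commits above describe; the names and stride convention are assumptions, not the kernel's actual code:
```cpp
#include <array>
#include <cstdint>

// Unravel a flat batch id into per-dimension G indices (fastest-varying
// dimension last) and fold them with the G strides into an element offset.
template <int NumDimG>
int64_t batch_offset_from_flat_id(int64_t flat_batch,
                                  const std::array<int64_t, NumDimG>& g_lengths,
                                  const std::array<int64_t, NumDimG>& g_strides)
{
    int64_t offset = 0;
    for(int d = NumDimG - 1; d >= 0; --d)
    {
        const int64_t idx = flat_batch % g_lengths[d];
        flat_batch /= g_lengths[d];
        offset += idx * g_strides[d];
    }
    return offset;
}
```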
* Update include path to break the remod's cyclic dep issue (#2978)
* Update include path to break the cyclic dep issue
* Use ck_tile::permute_vectors_i4x4_b in tile engine
---------
Co-authored-by: Damien Lejeune <damien.lejeune@amd.com>
Co-authored-by: Adam Osewski <19374865+aosewski@users.noreply.github.com>
* [CK_TILE] Batched Gemm Kernel IsSupported function checks (#2860)
* Add valid check batched gemm part1
* [CK_TILE] Add batched gemm kernel IsSupported func checks
* revert broken pre-commit hook changes
* revert broken pre-commit hook changes v2
* Clarify error messages
* [CK_TILE] Blockwise GEMM pipeline v6 - port of v5 from old CK (#2955)
* First checkpoint
* Second checkpoint - hot loop scheduler
* Third checkpoint - init main operator
* Fourth checkpoint - main loop ready
* Fifth checkpoint - main loop fix
* Sixth checkpoint - ReadWritecompFunc
* Seventh checkpoint - Tail finished
* [CK_TILE] Blockwise gemm pipeline v5 complete
* Working
* Working fixes 2
* Rename v5 to v77 temporarily
* Data type adjustment
* Data type adjustment 2
* [CK_TILE] Blockwise Gemm pipeline v5 add tests
* [CK_TILE] Fix calculation error
* TEMP: check pipeline
* Fix name to V6
* naming and documentation changes
* WIP dump
* Try fixing v1
* Failing tests v5
* Debugging
* Changes v2
* F16 tests working great
* Working BlockwiseGemmPipelineV5 as V6
* Cleanup and format
* Merging changes part1
* [CK_TILE] Blockwise Gemm Pipeline Comp V5/V6
* Remove commented code
* Fix gfx950 build issues
* Fix file formatting
* Review changes, more concat info, add bf16 bf8 tests
* Fix formatting
* Add bf16 and bf8 tests
---------
Co-authored-by: Adam Osewski <Adam.Osewski@amd.com>
* [CK_TILE] Non-K Major from old CK to CK-Tile (#2442)
* Enable the adapted LDS B layout for Row-Major
* fix formatting
* Implement specialized col-major A LDS block descriptor
* Fix formatting
* Use VecLoadSize for AK1/BK1
* Fix some thread access pattern values
* Use GetVectorSizeA for A
* Fix formatting
* Add extra condition to avoid division by zero
* disable layout for wave32
* remove extra else
* fix formatting
* Fix formatting
* Rename one remaining TileDistributionEncodingPattern2D
* Use integer ceil division
* revert remod.py changes
* also revert utility.hpp
* use getA/BTileAccessPattern everywhere
* use integer_divide_ceil for AK0 too (the one-liner is shown after this entry)
---------
Co-authored-by: Adam Osewski <19374865+aosewski@users.noreply.github.com>
Co-authored-by: Adam Osewski <Adam.Osewski@amd.com>
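For reference, the integer ceiling division mentioned above is the usual one-liner; ck_tile exposes it as integer_divide_ceil (the signature here is a simplified sketch):
```cpp
// Ceiling division without floating point, e.g. AK0 = ceil(K / AK1).
constexpr int integer_divide_ceil(int a, int b) { return (a + b - 1) / b; }
```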
* Enable GMock and improve gtest configuration (#2976)
Our current cmake/gtest.cmake file does not enable GMock, which provides the matchers needed for more readable unit tests. This PR enables GMock and does a little cleanup in gtest.cmake:
* Enable BUILD_GMOCK by default (was previously disabled)
* Patch gtest-src/googlemock/CMakeLists.txt for broken include path.
* Add configuration to gmock if the target is used.
No other changes in this PR, but I've verified that gmock matchers work correctly once these changes are included in other code; a small usage example follows this entry.
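As a taste of what this enables, a gmock matcher in a test reads like this (the test name and data are illustrative):
```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>
#include <vector>

TEST(MatcherDemo, ReadableContainerCheck)
{
    std::vector<int> strides{1, 4, 16};
    // On failure this prints the whole container, which is far more
    // readable than a loop of element-wise EXPECT_EQ checks.
    EXPECT_THAT(strides, testing::ElementsAre(1, 4, 16));
}
```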
* [CK_TILE] Correct BlockWarps calculation and fix smoke-test in rmsnorm (#2540)
* [CK_TILE] Correct BlockWarps calculation and fix smoke-test in rmsnorm
* Update rmsnorm host reference
* Update tree reduction of rmsnorm for reference host (a reference RMSNorm sketch follows this entry)
* Fix cross warp for m > 1 cases
* Add RMSNorm model selectable option for host reference
* Fix save_unquant cases
* Update reference rmsnorm forward function to use enum for model sensitivity
* Update reference rmsnorm calculation for model sensitivity
* Fix m warp for layernorm
* Adjust parameter of reference for twoPass
* Fix clang format
* Run clang-format-overwrite.sh to fix formating issue
* fix clang format
---------
Co-authored-by: MHYang <mengyang@amd.com>
Co-authored-by: illsilin_amdeng <Illia.Silin@amd.com>
Co-authored-by: ThomasNing <thomas.ning@amd.com>
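For context, the host reference being updated computes RMSNorm. A minimal single-row sketch under the common definition y[i] = x[i] / sqrt(mean(x^2) + eps) * gamma[i]; the epsilon placement and names are assumptions, not the exact CK_TILE reference:
```cpp
#include <cmath>

// Single-row RMSNorm reference: accumulate in double for a tighter host
// baseline, then normalize and scale by gamma.
void rmsnorm_reference(const float* x, const float* gamma, float* y,
                       int n, float eps)
{
    double sum_sq = 0.0;
    for(int i = 0; i < n; ++i)
        sum_sq += static_cast<double>(x[i]) * x[i];
    const float inv_rms =
        1.0f / std::sqrt(static_cast<float>(sum_sq / n) + eps);
    for(int i = 0; i < n; ++i)
        y[i] = x[i] * inv_rms * gamma[i];
}
```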
* Fix: Handle JSON boolean values (pad_m, pad_n, pad_k and persistent) in gemm_instance_builder (#3008)
* [CK_TILE] Switch into universal gemms for conv bwds (#2981)
* switch into universal gemms for conv bwds
* some fixes and support universal gemm in conv fwd
* add reviewer comments
* Revert "[CK_TILE] Non-K Major from old CK to CK-Tile (#2442)" (#3017)
This reverts commit d2bbca3eca2bd14014e3daae39ae70846ec8218b.
* update s_barrier's logic in gfx12 architecture (#3003)
change s_waitcnt's logic in gfx1250
update comment
* fixing group id (#3002)
* feat(grouped_gemm_multi_d): add support for bf16
* test(grouped_gemm_multi_d): add unit test for bf16 support
* Felix/opt sorting (#2902)
* merge felix/sortin…
Proposed changes
As Title
Change `kPadHeadDimQ/V` from `bool` to `index_t`: `0` (h is a multiple of the tile size) / `8` (h is a multiple of 8) / `1` (h can be any integer), as sketched below.
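A sketch of how such an index_t flag could gate the head-dimension check at compile time; this illustrates the 0/8/1 semantics above and is not the actual CK_TILE pipeline code:
```cpp
#include <cstdint>

using index_t = int32_t;

// kPadHeadDim == 0: h must be a multiple of the tile size (no padding);
// kPadHeadDim == 8: h must be a multiple of 8 (pad up to the tile size);
// kPadHeadDim == 1: any integer h is accepted (fully padded path).
template <index_t kPadHeadDim, index_t kTileSize>
constexpr bool hdim_supported(index_t h)
{
    if constexpr(kPadHeadDim == 0)
        return h % kTileSize == 0;
    else if constexpr(kPadHeadDim == 8)
        return h % 8 == 0;
    else
        return true;
}
```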
Checklist
Please put an `x` into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.
* [x] `clang-format` on all changed files