
Clean up CPU kernel definition for opset 13 Pad #7867

Merged
hariharans29 merged 2 commits into master from hari/padTypeSupportRefinement
May 28, 2021
Conversation

@hariharans29
Member

Description: Since we don't yet guarantee backward compatibility for kernel def hashes, for this release we clean up the types supported by the opset 13 Pad CPU kernel and change the expected hash value in the test.
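To illustrate why narrowing the supported types forces the expected hash in the test to change, here is a minimal sketch of a kernel-def hash derived from the kernel's identifying properties. This is NOT onnxruntime's actual hashing scheme; the function name and key layout are invented for illustration.

```python
# Hypothetical sketch (not onnxruntime's real scheme): hash a kernel
# definition from its op name, domain, opset version, and supported types.
# Any change to the supported type list changes the hash, which is why
# the expected value in the kernel-def-hash test had to be updated.
import hashlib

def kernel_def_hash(op, domain, since_version, supported_types):
    key = "|".join([op, domain, str(since_version), *sorted(supported_types)])
    return hashlib.sha256(key.encode()).hexdigest()[:16]

before = kernel_def_hash("Pad", "", 13, ["float", "double", "int32", "uint8"])
after = kernel_def_hash("Pad", "", 13, ["float", "double", "int32"])
```

Since the type list participates in the key, `before != after`, so any test pinning the old hash must be updated alongside the type cleanup.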

Motivation and Context
Based on offline discussion for comment in #7856

@hariharans29 hariharans29 requested a review from a team as a code owner May 27, 2021 22:04
@hariharans29 hariharans29 changed the title Clean up kernel definition for opset 13 Pad Clean up CPU kernel definition for opset 13 Pad May 27, 2021
Contributor

@skottmckay skottmckay left a comment


:shipit:

@hariharans29 hariharans29 merged commit 0255c83 into master May 28, 2021
@hariharans29 hariharans29 deleted the hari/padTypeSupportRefinement branch May 28, 2021 02:32
xzhu1900 added a commit that referenced this pull request May 28, 2021
* Fix bug in Transpose CUDA kernel (#7329)

* Fix permission error for ORTModule lock file (#7814)

* fix topo sort in quant tool (#7833)

* fix topo sort in quant tool

* add unit test and make the topo sort stable
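The "make the topo sort stable" fix above can be sketched as follows. This is not the quant tool's actual code; it is a minimal stable topological sort (Kahn's algorithm with a heap-ordered frontier), so the output is deterministic regardless of the order in which edges were supplied.

```python
# Minimal sketch of a stable topological sort: Kahn's algorithm where the
# ready set is kept in a heap, so ties are broken deterministically by
# node id rather than by insertion order.
from collections import defaultdict
import heapq

def stable_topo_sort(nodes, edges):
    """nodes: iterable of comparable ids; edges: (src, dst) pairs."""
    indegree = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for src, dst in edges:
        succ[src].append(dst)
        indegree[dst] += 1
    # Heap keeps the frontier sorted -> reproducible output order.
    ready = [n for n, d in indegree.items() if d == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        n = heapq.heappop(ready)
        order.append(n)
        for m in succ[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                heapq.heappush(ready, m)
    if len(order) != len(indegree):
        raise ValueError("graph has a cycle")
    return order
```

Stability matters for a quantization tool because a reproducible node order yields reproducible model outputs and diffable results across runs.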

* Relax tol for Conv1D fp16 test (#7844)

* Relax tol for Conv1D fp16 test

Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>

* Resolve issue with wrapped ORTModule load_state_dict (#7847)

* Encapsulate children modules inside a ModuleAccessor object to prevent erroneous iteration over children while loading the state dictionary

* Add named_models, models, apply methods, change ModuleAccessor to ModuleMetadata and modify unit tests

* Change ModuleMetadata module getter logic, raise NotImplementedError for add_modules

* Add comment explaining why overriding _load_from_state_dict method is needed

* fixed bugs in packed mode and enable pack mode tests in ci (#7848)

* fixed bugs in packed mode and enable pack mode tests in ci

* removed unnecessary space

* pr comments

* pr comments

* disable an average pool test

* try disabling another avg pool

* disable more avg pool tests

* disable maxpool tests

* add environment variable to control default training package's local version (#7849)

* [js] update documents (#7852)

* [js] update documents

* escape double quotes

* update operators.md

* resolve comments

* Support bool type for Pad CPU (#7856)

* Initial commit

* update

* nit

* Include ORT C/C++ API headers in the ORT Mobile AAR package (#7858)

* Add header files of ort c/c++ api to aar package

* Move header file selection to cmake based on EP choice

* fix duplicated node name (#7865)

* Clean up CPU kernel definition for opset 13 Pad (#7867)

Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Thiago Crepaldi <thiago.crepaldi@microsoft.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Sherlock <baihan.huang@gmail.com>
Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
Co-authored-by: baijumeswani <bmeswani@microsoft.com>
Co-authored-by: Tixxx <tix@microsoft.com>
Co-authored-by: liqunfu <liqfu@microsoft.com>
Co-authored-by: Yulong Wang <yulongw@microsoft.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
titaiwangms added a commit to titaiwangms/onnxruntime that referenced this pull request May 6, 2026
…g bundled-ONNX update

Per architect 8b9842c3's recommendation (lead-39245992/pr1v2-onnx-fixture-handling.md):
mirror the existing lines 951-960 precedent ("Skipped until cmake/external/onnx
points to onnx 1.19 ... @onnx/onnxmicrosoft/pull/7074") and add a skip-with-cite block
for the attention fixtures regenerated upstream by onnx/onnx#7867 and
onnx/onnx#7913.

The bundled cmake/external/onnx is v1.21.0 (predates both PRs). Our impl
emits the corrected post-spec output, which disagrees with the still-old
fixtures shipped in v1.21.0. Skip until cmake/external/onnx is bumped to
>= v1.22, at which point the entries can be removed in a single cleanup
commit (greppable via 'v1.22 (includes onnx/onnx#7867').
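A skip-with-cite entry of the kind described might look like the fragment below. The test name and container are invented for illustration, mirroring the cited precedent; this is a sketch of the pattern, not the actual file contents.

```cpp
// Hypothetical skip-with-cite entry (names invented for illustration).
// Skipped until cmake/external/onnx is bumped to >= v1.22 (includes
// onnx/onnx#7867): the fixture was regenerated upstream, so the bundled
// v1.21.0 expected output no longer matches the corrected implementation.
broken_tests.insert({"test_attention_softcap",
                     "fixture regenerated by onnx/onnx#7867"});
```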

20 entries added (10 base + 10 _expanded):
  - 4 softcap-related (cite onnx#7867)
  - 14 bias / qk_matmul_output_mode-related (cite onnx#7913)
  - 2 mask4d_padded_kv (cite onnx#7867 — same root cause; pre-existing
    QNN-only skip at line 1498 promoted to all providers)

Why not bump cmake/external/onnx instead: ONNX v1.22 has not shipped (latest
v1.21.0 = 2026-03-27; microsoft#7867 merged 2026-04-30, microsoft#7913 merged 2026-05-04). A
non-tagged SHA pin would cascade into opset registrations, fusion passes,
function-body decompositions, possibly opset-25 ops, and 80+ unrelated
fixture regenerations from microsoft#7867 alone — out of scope for a CPU behavioral
fix. Bump deserves its own dedicated PR.

Verification (./build/Linux/Debug/onnx_test_runner -e cpu -j 1
cmake/external/onnx/onnx/backend/test/data/node):
  - Pre-patch attention failures: 11 (10 from new-spec + 1 mask4d_padded_kv)
  - Post-patch attention failures: 0
  - Total cases: 1588 -> 1568 (20 skipped, matching added entries)
  - Only remaining failure: convinteger_with_padding (pre-existing, unrelated)
  - AttentionTest.* still 60/60 PASS
  - lintrunner clean

Refs: lead-39245992/pr1v2-onnx-fixture-handling.md (architect 8b9842c3)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

3 participants