Conversation
/azp run Linux CPU CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline, MacOS NoContribops CI Pipeline, Windows CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline
/azp run Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, centos7_cpu, centos7_cpu (linux_centos_ci Debug), centos7_cpu (linux_centos_ci Release), orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-distributed, orttraining-amd-gpu-ci-pipeline, Linux Nuphar CI Pipeline
#include <thrust/device_vector.h>
#include <thrust/execution_policy.h>

#include "non_max_suppression_impl.h"
The related header should be first:
https://google.github.io/styleguide/cppguide.html#Names_and_Order_of_Includes
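To illustrate the style-guide point above, here is a sketch of how the includes in a hypothetical `non_max_suppression_impl.cu` would be ordered per the Google C++ style guide: the related header first in its own block, then other libraries' headers. This is a layout fragment, not runnable code, and the file grouping is an assumption based on the diff context.

```cpp
// non_max_suppression_impl.cu (hypothetical layout)

// 1. The related header comes first, in its own section.
#include "non_max_suppression_impl.h"

// 2. Other libraries' headers (Thrust here) follow in a later section.
#include <thrust/device_vector.h>
#include <thrust/execution_policy.h>
```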
ASSERT_TRUE(nonzero_shape->dim_size() == 2);
ASSERT_TRUE(nonzero_shape->dim(0).dim_value() == 2);
ASSERT_TRUE(nonzero_shape->dim(1).dim_param() == "nonzero_nonzero_count");
//ASSERT_TRUE(nonzero_shape->dim(1).dim_param() == "nonzero_nonzero_count");
why is this commented out?
Marked the PR as WIP. This is a temporary change while I work on a formal fix.
ONNX has now added shape inference for NonZero. That inference method simply adds an unknown dim as the second dim, and it takes precedence over the shape added by the transformer. I am trying to understand whether we even need this transformer now that ONNX provides a shape inference method for this op.
I wanted to run validations for the other changes, so I commented this one out to unblock.
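For context on why the second dim can only be inferred as unknown: NonZero's output shape is data-dependent. Under the assumption that the op follows the ONNX NonZero semantics, an input of rank R with N nonzero elements yields an int64 index tensor of shape [R, N], so only the first dim (the rank) is statically known. A minimal C++ sketch of those semantics for a 2-D input (the function name `NonZero2D` is hypothetical, not part of the codebase):

```cpp
#include <cstdint>
#include <vector>

// Returns the indices of nonzero elements of a 2-D input, laid out as
// an output of shape [rank, count]: out[0] holds row indices, out[1]
// holds column indices. The rank (2) is known statically; the count is
// data-dependent, so static shape inference reports it as unknown.
std::vector<std::vector<int64_t>> NonZero2D(
    const std::vector<std::vector<float>>& x) {
  std::vector<std::vector<int64_t>> out(2);
  for (std::size_t i = 0; i < x.size(); ++i) {
    for (std::size_t j = 0; j < x[i].size(); ++j) {
      if (x[i][j] != 0.0f) {
        out[0].push_back(static_cast<int64_t>(i));
        out[1].push_back(static_cast<int64_t>(j));
      }
    }
  }
  return out;
}
```

This matches what the ASSERT_TRUE lines above check: the first dim is the fixed rank, while the second dim can only carry a symbolic name.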
/azp run inux GPU TensorRT CI Pipeline
No pipelines are associated with this pull request.
Description: Picking up the ONNX 1.9 release candidate. Updates in this PR include:
Motivation and Context