Enable Inference Results Saving in onnx-test-runner#24210

Merged
HectorSVC merged 1 commit into microsoft:main from CodeLinaro:dev/quic-hungjuiw/save_infer_result
Apr 15, 2025

Conversation

@quic-hungjuiw
Contributor

Description

  • Add a flag to determine whether to save inference results.
  • Implement infrastructure to transform OrtValue into TensorProto.
  • Update the README with corresponding descriptions.

Motivation and Context

  • This PR enables onnx-test-runner to save its inference results for later inspection.
  • Developers can then apply custom metrics and verification on the saved outputs.
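As an illustration of the kind of custom verification the saved results enable, here is a stdlib-only Python sketch of a simple metric over saved versus reference outputs. The function name `max_abs_diff` and the flat-list tensor representation are hypothetical and only for illustration; the real runner writes TensorProto files that would be deserialized first.

```python
# Illustrative sketch: a custom metric over saved vs. expected outputs.
# Tensors are represented here as flat float lists; a real workflow
# would deserialize the saved TensorProto files instead.

def max_abs_diff(actual, expected):
    """Largest element-wise absolute difference between two flat tensors."""
    if len(actual) != len(expected):
        raise ValueError("shape mismatch")
    return max(abs(a - e) for a, e in zip(actual, expected))

saved = [0.10, 0.25, 0.80]   # values read back from a saved result
golden = [0.10, 0.5, 0.80]   # reference values
print(max_abs_diff(saved, golden))  # 0.25
```

A developer could run such a check over every output file the runner produces and fail the test when the metric exceeds a tolerance.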

@HectorSVC
Contributor

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines will not run the associated pipelines, because the pull request was updated after the run command was issued. Review the pull request again and issue a new run command.

@HectorSVC
Contributor

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,ONNX Runtime Web CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

return Status::OK();
}

Status MLValueToTensorProto(Ort::Value& value, onnx::TensorProto& tensor_proto) {

Check warning

Code scanning / CodeQL

Poorly documented large function

Poorly documented function: fewer than 2% comments for a function of 118 lines.
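The flagged `MLValueToTensorProto` function serializes a runtime tensor into the ONNX `TensorProto` wire format. The core mapping it performs can be sketched in stdlib-only Python; the dict below only mimics the `TensorProto` fields (`data_type`, `dims`, `raw_data`) and is not the actual onnxruntime implementation. Per the ONNX spec, `raw_data` holds the elements packed little-endian, and `FLOAT` has data-type code 1.

```python
import struct

FLOAT = 1  # onnx TensorProto.DataType.FLOAT

def to_tensor_proto_like(values, dims):
    """Pack a flat list of floats into a TensorProto-shaped dict.

    Illustrative only: mirrors the TensorProto fields that
    MLValueToTensorProto populates, without the protobuf types.
    """
    n = 1
    for d in dims:
        n *= d
    if n != len(values):
        raise ValueError("dims do not match element count")
    return {
        "data_type": FLOAT,
        "dims": list(dims),
        # ONNX stores raw_data as little-endian fixed-width elements.
        "raw_data": struct.pack(f"<{len(values)}f", *values),
    }

tp = to_tensor_proto_like([1.0, 2.0, 3.0, 4.0], [2, 2])
print(len(tp["raw_data"]))  # 16 bytes: four float32 values
```

The real C++ code additionally has to branch on every ONNX element type, which is why the function runs to 118 lines.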
@quic-hungjuiw
Contributor Author

Several CI checks are failing at the docker-related step.

Is there any action required from us?

@HectorSVC
Contributor

> Several CI checks are failing at the docker-related step.
>
> Is there any action required from us?

You only need to care about the QNN or format-related issues.

@HectorSVC HectorSVC closed this Apr 7, 2025
@HectorSVC HectorSVC reopened this Apr 7, 2025
@HectorSVC
Contributor

Closed and reopened to trigger the rebuild.

@HectorSVC
Contributor

Please fix the code format issue reported by Lint / Python format (pull_request).

@quic-hungjuiw quic-hungjuiw force-pushed the dev/quic-hungjuiw/save_infer_result branch from 2e63fc8 to b893294 on April 10, 2025 06:28
@quic-hungjuiw
Contributor Author

I have fixed the reported code format issue. Please let me know if there are any remaining problems.

@HectorSVC
Contributor

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,ONNX Runtime Web CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 3 pipeline(s).

@HectorSVC
Contributor

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 6 pipeline(s).

@quic-hungjuiw quic-hungjuiw force-pushed the dev/quic-hungjuiw/save_infer_result branch from b893294 to 1825a5a on April 14, 2025 08:44
@quic-hungjuiw
Contributor Author

Sorry for missing tensorprotoutils.cc. Could you trigger it again?

@HectorSVC
Contributor

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 6 pipeline(s).

@HectorSVC
Contributor

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 3 pipeline(s).

@HectorSVC
Contributor

/azp run Win_TRT_Minimal_CUDA_Test_CI

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

- Add flag to determine whether to save inference result
- Add infrastructure to transform OrtValue into TensorProto
- Add corresponding description to README
@quic-hungjuiw quic-hungjuiw force-pushed the dev/quic-hungjuiw/save_infer_result branch from 1825a5a to d6648e9 on April 15, 2025 06:22
@HectorSVC
Contributor

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline, Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline

@HectorSVC
Contributor

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 3 pipeline(s).

@azure-pipelines

Azure Pipelines successfully started running 6 pipeline(s).

@HectorSVC HectorSVC merged commit 98f075c into microsoft:main Apr 15, 2025
71 of 76 checks passed
ashrit-ms pushed a commit that referenced this pull request Apr 24, 2025
### Description
- Add a flag to determine whether to save inference results.
- Implement infrastructure to transform OrtValue into TensorProto.
- Update the README with corresponding descriptions.

### Motivation and Context

- This PR enables onnx-test-runner to save its inference results for later inspection.
- Developers can then apply custom metrics and verification on the saved outputs.