Enable Inference Results Saving in onnx-test-runner #24210
HectorSVC merged 1 commit into microsoft:main
Conversation
/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline

Azure Pipelines will not run the associated pipelines, because the pull request was updated after the run command was issued. Review the pull request again and issue a new run command.

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,ONNX Runtime Web CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline

Azure Pipelines successfully started running 4 pipeline(s).

Several CI checks are failing at the docker-related step. Is there any action required from us?

You only need to care about the QNN or format-related issues.

Close and re-open to trigger the rebuild.

Please fix the code format issue reported from Lint / Python format (pull_request).
Force-pushed from 2e63fc8 to b893294
I have fixed the reported code format issue. Please let me know if there is any problem.

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,ONNX Runtime Web CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline

Azure Pipelines successfully started running 3 pipeline(s).

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline

Azure Pipelines successfully started running 6 pipeline(s).
Force-pushed from b893294 to 1825a5a
Sorry for missing that.
/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline

Azure Pipelines successfully started running 6 pipeline(s).

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,ONNX Runtime Web CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline

Azure Pipelines successfully started running 3 pipeline(s).

/azp run Win_TRT_Minimal_CUDA_Test_CI

Azure Pipelines successfully started running 1 pipeline(s).
- Add flag to determine whether to save inference results
- Add infrastructure to transform OrtValue into TensorProto
- Add corresponding description to README
Force-pushed from 1825a5a to d6648e9
/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,ONNX Runtime Web CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,Linux QNN CI Pipeline,Linux OpenVINO CI Pipeline

/azp run Big Models,Linux Android Emulator QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline,Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Windows GPU TensorRT CI Pipeline,Windows x64 QNN CI Pipeline

Azure Pipelines successfully started running 3 pipeline(s).

Azure Pipelines successfully started running 6 pipeline(s).
### Description

- Add a flag to determine whether to save inference results.
- Implement infrastructure to transform OrtValue into TensorProto.
- Update the README with corresponding descriptions.

### Motivation and Context

- This PR enables onnx-test-runner to save its inference results for later inspection.
- Developers can then proceed with custom metrics and verification.